Biblio

Found 433 results

Filters: Keyword is Neural networks
2022-08-26
Xia, Hongbing, Bao, Jinzhou, Guo, Ping.  2021.  Asymptotically Stable Fault Tolerant Control for Nonlinear Systems Through Differential Game Theory. 2021 17th International Conference on Computational Intelligence and Security (CIS). :262–266.
This paper investigates an asymptotically stable fault tolerant control (FTC) method for nonlinear continuous-time systems (NCTS) with actuator failures via differential game theory (DGT). Based on DGT, the FTC problem can be regarded as a two-player differential game between a control player and a fault player, which is solved by an adaptive dynamic programming technique. Using a critic-only neural network, the cost function is approximated to obtain the solution of the Hamilton-Jacobi-Isaacs equation (HJIE). The FTC strategy is then obtained from the saddle point of the HJIE and ensures satisfactory control performance for the NCTS. Furthermore, the closed-loop NCTS is guaranteed to be asymptotically stable, rather than merely uniformly ultimately bounded as in corresponding existing methods. Finally, a simulation example is provided to verify the safe and reliable fault-tolerance performance of the designed control method.
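For orientation, the saddle-point structure behind this kind of method can be written in a standard zero-sum form (a generic sketch, not the paper's exact fault-tolerant formulation, with the fault input d playing the maximizer):

\[
\dot{x} = f(x) + g(x)u + k(x)d, \qquad
V(x_0) = \int_0^{\infty} \big( x^{\top} Q x + u^{\top} R u - \gamma^2 d^{\top} d \big)\, dt ,
\]
\[
0 = x^{\top} Q x + u^{*\top} R u^{*} - \gamma^{2} d^{*\top} d^{*} + \nabla V^{\top} \big( f(x) + g(x)u^{*} + k(x)d^{*} \big),
\]
\[
u^{*} = -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla V , \qquad
d^{*} = \tfrac{1}{2\gamma^{2}} k(x)^{\top} \nabla V .
\]

A critic-only adaptive dynamic programming scheme approximates V with a single neural network and tunes its weights so that the residual of this Hamilton-Jacobi-Isaacs equation is driven toward zero.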
2022-02-07
Yang, Chen, Yang, Zepeng, Hou, Jia, Su, Yang.  2021.  A Lightweight Full Homomorphic Encryption Scheme on Fully-connected Layer for CNN Hardware Accelerator achieving Security Inference. 2021 28th IEEE International Conference on Electronics, Circuits, and Systems (ICECS). :1–4.
The inference results of neural network accelerators often involve personal privacy or business secrets in intelligent systems, so it is important for the security of a convolutional neural network (CNN) accelerator to prevent the key data and inference results from being leaked. The latest CNN models have started to be combined with fully homomorphic encryption (FHE) to ensure data security. However, computational complexity, data storage overhead, and inference time increase significantly compared with traditional neural network models. This paper proposes a lightweight FHE scheme on the fully-connected layer of a CNN hardware accelerator to achieve secure inference, which not only protects the privacy of inference results but also avoids excessive hardware overhead and severe performance degradation. Compared with state-of-the-art works, this work reduces computational complexity by approximately 90% and decreases ciphertext size by 87%–95%.
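As a rough illustration of the general idea (not the paper's scheme), a fully-connected layer only needs ciphertext-ciphertext addition and ciphertext-plaintext multiplication, so it can be evaluated under a homomorphic scheme. The sketch below assumes a hypothetical Ciphertext API with add and mul_plain operations; a real scheme would of course not expose the plaintext value.

# Minimal sketch: evaluating y = Wx + b on encrypted activations, assuming a
# hypothetical homomorphic Ciphertext type (insecure toy, for illustration only).
class Ciphertext:
    def __init__(self, value):
        self.value = value          # a real scheme stores polynomials, not the plaintext
    def add(self, other):
        return Ciphertext(self.value + other.value)
    def mul_plain(self, scalar):
        return Ciphertext(self.value * scalar)

def encrypted_fc_layer(enc_x, W, b):
    """Compute a fully-connected layer over encrypted activations enc_x."""
    out = []
    for i, row in enumerate(W):
        acc = Ciphertext(b[i])      # bias enters as a (trivially encrypted) constant
        for enc_xj, w_ij in zip(enc_x, row):
            acc = acc.add(enc_xj.mul_plain(w_ij))   # ciphertext times plaintext weight
        out.append(acc)
    return out

enc_x = [Ciphertext(v) for v in [0.2, -1.0, 0.5]]
y = encrypted_fc_layer(enc_x, W=[[1.0, 0.0, 2.0], [0.5, -0.5, 1.0]], b=[0.1, -0.2])
print([c.value for c in y])   # the data owner would decrypt instead of reading .value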
2022-06-09
Fang, Shiwei, Huang, Jin, Samplawski, Colin, Ganesan, Deepak, Marlin, Benjamin, Abdelzaher, Tarek, Wigness, Maggie B..  2021.  Optimizing Intelligent Edge-clouds with Partitioning, Compression and Speculative Inference. MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM). :892–896.
Internet of Battlefield Things (IoBTs) are well positioned to take advantage of recent technology trends that have led to the development of low-power neural accelerators and low-cost high-performance sensors. However, a key challenge is that, despite these advancements, edge devices remain resource-constrained, which prevents complex deep neural networks from being deployed to derive actionable insights from various sensors. Furthermore, deploying sophisticated sensors in a distributed manner to improve decision-making poses the extra challenge of coordinating and exchanging data between the nodes and the server. We propose an architecture that abstracts away these thorny deployment considerations from an end-user (such as a commander or warfighter). Our architecture can automatically compile and deploy the inference model onto a set of distributed nodes and a server while taking into consideration resource availability, variation, and uncertainty.
2022-06-08
Wang, Runhao, Kang, Jiexiang, Yin, Wei, Wang, Hui, Sun, Haiying, Chen, Xiaohong, Gao, Zhongjie, Wang, Shuning, Liu, Jing.  2021.  DeepTrace: A Secure Fingerprinting Framework for Intellectual Property Protection of Deep Neural Networks. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :188–195.

Deep Neural Networks (DNNs) have achieved great success in solving several challenging problems in recent years. It is well known that training a DNN model from scratch requires a lot of data and computational resources. However, using a pre-trained model directly, or using it to initialize weights, costs less time and often gives better results. Therefore, well pre-trained DNN models are valuable intellectual property that should be protected. In this work, we propose DeepTrace, a framework that lets model owners secretly fingerprint a target DNN model using a special trigger set and verify ownership from its outputs. The embedded fingerprint can be extracted to uniquely identify the model owner and authorized users. Our framework benefits from both white-box and black-box verification, which makes it useful whether or not the model details are known. We evaluate the performance of DeepTrace on two different datasets with different DNN architectures. Our experiments show that, thanks to combining white-box and black-box verification, our framework has very little effect on model accuracy and is robust against different model modifications. It also consumes very few computing resources when extracting the fingerprint.
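A minimal sketch of the black-box side of trigger-set fingerprinting in general (the details of DeepTrace's trigger construction and white-box extraction are not reproduced here); model is assumed to be any callable that maps an input to a predicted label:

import hashlib

def fingerprint(model, trigger_set):
    """Record the model's predictions on a secret trigger set and hash them."""
    responses = [model(x) for x in trigger_set]
    digest = hashlib.sha256(repr(responses).encode()).hexdigest()
    return responses, digest

def verify(model, trigger_set, expected_responses, tolerance=0.9):
    """Claim ownership if enough trigger responses match the recorded fingerprint."""
    responses = [model(x) for x in trigger_set]
    matches = sum(r == e for r, e in zip(responses, expected_responses))
    return matches / len(trigger_set) >= tolerance

Keeping the tolerance below 1.0 is what gives such schemes robustness to benign model modifications such as fine-tuning or pruning.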

2022-06-07
He, Weiyu, Wu, Xu, Wu, Jingchen, Xie, Xiaqing, Qiu, Lirong, Sun, Lijuan.  2021.  Insider Threat Detection Based on User Historical Behavior and Attention Mechanism. 2021 IEEE Sixth International Conference on Data Science in Cyberspace (DSC). :564–569.
Insider threats cause enterprises and organizations to suffer loss of property and damage to reputation. User behavior analysis is the mainstream method of insider threat detection, but due to the lack of fine-grained detection and the inability to effectively capture the behavior patterns of individual users, detection accuracy and precision remain insufficient. To solve this problem, this paper designs an insider threat detection method based on user historical behavior and an attention mechanism: Long Short-Term Memory (LSTM) extracts user behavior sequence information, Attention Based on User Historical Behavior (ABUHB) learns the differences between user behaviors, Bidirectional LSTM (Bi-LSTM) learns the evolution of different user behavior patterns, and together these realize fine-grained detection of abnormal user behavior. To evaluate the effectiveness of this method, experiments are conducted on the CMU-CERT Insider Threat Dataset. The experimental results show that this method is 3.1% to 6.3% more effective than comparable models and can detect insider threats in different user behaviors at fine granularity.
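For illustration, a generic attention-over-Bi-LSTM block for per-user action sequences might look like the PyTorch sketch below; the dimensions, vocabulary size, and layer choices are assumptions, not taken from the paper:

import torch
import torch.nn as nn

class BehaviorDetector(nn.Module):
    """Toy Bi-LSTM + attention classifier for per-user action sequences."""
    def __init__(self, num_actions=50, embed_dim=32, hidden=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(num_actions, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # scores each time step
        self.classify = nn.Linear(2 * hidden, num_classes)

    def forward(self, actions):                     # actions: (batch, seq_len) action ids
        h, _ = self.bilstm(self.embed(actions))     # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)
        context = (weights * h).sum(dim=1)          # attention-weighted sequence summary
        return self.classify(context)

model = BehaviorDetector()
logits = model(torch.randint(0, 50, (4, 20)))       # 4 users, 20 actions each
print(logits.shape)                                 # torch.Size([4, 2])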
2022-03-01
Li, Xiaojian, Chen, Jing, Jiang, Yiyi, Hu, Hangping, Yang, Haopeng.  2021.  An Accountability-Oriented Generation approach to Time-Varying Structure of Cloud Service. 2021 IEEE International Conference on Services Computing (SCC). :413–418.
Cloud services in wide use can self-organize and respond on demand when failures or violations occur, but violations may still result. The first step toward forecasting or accountability in such situations is to generate the dynamic structure of the cloud service in a timely manner. This research presents a method to generate the time-varying structure of a cloud service. Firstly, dependencies between tasks and even instances within a job of a cloud service are visualized to explore the time-varying characteristics contained in the cloud service structure. Then, those dependencies are discovered quantitatively using Convolutional Neural Networks (CNN). Finally, the result is structured into an event network of the cloud service for tracing violations and other uses. The approach is validated by an experiment based on Alibaba's dataset; its function integrity reaches up to 0.80, higher than that of Bai Y. et al., which is no more than 0.60.
2022-03-15
Aghakhani, Hojjat, Meng, Dongyu, Wang, Yu-Xiang, Kruegel, Christopher, Vigna, Giovanni.  2021.  Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability. 2021 IEEE European Symposium on Security and Privacy (EuroS P). :159–178.
A recent source of concern for the security of neural networks is the emergence of clean-label dataset poisoning attacks, wherein correctly labeled poison samples are injected into the training dataset. While these poison samples look legitimate to the human observer, they contain malicious characteristics that trigger a targeted misclassification during inference. We propose a scalable and transferable clean-label poisoning attack against transfer learning, which creates poison images with their center close to the target image in the feature space. Our attack, Bullseye Polytope, improves the attack success rate of the current state-of-the-art by 26.75% in end-to-end transfer learning, while increasing attack speed by a factor of 12. We further extend Bullseye Polytope to a more practical attack model by including multiple images of the same object (e.g., from different angles) when crafting the poison samples. We demonstrate that this extension improves attack transferability by over 16% to unseen images (of the same object) without using extra poison samples.
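The core "bullseye" objective can be pictured as follows: push the mean feature of the poison images toward the target's feature while keeping each poison close to its clean base image. The sketch below is a simplification, not the authors' code; feature_extractor stands in for the victim's frozen feature network, and the single-network, single-target setting omits the paper's multi-network and multi-image extensions.

import torch

def craft_bullseye_poisons(feature_extractor, base_images, target_image,
                           steps=200, lr=0.01, eps=8 / 255):
    """Simplified clean-label poison crafting: center the poisons' mean feature
    on the target's feature, within an L-infinity budget around the base images."""
    poisons = base_images.clone().requires_grad_(True)
    target_feat = feature_extractor(target_image.unsqueeze(0)).detach()
    opt = torch.optim.Adam([poisons], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mean_feat = feature_extractor(poisons).mean(dim=0, keepdim=True)
        loss = torch.norm(mean_feat - target_feat) ** 2
        loss.backward()
        opt.step()
        with torch.no_grad():                       # project back into the perturbation budget
            poisons.copy_(torch.min(torch.max(poisons, base_images - eps),
                                    base_images + eps).clamp(0, 1))
    return poisons.detach()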
2022-02-03
Huang, Chao, Luo, Wenhao, Liu, Rui.  2021.  Meta Preference Learning for Fast User Adaptation in Human-Supervisory Multi-Robot Deployments. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :5851–5856.
As multi-robot systems (MRS) are widely used in tasks such as natural disaster response and social security, people expect an MRS to become ubiquitous and easy for a general user without heavy training to operate. However, humans have various preferences for balancing task performance against safety, imposing different requirements on MRS control. Failing to comply with these preferences makes operation feel difficult and decreases human willingness to use an MRS. Therefore, to improve social acceptance as well as performance, there is an urgent need to adjust MRS behaviors according to human preferences before human corrections are triggered, which increase cognitive load. In this paper, a novel Meta Preference Learning (MPL) method is developed to enable an MRS to adapt quickly to user preferences. MPL, based on a meta-learning mechanism, can quickly assess human preferences from limited instructions; a neural-network-based preference model then adjusts MRS behaviors for preference adaptation. To validate the method's effectiveness, a task scenario "An MRS searches victims in an earthquake disaster site" was designed; 20 human users were involved to identify preferences as "aggressive", "medium", or "reserved"; based on user guidance and domain knowledge, about 20,000 preferences were simulated to cover different operations related to "task quality", "task progress", and "robot safety". The effectiveness of MPL in preference adaptation was validated by the reduced duration and frequency of human interventions.
2022-03-09
Bo, Xihao, Jing, Xiaoyang, Yang, Xiaojian.  2021.  Style Transfer Analysis Based on Generative Adversarial Networks. 2021 IEEE International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI). :27–30.
Style transfer means using a neural network to extract the content of one image and the style of another image; the two are combined to get the final result, which is broadly applied in social communication, animation production, and entertainment. Using style transfer, users can share and exchange images, and painters can create specific art styles more readily with less creation cost and production time. Therefore, style transfer has attracted wide attention recently due to its varied and valuable applications. This paper reviews style transfer work from the past few years and chooses three representative works, StyleGAN, CycleGAN, and TL-GAN, to analyze in detail and contrast with one another. Moreover, what an ideal style transfer model should achieve is discussed. Compared with such a model, potential problems and prospects of different style transfer methods are listed, and a couple of solutions to these drawbacks are given at the end.
2022-04-26
Loya, Jatan, Bana, Tejas.  2021.  Privacy-Preserving Keystroke Analysis using Fully Homomorphic Encryption & Differential Privacy. 2021 International Conference on Cyberworlds (CW). :291–294.

Keystroke dynamics is a behavioural biometric form of authentication based on the inherent typing behaviour of an individual. While this technique is gaining traction, protecting the privacy of the users is of utmost importance. Fully Homomorphic Encryption (FHE) is a technique that allows computation on encrypted data, which enables processing of sensitive data in an untrusted environment. FHE is also considered "future-proof" since it is a lattice-based cryptosystem regarded as quantum-safe. It has seen significant performance improvements over the years, along with increasingly developer-friendly tools. We propose a neural network for keystroke analysis that is trained using differential privacy to speed up training while preserving privacy, and that predicts on encrypted data using FHE to keep users' data private while offering sufficient usability.
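The differential-privacy side of such training typically amounts to DP-SGD style per-example gradient clipping plus Gaussian noise. A schematic numpy version is shown below; it is not the authors' implementation, omits the FHE inference side, and uses made-up hyperparameters:

import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD update: clip each example's gradient, average, add Gaussian noise."""
    clipped = []
    for g in per_example_grads:                      # one gradient vector per example
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    noisy_mean = (np.mean(clipped, axis=0)
                  + np.random.normal(0.0, noise_mult * clip_norm / len(clipped),
                                     size=params.shape))
    return params - lr * noisy_mean

params = np.zeros(8)
grads = [np.random.randn(8) for _ in range(32)]      # stand-in per-example gradients
params = dp_sgd_step(params, grads)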

2021-06-24
Ali, Muhammad, Hu, Yim-Fun, Luong, Doanh Kim, Oguntala, George, Li, Jian-Ping, Abdo, Kanaan.  2020.  Adversarial Attacks on AI based Intrusion Detection System for Heterogeneous Wireless Communications Networks. 2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC). :1–6.
It has been recognized that artificial intelligence (AI) will play an important role in future societies, and AI has already been incorporated in many industries to improve business processes and automation. Although the aviation industry has successfully implemented flight management systems and autopilot to automate flight operations, full embrace of AI is expected to remain a challenge. Given the rigorous validation process and the requirements for the highest level of safety standards and risk management, AI needs to prove that it is safe to operate. This paper addresses the safety issues of AI deployment in an aviation network compatible with the Future Communication Infrastructure, which utilizes heterogeneous wireless access technologies for communications between the aircraft and the ground networks. It further considers the exploitation of software defined networking (SDN) technologies in the ground network, while the adoption of SDN in the airborne network can be optional. Due to the centralized management in an SDN-based network, the SDN controller can become a single point of failure or a target for cyber attacks. To counter such attacks, an intrusion detection system that utilises AI techniques, more specifically a deep neural network (DNN), is considered. However, an adversary can also target the AI-based intrusion detection system itself. This paper examines the impact of AI security attacks on the performance of the DNN algorithm. Poisoning attacks targeting the NSL-KDD dataset used to train the DNN algorithm were launched against the intrusion detection system. Results showed that the performance of the DNN algorithm was significantly degraded in terms of mean square error, accuracy, precision, and recall.
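A minimal way to reproduce the flavor of such a poisoning experiment (synthetic data and a small scikit-learn MLP standing in for the paper's DNN and the NSL-KDD data, with label flipping as one simple poisoning strategy) is:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for poison_rate in [0.0, 0.1, 0.3]:                 # fraction of flipped training labels
    y_poisoned = y_tr.copy()
    n_flip = int(poison_rate * len(y_poisoned))
    idx = np.random.RandomState(0).choice(len(y_poisoned), n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]           # label-flipping poisoning
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
    clf.fit(X_tr, y_poisoned)
    print(poison_rate, accuracy_score(y_te, clf.predict(X_te)))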
2021-09-21
bin Asad, Ashub, Mansur, Raiyan, Zawad, Safir, Evan, Nahian, Hossain, Muhammad Iqbal.  2020.  Analysis of Malware Prediction Based on Infection Rate Using Machine Learning Techniques. 2020 IEEE Region 10 Symposium (TENSYMP). :706–709.
In this modern, technological age, the internet has been adopted by the masses, and with it, the danger of malicious attacks by cybercriminals has increased. These attacks are carried out via malware and have resulted in billions of dollars of financial damage, which makes the prevention of malicious attacks an essential part of the battle against cybercrime. In this paper, we apply machine learning algorithms to predict the malware infection rates of computers based on their features, using supervised machine learning and gradient boosting algorithms. We collected a publicly available dataset, which was divided into two parts: a training set and a testing set. After conducting four different experiments using the aforementioned algorithms, we found that LightGBM is the best model, with an AUC score of 0.73926.
2021-03-09
Matsunaga, Y., Aoki, N., Dobashi, Y., Kojima, T..  2020.  A Black Box Modeling Technique for Distortion Stomp Boxes Using LSTM Neural Networks. 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC). :653–656.
This paper describes experimental results on modeling distortion stomp boxes using a machine learning approach. The proposed technique models a distortion stomp box as a neural network consisting of LSTM layers. In this approach, the neural network is employed to learn the nonlinear behavior of the distortion stomp boxes. All the parameters for replicating the distortion sound are estimated through a training process using the input and output signals obtained from some commercial stomp boxes. The experimental results indicate that the proposed technique may be appropriate for replicating the distortion sound with the well-trained neural networks.
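Black-box pedal modeling of this kind usually reduces to sample-level sequence regression: feed the clean waveform in, regress the recorded distorted waveform out. A compact PyTorch sketch follows; the layer sizes, frame length, and the tanh stand-in for the pedal are assumptions, not the paper's configuration:

import torch
import torch.nn as nn

class DistortionModel(nn.Module):
    """Toy LSTM model mapping a clean audio frame to the distorted output frame."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2,
                            batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, samples, 1) clean waveform
        h, _ = self.lstm(x)
        return self.out(h)             # predicted distorted waveform, same shape

model = DistortionModel()
clean = torch.randn(8, 2048, 1)        # stand-in for recorded clean input frames
dirty = torch.tanh(3 * clean)          # stand-in for the pedal's measured output
loss = nn.MSELoss()(model(clean), dirty)
loss.backward()                        # a full training loop would add an optimizer step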
2021-06-24
Połap, Dawid, Srivastava, Gautam, Jolfaei, Alireza, Parizi, Reza M..  2020.  Blockchain Technology and Neural Networks for the Internet of Medical Things. IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :508–513.
In today's technological climate, users require fast automation and digitization of results for large amounts of data at record speeds, especially in the field of medicine, where each patient is often asked to undergo many different examinations within one diagnosis or treatment. Each examination can help in the diagnosis or prediction of further disease progression. Furthermore, all data produced by these examinations must be stored somewhere and made available for analysis to medical practitioners who may be in geographically diverse locations. The current medical climate leans towards remote patient monitoring and AI-assisted diagnosis. To make this possible, medical data should ideally be secured and made accessible to many medical practitioners, which also makes the data prone to malicious entities. Medical information has inherent value to malicious entities due to its privacy-sensitive nature. Furthermore, if access to data is made available in a distributed way to AI algorithms (particularly neural networks) for further analysis and diagnosis, the danger to the data may increase (e.g., model poisoning through the introduction of fake data). In this paper, we propose a federated learning approach that uses decentralized learning with blockchain-based security, training intelligent systems on distributed, locally stored data for the benefit of all patients. Our work in progress hopes to contribute to the latest trend of Internet of Medical Things security and privacy.
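The federated-averaging core of such a design (ignoring the blockchain layer, whose role here would mainly be to log and validate the exchanged updates) can be sketched as below; the linear-regression local model and the hospital data are placeholders:

import numpy as np

def local_update(weights, local_X, local_y, lr=0.1, epochs=5):
    """Toy local training: a few gradient steps of linear regression on one site's data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = local_X.T @ (local_X @ w - local_y) / len(local_y)
        w -= lr * grad
    return w

def federated_round(global_w, hospitals):
    """One FedAvg round: each site trains locally, only weights are shared and averaged."""
    updates = [local_update(global_w, X, y) for X, y in hospitals]
    sizes = np.array([len(y) for _, y in hospitals], dtype=float)
    return np.average(updates, axis=0, weights=sizes)   # size-weighted average

rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(100, 4)), rng.normal(size=100)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, hospitals)                   # raw patient data never leaves a site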
2021-02-23
Shah, A., Clachar, S., Minimair, M., Cook, D..  2020.  Building Multiclass Classification Baselines for Anomaly-based Network Intrusion Detection Systems. 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA). :759–760.
This paper showcases multiclass classification baselines using different machine learning algorithms and neural networks for distinguishing legitimate network traffic from direct and obfuscated network intrusions. The baselines are derived from the Advanced Security Network Metrics & Tunneling Obfuscations dataset, which captured legitimate and obfuscated malicious TCP communications on selected vulnerable network services. The multiclass classification NIDS is able to distinguish obfuscated and direct network intrusions with up to 95% accuracy.
2021-01-15
Korolev, D., Frolov, A., Babalova, I..  2020.  Classification of Websites Based on the Content and Features of Sites in Onion Space. 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :1680–1683.
This paper describes a method for classifying onion sites. Based on the results of the research, a model of the most widespread type of site in onion space is built. To create such a model, a specially trained neural network is used. The neural network classifies sites according to five different features: use of an authentication system, corporate email, readable URL, feedback, and type of onion site. Statistics on the most widespread types of websites on the dark net are given.
2021-09-21
Brzezinski Meyer, Maria Laura, Labit, Yann.  2020.  Combining Machine Learning and Behavior Analysis Techniques for Network Security. 2020 International Conference on Information Networking (ICOIN). :580–583.
Network traffic attacks are increasingly common and varied, which is a big problem, especially when the target network is centralized. The creation of intrusion detection systems (IDS) capable of detecting various types of attacks is therefore necessary. Machine learning algorithms are widely used in the classification of data and bring good results in the area of computer networks. In addition, the analysis of entropy and of distances between data sets is also very effective in detecting anomalies. However, each technique has its limitations, so this work studies their combination in order to improve their performance and create a new intrusion detection system capable of detecting some of the most common attacks well. Reliability indices are used as metrics for the combination decision, and they are updated with each new dataset according to the decisions made earlier.
Wu, Qiang, Zhang, Jiliang.  2020.  CT PUF: Configurable Tristate PUF against Machine Learning Attacks. 2020 IEEE International Symposium on Circuits and Systems (ISCAS). :1–5.
Strong physical unclonable functions (PUFs) are a promising lightweight hardware security primitive for device authentication, but they are vulnerable to machine learning attacks. This paper demonstrates that even a recently proposed dual-mode PUF can still be broken. To improve security, this paper proposes a highly flexible, machine-learning-resistant configurable tristate (CT) PUF which utilizes the response generated in the working state of an Arbiter PUF to XOR the challenge input and response output of the other two working states (a ring oscillator (RO) PUF and a bistable ring (BR) PUF). The proposed CT PUF is implemented on Xilinx Artix-7 FPGAs, and the experimental results show that the modeling accuracy of logistic regression and artificial neural networks is reduced to the mid-50% range.
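The XOR-obfuscation idea can be pictured with a toy software model. The sketch below uses two instances of the standard additive-delay Arbiter PUF model as stand-ins (the paper's hardware construction combines Arbiter, RO, and BR PUFs); it only shows how one stage's response can mask another stage's challenge and response:

import numpy as np

def arbiter_puf_response(challenge, delays):
    """Toy additive-delay Arbiter PUF model: sign of a weighted parity transform."""
    phi = np.cumprod(1 - 2 * challenge[::-1])[::-1]       # standard parity feature map
    return int(delays @ phi > 0)

rng = np.random.default_rng(1)
delays_a, delays_b = rng.normal(size=64), rng.normal(size=64)
challenge = rng.integers(0, 2, size=64)

r_a = arbiter_puf_response(challenge, delays_a)           # first stage response
masked_challenge = challenge ^ r_a                        # XOR-obfuscate the next challenge
r_b = arbiter_puf_response(masked_challenge, delays_b)
response = r_b ^ r_a                                      # XOR-obfuscate the output as well

The point of the masking is that an attacker collecting challenge-response pairs no longer sees the raw input-output behavior of any single easily modeled stage.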
2021-06-24
Dang, Tran Khanh, Truong, Phat T. Tran, Tran, Pi To.  2020.  Data Poisoning Attack on Deep Neural Network and Some Defense Methods. 2020 International Conference on Advanced Computing and Applications (ACOMP). :15–22.
In recent years, Artificial Intelligence has disruptively changed information technology and software engineering, with a proliferation of technologies and applications based on it. However, recent research shows that AI models in general, and Deep Learning models in particular, are vulnerable to being hacked and can be misused for bad purposes. In this paper, we carry out a brief review of data poisoning attacks, one of two recently emerging dangerous attack classes, and the state-of-the-art defense methods for this problem. Finally, we discuss current challenges and future developments.
2022-11-08
Wshah, Safwan, Shadid, Reem, Wu, Yuhao, Matar, Mustafa, Xu, Beilei, Wu, Wencheng, Lin, Lei, Elmoudi, Ramadan.  2020.  Deep Learning for Model Parameter Calibration in Power Systems. 2020 IEEE International Conference on Power Systems Technology (POWERCON). :1–6.
In power systems, having accurate device models is crucial for grid reliability, availability, and resiliency. Existing model calibration methods based on mathematical approaches often lead to multiple solutions due to the ill-posed nature of the problem, which would require further interventions from the field engineers in order to select the optimal solution. In this paper, we present a novel deep-learning-based approach for model parameter calibration in power systems. Our study focused on the generator model as an example. We studied several deep-learning-based approaches including 1-D Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU), which were trained to estimate model parameters using simulated Phasor Measurement Unit (PMU) data. Quantitative evaluations showed that our proposed methods can achieve high accuracy in estimating the model parameters, i.e., achieved a 0.0079 MSE on the testing dataset. We consider these promising results to be the basis for further exploration and development of advanced tools for model validation and calibration.
2021-11-30
Li, Gangqiang, Wu, Sissi Xiaoxiao, Zhang, Shengli, Li, Qiang.  2020.  Detect Insider Attacks Using CNN in Decentralized Optimization. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :8758–8762.
This paper studies the security issue of a gossip-based distributed projected gradient (DPG) algorithm when it is applied to solve a decentralized multi-agent optimization problem. It is known that the gossip-based DPG algorithm is vulnerable to insider attacks because each agent locally estimates its (sub)gradient without any supervision. This work leverages a convolutional neural network (CNN) to perform detection and localization of the insider attackers. Compared to previous work, the CNN can learn appropriate decision functions from the original state information without preprocessing through artificially designed rules, thereby alleviating the dependence on complex pre-designed models. Simulation results demonstrate that the proposed CNN-based approach can effectively improve the performance of detecting and localizing malicious agents, as compared with the conventional pre-designed score-based model.
2021-11-29
Yin, Yifei, Zulkernine, Farhana, Dahan, Samuel.  2020.  Determining Worker Type from Legal Text Data Using Machine Learning. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :444–450.
This project addresses a classic employment law question in Canada and elsewhere using a machine learning approach: how do we know whether a worker is an employee or an independent contractor? This is a central issue for self-represented litigants insofar as these two legal categories entail very different rights and employment protections. In this interdisciplinary research study, we collaborated with the Conflict Analytics Lab to develop machine learning models aimed at determining whether a worker is an employee or an independent contractor. We present a number of supervised learning models, including a neural network model, implemented using data labeled by law researchers, and compare the accuracy of the models. Our neural network model achieved an accuracy rate of 91.5%. A critical discussion follows to identify the key features in the data that influence the accuracy of our models and to provide insights about the case outcomes.
2021-05-13
Sun, Wenhui, Wang, Kejin, Zhu, Aichun.  2020.  The Development of Artificial Intelligence Technology And Its Application in Communication Security. 2020 International Conference on Computer Engineering and Application (ICCEA). :752–756.
Artificial intelligence has been widely used in industries such as smart manufacturing, medical care, and home furnishings. Among these applications, its value in communication security is particularly important. This paper further explores artificial intelligence technology and its applications, and gives a detailed analysis of its development, standardization, and application.
2021-11-29
Hu, Shengze, He, Chunhui, Ge, Bin, Liu, Fang.  2020.  Enhanced Word Embedding Method in Text Classification. 2020 6th International Conference on Big Data and Information Analytics (BigDIA). :18–22.
For natural language processing (NLP) tasks, word embedding technology directly affects the accuracy of deep neural network algorithms. Current word embedding methods cannot realize the coexistence of words and phrases in the same vector space; therefore, we propose an enhanced word embedding (EWE) method. Before completing the word embedding, this method introduces a unique sentence reorganization technique to rewrite all the sentences in the original training corpus. Then, the original corpus and the reorganized corpus are merged together as the training corpus of the distributed word embedding model, so as to realize the coexistence of words and phrases in the same vector space. We carried out experiments to demonstrate the effectiveness of the EWE algorithm on three classic benchmark datasets. The results show that the EWE method can significantly improve the classification performance of the CNN model.
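The corpus-merging step can be pictured with gensim's word2vec. The reorganize function below is a hypothetical stand-in for the paper's sentence reorganization technique (here it simply joins adjacent words into phrase tokens), and the tiny corpus is illustrative only:

from gensim.models import Word2Vec

def reorganize(sentence):
    """Hypothetical stand-in: rewrite a sentence so adjacent words become phrase tokens."""
    return [f"{a}_{b}" for a, b in zip(sentence, sentence[1:])]

corpus = [["deep", "neural", "network"], ["word", "embedding", "model"]]
merged = corpus + [reorganize(s) for s in corpus]     # original + reorganized sentences
model = Word2Vec(sentences=merged, vector_size=50, window=3, min_count=1, epochs=20)
print(model.wv.most_similar("deep_neural", topn=2))   # phrases now live in the same space as words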
2022-11-08
Yang, Shaofei, Liu, Longjun, Li, Baoting, Sun, Hongbin, Zheng, Nanning.  2020.  Exploiting Variable Precision Computation Array for Scalable Neural Network Accelerators. 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS). :315–319.
In this paper, we present a flexible Variable Precision Computation Array (VPCA) component for different accelerators, which leverages a sparsification scheme for activations and a low-bit serial-parallel combination computation unit to improve the efficiency and resiliency of accelerators. The VPCA can dynamically decompose the width of activations/weights (from 32 bits down to 3 bits in different accelerators) into 2-bit serial computation units, while the 2-bit computing units can be combined in parallel for high throughput. We propose an on-the-fly compressing and calculating strategy, SLE-CLC (single lane encoding, cross lane calculation), which further improves the performance of 2-bit parallel computing. Experimental results on image classification datasets show that VPCA outperforms DaDianNao, Stripes, and Loom-2bit by 4.67×, 2.42×, and 1.52×, respectively, without other overhead on convolution layers.
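The 2-bit serial idea can be checked with a few lines of plain Python: split one operand into 2-bit slices, multiply each slice, and recombine with the appropriate shifts. This is a worked toy example of bit-serial arithmetic, not the VPCA hardware datapath:

def mul_2bit_serial(activation, weight, width=8):
    """Multiply by processing the activation two bits at a time (LSB first)."""
    acc = 0
    for step in range(0, width, 2):
        slice2 = (activation >> step) & 0b11          # one 2-bit slice of the activation
        acc += (slice2 * weight) << step              # partial product, shifted into place
    return acc

assert mul_2bit_serial(173, 59) == 173 * 59           # matches ordinary multiplication

Lower-precision activations simply need fewer serial steps, which is where the precision-dependent speedup comes from.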