Biblio

Filters: Keyword is DNN
2023-07-21
Avula, Himaja, R, Ranjith, S Pillai, Anju.  2022.  CNN based Recognition of Emotion and Speech from Gestures and Facial Expressions. 2022 6th International Conference on Electronics, Communication and Aerospace Technology. :1360–1365.
The major mode of communication between hearing-impaired or mute people and others is sign language. Previously, most sign language recognition systems were designed simply to recognize hand signs and convey them as text. The proposed model, however, aims to provide speech to the mute. First, hand gestures for sign language recognition and facial emotions are trained using a CNN (Convolutional Neural Network), and then an emotion-to-speech model is trained. Finally, hand gestures and facial emotions are combined to realize the emotion and speech.
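The abstract names CNNs for both gesture and expression recognition but gives no architecture. The sketch below is a generic small image-classification CNN of that kind; the input shape, class count, and layer sizes are illustrative assumptions, not the authors' model.

```python
# Minimal sketch (not the authors' exact architecture): a small Keras CNN of the kind
# typically used to classify hand-gesture or facial-expression images.
from tensorflow.keras import layers, models

def build_gesture_cnn(input_shape=(64, 64, 1), num_classes=26):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # one unit per sign/emotion class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```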
2023-03-31
Moraffah, Raha, Liu, Huan.  2022.  Query-Efficient Target-Agnostic Black-Box Attack. 2022 IEEE International Conference on Data Mining (ICDM). :368–377.
Adversarial attacks have recently been proposed to scrutinize the security of deep neural networks. Most black-box adversarial attacks, which have partial access to the target through queries, are target-specific; e.g., they require a well-trained surrogate that accurately mimics a given target. In contrast, target-agnostic black-box attacks are developed to attack any target; e.g., they learn a generalized surrogate that can adapt to any target via fine-tuning on samples queried from the target. Despite their success, current state-of-the-art target-agnostic attacks require a tremendous number of fine-tuning steps and consequently an immense number of queries to the target to generate successful attacks. The high query complexity of these attacks makes them easily detectable and thus defendable. We propose a novel query-efficient target-agnostic attack that trains a generalized surrogate network to output the adversarial directions w.r.t. the inputs and equips it with an effective fine-tuning strategy that only fine-tunes the surrogate when it fails to provide useful directions to generate the attacks. In particular, we show that to effectively adapt to any target and generate successful attacks, it is sufficient to fine-tune the surrogate with informative samples that help the surrogate get out of the failure mode with additional information on the target's local behavior. Extensive experiments on the CIFAR-10 and CIFAR-100 datasets demonstrate that the proposed target-agnostic approach can generate highly successful attacks for any target network with very few fine-tuning steps and thus a significantly smaller number of queries (reduced by several orders of magnitude) compared to the state-of-the-art baselines.
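A hedged sketch of the high-level loop the abstract describes: a surrogate proposes adversarial directions, the target is queried once per candidate, and the surrogate is fine-tuned only when its direction fails. The `surrogate.adversarial_direction` and `surrogate.finetune` methods and all constants are placeholders, not the paper's implementation.

```python
# Sketch of the query-efficient, failure-driven fine-tuning loop (assumed interface).
import numpy as np

def attack(x, y_true, surrogate, query_target, epsilon=8/255, max_rounds=10):
    x_adv = x.copy()
    for _ in range(max_rounds):
        direction = surrogate.adversarial_direction(x_adv)   # surrogate's direction w.r.t. the input
        x_adv = np.clip(x_adv + epsilon * np.sign(direction), 0.0, 1.0)
        y_pred = query_target(x_adv)                          # one query to the black-box target
        if y_pred != y_true:                                  # attack succeeded, stop querying
            return x_adv, True
        # failure mode: fine-tune the surrogate on this informative sample, using the
        # target's observed prediction as information about its local behavior
        surrogate.finetune(x_adv, y_pred)
    return x_adv, False
```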
2022-03-14
Basnet, Manoj, Poudyal, Subash, Ali, Mohd. Hasan, Dasgupta, Dipankar.  2021.  Ransomware Detection Using Deep Learning in the SCADA System of Electric Vehicle Charging Station. 2021 IEEE PES Innovative Smart Grid Technologies Conference - Latin America (ISGT Latin America). :1–5.
Supervisory control and data acquisition (SCADA) systems have been continuously leveraging the evolution of network architectures, communication protocols, next-generation communication techniques (5G, 6G, Wi-Fi 6), and the Internet of Things (IoT). However, the SCADA system has become the most profitable and alluring target for ransomware attackers. This paper proposes a novel deep learning-based ransomware detection framework for the SCADA-controlled electric vehicle charging station (EVCS), with a performance analysis of three deep learning algorithms, namely the deep neural network (DNN), 1D convolutional neural network (CNN), and long short-term memory (LSTM) recurrent neural network. All three deep learning-based simulated frameworks achieve around 97% average accuracy (ACC) and more than 98% average area under the curve (AUC) and average F1-score under 10-fold stratified cross-validation, with an average false alarm rate (FAR) of less than 1.88%. A ransomware-driven distributed denial of service (DDoS) attack tends to shift the state of charge (SOC) profile by exceeding the SOC control thresholds. Also, a ransomware-driven false data injection (FDI) attack has the potential to damage the entire BES or physical system by manipulating the SOC control thresholds. Which deep learning algorithm to deploy is a design choice and an optimization issue, based on the tradeoffs between the performance metrics.
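A sketch of one of the three evaluated models (the 1D CNN) under the stated 10-fold stratified cross-validation. The feature dimensionality, filter counts, and training settings are illustrative assumptions; only the model family and evaluation protocol come from the abstract.

```python
# Hedged sketch: 1D CNN ransomware detector evaluated with 10-fold stratified CV.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras import layers, models

def build_cnn1d(n_features):
    model = models.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.Conv1D(32, 3, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),          # ransomware vs. benign traffic
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

def cross_validate(X, y, n_splits=10):
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        model = build_cnn1d(X.shape[1])
        model.fit(X[train_idx, :, None], y[train_idx], epochs=20, batch_size=64, verbose=0)
        scores.append(model.evaluate(X[test_idx, :, None], y[test_idx], verbose=0))
    return np.mean(scores, axis=0)   # average loss, accuracy, AUC across the folds
```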
2021-09-30
Mahmoud, Loreen, Praveen, Raja.  2020.  Network Security Evaluation Using Deep Neural Network. 2020 15th International Conference for Internet Technology and Secured Transactions (ICITST). :1–4.
One of the most significant systems in computer network security assurance is the assessment of computer network security. With the goal of finding an effective method for performing the process of security evaluation in a computer network, this paper uses a deep neural network for the task of security evaluation. The DNN is built with Python in the Spyder IDE; it is trained and tested on 17 network security indicators, and the output represents one of the security levels that have already been defined. The major purpose is to enhance the ability to determine the security level of a computer network accurately based on its selected security indicators. The method used in this paper to evaluate network security is simple, reduces interference from human factors, and can obtain correct evaluation results rapidly. We analyze the results to decide whether this method enhances the accuracy of the network security evaluation process.
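A minimal sketch of the described setup: a DNN mapping 17 network security indicators to one of a set of predefined security levels. The number of levels and the hidden-layer sizes are assumptions; the abstract only states the 17 indicators and the Python toolchain.

```python
# Hedged sketch of the security-evaluation DNN (layer widths and level count assumed).
from tensorflow.keras import layers, models

NUM_INDICATORS = 17        # stated in the abstract
NUM_SECURITY_LEVELS = 5    # assumption: e.g., very low .. very high

model = models.Sequential([
    layers.Input(shape=(NUM_INDICATORS,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(NUM_SECURITY_LEVELS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, ...) where each row of X_train holds the 17 indicators
```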
2021-06-01
Xu, Lei, Gao, Zhimin, Fan, Xinxin, Chen, Lin, Kim, Hanyee, Suh, Taeweon, Shi, Weidong.  2020.  Blockchain Based End-to-End Tracking System for Distributed IoT Intelligence Application Security Enhancement. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1028–1035.
IoT devices provide a rich data source that was not available in the past, which is valuable for a wide range of intelligence applications, especially deep neural network (DNN) applications that are data-thirsty. An established DNN model provides useful analysis results that can, in turn, improve the operation of IoT systems. The progress in distributed/federated DNN training further unleashes the potential of integration of IoT and intelligence applications. When a large number of IoT devices are deployed in different physical locations, distributed training allows training modules to be deployed to multiple edge data centers that are close to the IoT devices to reduce the latency and movement of large amounts of data. In practice, these IoT devices and edge data centers are usually owned and managed by different parties, who do not fully trust each other or have conflicting interests. It is hard to coordinate them to provide end-to-end integrity protection of the DNN construction and application with classical security enhancement tools. For example, one party may share an incomplete data set with others, or contribute a modified sub-DNN model to manipulate the aggregated model and affect the decision-making process. To mitigate this risk, we propose a novel blockchain based end-to-end integrity protection scheme for DNN applications integrated with an IoT system in the edge computing environment. The protection system leverages a set of cryptography primitives to build a blockchain adapted for edge computing that is scalable to handle a large number of IoT devices. The customized blockchain is integrated with a distributed/federated DNN to offer integrity and authenticity protection services.
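As an illustration only, the toy append-only hash chain below records hashes of data shards and sub-model updates, showing the kind of end-to-end integrity evidence such a ledger can provide. It is not the paper's blockchain protocol or cryptographic design; names and fields are assumptions.

```python
# Toy integrity ledger: each block commits to the previous block and to an update hash.
import hashlib, json, time

class IntegrityLedger:
    def __init__(self):
        self.blocks = [{"index": 0, "prev": "0" * 64, "payload": "genesis",
                        "ts": time.time(), "hash": None}]
        self.blocks[0]["hash"] = self._hash(self.blocks[0])

    @staticmethod
    def _hash(block):
        body = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def append(self, payload):
        """payload: e.g. {"party": "edge-dc-1", "update_sha256": "..."} (illustrative)."""
        block = {"index": len(self.blocks), "prev": self.blocks[-1]["hash"],
                 "payload": payload, "ts": time.time(), "hash": None}
        block["hash"] = self._hash(block)
        self.blocks.append(block)

    def verify(self):
        # recompute every hash and check the chaining; any tampered update breaks the chain
        for i, b in enumerate(self.blocks):
            if b["hash"] != self._hash(b):
                return False
            if i > 0 and b["prev"] != self.blocks[i - 1]["hash"]:
                return False
        return True
```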
2021-05-13
Venceslai, Valerio, Marchisio, Alberto, Alouani, Ihsen, Martina, Maurizio, Shafique, Muhammad.  2020.  NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

Due to their proven efficiency, machine-learning systems are deployed in a wide range of complex real-life problems. More specifically, Spiking Neural Networks (SNNs) emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems. While these systems are going mainstream, they have inherent security and reliability issues. In this paper, we propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues through a high-level attack. In particular, we trigger a fault-injection-based sneaky hardware backdoor through a carefully crafted adversarial input noise. Our results on Deep Neural Networks (DNNs) and SNNs show a serious integrity threat to state-of-the-art machine-learning techniques.
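The software side of the bit-flip fault model can be sketched as follows: flip one bit of a stored float32 weight and observe the effect on predictions. The hardware trigger mechanism (activating the flip via crafted input noise, as NeuroAttack does) is not modeled here; the chosen index and bit are illustrative.

```python
# Sketch of a single bit-flip fault injection on a DNN weight tensor.
import numpy as np

def flip_bit(weights, flat_index, bit):
    """Flip `bit` (0..31) of the float32 weight at flat position `flat_index`."""
    w = weights.astype(np.float32)
    as_int = w.view(np.uint32)                      # reinterpret the float bits as integers
    mask = np.uint32(1 << bit)
    as_int.flat[flat_index] = as_int.flat[flat_index] ^ mask
    return as_int.view(np.float32)

# Flipping a high-order exponent bit can change a weight by many orders of magnitude.
w = np.array([[0.5, -1.25], [2.0, 0.75]], dtype=np.float32)
w_faulty = flip_bit(w, flat_index=0, bit=30)
print(w.flat[0], "->", w_faulty.flat[0])
```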

2021-04-27
Marchisio, A., Nanfa, G., Khalid, F., Hanif, M. A., Martina, M., Shafique, M..  2020.  Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Spiking Neural Networks (SNNs) claim to present many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we investigate the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities in SNNs and DNNs w.r.t. adversarial noise. Afterwards, we propose a novel black-box attack methodology, i.e., without knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for the given SNN. We perform an in-depth evaluation of a Spiking Deep Belief Network (SDBN) and a DNN having the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t. adversarial examples. Our work opens new avenues of research towards the robustness of SNNs, considering their similarities to the functionality of the human brain.
2021-03-01
Sarathy, N., Alsawwaf, M., Chaczko, Z..  2020.  Investigation of an Innovative Approach for Identifying Human Face-Profile Using Explainable Artificial Intelligence. 2020 IEEE 18th International Symposium on Intelligent Systems and Informatics (SISY). :155–160.
Human identification is a well-researched topic that keeps evolving. Advancement in technology has made it easy to train models or use ones that have already been created to detect several features of the human face. When it comes to identifying a human face from the side, there are many opportunities to advance biometric identification research further. This paper investigates human face identification based on the side profile by extracting the facial features and diagnosing the feature sets with geometric ratio expressions. These geometric ratio expressions are computed into feature vectors. The last stage involves the use of weighted means to measure similarity. This research addresses the problem using an eXplainable Artificial Intelligence (XAI) approach. Findings from this research, based on a small dataset, conclude that the approach used offers encouraging results. Further investigation could have a significant impact on how face profiles can be identified. Performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89.
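A hedged sketch of the described pipeline: turn side-profile landmark distances into geometric ratio features, then compare two profiles with a weighted mean of per-feature similarities. The specific landmarks, ratios, and weights are not given in the abstract, so those below are illustrative assumptions.

```python
# Sketch: geometric ratio feature vector + weighted-mean similarity for side profiles.
import numpy as np

def ratio_features(landmarks):
    """landmarks: dict of named 2D points from a face side-profile (assumed names)."""
    d = lambda a, b: np.linalg.norm(np.asarray(landmarks[a]) - np.asarray(landmarks[b]))
    return np.array([
        d("nose_tip", "chin") / d("forehead", "chin"),         # assumed ratio 1
        d("nose_tip", "nose_bridge") / d("forehead", "chin"),   # assumed ratio 2
        d("lips", "chin") / d("nose_tip", "chin"),              # assumed ratio 3
    ])

def weighted_similarity(f1, f2, weights=(0.4, 0.35, 0.25)):
    # per-feature similarity in [0, 1], combined by a weighted mean (weights assumed)
    per_feature = 1.0 - np.abs(f1 - f2) / np.maximum(np.abs(f1) + np.abs(f2), 1e-9)
    return float(np.dot(weights, per_feature))
```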
2020-11-02
Huang, S., Chen, Q., Chen, Z., Chen, L., Liu, J., Yang, S..  2019.  A Test Cases Generation Technique Based on an Adversarial Samples Generation Algorithm for Image Classification Deep Neural Networks. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :520–521.

Widely applied in various fields, deep learning (DL) is becoming the key driving force in industry. Although it has achieved great success in artificial intelligence tasks, deep learning, like traditional software, has defects that, once triggered, can cause unpredictable accidents and losses. In this paper, we propose a test case generation technique based on an adversarial samples generation algorithm for image classification deep neural networks (DNNs), which can generate a large number of good test cases for the testing of DNNs, especially when test cases are insufficient. We briefly introduce our method and implement the framework. We conduct experiments on some classic DNN models and datasets. We further evaluate the test set by using a coverage metric based on states of the DNN.
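The abstract mentions a coverage metric "based on states of the DNN" without naming it; neuron coverage is one common such metric and is sketched below as an assumption, not necessarily the authors' exact measure: the fraction of neurons whose activation exceeds a threshold for at least one test input.

```python
# Hedged sketch of neuron coverage over a test set, for a built Keras model.
import numpy as np
import tensorflow as tf

def neuron_coverage(model, test_inputs, threshold=0.0):
    # build a probe model that exposes every layer's output
    probe = tf.keras.Model(inputs=model.inputs,
                           outputs=[layer.output for layer in model.layers])
    layer_outputs = probe.predict(test_inputs, verbose=0)
    if not isinstance(layer_outputs, list):
        layer_outputs = [layer_outputs]
    covered, total = 0, 0
    for out in layer_outputs:
        out = out.reshape(out.shape[0], -1)                      # (n_samples, n_neurons)
        covered += int(np.any(out > threshold, axis=0).sum())    # neuron fired for some input
        total += out.shape[1]
    return covered / total
```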

2020-08-28
Gopinath, Divya, S. Pasareanu, Corina, Wang, Kaiyuan, Zhang, Mengshi, Khurshid, Sarfraz.  2019.  Symbolic Execution for Attribution and Attack Synthesis in Neural Networks. 2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :282–283.

This paper introduces DeepCheck, a new approach for validating Deep Neural Networks (DNNs) based on core ideas from program analysis, specifically from symbolic execution. DeepCheck implements techniques for lightweight symbolic analysis of DNNs and applies them in the context of image classification to address two challenging problems: 1) identification of important pixels (for attribution and adversarial generation); and 2) creation of adversarial attacks. Experimental results using the MNIST data-set show that DeepCheck's lightweight symbolic analysis provides a valuable tool for DNN validation.

2020-08-03
Kobayashi, Hiroyuki.  2019.  CEPHEID: the infrastructure-less indoor localization using lighting fixtures' acoustic frequency fingerprints. IECON 2019 - 45th Annual Conference of the IEEE Industrial Electronics Society. 1:6842–6847.
This paper deals with a new indoor localization scheme called “CEPHEID” that uses ceiling lighting fixtures. It is based on the fact that each lighting fixture has its own characteristic flickering pattern. The author then proposes a technique to identify individual lights using simple instruments and a DNN classifier. Thanks to its modest hardware requirements, CEPHEID can be implemented with a few simple discrete electronic components and an ordinary smartphone. A prototype “CEPHEID dongle” is also introduced in this paper. Finally, the validity of the author's method is examined by indoor positioning experiments.
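A hedged sketch of the fingerprinting idea: take a short light-intensity recording, compute its frequency spectrum (the fixture's flicker "fingerprint"), and feed it to a small DNN that predicts which fixture is overhead. The window length, spectrum size, and network shape are assumptions, not the paper's implementation.

```python
# Sketch: flicker-spectrum fingerprint + DNN classifier over known fixtures.
import numpy as np
from tensorflow.keras import layers, models

def flicker_fingerprint(signal, n_bins=256):
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    spectrum /= (spectrum.max() + 1e-9)            # normalize to make it level-independent
    return spectrum[:n_bins]                       # keep the low-frequency fingerprint bins

def build_classifier(n_bins=256, n_fixtures=20):
    model = models.Sequential([
        layers.Input(shape=(n_bins,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_fixtures, activation="softmax"),   # one class per known lighting fixture
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```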
2020-07-13
Andrew, J., Karthikeyan, J., Jebastin, Jeffy.  2019.  Privacy Preserving Big Data Publication On Cloud Using Mondrian Anonymization Techniques and Deep Neural Networks. 2019 5th International Conference on Advanced Computing Communication Systems (ICACCS). :722–727.

In recent trends, privacy preservation is the most predominant factor in big data analytics and cloud computing. Every organization collects personal data from the users actively or passively. Publishing this data for research and other analytics without removing Personally Identifiable Information (PII) will lead to a privacy breach. Existing anonymization techniques fail to maintain the balance between data privacy and data utility. In order to provide a trade-off between the privacy of the users and data utility, a Mondrian-based k-anonymity approach is proposed. To protect the privacy of high-dimensional data, a Deep Neural Network (DNN) based framework is proposed. The experimental results show that the proposed approach mitigates the information loss of the data without compromising privacy.
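The core Mondrian multidimensional partitioning step referenced in the abstract can be sketched as follows: recursively split the records on the quasi-identifier with the widest range, cutting at the median, and stop splitting when a child partition would hold fewer than k records. Generalization/recoding of each final partition is omitted, and numeric quasi-identifiers are assumed.

```python
# Sketch of Mondrian k-anonymity partitioning (numeric quasi-identifiers only).
import numpy as np

def mondrian_partition(records, k):
    """records: (n, q) array of numeric quasi-identifiers; returns a list of index arrays."""
    def split(idx):
        block = records[idx]
        spans = block.max(axis=0) - block.min(axis=0)
        dim = int(np.argmax(spans))                    # quasi-identifier with the widest range
        median = np.median(block[:, dim])
        left = idx[block[:, dim] <= median]
        right = idx[block[:, dim] > median]
        if len(left) < k or len(right) < k:            # cut not allowable: keep the whole block
            return [idx]
        return split(left) + split(right)
    return split(np.arange(len(records)))
```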

2020-05-08
Vigneswaran, Rahul K., Vinayakumar, R., Soman, K.P., Poornachandran, Prabaharan.  2018.  Evaluating Shallow and Deep Neural Networks for Network Intrusion Detection Systems in Cyber Security. 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–6.
The intrusion detection system (IDS) has become an essential layer in all the latest ICT systems due to an urge towards cyber safety in the day-to-day world. For reasons including uncertainty in finding the types of attacks and the increased complexity of advanced cyber attacks, IDS calls for the integration of Deep Neural Networks (DNNs). In this paper, DNNs have been utilized to predict the attacks on a Network Intrusion Detection System (N-IDS). A DNN with a learning rate of 0.1 is applied and run for 1000 epochs, and the KDDCup-99 dataset has been used for training and benchmarking the network. For comparison purposes, the training is done on the same dataset with several other classical machine learning algorithms and DNNs with layers ranging from 1 to 5. The results were compared, and it was concluded that a DNN of 3 layers has superior performance over all the other classical machine learning algorithms.
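A sketch matching the stated training setup (learning rate 0.1, 1000 epochs, KDDCup-99 features) with the 3-hidden-layer depth the abstract reports as best. The layer widths, the SGD optimizer, and the binary attack/normal framing are assumptions.

```python
# Hedged sketch of the 3-hidden-layer N-IDS DNN on KDDCup-99.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_nids_dnn(n_features=41, hidden=(256, 128, 64)):
    # KDDCup-99 records have 41 raw features; one-hot encoding of categorical fields changes this.
    model = models.Sequential([layers.Input(shape=(n_features,))])
    for width in hidden:                              # the 3 hidden layers
        model.add(layers.Dense(width, activation="relu"))
    model.add(layers.Dense(1, activation="sigmoid"))  # attack vs. normal connection
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_nids_dnn(); model.fit(X_train, y_train, epochs=1000, batch_size=1024)
```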
2019-12-16
Karve, Shreya, Nagmal, Arati, Papalkar, Sahil, Deshpande, S. A..  2018.  Context Sensitive Conversational Agent Using DNN. 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA). :475–478.
We investigate a method of building a closed-domain intelligent conversational agent using deep neural networks. A conversational agent is a dialog system intended to converse with a human, with a coherent structure. Our conversational agent uses a retrieval-based model that identifies the intent of the input user query and maps it to a knowledge base to return appropriate results. Human conversations are based on context, but existing conversational agents are context-insensitive. To overcome this limitation, our system uses a simple stack-based context identification and storage system. The conversational agent generates responses according to the current context of the conversation, allowing more human-like conversations.
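A hedged sketch of the stack-based context mechanism described in the abstract: the agent pushes each resolved intent (and its entities) onto a stack and consults the top of the stack when a follow-up query is underspecified. The intent classifier and knowledge-base lookup are placeholders, not the authors' DNN components.

```python
# Sketch: stack-based conversation context for a retrieval-based agent.
class ContextStack:
    def __init__(self):
        self._stack = []

    def push(self, intent, entities):
        self._stack.append({"intent": intent, "entities": entities})

    def current(self):
        return self._stack[-1] if self._stack else None

    def pop(self):
        return self._stack.pop() if self._stack else None


def respond(query, classify_intent, knowledge_base, context):
    intent, entities = classify_intent(query)            # e.g., a DNN intent classifier (assumed)
    if intent is None and context.current():             # underspecified follow-up question
        prev = context.current()
        intent, entities = prev["intent"], {**prev["entities"], **entities}
    context.push(intent, entities)
    key = (intent, tuple(sorted(entities.items())))
    return knowledge_base.get(key, "Sorry, I don't know.")
```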