Biblio

Found 1611 results

Filters: Keyword is security of data
2020-08-17
Yang, Shiman, Shi, Yijie, Guo, Fenzhuo.  2019.  Risk Assessment of Industrial Internet System By Using Game-Attack Graphs. 2019 IEEE 5th International Conference on Computer and Communications (ICCC). :1660–1663.
In this paper, we propose a game-attack graph-based risk assessment model for industrial Internet systems. First, non-destructive asset profiling is used to scan the components and devices in the system, along with their open services and communication protocols. The CNVD and CVE databases are then compared, using a search-engine keyword-segment matching method, to find vulnerabilities and generate an asset threat list. Second, an attack rule base is built from the network information, and the system is modeled with an attribute attack graph. Third, game theory is combined with the established model. Finally, the analysis is optimized and quantified to obtain the best attack path and the best defense strategy.
Al Ghazo, Alaa T., Kumar, Ratnesh.  2019.  Identification of Critical-Attacks Set in an Attack-Graph. 2019 IEEE 10th Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON). :0716–0722.
SCADA/ICS (Supervisory Control and Data Acquisition/Industrial Control Systems) networks are becoming targets of advanced multi-faceted attacks, and attack-graphs have been proposed to model complex attack scenarios that exploit interdependence among existing atomic vulnerabilities to stitch together the attack-paths that might compromise a system-level security property. While such analysis of attack scenarios enables security administrators to establish appropriate security measures, practical considerations of time and cost limit their ability to address all system vulnerabilities at once. In this paper, we propose an approach that identifies label-cuts to automatically find a set of critical-attacks that, when blocked, guarantee system security. We utilize the Strongly-Connected-Components (SCCs) of the given attack-graph to generate an abstracted version of it, a tree over the SCCs, and then use an iterative backward search over this tree to identify the set of backward-reachable SCCs, along with their outgoing edges and labels, to find a cut with a minimum number of labels that forms a critical-attacks set. We also report the implementation and validation of the proposed algorithm on a real-world case study, a SCADA network for a water treatment cyber-physical system.
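The label-cut idea can be illustrated on a toy graph. The brute-force sketch below is not the paper's SCC-tree search (which avoids this exponential enumeration); it simply returns the smallest set of exploit labels whose removal disconnects the attacker's entry node from the goal. Nodes and CVE labels are hypothetical.

```python
# Minimal sketch: brute-force the smallest label set whose removal
# disconnects "internet" from the goal node "plc" in a toy attack graph.
from itertools import combinations
import networkx as nx

G = nx.DiGraph()
G.add_edge("internet", "hmi", label="CVE-A")       # hypothetical exploits
G.add_edge("internet", "historian", label="CVE-B")
G.add_edge("hmi", "plc", label="CVE-C")
G.add_edge("historian", "plc", label="CVE-C")      # same exploit reused

def critical_attacks_set(G, start, goal):
    labels = {d["label"] for _, _, d in G.edges(data=True)}
    for k in range(1, len(labels) + 1):            # smallest cuts first
        for cut in combinations(sorted(labels), k):
            H = nx.DiGraph((u, v) for u, v, d in G.edges(data=True)
                           if d["label"] not in cut)
            H.add_nodes_from(G)                    # keep isolated nodes
            if not nx.has_path(H, start, goal):
                return set(cut)
    return labels

print(critical_attacks_set(G, "internet", "plc"))  # {'CVE-C'} blocks all paths
```

Patching the one exploit shared by all attack paths ("CVE-C" here) is exactly the time- and cost-saving the abstract motivates.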
2020-08-14
Zolfaghari, Majid, Salimi, Solmaz, Kharrazi, Mehdi.  2019.  Inferring API Correct Usage Rules: A Tree-based Approach. 2019 16th International ISC (Iranian Society of Cryptology) Conference on Information Security and Cryptology (ISCISC). :78—84.
The lack of knowledge about API correct usage rules is one of the main reasons that APIs are employed incorrectly by programmers, which in some cases leads to serious security vulnerabilities. However, finding a correct usage rule for an API is a time-consuming and error-prone task, particularly in the absence of API documentation. Existing approaches to extracting correct usage rules are mostly based on majority API usages, assuming the correct usage is prevalent. Although statistically extracting API correct usage rules achieves reasonable accuracy, it cannot work correctly in the absence of a fair number of sample usages. We propose inferring API correct usage rules independently of the number of sample usages by leveraging an API tree structure. In an API tree, each node is an API, and each node's children are the APIs called by the parent API. Starting from lower-level APIs, it is possible to infer their correct usage rules by utilizing the available correct usage rules of their children. We developed a tool based on this idea for inferring API correct usage rules hierarchically, applied it to the source code of the Linux kernel v4.3 drivers, and found 24 previously reported bugs.
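A minimal sketch of the tree idea, with hypothetical APIs and rules: a parent with no documented rule inherits the union of its callees' rules, propagated bottom-up.

```python
# Minimal sketch of bottom-up rule inference over an API call tree.
# APIs and rule strings are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class ApiNode:
    name: str
    rules: set = field(default_factory=set)      # known correct-usage rules
    children: list = field(default_factory=list) # APIs this API calls

def infer_rules(node: ApiNode) -> set:
    """Propagate children's rules upward to parents lacking documentation."""
    inherited = set()
    for child in node.children:
        inherited |= infer_rules(child)
    if not node.rules:           # undocumented API: infer from its callees
        node.rules = set(inherited)
    return node.rules | inherited

kmalloc = ApiNode("kmalloc", {"check return for NULL"})
mutex_lock = ApiNode("mutex_lock", {"pair with mutex_unlock"})
helper = ApiNode("my_alloc_locked", children=[kmalloc, mutex_lock])
infer_rules(helper)
print(helper.rules)              # inherits both callees' rules
```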
Gu, Zuxing, Wu, Jiecheng, Liu, Jiaxiang, Zhou, Min, Gu, Ming.  2019.  An Empirical Study on API-Misuse Bugs in Open-Source C Programs. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 1:11—20.
Today, large and complex software is developed with integrated components using application programming interfaces (APIs). Correct usage of APIs in practice presents a challenge due to implicit constraints, such as call conditions or call orders. API misuse, i.e., violation of these constraints, is a well-known source of bugs, some of which can cause serious security vulnerabilities. Although researchers have developed many API-misuse detectors over the last two decades, recent studies show that API misuses are still prevalent. In this paper, we provide a comprehensive empirical study on API-misuse bugs in open-source C programs. To understand the nature of API misuses in practice, we analyze 830 API-misuse bugs from six popular programs across different domains. For all the studied bugs, we summarize their root causes, fix patterns and usage statistics. Furthermore, to understand the capabilities and limitations of state-of-the-art static analysis detectors for API-misuse detection, we develop APIMU4C, a dataset of API-misuse bugs in C code based on our empirical study results, and evaluate three widely-used detectors on it qualitatively and quantitatively. We share all the findings and present possible directions towards more powerful API-misuse detectors.
Ge, Jingquan, Gao, Neng, Tu, Chenyang, Xiang, Ji, Liu, Zeyi.  2019.  More Secure Collaborative APIs Resistant to Flush+Reload and Flush+Flush Attacks on ARMv8-A. 2019 26th Asia-Pacific Software Engineering Conference (APSEC). :410—417.
With the popularity of smart devices such as mobile phones and tablets, the security of the widely used ARMv8-A processor has received more and more attention. Flush+Reload and Flush+Flush cache attacks have become two of the most important security threats due to their low noise and high resolution. Researchers have proposed many defenses against these attacks, but the existing methods have various shortcomings. Runtime defenses using hardware performance counters cannot detect attacks quickly enough, cannot effectively detect Flush+Flush, or suffer high false-positive rates. Static code analysis schemes are powerless against obfuscation techniques. Approaches that permanently reduce timer resolution can only be applied in browser products, not system-wide. In this paper, we design two more secure collaborative APIs, a flush operation API and a high-resolution time API, which together resist Flush+Reload and Flush+Flush attacks. When the flush operation API is called, the high-resolution time API temporarily reduces its resolution and later restores it automatically. Moreover, the flush operation API can also detect and handle suspected Flush+Reload and Flush+Flush attacks. Attack and performance comparison experiments show that the two APIs are safer than existing defenses and that the performance losses are acceptable.
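The paper's APIs are native ARMv8-A code; purely to illustrate the collaboration between the two calls, here is a Python sketch of the coordination logic. The window length and coarse granularity are assumed values, not the paper's.

```python
# Minimal sketch: a flush call opens a short window during which the
# "high-resolution" timer API rounds its result down to coarse ticks.
import time

_COARSE_UNTIL = 0.0
_COARSE_WINDOW_S = 0.001           # assumed: degrade timer for 1 ms per flush
_COARSE_GRANULARITY_NS = 100_000   # assumed: 100 us ticks while degraded

def secure_flush(buf):
    """Flush stand-in; records the time so the timer API can react."""
    global _COARSE_UNTIL
    _COARSE_UNTIL = time.monotonic() + _COARSE_WINDOW_S
    # ... the actual cache-line flush would happen here ...

def secure_time_ns() -> int:
    """High-resolution time, except right after a flush."""
    t = time.monotonic_ns()
    if time.monotonic() < _COARSE_UNTIL:     # inside the coarse window
        t -= t % _COARSE_GRANULARITY_NS      # round down: low resolution
    return t

secure_flush(None)
print(secure_time_ns())  # coarsened value while the window is active
```

The coarsened timer denies the attacker the cycle-accurate measurements both Flush+Reload and Flush+Flush depend on, while normal code regains full resolution once the window expires.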
Singleton, Larry, Zhao, Rui, Song, Myoungkyu, Siy, Harvey.  2019.  FireBugs: Finding and Repairing Bugs with Security Patterns. 2019 IEEE/ACM 6th International Conference on Mobile Software Engineering and Systems (MOBILESoft). :30—34.
Security is often a critical problem in software systems: the consequences of security failures can lead to substantial economic loss or extensive environmental damage. Developing secure software is challenging, and retrofitting existing systems to introduce security is even harder. In this paper, we propose an automated approach for Finding and Repairing Bugs based on security patterns (FireBugs), to repair defects causing security vulnerabilities. To locate and fix security bugs, we apply security patterns, reusable solutions that embody large amounts of software design experience from many different situations. In the evaluation, we investigated 2,800 Android app repositories to apply our approach to 200 subject projects that use javax.crypto APIs. The vision of our automated approach is to reduce software maintenance burdens where the number of outstanding software defects exceeds available resources. Our ultimate vision is to design more security patterns that have a positive impact on software quality by disseminating correlated sets of best security design practices and knowledge.
Mitra, Joydeep, Ranganath, Venkatesh-Prasad, Narkar, Aditya.  2019.  BenchPress: Analyzing Android App Vulnerability Benchmark Suites. 2019 34th IEEE/ACM International Conference on Automated Software Engineering Workshop (ASEW). :13—18.
In recent years, various benchmark suites have been developed to evaluate the efficacy of Android security analysis tools. Tool developers often choose such suites based on the availability and popularity of suites and not on their characteristics and relevance due to the lack of information about them. In this context, based on a recent effort, we empirically evaluated four Android-specific benchmark suites: DroidBench, Ghera, ICCBench, and UBCBench. For each benchmark suite, we identified the APIs used by the suite that were discussed on Stack Overflow in the context of Android app development and measured the usage of these APIs in a sample of 227K real-world apps (coverage). We also identified security-related APIs used in real-world apps but not in any of the above benchmark suites to assess the opportunities to extend benchmark suites (gaps).
Hussain, Fatima, Li, Weiyue, Noye, Brett, Sharieh, Salah, Ferworn, Alexander.  2019.  Intelligent Service Mesh Framework for API Security and Management. 2019 IEEE 10th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON). :0735—0742.
With the advancements in enterprise-level business development, the demand for new applications and services is overwhelming. For the development and delivery of such applications and services, enterprise businesses rely on Application Programming Interfaces (APIs). API management and classification is a cumbersome task considering the rapid increase in the number of APIs and of API-to-API calls. API mashups, domain APIs, and API service meshes are a few recommended techniques for ease of API creation, management, and monitoring. The API service mesh is one such technique, in which the service plane and the control plane are separated to improve efficiency as well as security. In this paper, we propose and implement a security framework for the creation of a secure API service mesh using Istio and Kubernetes. We then propose a smart association model for automatically associating new APIs with already existing categories of the service mesh. To the best of our knowledge, this smart association model is the first of its kind.
2020-08-13
Razaque, Abdul, Frej, Mohamed Ben Haj, Yiming, Huang, Shilin, Yan.  2019.  Analytical Evaluation of k-Anonymity Algorithm and Epsilon-Differential Privacy Mechanism in Cloud Computing Environment. 2019 IEEE Cloud Summit. :103—109.
Expected and unexpected risks in cloud computing, including data security, data segregation, and the lack of control and knowledge, have led to dilemmas in several fields. Among these, the privacy problem is the most paramount, and it has largely constrained the prevalence and development of cloud computing. Several privacy protection algorithms have been proposed, generally falling into two categories: anonymity algorithms and differential privacy mechanisms. While much research has focused on the efficiency of these algorithms, few studies have emphasized the different orientations and demerits of the two approaches. Motivated by this emerging research challenge, we have conducted a comprehensive survey of two popular privacy protection algorithms, the k-Anonymity Algorithm and the Differential Privacy Algorithm. Based on their principles, implementations, and algorithm orientations, we evaluate these two algorithms. Several expectations and comparisons are also presented based on the current cloud computing privacy environment and its future requirements.
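For concreteness, the two families can be contrasted in a few lines of Python: a k-anonymity check over quasi-identifier groups versus an epsilon-differentially-private count released through the Laplace mechanism. The toy records and parameters below are assumptions.

```python
# Minimal sketch contrasting the two surveyed families on a toy table.
from collections import Counter
import numpy as np

records = [("30-40", "F", "flu"), ("30-40", "F", "cold"),
           ("30-40", "F", "flu"), ("40-50", "M", "flu")]

def is_k_anonymous(records, k):
    """Every quasi-identifier group (age range, sex) must have >= k rows."""
    groups = Counter(r[:2] for r in records)
    return all(size >= k for size in groups.values())

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Laplace mechanism: noise scale = sensitivity / epsilon."""
    return true_count + np.random.laplace(scale=sensitivity / epsilon)

print(is_k_anonymous(records, k=3))   # False: the (40-50, M) group is unique
print(dp_count(sum(r[2] == "flu" for r in records), epsilon=0.5))
```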
Zhou, Kexin, Wang, Jian.  2019.  Trajectory Protection Scheme Based on Fog Computing and K-anonymity in IoT. 2019 20th Asia-Pacific Network Operations and Management Symposium (APNOMS). :1—6.
With the development of cloud computing technology in the Internet of Things (IoT), trajectory privacy in location-based services (LBSs) has attracted much attention. Most of the existing work adopts point-to-point and centralized models, which place a heavy burden on the user and cause performance bottlenecks. Moreover, previous schemes did not consider both online and offline trajectory protection and ignored some hidden background information. Therefore, in this paper, we design a trajectory protection scheme based on fog computing and k-anonymity for real-time trajectory privacy protection in continuous queries and offline trajectory data protection in trajectory publication. Fog computing provides the user with local storage and mobility to ensure physical control, and k-anonymity constructs the cloaking region for each snapshot in terms of time-dependent query probability and transition probability. On this basis, two k-anonymity-based dummy generation algorithms are proposed, which achieve maximum entropy for online and offline trajectory protection. Security analysis and simulation results indicate that our scheme realizes trajectory protection effectively and efficiently.
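A minimal sketch of one ingredient, the entropy-maximizing dummy selection for a single snapshot: picking dummy cells whose query probabilities are closest to the real cell's makes the cloaking set nearly uniform, which maximizes the entropy an adversary faces. The cell grid and probabilities are hypothetical; transition probabilities and the fog-node architecture are omitted.

```python
# Minimal sketch: choose k-1 dummy cells with query probabilities closest
# to the real cell's, then measure the entropy of the cloaking region.
import numpy as np

def pick_dummies(real_prob, candidates, k):
    """candidates: {cell_id: query probability}; returns k-1 dummy cells."""
    ranked = sorted(candidates, key=lambda c: abs(candidates[c] - real_prob))
    return ranked[:k - 1]

def cloak_entropy(probs):
    p = np.asarray(probs, dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

real_prob = 0.30                                  # real cell's query probability
candidates = {"c2": 0.29, "c3": 0.05, "c4": 0.31, "c5": 0.02}
dummies = pick_dummies(real_prob, candidates, k=3)
print(dummies, cloak_entropy([real_prob] + [candidates[d] for d in dummies]))
# near log2(3) = 1.585 bits, the maximum for a 3-cell cloaking region
```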
Zhang, Yueqian, Kantarci, Burak.  2019.  Invited Paper: AI-Based Security Design of Mobile Crowdsensing Systems: Review, Challenges and Case Studies. 2019 IEEE International Conference on Service-Oriented System Engineering (SOSE). :17—1709.
Mobile crowdsensing (MCS) is a distributed sensing paradigm that uses a variety of sensors built into smart mobile devices to enable ubiquitous acquisition of sensory data from surroundings. However, the non-dedicated nature of MCS leaves it vulnerable to malicious participants who can compromise the availability of MCS components, particularly the servers and participants' devices. In this paper, we focus on Denial of Service attacks in MCS, where malicious participants submit illegitimate task requests to the MCS platform to keep MCS servers busy while making sensing devices expend energy needlessly. After reviewing Artificial Intelligence-based security solutions for MCS systems, we focus on a typical location-based and energy-oriented DoS attack and present a security solution that applies ensemble machine-learning techniques to identify illegitimate tasks and prevent personal devices from pointless energy consumption, thereby improving the availability of the whole system. Through simulations, we show that ensemble techniques are capable of identifying illegitimate and legitimate tasks, with gradient boosting appearing to be a preferable solution with an AUC performance higher than 0.88 on the precision-recall curve. We also investigate the impact of environmental settings on detection performance to provide a clearer understanding of the model. Our performance results show that MCS task legitimacy decisions with high F-scores are possible for both illegitimate and legitimate tasks.
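A minimal sketch of the classification step on synthetic stand-in features, reporting the precision-recall AUC the paper uses as its headline metric. The real work trains on MCS task attributes (location, duration, reward, and similar), which are replaced here by generated data.

```python
# Minimal sketch: gradient boosting for task-legitimacy classification,
# scored by area under the precision-recall curve.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_recall_curve, auc
from sklearn.model_selection import train_test_split

# synthetic stand-in for MCS task features
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
scores = clf.predict_proba(Xte)[:, 1]
precision, recall, _ = precision_recall_curve(yte, scores)
print("PR-AUC:", auc(recall, precision))  # the paper reports > 0.88 on MCS data
```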
Augusto, Cristian, Morán, Jesús, De La Riva, Claudio, Tuya, Javier.  2019.  Test-Driven Anonymization for Artificial Intelligence. 2019 IEEE International Conference On Artificial Intelligence Testing (AITest). :103—110.
In recent years, data published and shared with third parties to develop artificial intelligence (AI) tools and services has increased significantly. When there are regulatory or internal requirements regarding the privacy of data, anonymization techniques are used to maintain privacy by transforming the data. The side effect is that anonymization may render the data useless for training and testing the AI, since AI quality is highly dependent on data quality. To overcome this problem, we propose a test-driven anonymization approach for artificial intelligence tools. The approach tests different anonymization efforts to achieve a trade-off between privacy (non-functional quality) and the functional suitability of the artificial intelligence technique (functional quality). The approach has been validated by means of two real-life datasets in the domains of healthcare and health insurance. Each dataset is anonymized with several privacy protections and then used to train classification AIs. The results show how data can be anonymized to achieve adequate functional suitability in the AI context while keeping the privacy of the anonymized data as high as possible.
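A minimal sketch of the test-driven loop under assumptions: quantile binning stands in for the anonymization operator, a decision tree stands in for the AI under test, and the idea is to keep the strongest anonymization level whose accuracy remains acceptable.

```python
# Minimal sketch: anonymize at increasing levels, retrain, and observe
# the privacy/functional-suitability trade-off.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

def generalize(X, level):
    """Coarsen every feature into fewer bins as the level grows."""
    if level == 0:
        return X
    bins = max(2, 32 >> level)          # 16, 8, 4, 2 bins
    out = np.empty_like(X)
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], np.linspace(0, 1, bins + 1)[1:-1])
        out[:, j] = np.digitize(X[:, j], edges)
    return out

for level in range(5):
    Xg = generalize(X, level)
    Xtr, Xte, ytr, yte = train_test_split(Xg, y, random_state=0)
    acc = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr).score(Xte, yte)
    print(f"anonymization level {level}: accuracy {acc:.3f}")
    # keep the highest level whose accuracy still meets the requirement
```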
Wang, Tianyi, Chow, Kam Pui.  2019.  Automatic Tagging of Cyber Threat Intelligence Unstructured Data using Semantics Extraction. 2019 IEEE International Conference on Intelligence and Security Informatics (ISI). :197—199.
Threat intelligence, information about potential or current attacks on an organization, is an important component of the cyber security landscape. As new threats continually occur, cyber security professionals keep an eye on the latest threat intelligence in order to continuously lower the security risks for their organizations. Cyber threat intelligence is usually conveyed by structured data, such as CVE entries, and unstructured data, such as articles and reports. Structured data follow certain patterns and can be analyzed easily, while unstructured data lack fixed patterns and are harder to analyze. Plenty of methods and algorithms exist for information extraction from structured data, but no current work is complete or suitable for semantics extraction from unstructured cyber threat intelligence data. In this paper, we introduce an automatic-tagging approach applying JAPE features within the GATE framework to perform semantics extraction on unstructured cyber threat intelligence data such as articles and reports. We extract token entities from each cyber threat intelligence article or report and evaluate their usefulness. A threat intelligence ontology can then be constructed from the useful entities extracted from related resources, making it convenient for professionals to find the latest threat intelligence they need.
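JAPE and GATE are Java technologies; as a rough Python analogue of the tagging idea, the sketch below applies regular-expression annotators for a few common threat-intelligence entity types. The patterns cover only a handful of illustrative cases.

```python
# Minimal sketch: regex annotators tagging entities in unstructured
# threat-intelligence text (a simplified analogue of JAPE rules).
import re

ANNOTATORS = {
    "CVE":    re.compile(r"CVE-\d{4}-\d{4,7}"),
    "IPv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "MD5":    re.compile(r"\b[a-fA-F0-9]{32}\b"),
    "Domain": re.compile(r"\b[\w-]+\.(?:com|net|org|ru|cn)\b"),
}

def tag(text):
    return [(kind, m.group(), m.span())
            for kind, rx in ANNOTATORS.items() for m in rx.finditer(text)]

report = ("The actor exploited CVE-2019-0708 and staged payloads "
          "on 203.0.113.7 via evil-update.net.")
for kind, value, span in tag(report):
    print(kind, value, span)
```

The tagged entities are the raw material from which the ontology described above would be assembled.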
Sadeghi, Koosha, Banerjee, Ayan, Gupta, Sandeep K. S..  2019.  An Analytical Framework for Security-Tuning of Artificial Intelligence Applications Under Attack. 2019 IEEE International Conference On Artificial Intelligence Testing (AITest). :111—118.
Machine Learning (ML) algorithms, as the core technology in Artificial Intelligence (AI) applications such as self-driving vehicles, make important decisions by performing a variety of data classification or prediction tasks. Attacks on the data or algorithms in AI applications can lead to misclassification or misprediction, which can cause the applications to fail. For each dataset, the parameters of the ML algorithm must be tuned to reach a desirable classification or prediction accuracy. Typically, ML experts tune the parameters empirically, which can be time-consuming and does not guarantee an optimal result. To this end, some research suggests an analytical approach to tuning ML parameters for maximum accuracy. However, none of these works consider ML performance under attack in the tuning process. This paper proposes an analytical framework for tuning ML parameters to be secure against attacks while keeping accuracy high. The framework finds the optimal set of parameters by defining a novel objective function that takes into account the test results of both ML accuracy and its security against attacks. For validating the framework, an AI application is implemented to recognize whether a subject's eyes are open or closed, by applying the k-Nearest Neighbors (kNN) algorithm to the subject's electroencephalogram (EEG) signals. In this application, the number of neighbors (k) and the distance metric type, the two main parameters of kNN, are chosen for tuning. The input data perturbation attack, one of the most common attacks on ML algorithms, is used to test the security of the application. An exhaustive search is used to solve the optimization problem. The experimental results show that k = 43 with the cosine distance metric is the optimal configuration of kNN for the EEG dataset, yielding 83.75% classification accuracy and reducing the attack success rate to 5.21%.
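A minimal sketch of the exhaustive security-tuning loop. The EEG data is replaced by a synthetic stand-in, the perturbation attack is plain Gaussian noise with an assumed scale of 0.3, and the objective (clean accuracy minus attack success rate) is one plausible form of the paper's joint objective, not its exact definition.

```python
# Minimal sketch: exhaustive search over kNN's two main parameters,
# scoring each configuration by a joint accuracy/security objective.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1500, n_features=14, random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)
rng = np.random.default_rng(1)
Xadv = Xte + 0.3 * rng.standard_normal(Xte.shape)  # perturbed test inputs

best, best_score = None, -np.inf
for k in range(1, 61, 2):                          # exhaustive search
    for metric in ("euclidean", "cosine"):
        model = KNeighborsClassifier(n_neighbors=k, metric=metric).fit(Xtr, ytr)
        clean = model.score(Xte, yte)
        attack_success = 1.0 - model.score(Xadv, yte)
        score = clean - attack_success             # assumed joint objective
        if score > best_score:
            best, best_score = (k, metric), score

print("best (k, metric):", best, "objective:", round(best_score, 3))
```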
2020-08-10
Kwon, Hyun, Yoon, Hyunsoo, Park, Ki-Woong.  2019.  Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error. 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE). :136–139.
Deep neural networks (DNNs) provide good performance for image recognition, speech recognition, and pattern recognition. However, poisoning attacks are a serious threat to DNN security. A poisoning attack reduces the accuracy of a DNN by adding malicious training data during the training process. In some situations, such as military applications, it may be necessary to degrade accuracy on only a chosen class in the model. For example, an attacker may want to selectively prevent a UAV from correctly recognizing nuclear-related facilities while leaving its other recognition capabilities intact. In this paper, we propose a selective poisoning attack that reduces the accuracy of only a chosen class in the model. The proposed method trains on malicious data corresponding to the chosen class while maintaining the accuracy of the remaining classes. For the experiments, we used TensorFlow as the machine learning library and MNIST and CIFAR10 as datasets. Experimental results show that the proposed method can reduce the accuracy of the chosen class to 43.2% on MNIST and 55.3% on CIFAR10, while maintaining the accuracy of the remaining classes.
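A minimal sketch of the selective idea using label flipping, a simpler stand-in for the paper's crafted malicious samples: corrupt the training labels of one chosen digit class only, then compare per-class accuracy. The 80% poison rate and the logistic-regression victim are assumptions.

```python
# Minimal sketch: selectively poison one class's training labels and
# observe that only that class's accuracy collapses.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

CHOSEN, RATE = 7, 0.8                 # poison 80% of class-7 training labels
rng = np.random.default_rng(0)
ytr_poison = ytr.copy()
idx = np.flatnonzero(ytr == CHOSEN)
flip = rng.choice(idx, size=int(RATE * len(idx)), replace=False)
ytr_poison[flip] = rng.integers(0, 10, size=len(flip))  # random (mostly wrong) labels

model = LogisticRegression(max_iter=2000).fit(Xtr, ytr_poison)
for cls in range(10):
    mask = yte == cls
    print(f"class {cls}: accuracy {model.score(Xte[mask], yte[mask]):.2f}")
    # the chosen class drops sharply while the others stay high
```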
Hajdu, Gergo, Minoso, Yaclaudes, Lopez, Rafael, Acosta, Miguel, Elleithy, Abdelrahman.  2019.  Use of Artificial Neural Networks to Identify Fake Profiles. 2019 IEEE Long Island Systems, Applications and Technology Conference (LISAT). :1–4.
In this paper, we use machine learning, namely an artificial neural network, to determine the likelihood that a Facebook friend request is authentic. We also outline the classes and libraries involved. Furthermore, we discuss the sigmoid function and how the weights are determined and used. Finally, we consider the parameters of the social network page that are of utmost importance in the provided solution.
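A minimal sketch of such a classifier: a single sigmoid neuron trained by gradient descent on hypothetical friend-request features (mutual friends, account age, profile completeness). The features, labels, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: one sigmoid neuron predicting request authenticity.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy feature rows: [mutual_friends, account_age_days, has_photo]
X = np.array([[120, 900, 1], [2, 3, 0], [45, 400, 1], [0, 1, 0]], float)
X /= X.max(axis=0)                    # scale features to [0, 1]
y = np.array([1, 0, 1, 0], float)     # 1 = authentic request

rng = np.random.default_rng(0)
w, b, lr = rng.normal(size=3) * 0.01, 0.0, 0.5
for _ in range(2000):                 # gradient descent on the logistic loss
    p = sigmoid(X @ w + b)
    grad = p - y                      # dLoss/dz for the logistic loss
    w -= lr * X.T @ grad / len(y)
    b -= lr * grad.mean()

print(sigmoid(X @ w + b).round(2))    # predicted authenticity probabilities
```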
Yohanes, Banu Wirawan, Suryadi, David Yusuf, Susilo, Deddy.  2019.  SIMON Lightweight Encryption Benchmarking on Wireless Aquascape Preservation. 2019 IEEE International Conference on Internet of Things and Intelligence System (IoTaIS). :30–35.
In pervasive computing, human-computer interaction emphasizes information and communication technology and user experience. It is now possible to communicate scientific and engineering techniques informally through leisure activities, for instance aquascaping. The aquascape environment must be kept fresh and healthy, and the fish have to be fed regularly. This paper proposes an autonomous aquascape preservation system based on an Arduino controller connected to a remote Android smartphone. However, it is widely known that wireless communication is not as reliable as its wired counterpart, and an unauthorized party should not be able to take control of the wireless aquascape preservation system. SIMON lightweight cryptography is used to tackle security issues on constrained devices. Experimental results show that the DS18B20 sensor measures aquascape temperature precisely, with approximately 0.5% tolerance. The Android graphical user interface application is user-friendly. Moreover, the SIMON64/128 lightweight cipher secures the wireless communication channel efficiently with a small hardware footprint.
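For reference, SIMON's round structure is simple enough to sketch. The code below shows the SIMON64 Feistel rounds (32-bit words) with caller-supplied round keys; the key schedule, which is what makes it specifically SIMON64/128, is omitted, so this is an illustration rather than a complete cipher.

```python
# Minimal sketch of the SIMON64 round structure; key schedule omitted.
MASK = 0xFFFFFFFF

def rol(x, r):                        # 32-bit rotate left
    return ((x << r) | (x >> (32 - r))) & MASK

def simon_round_fn(x):
    return (rol(x, 1) & rol(x, 8)) ^ rol(x, 2)

def simon64_encrypt(x, y, round_keys):
    """One 64-bit block (two 32-bit words); SIMON64/128 uses 44 round keys."""
    for k in round_keys:
        x, y = y ^ simon_round_fn(x) ^ k, x
    return x, y

def simon64_decrypt(x, y, round_keys):
    for k in reversed(round_keys):
        x, y = y, x ^ simon_round_fn(y) ^ k
    return x, y

demo_keys = list(range(44))           # placeholder round keys, NOT a real schedule
ct = simon64_encrypt(0x656B696C, 0x20646E75, demo_keys)
assert simon64_decrypt(*ct, demo_keys) == (0x656B696C, 0x20646E75)
print("ciphertext words: %08x %08x" % ct)
```

The cipher's appeal for this application is visible in the sketch: only AND, XOR, and rotations, which map to tiny hardware and cheap microcontroller code.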
Wu, Sha, Liu, Jiajia.  2019.  Overprivileged Permission Detection for Android Applications. ICC 2019 - 2019 IEEE International Conference on Communications (ICC). :1–6.
Android applications (Apps) have penetrated almost every aspect of our lives, bringing users great convenience as well as security concerns. Even though the Android system adopts a permission mechanism to restrict Apps from accessing important resources of a smartphone, such as telephony, camera, and GPS location, users still face a significant risk of privacy leakage due to overprivileged permissions. An overprivileged permission is an extra permission declared by an App that has nothing to do with its function. Unfortunately, no tool exists for ordinary users to detect the overprivileged permissions of an App, so most users grant whatever permissions an App declares, intensifying the risk of private information leakage. Although some previous studies tried to solve the problem of permission overprivilege, their methods are no longer applicable because of advances in App protection technology and updates to the Android system. Toward this end, we develop a user-friendly tool based on frequent item set mining for the detection of overprivileged permissions of Android Apps, named Droidtector. Droidtector can operate in online or offline mode, and users can choose either mode according to their situation. Finally, we run Droidtector on 1,000 Apps crawled from Google Play and find that 479 of them are overprivileged, accounting for about 48% of all the sampled Apps.
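A minimal sketch of the mining idea, using per-permission support within an app category as a single-item simplification of frequent item set mining: permissions that many similar apps declare are presumed functional, and anything outside that frequent set is flagged. The corpus and threshold are assumptions.

```python
# Minimal sketch: flag declared permissions that are rare among
# same-category apps as potentially overprivileged.
from collections import Counter

weather_apps = [
    {"INTERNET", "ACCESS_FINE_LOCATION"},
    {"INTERNET", "ACCESS_FINE_LOCATION", "VIBRATE"},
    {"INTERNET", "ACCESS_COARSE_LOCATION"},
    {"INTERNET", "ACCESS_FINE_LOCATION"},
]

def frequent_permissions(apps, min_support=0.5):
    counts = Counter(p for perms in apps for p in perms)
    return {p for p, c in counts.items() if c / len(apps) >= min_support}

def overprivileged(declared, apps):
    return declared - frequent_permissions(apps)

new_app = {"INTERNET", "ACCESS_FINE_LOCATION", "READ_CONTACTS", "RECORD_AUDIO"}
print(overprivileged(new_app, weather_apps))
# {'READ_CONTACTS', 'RECORD_AUDIO'}: unusual for a weather app
```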
2020-08-07
Hasan, Kamrul, Shetty, Sachin, Ullah, Sharif.  2019.  Artificial Intelligence Empowered Cyber Threat Detection and Protection for Power Utilities. 2019 IEEE 5th International Conference on Collaboration and Internet Computing (CIC). :354—359.
Cyber threats have increased extensively during the last decade, especially in smart grids, and cybercriminals have become more sophisticated. Current security controls are not enough to defend networks against highly skilled cybercriminals, who have learned how to evade the most sophisticated tools, such as Intrusion Detection and Prevention Systems (IDPS); Advanced Persistent Threats (APTs) are almost invisible to current tools. Fortunately, the application of Artificial Intelligence (AI) may increase the detection rate of IDPS systems, and Machine Learning (ML) techniques can mine data to detect the different attack stages of an APT. However, the implementation of AI may bring other risks, and cybersecurity experts need to find a balance between risk and benefit.
Dilmaghani, Saharnaz, Brust, Matthias R., Danoy, Grégoire, Cassagnes, Natalia, Pecero, Johnatan, Bouvry, Pascal.  2019.  Privacy and Security of Big Data in AI Systems: A Research and Standards Perspective. 2019 IEEE International Conference on Big Data (Big Data). :5737—5743.
The huge volume, variety, and velocity of big data have empowered Machine Learning (ML) techniques and Artificial Intelligence (AI) systems. However, a vast portion of the data used to train AI systems is sensitive information, so any vulnerability has a potentially disastrous impact on privacy and security. Nevertheless, the increased demand for high-quality AI from governments and companies requires the utilization of big data in such systems. Several studies have highlighted the threats of big data on different platforms and the countermeasures to reduce the risks caused by attacks. In this paper, we provide an overview of the existing threats to privacy and security inflicted by big data as a primary driving force within the AI/ML workflow. We define an adversarial model to investigate the attacks, and we analyze and summarize the defense strategies and countermeasures against them. Furthermore, given the impact of AI systems on the market and on the vast majority of business sectors, we also investigate the Standards Developing Organizations (SDOs) that are actively involved in providing guidelines to protect the privacy and ensure the security of big data and AI systems. Our far-reaching goal is to bridge the research and standardization frames to increase the consistency and efficiency of AI system development, guaranteeing customer satisfaction while conveying a high degree of trustworthiness.
Mehta, Brijesh B., Gupta, Ruchika, Rao, Udai Pratap, Muthiyan, Mukesh.  2019.  A Scalable (α, k)-Anonymization Approach using MapReduce for Privacy Preserving Big Data Publishing. 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1—6.
Different tools and sources are used to collect big data, which may create privacy issues. Privacy-preserving data publishing approaches such as k-anonymity, l-diversity, and t-closeness are used for data de-identification, but because multiple sources are used to collect the data, the chance of re-identification is very high. Anonymizing large data is not a trivial task; hence, the scalability of privacy-preserving approaches has become a challenging research area, which researchers explore by proposing algorithms for scalable anonymization. We further found that in some scenarios efficient anonymization is not enough: timely anonymization is also required. Hence, to incorporate the velocity of data into the Scalable k-Anonymization (SKA) approach, we propose a novel approach, Scalable (α, k)-Anonymization (SAKA). Our proposed approach outperforms existing approaches in terms of information loss and running time. To the best of our knowledge, this is the first scalable anonymization approach to address the velocity of data.
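A single-machine sketch of the (α, k) condition that the MapReduce jobs would enforce at scale: every quasi-identifier group must contain at least k records, and no sensitive value may exceed a fraction α of any group. The toy records are assumptions.

```python
# Minimal sketch of the (alpha, k)-anonymity check, map/reduce style.
from collections import defaultdict, Counter

def alpha_k_ok(records, qi, sensitive, k, alpha):
    groups = defaultdict(list)
    for r in records:                         # "map": key by quasi-identifiers
        groups[tuple(r[a] for a in qi)].append(r[sensitive])
    for vals in groups.values():              # "reduce": check each group
        if len(vals) < k:
            return False                      # group smaller than k
        if max(Counter(vals).values()) / len(vals) > alpha:
            return False                      # a sensitive value dominates
    return True

data = [{"zip": "021**", "age": "20-30", "disease": "flu"},
        {"zip": "021**", "age": "20-30", "disease": "cold"},
        {"zip": "021**", "age": "20-30", "disease": "flu"}]
print(alpha_k_ok(data, qi=("zip", "age"), sensitive="disease", k=3, alpha=0.7))
```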
Liu, Bo, Xiong, Jian, Wu, Yiyan, Ding, Ming, Wu, Cynthia M..  2019.  Protecting Multimedia Privacy from Both Humans and AI. 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB). :1—6.
With the development of artificial intelligence (AI), multimedia privacy issues have become more challenging than ever. AI-assisted malicious entities can steal private information from multimedia data more easily than humans can. Traditional multimedia privacy protection only considers the situation in which humans are the adversaries and is therefore ineffective against AI-assisted attackers. In this paper, we develop a new framework and new algorithms that can protect image privacy from both humans and AI. We combine the idea of adversarial image perturbation, which is effective against AI, with obfuscation techniques for human adversaries. Experiments show that our proposed methods work well against all types of attackers.
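A minimal sketch of the adversarial-perturbation ingredient, a fast-gradient-sign step, using a linear model as a stand-in for the AI recognizer. A real system would use a deep model's input gradient, and ε here is an assumed budget chosen small enough that humans barely notice the change.

```python
# Minimal sketch: one FGSM-style perturbation step against a linear
# "recognizer" standing in for the AI adversary.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=64), 0.0   # stand-in recognizer: sigmoid(w.x + b)
img = rng.random(64)              # flattened 8x8 grayscale image in [0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# gradient of the logistic loss w.r.t. the input, for true label y = 1
y = 1.0
grad_input = (sigmoid(w @ img + b) - y) * w

epsilon = 0.03                    # assumed perturbation budget
protected = np.clip(img + epsilon * np.sign(grad_input), 0.0, 1.0)

print("score before:", sigmoid(w @ img + b))
print("score after: ", sigmoid(w @ protected + b))  # pushed away from y
```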
Lou, Xin, Tran, Cuong, Yau, David K.Y., Tan, Rui, Ng, Hongwei, Fu, Tom Zhengjia, Winslett, Marianne.  2019.  Learning-Based Time Delay Attack Characterization for Cyber-Physical Systems. 2019 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). :1—6.
Cyber-physical systems (CPSes) rely on computing and control techniques to achieve system safety and reliability. However, recent attacks show that these techniques are vulnerable once cyber-attackers have bypassed air gaps, and such attacks may cause service disruptions or even physical damage. This paper designs a built-in attack characterization scheme for one general type of cyber-attack in CPS, which we call the time delay attack, that delays the transmission of system control commands. We use recurrent neural networks in deep learning to estimate the delay values from the input trace. Specifically, to deal with long time-sequence data, we design the deep learning model using stacked bidirectional long short-term memory (LSTM) units. The proposed approach is tested using data generated from a power plant control system. The results show that the LSTM-based deep learning approach works well based on data traces from three sensor measurements, i.e., temperature, pressure, and power generation, in the power plant control system. Moreover, we show that the proposed approach outperforms a baseline based on k-nearest neighbors.
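A minimal Keras sketch of the estimator architecture: stacked bidirectional LSTM layers regressing the delay value from windows of the three sensor traces. The window length, layer sizes, and training data are placeholder assumptions.

```python
# Minimal sketch: stacked bidirectional LSTM delay estimator.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 120, 3    # 120 time steps x 3 sensors (temp, pressure, power)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1),  # estimated delay value
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, WINDOW, FEATURES).astype("float32")  # placeholder traces
y = np.random.rand(256, 1).astype("float32")                 # placeholder delays
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))
```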
2020-08-03
Juuti, Mika, Szyller, Sebastian, Marchal, Samuel, Asokan, N..  2019.  PRADA: Protecting Against DNN Model Stealing Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :512–527.
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
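The detection rests on a distributional observation: benign clients produce inter-query distances that look roughly normal, while synthetic probing skews them. A rough Python sketch of that idea, not PRADA's exact procedure, with assumed data streams and threshold:

```python
# Rough sketch: test the minimum inter-query distances for normality;
# near-duplicate synthetic probes drive the p-value down.
import numpy as np
from scipy import stats

def min_distance_series(queries):
    history, dists = [], []
    for q in queries:
        if history:
            dists.append(min(np.linalg.norm(q - h) for h in history))
        history.append(q)
    return np.array(dists)

rng = np.random.default_rng(0)
benign = rng.normal(size=(200, 16))                  # natural-looking queries
attack = np.repeat(rng.normal(size=(5, 16)), 40, axis=0) \
         + 1e-3 * rng.normal(size=(200, 16))         # near-duplicate probes

for name, qs in (("benign", benign), ("attack", attack)):
    _, p = stats.shapiro(min_distance_series(qs))
    print(name, "normality p-value:", p)  # alarm when p falls below a threshold
```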
Nakayama, Kiyoshi, Muralidhar, Nikhil, Jin, Chenrui, Sharma, Ratnesh.  2019.  Detection of False Data Injection Attacks in Cyber-Physical Systems using Dynamic Invariants. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :1023–1030.
Modern cyber-physical systems are increasingly complex and vulnerable to attacks, such as false data injection, aimed at destabilizing and confusing the systems. We develop and evaluate an attack-detection framework aimed at learning a dynamic invariant network, i.e., the data-driven temporal causal relationships between components of cyber-physical systems, and we evaluate its attack-detection performance relative to traditional anomaly detection approaches. In this paper, we introduce the Granger Causality based Kalman Filter with Adaptive Robust Thresholding (G-KART) as a framework for anomaly detection based on data-driven functional relationships between components in cyber-physical systems. In particular, we select power systems as a critical infrastructure with complex cyber-physical systems whose protection is an essential facet of national security. The system presented is capable of learning, with or without network topology, the task of detecting false data injection attacks in power systems. Kalman filters are used to learn and update the dynamic state of each component in the power system and in turn monitor the component for malicious activity. The ego network of each node in the invariant graph is treated as an ensemble of Kalman filters, each of which captures a subset of the node's interactions with other parts of the network. We also introduce an alerting mechanism to surface alerts about compromised nodes.
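A minimal sketch of the per-component monitoring: a scalar Kalman filter whose normalized innovations are compared against a robust, MAD-based threshold. The noise parameters and threshold are assumptions, and the Granger-causal ego-network ensemble is omitted.

```python
# Minimal sketch: scalar Kalman filter residuals with a robust threshold
# flag an injected-data segment in a sensor stream.
import numpy as np

def kalman_residuals(z, q=1e-2, r=0.05):
    x, p, res = z[0], 1.0, []
    for zk in z[1:]:
        p += q                       # predict (random-walk state model)
        innov = zk - x               # innovation: measurement minus prediction
        s = p + r
        res.append(innov / np.sqrt(s))
        k = p / s                    # update
        x += k * innov
        p *= (1 - k)
    return np.array(res)

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 6, 300)) + 0.05 * rng.normal(size=300)
signal[200:205] += 1.5               # injected false data

res = kalman_residuals(signal)
thresh = 4 * np.median(np.abs(res)) / 0.6745   # robust (MAD-based) threshold
print("flagged indices:", np.flatnonzero(np.abs(res) > thresh) + 1)
```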