Biblio
Due to their proven efficiency, machine-learning systems are deployed in a wide range of complex real-life problems. More specifically, Spiking Neural Networks (SNNs) emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems. While these systems are going mainstream, they have inherent security and reliability issues. In this paper, we propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues through a high-level attack. Particularly, we trigger a stealthy, fault-injection-based hardware backdoor through a carefully crafted adversarial input noise. Our results on Deep Neural Networks (DNNs) and SNNs show a serious integrity threat to state-of-the-art machine-learning techniques.
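A minimal sketch of the kind of low-level fault injection such a backdoor builds on, assuming a float32 weight tensor and a single targeted bit flip; the tensor, index, and bit position are illustrative, not taken from the paper:

import numpy as np

def flip_bit(weights: np.ndarray, index: int, bit: int) -> np.ndarray:
    """Flip one bit of a float32 weight, emulating a hardware fault injection."""
    flat = weights.astype(np.float32).ravel().copy()
    as_int = flat.view(np.uint32)          # reinterpret the bytes, not a numeric cast
    as_int[index] ^= np.uint32(1 << bit)   # toggle the chosen bit
    return as_int.view(np.float32).reshape(weights.shape)

# Example: flipping a high exponent bit drastically changes the weight value.
w = np.array([[0.5, -1.25], [2.0, 0.75]], dtype=np.float32)
print(flip_bit(w, index=0, bit=30)[0, 0])  # 0.5 becomes an extremely large value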
The Internet of Things (IoT) is rapidly evolving, while introducing several new challenges regarding security, resilience and operational assurance. In the face of an increasing attack landscape, it is necessary to cater for the provision of efficient mechanisms to collectively detect sophisticated malware resulting in undesirable (run-time) device and network modifications. This is not an easy task considering the dynamic and heterogeneous nature of IoT environments; i.e., different operating systems, varied connected networks and a wide gamut of underlying protocols and devices. Malicious IoT nodes or gateways can potentially lead to the compromise of the whole IoT network infrastructure. On the other hand, the SDN control plane has the capability to be orchestrated towards providing enhanced security services to all layers of the IoT networking stack. In this paper, we propose an SDN-enabled, control-plane-based orchestration that leverages emerging Long Short-Term Memory (LSTM) classification models, a Deep Learning (DL) based architecture, to combat malicious IoT nodes. It is a first step towards a new line of security mechanisms that enables the provision of scalable AI-based intrusion detection focusing on the operational assurance of only those specific, critical infrastructure components, thus allowing for a much more efficient security solution. The proposed mechanism has been evaluated with current state-of-the-art datasets (i.e., N_BaIoT 2018) using standard performance evaluation metrics. Our preliminary results show an outstanding detection accuracy (i.e., 99.9%) which significantly outperforms state-of-the-art approaches. Based on our findings, we posit open issues and challenges, and discuss possible ways to address them, so that security does not hinder the deployment of intelligent IoT-based computing systems.
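A minimal sketch of an LSTM-based traffic classifier of the kind described above. The window length, feature count, and layer sizes are illustrative assumptions, not the configuration used in the paper:

import tensorflow as tf

def build_lstm_ids(window: int = 20, n_features: int = 115) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, n_features)),  # sequence of traffic feature vectors
        tf.keras.layers.LSTM(64),                            # summarize temporal device behavior
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),      # benign vs. malicious
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_lstm_ids()
# model.fit(X_train, y_train, epochs=5, batch_size=128)  # X_train shape: (samples, window, n_features)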
Internet of Things devices and data sources are seeing increased use in various application areas. The proliferation of cheaper sensor hardware has allowed for wider-scale data collection deployments. With increased numbers of deployed sensors and the use of heterogeneous sensor types, there is increased scope for collecting erroneous, inaccurate or inconsistent data. This in turn may lead to inaccurate models built from this data. It is important to evaluate this data as it is collected to determine its validity. This paper presents an analysis of data quality as it is represented in Internet of Things (IoT) systems and some of the limitations of this representation. The paper discusses the use of trust as a heuristic to drive data quality measurements. Trust is a well-established metric that has been used to determine the validity of a piece or source of data in crowd-sourced or other unreliable data collection techniques. The analysis extends to detail an appropriate framework for representing data quality effectively within the big data model and why a trust-backed framework is important especially in heterogeneously sourced IoT data streams.
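A minimal sketch of a trust-weighted data-quality score for a sensor reading, assuming two illustrative signals: the source's historical reputation and the reading's consistency with its recent neighbours. The weights and the scoring formula are assumptions, not the paper's framework:

from statistics import mean, pstdev

def trust_score(reading: float, recent: list[float], reputation: float,
                w_rep: float = 0.5, w_cons: float = 0.5) -> float:
    """Return a trust score in [0, 1]; low scores flag likely poor-quality data."""
    if len(recent) < 2 or pstdev(recent) == 0:
        consistency = 1.0
    else:
        z = abs(reading - mean(recent)) / pstdev(recent)  # deviation from recent behaviour
        consistency = 1.0 / (1.0 + z)                      # large deviations lower consistency
    return w_rep * reputation + w_cons * consistency

# trust_score(45.0, [20.9, 21.1, 21.0, 20.8], reputation=0.9)
# -> the outlying reading scores noticeably lower than a reading consistent with recent values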
One important aspect in protecting Cyber Physical Systems (CPS) is ensuring that the proper control and measurement signals are propagated within the control loop. The CPS research community has been developing a large set of check blocks that can be integrated within the control loop to check signals against various types of attacks (e.g., false data injection attacks). Unfortunately, it is not possible to integrate all these “checks” within the control loop, as the overhead introduced when checking signals may violate the delay constraints of the control loop. Moreover, these blocks do not completely operate in isolation of each other, as dependencies exist among them in terms of their effectiveness against detecting a subset of attacks. Thus, it becomes a challenging and complex problem to assign the proper checks, especially in the presence of a rational adversary who can observe the check blocks assigned and optimize her own attack strategies accordingly. This paper tackles the inherent state-action space explosion that arises in securing CPS by developing DeepBLOC (DB), a framework in which Deep Reinforcement Learning algorithms are utilized to provide optimal/sub-optimal assignments of check blocks to signals. The framework models stochastic games between the adversary and the CPS defender and derives mixed strategies for assigning check blocks to ensure the integrity of the propagated signals while abiding by the real-time constraints dictated by the control loop. Through extensive simulation experiments and a real implementation on a water purification system, we show that DB achieves assignment strategies that outperform other strategies and heuristics.
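A minimal sketch of the check-block assignment idea: the defender picks a subset of checks per signal, rewarded for detection coverage and penalized when the added latency violates the control loop's deadline. Tabular Q-learning is used here for brevity, whereas DeepBLOC itself relies on deep reinforcement learning and stochastic games; the costs, coverage values, and deadline are illustrative assumptions:

import itertools, random
from collections import defaultdict

CHECKS = {"fdi_check": (0.4, 2.0), "replay_check": (0.3, 1.5), "range_check": (0.1, 0.5)}  # (coverage, latency ms)
DEADLINE_MS = 3.0
ACTIONS = [frozenset(c) for r in range(len(CHECKS) + 1) for c in itertools.combinations(CHECKS, r)]

def reward(assignment: frozenset) -> float:
    coverage = sum(CHECKS[c][0] for c in assignment)
    latency = sum(CHECKS[c][1] for c in assignment)
    return coverage - (10.0 if latency > DEADLINE_MS else 0.0)  # hard penalty for missed deadlines

Q = defaultdict(float)
for episode in range(2000):                        # single-state, bandit-style update for brevity
    a = random.choice(ACTIONS) if random.random() < 0.1 else max(ACTIONS, key=lambda x: Q[x])
    Q[a] += 0.1 * (reward(a) - Q[a])
print(max(ACTIONS, key=lambda x: Q[x]))            # best assignment found under the deadline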
Experts often design security and privacy technology with specific use cases and threat models in mind. In practice, however, end users are not aware of these threats and potential countermeasures. Furthermore, misconceptions about the benefits and limitations of security and privacy technology inhibit large-scale adoption by end users. In this paper, we address this challenge and contribute a qualitative study on end users' and security experts' perceptions of threat models and potential countermeasures. We follow an inductive research approach to explore the perceptions and mental models of both security experts and end users. We conducted semi-structured interviews with 8 security experts and 13 end users. Our results suggest that, in contrast to security experts, end users neglect acquaintances and friends as attackers in their threat models. Our findings highlight that experts value technical countermeasures, whereas end users try to implement trust-based defensive methods.
The use of biometrics in security applications may be vulnerable to several hacking attacks. Thus, cancellable biometrics has emerged as a suitable solution to this problem. This paper presents a one-way cancellable biometric transform that depends on 3D chaotic maps for face and fingerprint encryption. It aims to avoid cloning of original biometrics and to allow the templates used by each user in different applications to be variable. The permutations achieved with the chaotic maps guarantee high security of the biometric templates, especially with the 3D implementation of the encryption algorithm. In addition, the paper presents a hardware implementation of this framework. The proposed algorithm also achieves good performance in the presence of low and moderate levels of noise. An experimental version of the proposed cancellable biometric system has been implemented on an FPGA model. The obtained results demonstrate the strong performance of the proposed cancellable biometric system.
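A minimal sketch of chaos-driven template permutation, the core idea behind such a cancellable transform. A 1D logistic map is used here for brevity, whereas the paper employs 3D chaotic maps; the key parameters (x0, r) are illustrative assumptions:

import numpy as np

def chaotic_permutation(length: int, x0: float = 0.61, r: float = 3.99) -> np.ndarray:
    """Derive a permutation of `length` indices from a logistic-map trajectory."""
    x, traj = x0, []
    for _ in range(length):
        x = r * x * (1.0 - x)          # logistic map iteration
        traj.append(x)
    return np.argsort(traj)            # sort order of the chaotic sequence defines the permutation

def cancel_template(template: np.ndarray, x0: float, r: float) -> np.ndarray:
    """Permute a flattened biometric template; changing (x0, r) issues a fresh template."""
    perm = chaotic_permutation(template.size, x0, r)
    return template.ravel()[perm].reshape(template.shape)

# face = np.random.rand(64, 64)                      # stand-in for a face/fingerprint image
# protected = cancel_template(face, x0=0.37, r=3.97)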
We leverage deep learning algorithms on various user behavioral information gathered from end-user devices to classify a subject of interest. In spite of the ability of these techniques to counter spoofing threats, they are vulnerable to adversarial learning attacks, where an attacker adds adversarial noise to the input samples to fool the classifier into false acceptance. Recently, a handful of mature techniques, such as the Fast Gradient Sign Method (FGSM), have been proposed to aid white-box attacks, where an attacker has complete knowledge of the machine learning model. In contrast, we mount a black-box attack against a behavioral biometric system based on gait patterns, by using FGSM and training a shadow model that mimics the target system. The attacker has limited knowledge of the target model and no knowledge of the real user being authenticated, but induces a false acceptance in authentication. Our goal is to understand the feasibility of a black-box attack and to what extent FGSM on shadow models contributes to its success. Our results show that the performance of FGSM depends highly on the quality of the shadow model, which is in turn impacted by key factors including the number of queries allowed by the target system to train the shadow model. Our experiments have revealed strong relationships between the shadow model and FGSM performance, as well as the effect of the number of FGSM iterations used to create an attack instance. These insights also shed light on deep-learning models' shareability, which can be exploited to launch a successful attack.
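A minimal sketch of FGSM applied to a shadow model, as in the black-box setting above: gradients are taken on the attacker's surrogate, and the perturbed sample is then submitted to the target system. The epsilon value, model handle, and sample names are illustrative assumptions:

import tensorflow as tf

def fgsm_perturb(shadow_model: tf.keras.Model, x: tf.Tensor, y_true: tf.Tensor,
                 epsilon: float = 0.05) -> tf.Tensor:
    """Return x plus an epsilon-scaled step in the direction of the loss gradient's sign."""
    loss_fn = tf.keras.losses.BinaryCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y_true, shadow_model(x))
    grad = tape.gradient(loss, x)
    # For a targeted false acceptance, use the "accept" label and subtract the step instead.
    return x + epsilon * tf.sign(grad)               # FGSM step computed on the shadow model

# x_adv = fgsm_perturb(shadow, gait_sample, tf.constant([[0.0]]))  # then submit x_adv to the target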
Rapidly growing shared information for threat intelligence not only helps security analysts reduce the time spent tracking attacks, but also brings possibilities for research on adversaries' thinking and decisions, which is important for the further analysis of attackers' habits and preferences. In this paper, we analyze current models and frameworks used in threat intelligence that are suited to different modeling goals, and propose a three-layer model (Goal, Behavior, Capability) to study the statistical characteristics of APT groups. Based on the proposed model, we construct a knowledge network composed of adversary behaviors, and introduce a similarity measure approach to capture the degree of similarity by considering different semantic links between groups. After calculating similarity degrees, we take advantage of the Girvan-Newman algorithm to discover community groups; the clustering results show that community structures and boundaries do exist in the behavior of APT groups.
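A minimal sketch of the community-detection step: build a similarity graph over APT groups and run Girvan-Newman on it. Group names, similarity values, and the threshold are illustrative assumptions, not data from the paper:

import networkx as nx
from networkx.algorithms.community import girvan_newman

similarities = {("APT-A", "APT-B"): 0.82, ("APT-C", "APT-D"): 0.77,
                ("APT-A", "APT-C"): 0.15, ("APT-B", "APT-D"): 0.08}

G = nx.Graph()
for (g1, g2), sim in similarities.items():
    if sim >= 0.1:                        # keep only sufficiently similar pairs
        G.add_edge(g1, g2, weight=sim)

# The default Girvan-Newman implementation removes edges by unweighted edge betweenness.
communities = next(girvan_newman(G))      # first split of the network
print([sorted(c) for c in communities])   # e.g. [['APT-A', 'APT-B'], ['APT-C', 'APT-D']]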
Robot Operating System (ROS) security research is currently in a preliminary state, with limited research on tools or models. Considering the trend of digitization of robotic systems, this lack of foundational knowledge increases the potential threat posed by security vulnerabilities in ROS. In this article, we present ROSploit, a new tool to assist further security research in ROS. ROSploit is a modular, two-pronged offensive tool covering both reconnaissance and exploitation of ROS systems, designed to assist researchers in testing exploits for ROS.
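A minimal sketch of the reconnaissance side of such tooling: probing hosts for an exposed ROS master, which by default listens on TCP port 11311. This is an illustrative stand-alone script, not code from ROSploit itself, and the host addresses are hypothetical:

import socket

def ros_master_exposed(host: str, port: int = 11311, timeout: float = 1.0) -> bool:
    """Return True if the host accepts TCP connections on the default ROS master port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:                       # refused, unreachable, or timed out
        return False

# for host in ("10.0.0.5", "10.0.0.6"):  # hypothetical lab addresses
#     print(host, ros_master_exposed(host))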
In cloud computing environments with many virtual machines, containers, and other systems, an epidemic of malware can be crippling and highly threatening to business processes. In this vision paper, we introduce a hierarchical approach to performing malware detection and analysis using several recent advances in machine learning on graphs, hypergraphs, and natural language. We analyze individual systems and their logs, inspecting and understanding their behavior with attentional sequence models. Given a feature representation of each system's logs obtained with this procedure, we construct an attributed network of the cloud with systems and other components as vertices and propose an analysis of malware with inductive graph and hypergraph learning models. With this foundation, we consider the multicloud case, in which multiple clouds with differing privacy requirements cooperate against the spread of malware, proposing the use of federated learning to perform inference and training while preserving privacy. Finally, we discuss several open problems that remain in defending cloud computing environments against malware, related to designing robust ecosystems, identifying cloud-specific optimization problems for response strategy, defining action spaces for malware containment and eradication, and developing priors and transfer learning tasks for machine learning models in this area.
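A minimal sketch of the federated-averaging step implied by the multicloud setting: each cloud trains a local copy of the detector on its own logs and only shares model weights, which a coordinator averages. Weighting by local sample count is the standard FedAvg choice; the client handles and update loop are illustrative assumptions:

import numpy as np

def fed_avg(client_weights: list[list[np.ndarray]], client_sizes: list[int]) -> list[np.ndarray]:
    """Average per-layer weights across clients, weighted by local dataset size."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# global_weights = fed_avg([cloud_a.get_weights(), cloud_b.get_weights()], [12000, 8000])
# Each cloud then loads global_weights and continues training on its private logs.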
Congestion diffusion resulting from coupling through resource competition is a typical form of failure propagation in network systems. Existing models of failure propagation mainly focus on coupling through direct physical connections between nodes, the most efficient paths, or dependence groups, while coupling through resource competition is ignored. In this paper, a model of network congestion diffusion with resource competition is proposed. By analyzing its similarity to resource competition in biomolecular networks, the model describing the dynamic change of biomolecule concentration based on the titration mechanism provides a reference for our model. The titration mechanism is then adapted to describe the dynamic change of link load in networks, and a novel congestion model is proposed. With this model, global congestion can be evaluated. Simulations show that network congestion with resource competition can be captured by our model.
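A minimal sketch of a titration-style load update for a single link: incoming demand adds load, while a finite, competed-for service resource drains load at a rate limited by how much of the resource the link actually wins. All rates and capacities are illustrative assumptions; the paper's full model operates on the whole network:

def step_load(load: float, demand: float, resource_share: float, capacity: float) -> float:
    """One discrete-time update of a link's load (congested as load approaches capacity)."""
    served = min(load + demand, resource_share)  # service bounded by the resource this link won
    new_load = load + demand - served
    return min(new_load, capacity)               # the queue cannot exceed the link's capacity

load = 0.0
for t in range(10):
    load = step_load(load, demand=3.0, resource_share=2.5, capacity=20.0)
print(load)  # load grows by 0.5 per step while demand persistently exceeds the won resource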
With the development of new technologies in the world, governments tend to communicate with citizens and businesses with the help of such technologies. Electronic government (e-government) is defined as the use of information technologies such as electronic networks, the Internet and mobile phones by organizations and state institutions in order to establish wide communication between citizens, businesses and different state institutions. The development of e-government starts with building a website to share information with users, which is considered the main infrastructure for further development. Website assessment is considered a way of improving service quality. Although different international studies have introduced various indexes for website assessment, they only cover some dimensions of a website. In this paper, the most important indexes for website quality assessment, based on an accurate review of previous studies, are "Web design", "Navigation", "Services", "Maintenance and Support", "Citizens Participation", "Information Quality", "Privacy and Security", "Responsiveness", and "Usability". Considering the mentioned indexes in designing a website facilitates user interaction with e-government websites.
The article analyzes the concept of "Resilience" in relation to the development of computing. The strategy for reacting to perturbations in this process can be based either on "harsh Resistance" or on "smarter Elasticity." Our "Models" are descriptive in defining the path of evolutionary development as structuring under the perturbations of the natural order, and they enable the analysis of the relationship among models, structures and factors of evolution. Among those, two features are critical: parallelism and "fuzziness", which to a large extent determine the rate of change of computing development, especially in critical applications. Both reversible and irreversible development processes related to elastic and resistant methods of problem solving are discussed. The sources of perturbations are located in the vicinity of the resource boundaries; they are related to the problem size growing with progress, combined with a lack of computational "checkability" of resources, i.e., data with inadequate models, methodologies and means. As a case study, the problem of hidden faults caused by this growth, the deficit of resources, and the limited checkability of digital circuits in critical applications is analyzed.
In this work a platform-aware model-driven engineering process for building component-based embedded software systems using annotated analysis models is described. The process is supported by a framework, called MICOBS, that allows working with different component technologies and integrating different tools that, independently of the component technology, enable the analysis of non-functional properties based on the principles of composability and compositionality. An actor, called Framework Architect, is responsible for this integration. Three other actors take a relevant part in the analysis process. The Component Provider supplies the components, while the Component Tester is in charge of their validation. The latter also feeds MICOBS with the annotated analysis models that characterize the extra-functional properties of the components for the different platforms on which they can be deployed. The Application Architect uses these components to build new systems, performing the trade-off between different alternatives. At this stage, and in order to verify that the final system meets the extra-functional requirements, the Application Architect uses the reports generated by the integrated analysis tools. This process has been used to support the validation and verification of the on-board application software for the Instrument Control Unit of the Energetic Particle Detector of the Solar Orbiter mission.
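A minimal sketch of the kind of compositional analysis such a framework enables: each component carries platform-specific annotations of extra-functional properties, and an application-level figure is composed from them and checked against a requirement. Component names, properties, and budgets are illustrative assumptions, not MICOBS data, and additive composition is only one simple analysis rule:

from dataclasses import dataclass

@dataclass
class ComponentAnnotation:
    name: str
    wcet_us: float      # worst-case execution time on the chosen platform, microseconds
    memory_kb: float    # static memory footprint

def compose_and_check(components: list[ComponentAnnotation],
                      wcet_budget_us: float, memory_budget_kb: float) -> bool:
    """Additively compose per-component annotations and verify the application-level budgets."""
    total_wcet = sum(c.wcet_us for c in components)
    total_mem = sum(c.memory_kb for c in components)
    return total_wcet <= wcet_budget_us and total_mem <= memory_budget_kb

# ok = compose_and_check([ComponentAnnotation("telemetry", 120.0, 64.0),
#                         ComponentAnnotation("epd_driver", 300.0, 128.0)],
#                        wcet_budget_us=500.0, memory_budget_kb=256.0)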