Biblio
During the development and expansion of the Internet of Things (IoT), the main challenges to be addressed are the heterogeneity, interoperability, scalability, flexibility and security of IoT applications. In this paper, we view the IoT as a large-scale distributed cyber-physical-social complex network and analyze the above challenges from that perspective. We then propose a distributed multi-agent architecture that unifies a range of different IoT applications by designing software-defined sensors, actuators and controllers. Furthermore, we analyze the proposed architecture and clarify why and how it can tackle the heterogeneity of IoT applications, enable them to interoperate with one another, make it efficient to introduce new applications, and enhance the flexibility and security of different applications. Finally, a smart-home use case with multiple applications is used to verify the feasibility of the proposed IoT architecture.
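As a rough illustration of the idea (the class and method names below are our own, not the paper's), a software-defined device hides heterogeneous hardware behind a uniform interface that per-application controller agents can share:

```python
# Illustrative sketch only: minimal software-defined device abstractions
# for a multi-agent IoT architecture (all names are assumptions).
from abc import ABC, abstractmethod
from typing import Any, Callable, Dict

class SoftwareDefinedSensor(ABC):
    """Wraps a heterogeneous physical sensor behind a uniform interface."""
    @abstractmethod
    def read(self) -> Dict[str, Any]: ...

class SoftwareDefinedActuator(ABC):
    """Wraps a physical actuator; applications only see act()."""
    @abstractmethod
    def act(self, command: Dict[str, Any]) -> None: ...

class ControllerAgent:
    """One agent per application; agents interoperate over shared readings."""
    def __init__(self, sensors, actuators):
        self.sensors, self.actuators = sensors, actuators

    def step(self, policy: Callable[[Dict[str, Any]], Dict[str, Any]]) -> None:
        readings = {type(s).__name__: s.read() for s in self.sensors}
        command = policy(readings)       # application-specific control logic
        for a in self.actuators:
            a.act(command)               # uniform actuation interface
```

Introducing a new application then amounts to adding one more agent with its own policy, rather than re-integrating device drivers.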
Over the past few years we have articulated a theory of 'encrypted computing', in which data remains in encrypted form while being worked on inside a processor, by virtue of a modified arithmetic. The last two years have seen research and development on a standards-compliant processor showing that near-conventional speeds are attainable via this approach. Benchmark performance with the US flagship AES-128 encryption and a 1 GHz clock is now equivalent to a 433 MHz classic Pentium, and most block encryptions fit in AES's place. This summary article details how a system based on the processor protects user data from being read or interfered with by the computer operator, for those computing paradigms that entail trust in data-oriented computation at remote locations where it may be accessible to powerful and dishonest insiders. We combine: (i) the processor that runs encrypted; (ii) a slightly modified conventional machine-code instruction set architecture with which security is achievable; and (iii) an 'obfuscating' compiler that takes advantage of its possibilities, forming a three-point system that provably provides cryptographic "semantic security" for user data against the operator and system insiders.
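A toy analogue of the principle (assumed purely for illustration; the real platform relies on a hardware-modified ALU and proper encryption, not a one-time pad): addition modulo 2^32 commutes with additive masking, so an untrusted party can add masked operands without ever seeing the plaintexts.

```python
# Toy analogue only, NOT the paper's scheme: additive masking mod 2^32
# shows how arithmetic can be carried out on data the operator cannot read.
import secrets

M = 2**32
def mask(x, k):   return (x + k) % M      # "encrypt" with additive pad k
def unmask(c, k): return (c - k) % M

k1, k2 = secrets.randbelow(M), secrets.randbelow(M)
a, b = 1234, 5678
c = (mask(a, k1) + mask(b, k2)) % M       # operator adds masked values only
assert unmask(c, (k1 + k2)) == a + b      # owner removes the combined pads
```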
A cognitive radio network (CRN) is regarded as an emerging technology for better spectrum efficiency, in which unlicensed secondary users (SUs) sense the RF spectrum to find idle channels and access them opportunistically without causing harmful interference to licensed primary users (PUs). However, RF spectrum sensing and sharing, along with the reconfigurable capabilities of SUs, bring severe security vulnerabilities to the network. In this paper, we analyze the physical-layer security (secrecy rates) of SUs in a CRN in the presence of eavesdroppers, jammers and PU emulators (PUEs), where SUs compete not only with jammers and eavesdroppers who try to reduce the SUs' secrecy rates, but also with PUEs who try to force SUs off their current channel by imitating the behavior of PUs. In addition, a legitimate SU competes with other SUs with a sharing attitude for dynamic spectrum access to gain a high secrecy rate, whereas malicious users (i.e., attackers) attempt to abuse the channels egotistically. The main contribution of this work is the design of a game-theoretic approach to maximize the utilities (proportional to secrecy rates) of SUs in the presence of eavesdroppers, jammers and PUEs. Furthermore, SUs use energy detection and cyclostationary feature detection, along with a location verification technique, to detect PUEs. As the proposed approach is generic and considers different attackers, it can be particularized to situations with eavesdroppers only, jammers only or PUEs only while evaluating the physical-layer security of SUs in a CRN. We evaluate the performance of the proposed approach using results obtained from simulations. The results show that the proposed approach outperforms existing methods.
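The utility rests on the standard physical-layer secrecy rate; a minimal sketch of that quantity follows (the paper's exact utility and game formulation are not restated here):

```python
# Sketch of the standard secrecy rate used as the SU utility
# (assumed form; weights and game dynamics are the paper's, not shown).
import math

def secrecy_rate(snr_link_db: float, snr_eve_db: float) -> float:
    """C_s = [log2(1+SNR_link) - log2(1+SNR_eve)]^+ in bits/s/Hz."""
    g_link = 10 ** (snr_link_db / 10)
    g_eve  = 10 ** (snr_eve_db / 10)
    return max(0.0, math.log2(1 + g_link) - math.log2(1 + g_eve))

# A jammer degrades the legitimate link's SINR, shrinking the utility:
print(secrecy_rate(20, 5))   # quiet channel
print(secrecy_rate(10, 5))   # same eavesdropper, jammed SU link
```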
We evaluated the support proposed by the RSO to graphically represent our EAM-ISSRM (Enterprise Architecture Management - Information System Security Risk Management) integrated model. The evaluation of the RSO visual notation was done at two different levels: completeness with regard to the EAM-ISSRM integrated model (Section III) and cognitive effectiveness, relying on the nine principles established by D. Moody ["The 'Physics' of Notations: Toward a Scientific Basis for Constructing Visual Notations in Software Engineering," IEEE Trans. Softw. Eng., vol. 35, no. 6, pp. 756-779, Nov. 2009] (Section IV). Regarding completeness, the coverage of the EAM-ISSRM integrated model by the RSO is complete apart from 'Event'. As discussed in Section III, this lack is negligible, and we can consider the RSO an appropriate notation to support the EAM-ISSRM integrated model from a completeness point of view. Regarding cognitive effectiveness, many gaps have been identified with regard to the nine principles established by Moody. Although no quantitative analysis has been performed to objectify this conclusion, the RSO cannot reasonably be considered an appropriate notation from a cognitive-effectiveness point of view, and there is room to propose a notation that is better in this respect. This paper focuses on assessing the RSO without suggesting improvements based on the conclusions drawn. As a consequence, our objective for future work is to propose a more cognitively effective visual notation for the EAM-ISSRM integrated model. The approach currently considered is to operationalize Moody's principles into concrete metrics and requirements, taking into account the needs and profile of the target group of our notation (information security risk managers) through persona development and user-experience maps. With such an approach, we will be able to make decisions on the necessary trade-offs in our visual syntax while taking a specific context into account. We also aim at validating our proposal(s) with the help of tools and approaches from cognitive psychology research applied to the HCI domain (e.g., eye tracking, heuristic evaluation, user-experience evaluation, etc.).
In the production of embedded devices, third-party libraries and development kits are frequently reused, so the same vulnerability often appears in more than one firmware image. Homology analysis is commonly used to detect this kind of vulnerability caused by code reuse or third-party component reuse; the most widely used methods are binary difference analysis, normalized compression distance, string feature matching and fuzzy hashing. In practice, however, we found that applying these methods individually yields unsatisfactory detection results with a high false-positive rate. Focusing on this problem, we analyzed the application scenarios and limitations of the four methods by combining different methods with different types of files, and our experiments show that such combinations of methods and files perform better in homology analysis.
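For concreteness, one of the four methods, normalized compression distance, is easy to sketch (the byte strings below are invented stand-ins for firmware files):

```python
# Hedged sketch: normalized compression distance (NCD), one of the four
# homology measures discussed, computed here with zlib as the compressor C.
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x,y) = (C(xy) - min(C(x),C(y))) / max(C(x),C(y)); ~0 = similar."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Invented stand-ins for two firmware images sharing a third-party library:
blob_a = b"libfoo-1.2 " * 400 + b"app-A glue code"
blob_b = b"libfoo-1.2 " * 400 + b"app-B glue code"
print(ncd(blob_a, blob_b))   # a low score suggests homologous (shared) code
```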
Summary form only given. Strong light-matter coupling has recently been successfully explored in the GHz and THz range [1] with on-chip platforms. New and intriguing quantum-optical phenomena have been predicted in the ultrastrong coupling regime [2], when the coupling strength Ω becomes comparable to the unperturbed frequency ω of the system. We recently proposed a new experimental platform in which we couple the inter-Landau-level transition of a high-mobility 2DEG to the highly subwavelength photonic mode of an LC meta-atom [3], showing a very large Ω/ωc = 0.87. Our system benefits from the collective enhancement of the light-matter coupling, which comes from the scaling Ω ∝ √n, where n is the number of optically active electrons. In our previous experiments [3] and in the literature [4] this number varies from 10⁴ to 10³ electrons per meta-atom. We now engineer a new cavity, resonant at 290 GHz, with an extremely reduced effective mode surface S_eff = 4 × 10⁻¹⁴ m² (FE simulations, CST), yielding large field enhancements above 1500 and allowing us to enter the few-electron (<100) regime. It consists of a complementary metasurface with two very sharp metallic tips separated by a 60 nm gap (Fig. 1(a, b)) on top of a single triangular quantum well. THz-TDS transmission experiments as a function of the applied magnetic field reveal strong anticrossing of the cavity mode with the linear cyclotron dispersion. Measurements for arrays of only 12 cavities are reported in Fig. 1(c). On the top horizontal axis we report the number of electrons occupying the topmost Landau level as a function of the magnetic field. At the anticrossing field of B = 0.73 T we measure approximately 60 electrons ultrastrongly coupled (Ω/ω …
Semiconductor design houses are increasingly dependent on third-party vendors to procure intellectual property (IP) and meet time-to-market constraints. However, these third-party IPs cannot be trusted, as hardware Trojans can be maliciously inserted into them by untrusted vendors. While different approaches have been proposed to detect Trojans in third-party IPs, their limitations have not been extensively studied. In this paper, we analyze the limitations of state-of-the-art Trojan detection techniques and demonstrate with experimental results how to defeat these detection mechanisms. We then propose a Trojan detection framework based on information flow security (IFS) verification. Our framework detects violations of IFS policies caused by Trojans without the need for white-box knowledge of the IP. We experimentally validate the efficacy of our proposed technique by accurately identifying Trojans in the trust-hub benchmarks. We also demonstrate that our technique does not share the limitations of previously proposed Trojan detection techniques.
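A miniature of the underlying idea (our own toy netlist and policy, not the framework's implementation): propagate taint from secret inputs through the design and flag any tainted observable output.

```python
# Minimal information-flow check on a hypothetical gate-level netlist:
# a Trojan gate "g1" routes a key bit toward an observable output.
netlist = {                       # net -> (gate type, fan-in nets); invented
    "g1": ("AND", ["key0", "trigger"]),
    "leak_out": ("OR", ["g1", "data_in"]),
}
secret_inputs = {"key0"}

def tainted(net: str) -> bool:
    """True if any secret input can flow to `net` through the netlist."""
    if net in secret_inputs:
        return True
    gate = netlist.get(net)       # None means a non-secret primary input
    return gate is not None and any(tainted(n) for n in gate[1])

# IFS policy: no secret may reach an observable output.
print("IFS violation" if tainted("leak_out") else "policy holds")
```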
Physically unclonable functions (PUFs) are used to uniquely identify electronic devices. Here, we introduce a hybrid silicon CMOS-nanotube PUF circuit that uses the variations of nanotube transistors to generate a random response. An analog silicon circuit subsequently converts the nanotube response into zero and one bits. We fabricate an array of nanotube transistors to study and model their device variability. The behavior of the hybrid CMOS-nanotube PUF is then simulated. The parameters of the analog circuit are tuned to achieve the desired normalized Hamming inter-distance of 0.5. The co-design of the nanotube array and the silicon CMOS is an attractive feature for increasing the immunity of the hybrid PUF against unauthorized duplication. The heterogeneous integration of nanotubes with silicon CMOS offers a new strategy for realizing security tokens that are strong, low-cost, and reliable.
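A short sketch of the uniqueness metric being tuned (the responses below are random stand-ins, not the simulated hybrid PUF's output):

```python
# Normalized Hamming inter-distance: the PUF uniqueness metric tuned to 0.5.
import itertools, random

def inter_distance(responses):
    """Mean fractional Hamming distance over all pairs of device responses."""
    pairs = list(itertools.combinations(responses, 2))
    n_bits = len(responses[0])
    return sum(sum(a != b for a, b in zip(r1, r2)) / n_bits
               for r1, r2 in pairs) / len(pairs)

# 20 hypothetical devices, each with a 128-bit response:
devices = [[random.randint(0, 1) for _ in range(128)] for _ in range(20)]
print(inter_distance(devices))   # ideally close to 0.5
```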
As demonstrated recently, Wireless Physical Layer Security (WPLS) has the potential to offer substantial advantages for key management on small, resource-constrained and therefore low-cost IoT devices, e.g., the widely used 8-bit 8051 MCU. In this paper, we present a WPLS testbed implementation for independent performance and security evaluations. The testbed is based on off-the-shelf hardware and utilizes the IEEE 802.15.4 communication standard for key extraction and secret-key-rate estimation in real time. The testbed can generically include multiple transceivers to emulate legitimate parties or eavesdroppers. We believe that with this testbed we provide a first step toward making experiment-based WPLS research results comparable. As an example, we present evaluation results of several test cases we performed; for further information we refer to https://pls.rub.de.
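A minimal sketch of one common WPLS key-extraction step, mean-threshold quantization of reciprocal channel measurements (an assumption for illustration; the testbed's exact quantizer is not specified above):

```python
# Mean-threshold quantization of RSSI probes into key bits (illustrative).
import statistics

def quantize(rssi_trace):
    """Map each RSSI sample to a bit: 1 above the trace mean, else 0."""
    mu = statistics.mean(rssi_trace)
    return [1 if s > mu else 0 for s in rssi_trace]

alice = [-71, -69, -75, -64, -70, -73, -62, -68]   # made-up channel probes
bob   = [-70, -69, -74, -65, -71, -72, -63, -69]   # reciprocal, slightly noisy
ka, kb = quantize(alice), quantize(bob)
agreement = sum(a == b for a, b in zip(ka, kb)) / len(ka)
print(ka, kb, agreement)   # residual disagreements are removed by
                           # information reconciliation in a full pipeline
```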
Gaussian random attacks that jointly minimize the amount of information obtained by the operator from the grid and the probability of attack detection are presented. The construction of the attack is posed as an optimization problem with a utility function that captures two effects: firstly, minimizing the mutual information between the measurements and the state variables; and secondly, minimizing the probability of attack detection via the Kullback-Leibler (KL) divergence between the distribution of the measurements with an attack and the distribution of the measurements without an attack. Additionally, a lower bound on the utility function achieved by attacks constructed with imperfect knowledge of the second-order statistics of the state variables is obtained. The performance of the attack construction using the sample covariance matrix of the state variables is numerically evaluated. These results are tested on the IEEE 30-bus test system.
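For a linear Gaussian observation model y_a = Hx + a + z with attack a ~ N(0, Σ_AA) and noise z ~ N(0, σ²I) (our notation, assumed for illustration; the paper's exact weighting of the two terms is not restated here), both utility terms admit standard closed forms:

```latex
% Standard closed forms under the assumed linear Gaussian model.
\begin{align}
  I(X; Y_a) &= \tfrac{1}{2}\log
    \frac{\det\!\left(H\Sigma_{XX}H^{\mathsf T} + \Sigma_{AA} + \sigma^2 I\right)}
         {\det\!\left(\Sigma_{AA} + \sigma^2 I\right)}, \\
  D\!\left(P_{Y_a}\,\middle\|\,P_{Y}\right) &= \tfrac{1}{2}\left(
    \operatorname{tr}\!\left(\Sigma_{Y}^{-1}\Sigma_{Y_a}\right)
    - \log\frac{\det\Sigma_{Y_a}}{\det\Sigma_{Y}} - n\right),
\end{align}
```

where Σ_Y = HΣ_XX Hᵀ + σ²I and Σ_Ya = Σ_Y + Σ_AA are the measurement covariances without and with the attack, and n is the number of measurements.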
Integer errors in C/C++ are caused by arithmetic operations yielding results that are unrepresentable in a certain type. They can lead to serious safety and security issues. Due to the complicated semantics of C/C++ integers, integer errors are widely harbored in real-world programs, and repairing them is error-prone even for experts. An automatic tool is desired that can 1) automatically generate fixes which assist developers in correcting the buggy code, and 2) provide sufficient hints to help developers review the generated fixes and better understand integer types in C/C++. In this paper, we present IntPTI, a tool that implements the desired functionalities for C programs. IntPTI infers appropriate types for variables and expressions to eliminate representation issues, and then utilizes the derived types with fix patterns codified from successful human-written patches. IntPTI provides a user-friendly web interface which allows users to review and manage the fixes. We evaluate IntPTI on 7 real-world projects, and the results show its competitive repair accuracy and its scalability to large code bases. The demo video for IntPTI is available at: https://youtu.be/9Tgd4A_FgZM.
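The flavor of such a fix can be modeled in a few lines (the real tool emits C; this Python sketch only mirrors the familiar pre-condition-check pattern, not IntPTI's actual output):

```python
# Model of a representability check for 32-bit signed addition, mirroring
# the classic C fix pattern `if (x > INT_MAX - y) handle_error();`.
INT32_MAX, INT32_MIN = 2**31 - 1, -2**31

def checked_add_i32(x: int, y: int) -> int:
    """Return x + y, or raise if the result would not fit in int32."""
    r = x + y
    if r > INT32_MAX or r < INT32_MIN:
        raise OverflowError("int32 addition would wrap")
    return r

try:
    checked_add_i32(2**30, 2**30)     # 2147483648 > INT32_MAX
except OverflowError as e:
    print(e)
```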
The Internet of Things (IoT) is characterized by heterogeneous devices that interact with each other on a collaborative basis to fulfill a common goal. In this scenario, some of the deployed devices are expected to be constrained in terms of memory usage, power consumption and processing resources. To address the specific properties and constraints of such networks, a complete stack of standardized protocols has been developed, among them the Routing Protocol for Low-Power and Lossy Networks (RPL). However, this protocol is exposed to a large variety of attacks from inside the network itself. To address this issue, this paper focuses on the design and integration of a novel link-reliable and trust-aware model into the RPL protocol. Our approach aims to ensure trust among entities and to provide QoS guarantees during the construction and maintenance of the network routing topology. Our model targets both node and link trust and follows a multidimensional approach to enable an accurate trust-value computation for IoT entities. To prove the efficiency of our proposal, the model has been implemented and tested successfully within an IoT environment, and a set of experiments has been conducted to show the high accuracy level of our system.
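As a hedged illustration of a multidimensional trust score (the dimensions and weights below are our assumptions, not the paper's formula):

```python
# Illustrative multidimensional trust score combining link reliability
# (ETX-based) with node-behaviour dimensions; weights are assumptions.
def trust(link_etx: float, fwd_ratio: float, honesty: float,
          w=(0.4, 0.4, 0.2)) -> float:
    """Weighted sum of normalized trust dimensions, each in [0, 1]."""
    link_quality = 1.0 / max(link_etx, 1.0)   # ETX >= 1; lower is better
    dims = (link_quality, fwd_ratio, honesty)
    return sum(w_i * d for w_i, d in zip(w, dims))

# Parent selection in RPL would then prefer the highest-trust neighbour:
neighbours = {"n1": trust(1.2, 0.98, 0.9), "n2": trust(1.0, 0.60, 0.7)}
print(max(neighbours, key=neighbours.get))   # -> "n1"
```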
The online portion of modern life is growing at an astonishing rate, with the consequence that more of the user's critical information is stored online. This poses an immediate threat to the privacy and security of the user's data. This work covers the increasing dangers and security risks of adware, adware injection, and malware injection. These programs increase in direct proportion to the number of users on the Internet. Each of these programs presents an imminent threat to a user's privacy and sensitive information whenever they use the Internet. We discuss how current ad blockers are not the actual solution to these threats, but rather a premise for our work. Current ad-blocking tools can be discovered by web servers, which often forces suppression of the tool. Suppressing the tool creates vulnerabilities in a user's system, but even when the tool is active the system remains at risk: an ad-blocking tool may still let adware content through. Our solution to these contemporary threats is our tool, MalFire.
Software Defined Networking (SDN) has proved to be a promising approach for creating next-generation software-based network ecosystems. It has provided us with centralized network provisioning, a holistic management plane and a well-defined level of abstraction. At the same time, however, it brings forth new security and management challenges. Research in the field of SDN is primarily focused on reconfiguration, forwarding and network management issues, but in recent times interest has moved to tackling security and maintenance issues. This work provides a means to mitigate security challenges in an SDN environment from a DDoS-attack point of view. The paper introduces a multi-agent-based intrusion prevention and mitigation architecture for SDN, allowing networks to govern their behavior and take appropriate measures when under attack. The architecture is evaluated against filter-based intrusion prevention architectures to measure efficiency and resilience against DDoS attacks and false-policy-based attacks.
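A toy of a single detection agent's logic (the threshold, names and controller API below are illustrative assumptions, not the paper's design):

```python
# Toy per-window rate detector feeding a mitigation agent (illustrative).
from collections import Counter

THRESHOLD = 1000   # packets per window; an assumed tuning parameter

def detect(window_packets):
    """window_packets: iterable of source-IP strings seen in one window."""
    counts = Counter(window_packets)
    return [src for src, n in counts.items() if n > THRESHOLD]

def mitigate(controller, offenders):
    """Hand offenders to the controller; install_drop_rule is hypothetical."""
    for src in offenders:
        controller.install_drop_rule(src)   # e.g., push an OpenFlow drop rule
```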
This paper introduces a hardware Trojan detection method using a chip ID generated from the Relative Time-Delays (RTD) of sensor chains; the effectiveness of RTD is verified by post-layout simulations. The rank order of the sensor-chain time-delays changes in a Trojan-inserted chip. RTD is an accurate approach targeting all kinds of Trojans, since it is based on the relative relationship between the time-delays rather than their absolute values, which are hard to measure and vary with the fabrication process. RTD needs no golden chip, because the relative values do not change in most situations; thus the genuine ID can be generated by a simulator. The sensor chains can be inserted into a layout utilizing unused spaces, so RTD is a low-cost solution. A Trojan of 4x minimum-size NMOS transistors is placed at different locations on the chip, and the behavior of the chip is obtained using transient post-layout simulation. All the Trojans are detected and located, verifying the effectiveness of RTD.
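The rank test at the heart of RTD is simple to sketch (the delay values below are invented):

```python
# Compare the rank order of measured chain delays against the
# simulator-generated genuine ID; a mismatch points to a Trojan.
def rank_id(delays):
    """Chain indices sorted by delay: the chip's relative-time-delay ID."""
    return tuple(sorted(range(len(delays)), key=lambda i: delays[i]))

golden   = rank_id([102.1, 98.7, 110.4, 105.0])   # from simulation (invented)
measured = rank_id([104.0, 100.2, 109.1, 113.6])  # Trojan load near chain 3
print("Trojan suspected" if measured != golden else "ID matches")
```

Because only the ordering is compared, a global process shift that slows every chain equally leaves the ID intact, which is why no golden chip is needed.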
Deep learning techniques have demonstrated the ability to perform a variety of object recognition tasks using visible imager data; however, deep learning has not been implemented as a means to autonomously detect and assess targets of interest in a physical security system. We demonstrate the use of transfer learning on a convolutional neural network (CNN) to significantly reduce training time while keeping detection accuracy high for targets relevant to physical security. Unlike many detection algorithms employed by video analytics within physical security systems, this method does not rely on temporal data to construct a background scene; targets of interest can halt motion indefinitely and still be detected by the implemented CNN. A key advantage of using deep learning is the ability of a network to improve over time: periodic retraining can lead to better detection and higher confidence rates. We investigate training-data size versus CNN test accuracy using physical security video data. Due to the large number of visible imagers, the significant volume of data collected daily, and the currently deployed human-in-the-loop ground-truth data, physical security systems present a unique environment that is well suited for analysis via CNNs. This could lead to the creation of an algorithmic element that reduces human burden and decreases the number of human-analyzed nuisance alarms.
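A minimal transfer-learning sketch (the framework and backbone choice are ours; the paper does not name them): freeze a pretrained CNN and retrain only a new classification head on security-camera frames.

```python
# PyTorch transfer-learning sketch: pretrained backbone, new head.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                     # keep pretrained features fixed
model.fc = nn.Linear(model.fc.in_features, 2)   # e.g., target vs. nuisance

# During periodic retraining, only model.fc's parameters go to the optimizer,
# which is what cuts training time relative to training from scratch.
```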
As an extension of cloud computing, fog computing is proving increasingly useful. Fog computing was introduced to overcome the shortcomings of the cloud computing paradigm in handling the massive amount of traffic caused by the enormous number of Internet of Things devices being connected to the Internet on a daily basis. Despite its advantages, the fog architecture introduces new security and privacy threats that need to be studied and solved as soon as possible. In this work, we explore two privacy issues posed by the fog computing architecture and define privacy challenges according to them. The first challenge relates to the fog's design purposes of reducing latency and improving bandwidth, which existing privacy-preserving methods violate. The other challenge relates to the proximity of fog nodes to end-users or IoT devices. We discuss the importance of addressing these challenges by putting them in the context of real-life scenarios. Finally, we propose a privacy-preserving fog computing paradigm that solves these challenges, and we assess the security and efficiency of our solution.
Dual Connectivity (DC) is one of the key technologies standardized in Release 12 of the 3GPP specifications for the Long Term Evolution (LTE) network. It attempts to increase per-user throughput by allowing the user equipment (UE) to maintain connections with the MeNB (master eNB) and SeNB (secondary eNB) simultaneously, which are inter-connected via non-ideal backhaul. In this paper, we focus on one of the use cases of DC whereby the downlink U-plane data is split at the MeNB and transmitted to the UE via the associated MeNB and SeNB concurrently. In this case, an out-of-order packet delivery problem may occur at the UE due to the delay over the non-ideal backhaul link, as well as the dynamics of channel conditions over the MeNB-UE and SeNB-UE links, which introduces extra delay for re-ordering the packets. As a solution, we propose to adopt the RaptorQ FEC code to encode the source data at the MeNB; the encoded symbols are then transmitted separately through the MeNB and SeNB. The out-of-order problem can be effectively eliminated, since the UE can decode the original data as long as it receives enough encoded symbols, regardless of which link they arrive on. We present a detailed protocol design for the RaptorQ-based concurrent transmission scheme, and simulation results are provided to illustrate the performance of the proposed scheme.
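RaptorQ itself is intricate, but the property exploited here is shared by rateless codes generally and can be illustrated with random linear coding over GF(2) (a conceptual analogue, not the actual RaptorQ encoder): any sufficiently large set of received symbols decodes, so the link and order of arrival are irrelevant.

```python
# Conceptual analogue of rateless decoding: the UE can decode once the
# coding vectors of the symbols it holds reach full rank, no matter
# which eNB delivered them or in what order.
import random
random.seed(7)

def gf2_rank(vectors):
    """Rank over GF(2) of row vectors given as K-bit integers (XOR basis)."""
    basis = {}
    for v in vectors:
        while v:
            msb = v.bit_length() - 1
            if msb not in basis:
                basis[msb] = v
                break
            v ^= basis[msb]
    return len(basis)

K = 16                                                  # source symbols
coding_vecs = [random.getrandbits(K) for _ in range(3 * K)]  # one per symbol
received = random.sample(coding_vecs, K + 4)            # any path, any order
r = gf2_rank(received)
print("decodable" if r == K else "need a few more symbols")
```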
We address security and trust in the context of a commercial IP camera. We take a hands-on approach, as we not only define abstract vulnerabilities, but we actually implement the attacks on a real camera. We then discuss the nature of the attacks and the root cause; we propose a formal model of trust that can be used to address the vulnerabilities by explicitly constraining compositionality for trust relationships.
The security of control systems has become a new and important field of research, since malicious attacks on control systems have indeed occurred, including Stuxnet (discovered in 2010) and the 2003 Northeast electrical grid blackout. Attacks on the sensors and/or actuators of control systems cause malfunction, instability, and even system destruction. The impact of an attack may differ depending on which instrumentation (sensors and/or actuators) is attacked. In particular, for control systems with multiple sensors, an attack on each sensor may have a different impact, i.e., an attack on some sensors leads to greater damage to the system than on others. To investigate this, we consider sensor bias injection attacks on linear control systems equipped with an anomaly detector, and quantify the maximum impact of an attack on sensors while the attack remains undetected. We then introduce a notion of sensor security index for linear dynamic systems to quantify the vulnerability under sensor attacks. A method of reducing system vulnerability using the sensor security index is also discussed.
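As a generic illustration of such an index (our notation, not necessarily the paper's exact program), one can define for sensor i the largest impact an undetected bias can cause:

```latex
% Generic sensor security index (illustrative formulation, notation assumed):
% largest state deviation a bias a_i on sensor i can cause while the anomaly
% detector's residual statistic stays below its alarm threshold \tau.
\begin{equation}
  \gamma_i = \sup_{a_i}\ \left\| x_a(T) - x(T) \right\|_2
  \quad \text{subject to} \quad
  \sum_{k=0}^{T} \left\| r_a(k) \right\|_2^2 \le \tau ,
\end{equation}
```

where x_a and r_a denote the state and detector residual under attack. A small γ_i marks a sensor whose compromise matters little, while a large γ_i flags a vulnerable sensor worth hardening first.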
Detecting software security vulnerabilities and distinguishing vulnerable from non-vulnerable code is anything but simple. Most of the time, vulnerabilities remain undisclosed until they are exposed, for instance, by an attack during the software operational phase. Software metrics are widely used indicators of software quality, but the question is whether they can be used to distinguish vulnerable software units from non-vulnerable ones during development. In this paper, we perform an exploratory study on software metrics, their interdependency, and their relation to security vulnerabilities. We aim at understanding: i) the correlation between software architectural characteristics, represented in the form of software metrics, and the number of vulnerabilities; and ii) which metrics are most informative and discriminative for identifying vulnerable units of code. To achieve these goals, we use, respectively, correlation coefficients and heuristic search techniques. Our analysis is carried out on a dataset that includes software metrics and reported security vulnerabilities, exposed by security attacks, for all functions, classes, and files of five widely used projects. Results show: i) a strong correlation between several project-level metrics and the number of vulnerabilities, and ii) the possibility of using a group of metrics, at both file and function levels, to distinguish vulnerable and non-vulnerable code with a high level of accuracy.
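Step i) can be sketched in a few lines (the metric values and vulnerability counts below are invented for illustration; the paper's dataset and choice of correlation coefficient may differ):

```python
# Rank correlation between one software metric and per-file vulnerability
# counts, as in step i) of the study (all data invented).
from scipy.stats import spearmanr

cyclomatic = [4, 12, 7, 25, 3, 18, 9]     # metric value per file
vulns      = [0,  2, 1,  5, 0,  3, 1]     # reported vulnerabilities per file

rho, p = spearmanr(cyclomatic, vulns)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")  # a strong rho motivates step ii),
                                             # the heuristic metric-group search
```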