Biblio
Wearable and mobile medical devices provide efficient, comfortable, and economical health monitoring, with a wide range of applications from daily to clinical scenarios; as a result, health data security becomes a critically important issue. The electrocardiogram (ECG) has proven to be a promising biometric for human recognition over the past decade. Unlike conventional authentication methods using passwords, fingerprints, face, etc., the ECG signal cannot easily be intercepted or duplicated, and it enables continuous identification. However, many of the algorithms developed in previous studies require long ECG recordings for authentication and are therefore unsuitable for practical application. In this work, we introduce a two-phase authentication scheme using artificial neural network (NN) models. The algorithm enables fast authentication within only 3 seconds while achieving reasonable recognition performance. We test the proposed method in a controlled laboratory experiment with 50 subjects, collecting finger ECG signals with a mobile device at different times and in different physical states. In the first stage, a ``General'' NN model constructed from the cohort's data is used for preliminary screening, while in the second stage ``Personal'' NN models constructed from a single individual's data are applied for fine-grained identification. The algorithm is tested on the whole data set and on subsets of different sizes (5, 10, 20, 30, and 40 subjects). The results show that the proposed method is feasible and reliable for individual authentication, achieving average False Acceptance Rate (FAR) and False Rejection Rate (FRR) below 10% on the whole data set.
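A minimal sketch of how such a two-stage scheme could be assembled, assuming feature vectors have already been extracted from the 3-second finger-ECG segments and using scikit-learn MLPs as stand-ins for the NN models (architectures, features, and thresholds here are illustrative, not the authors'):

```python
# Hypothetical sketch of the two-stage ECG authentication flow described above.
# X holds pre-extracted features of 3-second ECG segments, y holds subject IDs.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_subjects, n_features = 50, 32
X = rng.normal(size=(n_subjects * 20, n_features))   # placeholder feature vectors
y = np.repeat(np.arange(n_subjects), 20)             # subject labels 0..49

# Stage 1: "General" model trained on the whole cohort for preliminary screening.
general = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)

# Stage 2: one "Personal" model per subject (genuine vs. impostor).
personal = {
    s: MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, (y == s).astype(int))
    for s in range(n_subjects)
}

def authenticate(segment, claimed_id, screen_thresh=0.5, accept_thresh=0.5):
    """Accept the claimed identity only if both stages agree."""
    seg = segment.reshape(1, -1)
    if general.predict_proba(seg)[0, claimed_id] < screen_thresh:
        return False                                   # rejected at preliminary screening
    return personal[claimed_id].predict_proba(seg)[0, 1] >= accept_thresh

print(authenticate(X[0], claimed_id=0))
```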
Semiconductor design houses are increasingly dependent on third-party vendors to procure intellectual property (IP) and meet time-to-market constraints. However, these third-party IPs cannot be trusted, as hardware Trojans can be maliciously inserted into them by untrusted vendors. While different approaches have been proposed to detect Trojans in third-party IPs, their limitations have not been extensively studied. In this paper, we analyze the limitations of state-of-the-art Trojan detection techniques and demonstrate with experimental results how to defeat these detection mechanisms. We then propose a Trojan detection framework based on information flow security (IFS) verification. Our framework detects violations of IFS policies caused by Trojans without requiring white-box knowledge of the IP. We experimentally validate the efficacy of the proposed technique by accurately identifying Trojans in the Trust-Hub benchmarks. We also demonstrate that our technique does not share the limitations of previously proposed Trojan detection techniques.
Integer errors in C/C++ are caused by arithmetic operations whose results cannot be represented in a certain type. They can lead to serious safety and security issues. Due to the complicated semantics of C/C++ integers, integer errors are widespread in real-world programs, and repairing them is error-prone even for experts. An automatic tool is desired that 1) automatically generates fixes to help developers correct the buggy code, and 2) provides sufficient hints to help developers review the generated fixes and better understand integer types in C/C++. In this paper, we present IntPTI, a tool that implements the desired functionalities for C programs. IntPTI infers appropriate types for variables and expressions to eliminate representation issues, and then applies the derived types using fix patterns codified from successful human-written patches. IntPTI provides a user-friendly web interface that allows users to review and manage the fixes. We evaluate IntPTI on 7 real-world projects; the results show its competitive repair accuracy and its scalability on large code bases. The demo video for IntPTI is available at: https://youtu.be/9Tgd4A\_FgZM.
Cooperative spectrum sensing is often necessary in cognitive radio systems to localize a transmitter by fusing the measurements from multiple sensing radios. However, revealing spectrum sensing information generally also leaks information about the location of the radio that made the measurements. We propose a protocol for performing cooperative spectrum sensing while preserving the privacy of the sensing radios. In this protocol, radios fuse sensing information through a distributed particle filter based on a tree structure. All sensing information is encrypted using public-key cryptography, and one of the radios serves as an anonymizer, whose role is to break the connection between the sensing radios and the public keys they use. We consider a semi-honest (honest-but-curious) adversary model in which there is at most a single adversary that is internal to the sensing network and complies with the specified protocol but wishes to determine information about the other participants. Under this scenario, an adversary may learn the sensing information of some of the radios, but it has no way to tie that information to a particular radio's identity. We test the performance of the proposed distributed, tree-based particle filter using physical measurements of FM broadcast stations.
Detecting software security vulnerabilities and distinguishing vulnerable from non-vulnerable code is anything but simple. Most of the time, vulnerabilities remain undisclosed until they are exposed, for instance, by an attack during the software's operational phase. Software metrics are widely used indicators of software quality, but the question is whether they can be used to distinguish vulnerable software units from non-vulnerable ones during development. In this paper, we perform an exploratory study on software metrics, their interdependency, and their relation to security vulnerabilities. We aim to understand: i) the correlation between software architectural characteristics, represented in the form of software metrics, and the number of vulnerabilities; and ii) which metrics are the most informative and discriminative for identifying vulnerable units of code. To achieve these goals, we use, respectively, correlation coefficients and heuristic search techniques. Our analysis is carried out on a dataset that includes software metrics and reported security vulnerabilities, exposed by security attacks, for all functions, classes, and files of five widely used projects. Results show: i) a strong correlation between several project-level metrics and the number of vulnerabilities, and ii) the possibility of using a group of metrics, at both file and function levels, to distinguish vulnerable and non-vulnerable code with a high level of accuracy.
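As an illustration of the correlation part of such an analysis, the sketch below computes Spearman correlations between each metric and the vulnerability count, assuming the metrics and counts are available in a pandas DataFrame (the column names are hypothetical):

```python
# Rank software metrics by their Spearman correlation with vulnerability counts.
import pandas as pd
from scipy.stats import spearmanr

def metric_vulnerability_correlation(df, vuln_col="n_vulnerabilities"):
    """df: one row per unit (file/class/function), metric columns plus a count column."""
    rows = []
    for metric in (c for c in df.columns if c != vuln_col):
        rho, p_value = spearmanr(df[metric], df[vuln_col])
        rows.append({"metric": metric, "rho": rho, "p_value": p_value})
    return pd.DataFrame(rows).sort_values("rho", ascending=False)
```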
This paper presents a true random number generator that exploits the subthreshold jitter of events propagating in a self-timed ring and in an inverter-based ring oscillator. The design was implemented in a 180 nm CMOS flash process. The devices provide high-quality random bit sequences that pass the FIPS 140-2 and NIST SP 800-22 statistical tests, which guarantees uniform distribution and unpredictability thanks to the physics-based entropy source.
Conventional cyber defenses require continual maintenance: virus, firmware, and software updates; costly functional impact tests; and dedicated staff within a security operations center. The conventional defenses require access to external sources for the latest updates. A whitelisted system, in contrast, is ideally one that can sustain itself without external inputs. Cyber-Physical Systems (CPS) have unique traits: digital commands are physically observable and verifiable, and the possible combinations of commands are limited and finite. These CPS traits, combined with a trust anchor to secure an unclonable digital identity (i.e., digitally unclonable function [DUF] - Patent Application \#15/183,454; CodeLock), offer an excellent opportunity to explore defenses built on a whitelisting approach called “Trustworthy Design Architecture (TDA).” Significant research challenges exist in defining the physically verifiable whitelists as well as the criteria for cyber-physical traits that can be used as the unclonable identity. One goal of the project is to identify a set of physical and/or digital characteristics that can uniquely identify an endpoint. The measurements must have the properties of being reliable, reproducible, and trustworthy. Given that adversaries naturally evolve with any defense, the adversary will have the goal of disrupting or spoofing this process. To protect against such disruptions, we provide a unique system engineering technique that, when applied to CPSs (e.g., nuclear processing facilities, critical infrastructures), sustains a secure operational state without ever needing external information or active inputs from cybersecurity subject-matter experts (e.g., virus updates, IDS scans, patch management, vulnerability updates). We do this by eliminating system dependencies on external sources for protection. Instead, all internal communication is actively sealed and protected with integrity, authenticity, and assurance checks that only cyber identities bound to the physical component can deliver. As CPSs continue to advance (e.g., IoT devices, drones, ICSs), resilient, maintenance-free solutions are needed to neutralize or reduce cyber risks. TDA is a conceptual system engineering framework specifically designed to address cyber-physical systems that can potentially be maintained and operated without the persistent need or demand for vulnerability or security patch updates.
Training a feed-forward network for fast neural style transfer of images has proven successful, but the naive extension of processing videos frame by frame is prone to producing flickering results. We propose the first end-to-end network for online video style transfer, which generates temporally coherent stylized video sequences in near real time. Two key ideas are an efficient network that incorporates short-term coherence, and the propagation of short-term coherence to long-term coherence, which ensures consistency over longer periods of time. Our network can incorporate different image stylization networks and clearly outperforms the per-frame baseline both qualitatively and quantitatively. Moreover, it achieves visually comparable coherence to optimization-based video style transfer while being three orders of magnitude faster.
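For intuition, short-term coherence of this kind is commonly measured as a masked difference between the current stylized frame and the previous stylized frame warped by optical flow; the sketch below is a generic simplification of such a term, not the paper's exact loss:

```python
# Generic short-term temporal-coherence term: penalize changes between the current
# stylized frame and the flow-warped previous stylized frame, ignoring occluded pixels.
import numpy as np

def short_term_coherence_loss(stylized_t, warped_stylized_prev, mask):
    """stylized frames: (H, W, C) arrays; mask: (H, W), 1 where flow is valid, 0 where occluded."""
    per_pixel = ((stylized_t - warped_stylized_prev) ** 2).sum(axis=-1)  # squared error per pixel
    return float((mask * per_pixel).sum() / max(mask.sum(), 1.0))        # mean over valid pixels
```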
Thanks to advances in vehicle technologies, Unmanned Aerial Vehicles (UAVs) are now widely used in practical services and applications that positively affect people's daily lives. In particular, multiple heterogeneous UAVs with different capabilities should be considered, since UAVs play an important role in Internet of Things (IoT) environments in which heterogeneity and multi-domain operation are indispensable. The concept of barrier coverage has also proven promising for surveillance and security. In this paper, we present collision-free reinforced barriers formed by heterogeneous UAVs to support multi-domain operation. We then define the problem of minimizing the maximum movement of the UAVs under the condition that collision-freedom among UAVs is guaranteed while they travel from their current positions to the specific locations that form the reinforced barriers within the multi-domain area. Because the defined problem depends on how UAVs are placed on the barriers, we develop a novel approach that provides collision-free movement as well as the creation of virtual lines in the multi-domain area. Furthermore, we discuss future research topics that must be handled carefully for barrier coverage by heterogeneous UAVs.
Despite the widespread use of commercial anti-virus products, the number of malicious files detected on home and corporate computers continues to increase at a significant rate. Recently, anti-virus companies have started investing in machine learning solutions to augment signatures manually designed by analysts. The determination for a malicious file is often represented as a hierarchical structure consisting of a type (e.g., Worm, Backdoor), a platform (e.g., Win32, Win64), a family (e.g., Rbot, Rugrat), and a family variant (e.g., A, B). While there has been substantial research in automated malware classification, this hierarchical structure, which can provide additional information to the classification models, has been ignored. In this paper, we propose the novel idea of employing hierarchical learning algorithms for automated classification of malicious files and study its performance. To the best of our knowledge, this is the first research effort that incorporates the hierarchical structure of the malware label in automated classification, and in the security domain in general. It is important to note that our method does not require any additional effort from analysts, because they already assign these hierarchical labels today. Our empirical results on a real-world, industrial-scale malware dataset of 3.6 million files demonstrate that incorporating the label hierarchy achieves a significant 33.1% reduction in binary error rate compared to a non-hierarchical classifier traditionally used in such problems.
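One simple way to exploit such a label hierarchy is top-down chaining, where each level's prediction is appended as an extra feature for the next, finer level; the sketch below is only an illustration of this idea, not necessarily the algorithm used in the paper:

```python
# Top-down hierarchical classification over the label levels:
# type -> platform -> family -> variant.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import LabelEncoder

def fit_hierarchy(X, label_levels):
    """label_levels: list of 1-D string arrays, ordered from coarse to fine."""
    models, encoders, feats = [], [], X
    for labels in label_levels:
        enc = LabelEncoder().fit(labels)
        clf = RandomForestClassifier(n_estimators=100).fit(feats, enc.transform(labels))
        models.append(clf)
        encoders.append(enc)
        # Condition the next level on this level's prediction.
        feats = np.column_stack([feats, clf.predict(feats)])
    return models, encoders

def predict_hierarchy(models, encoders, X):
    feats, out = X, []
    for clf, enc in zip(models, encoders):
        pred = clf.predict(feats)
        out.append(enc.inverse_transform(pred))       # labels at this level
        feats = np.column_stack([feats, pred])
    return out                                        # one label array per level
```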
Symmetric block ciphers, a core element for building cryptographic communication systems and protocols, are used to provide message confidentiality, authentication, and integrity. Various limitations in hardware and software resources, especially in the terminal devices used in mobile communications, affect the selection of an appropriate cryptosystem and its parameters. In this paper, implementations of three symmetric ciphers (DES, 3DES, AES) used in different modes of operation are analyzed on the Android platform. The cryptosystems' performance is analyzed in different scenarios using several variable parameters: cipher, key size, plaintext size, and number of threads. The influence of parallelization supported by multi-core CPUs on cryptosystem performance is also analyzed. Finally, some conclusions about parameter selection for optimal efficiency are given.
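A desktop-Python analogue of such a measurement loop, using PyCryptodome rather than the Android crypto APIs targeted by the paper, could look like the sketch below (key sizes, mode, and plaintext size are illustrative):

```python
# Time encryption for each cipher/key/plaintext combination (PyCryptodome).
import time
from Crypto.Cipher import AES, DES, DES3
from Crypto.Random import get_random_bytes

def time_encrypt(cipher_factory, plaintext, repeats=10):
    best = float("inf")
    for _ in range(repeats):
        cipher = cipher_factory()                     # fresh object: CBC mode keeps state
        t0 = time.perf_counter()
        cipher.encrypt(plaintext)
        best = min(best, time.perf_counter() - t0)
    return best

plaintext = get_random_bytes(1024 * 1024)             # 1 MiB, multiple of both block sizes
factories = {
    "AES-128/CBC": lambda: AES.new(get_random_bytes(16), AES.MODE_CBC, get_random_bytes(16)),
    "3DES/CBC":    lambda: DES3.new(DES3.adjust_key_parity(get_random_bytes(24)),
                                    DES3.MODE_CBC, get_random_bytes(8)),
    "DES/CBC":     lambda: DES.new(get_random_bytes(8), DES.MODE_CBC, get_random_bytes(8)),
}
for name, factory in factories.items():
    print(f"{name}: {time_encrypt(factory, plaintext):.4f} s per MiB")
```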
Nowadays, malware has become a serious threat to the digitization of the world due to the emergence of new and complex malware every day. As a result, traditional signature-based methods for malware detection are effectively becoming obsolete. The effectiveness of machine learning models for detecting malicious files has been demonstrated by various studies. In this paper, a framework is developed to detect and classify different files (e.g., exe, pdf, php) as benign or malicious using a two-level classifier, namely Macro (for detection of malware) and Micro (for classification of malware files as Trojan, Spyware, Adware, etc.). Cuckoo Sandbox is used to generate static and dynamic analysis reports by executing files in a virtual environment. In addition, a novel model is developed for extracting features based on static, behavioral, and network analysis, using the analysis reports generated by Cuckoo Sandbox. The Weka framework is used to build the machine learning models from the training datasets. Experimental results using the proposed framework show a high detection rate, with accuracies of 100% using the J48 decision tree model, 99% using SMO (Sequential Minimal Optimization), and 97% using Random Forest. They also show an effective classification rate, with accuracies of 100% using the J48 decision tree, 91% using SMO, and 66% using Random Forest. These results are used for detecting and classifying unknown files as benign or malicious.
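A minimal sketch of the two-level (Macro/Micro) flow, using scikit-learn decision trees as stand-ins for the Weka J48 models and assuming feature vectors have already been extracted from the Cuckoo Sandbox reports:

```python
# Macro classifier: benign vs. malicious.  Micro classifier: malware family.
from sklearn.tree import DecisionTreeClassifier

def train_two_level(X, is_malicious, family):
    """X: feature matrix; is_malicious: 0/1 labels; family: family labels for malware rows."""
    macro = DecisionTreeClassifier().fit(X, is_malicious)
    micro = DecisionTreeClassifier().fit(X[is_malicious == 1], family[is_malicious == 1])
    return macro, micro

def classify(macro, micro, x):
    x = x.reshape(1, -1)
    if macro.predict(x)[0] == 0:
        return "benign"
    return micro.predict(x)[0]        # e.g. "Trojan", "Spyware", "Adware"
```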
In recent years, Moving Target Defense (MTD) has emerged as a potential game changer in the security landscape, due to its potential to create asymmetric uncertainty that favors the defender. Many different MTD techniques have since been proposed, each addressing an often very specific set of attack vectors. Despite the huge progress made in this area, there are still critical gaps with respect to the analysis and quantification of the costs and benefits of deploying MTD techniques. In fact, common metrics to assess the performance of these techniques are still lacking, and most works tend to assess performance in different and often incompatible ways. This paper addresses these gaps by proposing a quantitative analytic model for assessing the resource availability and performance of MTDs, and a method for determining the highest possible reconfiguration rate, and thus the smallest probability of attacker success, that meets performance and stability constraints. Finally, we present an experimental validation of the proposed approach.
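As a purely illustrative, back-of-the-envelope version of this trade-off (not the paper's analytic model): if each reconfiguration makes the service unavailable for a fixed downtime, and an attack succeeds only when it completes before the next reconfiguration, the highest admissible rate and the resulting attacker success probability can be computed as follows:

```python
# Toy model only: availability ~ 1 - rate * downtime, attack must fit in one interval.
def max_reconfig_rate(downtime_s, availability_target):
    """Largest reconfiguration rate (per second) that still meets the availability target."""
    return (1.0 - availability_target) / downtime_s

def attacker_success_prob(rate_per_s, attack_time_s):
    """Probability that an attack of fixed duration finishes before the next reconfiguration."""
    return max(0.0, 1.0 - rate_per_s * attack_time_s)

rate = max_reconfig_rate(downtime_s=0.5, availability_target=0.999)   # 0.002 reconfigs/s
print(rate, attacker_success_prob(rate, attack_time_s=300))           # -> 0.002 0.4
```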
This paper considers the problem of running a long-term, on-demand service for executing actively secure computations. We examined state-of-the-art tools and implementations for actively secure computation and identified a set of key features indispensable for offering a meaningful service of this kind. Since no satisfactory tools exist for this purpose, we developed Pool, a new tool for building and executing actively secure computation protocols at extreme scales with nearly zero offline delay. With Pool, we are able to obliviously execute, for the first time, reactive computations such as ORAM in the malicious threat model. Many technical benefits of Pool can be attributed to the concept of pool-based cut-and-choose. We show with experiments that this idea significantly improves the scalability and usability of JIMU, a state-of-the-art LEGO protocol.
Traffic classification, i.e., associating network traffic with the application that generated it, is an important tool for several tasks spanning different fields (security, management, traffic engineering, R&D). This process is challenged by applications that preserve Internet users' privacy by encrypting the communication content, and even more by anonymity tools, which additionally hide the source, the destination, and the nature of the communication. In this paper, leveraging a public dataset released in 2017, we provide (repeatable) classification results with the aim of investigating to what degree a specific anonymity tool (and the traffic it hides) can be identified, when compared to the traffic of the other considered anonymity tools, using machine learning approaches based solely on statistical features. To this end, four classifiers are trained and tested on the dataset: (i) Naïve Bayes, (ii) Bayesian Network, (iii) C4.5, and (iv) Random Forest. Results show that the three considered anonymity networks (Tor, I2P, JonDonym) can be easily distinguished (with an accuracy of 99.99%), and that even the specific application generating the traffic can be identified (with an accuracy of 98.00%).
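A repeatable outline of such an evaluation with scikit-learn is sketched below (C4.5 is approximated by a CART decision tree, and the Bayesian Network classifier is omitted since scikit-learn has no direct equivalent):

```python
# Train/test several classifiers on statistical flow features and report accuracy.
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def evaluate(X, y):
    """y holds the anonymity-network (or nested application) labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    classifiers = {
        "Naive Bayes": GaussianNB(),
        "C4.5 (CART approx.)": DecisionTreeClassifier(),
        "Random Forest": RandomForestClassifier(n_estimators=100),
    }
    return {name: accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
            for name, clf in classifiers.items()}
```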
Vehicular ad hoc networks (VANETs) are attracting increasing attention from both academia and the automotive industry due to the rapid development of wireless communication technologies. With this development, connected cars are increasingly being equipped with more sensors, processors, storage, and communication devices as they start to provide both infotainment and safety services through V2X communication. This growth also brings a rise in security attacks and potential security threats. In a vehicular environment, security is one of the most important issues and must be addressed before VANETs can be widely deployed. Conventional VANETs have unique characteristics such as high mobility, dynamic topology, and short connection times. Since an attacker can launch unexpected attacks, it is difficult to predict them in advance. To handle this problem, we propose a collaborative security attack detection mechanism for software-defined vehicular networks that uses a multi-class support vector machine (SVM) to detect various types of attacks dynamically. We compare our security mechanism to an existing distributed approach and present simulation results. The results demonstrate that the proposed security mechanism can effectively identify the types of attacks and achieve good performance in terms of precision, recall, and accuracy.
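A minimal sketch of the detection core, assuming the SDN controller aggregates per-flow or per-vehicle feature vectors with attack-type labels for training (the feature set and class labels are placeholders, not from the paper):

```python
# Multi-class SVM mapping observed feature vectors to an attack type (or "normal").
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_detector(X, y):
    """X: features collected from vehicles/RSUs; y: attack-type labels, e.g. 'normal', 'dos'."""
    return make_pipeline(StandardScaler(),
                         SVC(kernel="rbf", decision_function_shape="ovr")).fit(X, y)

def detect(model, features):
    return model.predict(features.reshape(1, -1))[0]   # predicted attack type
```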
This paper introduces an ensemble model that solves the binary classification problem by combining basic Logistic Regression with two recent advanced paradigms: extreme gradient boosted decision trees (xgboost) and deep learning. To obtain the best result when integrating the sub-models, we introduce a solution for splitting and selecting sets of features for sub-model training. In addition to the ensemble model, we propose a flexible, robust, and highly scalable new scheme for building a composite classifier that tries to simultaneously implement multiple layers of model decomposition and output aggregation to maximally reduce both the bias and variance (spread) components of classification errors. We demonstrate the power of our ensemble model on the problem of predicting the outcome of Hearthstone, a turn-based computer game, based on game state information. The excellent predictive performance of our model was acknowledged by second place in the final ranking among 188 competing teams.
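An illustrative stacking arrangement along these lines (not the authors' exact decomposition scheme), with xgboost and a small neural network as base learners and a logistic-regression meta-learner, might look like this:

```python
# Stacked ensemble: out-of-fold base predictions feed a logistic-regression meta-learner.
# Requires the external `xgboost` package in addition to scikit-learn.
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

def build_ensemble():
    base = [
        ("xgb", XGBClassifier(n_estimators=200, max_depth=4)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)),
    ]
    # cv=5: base learners are evaluated out-of-fold to reduce overfitting of the meta-learner.
    return StackingClassifier(estimators=base, final_estimator=LogisticRegression(), cv=5)

# model = build_ensemble().fit(X_train, y_train)   # X: game-state features, y: win/loss
```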
Embry-Riddle Aeronautical University (ERAU) is working with the Air Force Research Lab (AFRL) to develop a distributed, multi-layer autonomous UAS planning and control technology for gathering intelligence in Anti-Access Area Denial (A2/AD) environments populated by intelligent, adaptive adversaries. These resilient autonomous systems are able to navigate through hostile environments while performing Intelligence, Surveillance, and Reconnaissance (ISR) tasks and minimizing the loss of assets. Our approach incorporates artificial life concepts, with a high-level architecture divided into three biologically inspired layers: cyber-physical, reactive, and deliberative. Each layer has a dynamic level of influence over the behavior of the agent. Algorithms within each layer act on a filtered view of reality, abstracted in the layer immediately below. Each layer takes input from the layer below, provides output to the layer above, and provides direction to the layer below. Fast reactive control systems in the lower layers ensure a stable environment supporting cognitive function in the higher layers. The cyber-physical layer represents the central nervous system of the individual, consisting of the elements of the vehicle that cannot be changed, such as sensors, power plant, and physical configuration. On the reactive layer, the system uses an artificial life paradigm, where each agent interacts with the environment using a set of simple rules regarding wants and needs. Information is communicated explicitly via message passing and implicitly via observation and recognition of behavior. In the deliberative layer, individual agents look outward to the group, deliberating on efficient resource management and cooperation with other agents. Strategies at all layers are developed using machine learning techniques such as Genetic Algorithms (GA) or neural networks (NN) applied during system training that takes place prior to the mission.
Cyber-Physical Systems (CPS) consist of embedded computers with sensing and actuation capability that are integrated into, and tightly coupled with, a physical system. Because the physical and cyber components of the system are tightly coupled, cyber-security is important for ensuring that the system functions properly and safely. However, the effects of a cyberattack on the whole system may be difficult to determine, analyze, and therefore detect and mitigate. This work presents a model-based software development framework integrated with a hardware-in-the-loop (HIL) testbed for rapidly deploying CPS attack experiments. The framework provides the ability to emulate low-level attacks and obtain platform-specific performance measurements that are difficult to obtain in a traditional simulation environment. The framework improves the cybersecurity design process, which can become more informed and customized to the production environment of a CPS. The developed framework is illustrated with a case study of a railway transportation system.