Bibliography
We present a testbed implementation for the development, evaluation and demonstration of security orchestration in a network function virtualization environment. As a specific scenario, we demonstrate how an intelligent response to DDoS and various other kinds of targeted attacks can be formulated, such that these attacks and future variations of them can be mitigated. We use machine learning to characterise normal network traffic, attacks and responses, and then use this information to orchestrate virtualized network functions around affected components in order to isolate them and to capture, redirect and filter traffic (e.g. honeypotting) for additional analysis. This allows us to maintain a high level of network quality of service for given network functions and components despite adverse network conditions.
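As an annotation: a minimal sketch of the detect-then-orchestrate loop this abstract describes, assuming synthetic flow features; `on_new_flow` and the honeypot/isolation hooks are invented placeholders, not the paper's API.

```python
# Sketch: classify flows as normal/attack, then trigger a (hypothetical) VNF response.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic flow features: [packets/s, mean packet size, distinct sources].
normal = rng.normal([100, 800, 10], [20, 100, 3], size=(500, 3))
ddos   = rng.normal([5000, 120, 400], [800, 30, 80], size=(500, 3))
X = np.vstack([normal, ddos])
y = np.array([0] * 500 + [1] * 500)  # 0 = normal, 1 = attack

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def on_new_flow(features, component_id):
    """Hypothetical orchestration hook: isolate and honeypot on detection."""
    if clf.predict([features])[0] == 1:
        print(f"attack on {component_id}: redirect traffic to honeypot VNF, isolate component")
    else:
        print(f"{component_id}: traffic looks normal")

on_new_flow([4800, 110, 390], "vnf-web-01")
```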
The paper considers the issue of protecting data from unauthorized access by authenticating users through keystroke dynamics. It proposes using keyboard pressure parameters in combination with the time characteristics of keystrokes to identify a user. The authors designed a keyboard with special sensors that allow these complementary parameters to be recorded. The paper presents an estimate of the information value of these new characteristics and of the error probabilities of user identification based on perceptron algorithms, Bayes' rule and quadratic-form networks. The best result is the following: 20 users are identified with an error rate of 0.6%.
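A minimal sketch of the feature combination the paper describes, using synthetic hold-time and pressure data for 20 users and scikit-learn's GaussianNB as a stand-in for the Bayes'-rule classifier; a real evaluation would of course need held-out typing sessions rather than training accuracy.

```python
# Sketch: identify users from combined keystroke timing + pressure features.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n_users, n_samples, n_keys = 20, 30, 8
# Each user has a characteristic mean hold time (ms) and key pressure (a.u.).
hold_means = rng.uniform(60, 180, size=(n_users, n_keys))
press_means = rng.uniform(0.2, 1.0, size=(n_users, n_keys))

X, y = [], []
for u in range(n_users):
    holds = rng.normal(hold_means[u], 10, size=(n_samples, n_keys))
    press = rng.normal(press_means[u], 0.05, size=(n_samples, n_keys))
    X.append(np.hstack([holds, press]))  # combined timing + pressure vector
    y += [u] * n_samples
X = np.vstack(X)

clf = GaussianNB().fit(X, y)  # Bayes'-rule classifier over the features
print("training accuracy:", clf.score(X, y))
```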
The growing popularity of Android and the increasing amount of sensitive data stored in mobile devices have led to the dissemination of Android ransomware. Ransomware is a class of malware that makes data inaccessible by blocking access to the device or, more frequently, by encrypting the data; to recover the data, the user has to pay a ransom to the attacker. A solution for this problem is to back up the data. Although backup tools are available for Android, these tools may be compromised or blocked by the ransomware itself. This paper presents the design and implementation of RANSOMSAFEDROID, a TrustZone based backup service for mobile devices. RANSOMSAFEDROID is protected from malware by leveraging the ARM TrustZone extension and running in the secure world. It periodically backs up files to a secure local persistent partition and pushes these backups to external storage to protect them from ransomware. Initially, RANSOMSAFEDROID does a full backup of the device filesystem; then it performs incremental backups that save the changes since the last backup. As a proof-of-concept, we implemented a RANSOMSAFEDROID prototype and provide a performance evaluation using an i.MX53 development board.
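A sketch of the full-then-incremental backup logic described above, in plain Python; this only illustrates the manifest-based change detection, not the TrustZone secure-world implementation, and `manifest.json` is an invented name.

```python
# Sketch: full-then-incremental file backup keyed on content hashes.
import hashlib, json, os, shutil

MANIFEST = "manifest.json"  # hypothetical manifest location

def file_hash(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def incremental_backup(src_dir, dst_dir):
    """First call copies everything (full backup); later calls copy only changes."""
    os.makedirs(dst_dir, exist_ok=True)
    manifest_path = os.path.join(dst_dir, MANIFEST)
    old = json.load(open(manifest_path)) if os.path.exists(manifest_path) else {}
    new = {}
    for root, _, files in os.walk(src_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, src_dir)
            new[rel] = file_hash(src)
            if old.get(rel) != new[rel]:  # changed or new since last backup
                dst = os.path.join(dst_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)
    json.dump(new, open(manifest_path, "w"))
```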
In the past, the security of building automation depended solely on the security of the devices inside, or tightly connected to, the building. In recent years, more devices have evolved to use some kind of cloud service as a back-end, or providers supply some kind of device to the user as a service. The number of building automation systems connected to the Internet for management, control, and data storage also increases every year. These developments give rise to new threats against building automation. As the Internet of Things (IoT) and building automation intertwine more and more, these threats also apply to IoT installations. The paper presents new attack vectors and new threats using the threat model of Meyer et al. [1].
With the growing interest in ocean exploration, real-time monitoring and long-term surveillance of the underwater environment, e.g., real-time monitoring of underwater oil drilling, have become imperative. Underwater wireless sensor networks could provide an ideal option and have recently attracted intensive attention from researchers. Terrestrial wireless sensor networks (WSNs), by contrast, have been well investigated, with many solutions relying on electromagnetic/optical transmission techniques; deploying a practical underwater wireless sensor network remains a major challenge. Because of the critical conditions of the underwater environment (e.g., high pressure, high salinity, limited energy), the cost of an underwater sensor is significant, so dense sensor deployment is not applicable underwater. Autonomous Underwater Vehicles (AUVs) therefore become an alternative option for implementing underwater surveillance and target detection. In this article, we present a framework to theoretically analyze the target detection probability in the underwater environment using AUVs. The experimental results further verify our theoretical results.
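The abstract does not give the paper's detection model; purely as an illustration of the kind of theoretical-versus-simulated comparison it describes, here is a generic Monte Carlo sketch for a stationary target and an AUV sweeping a square region in a lawnmower pattern (all geometry parameters invented).

```python
# Sketch: Monte Carlo detection probability for an AUV lawnmower sweep.
import numpy as np

rng = np.random.default_rng(2)
L_side, r, s = 1000.0, 50.0, 150.0    # region side (m), sensing radius, leg spacing
leg_ys = np.arange(r, L_side, s)      # legs span the full width, so only the
                                      # target's cross-track position matters
n_trials, hits = 20000, 0
for _ in range(n_trials):
    ty = rng.uniform(0, L_side)       # random target cross-track position
    if np.min(np.abs(leg_ys - ty)) <= r:
        hits += 1
# Analytic coverage for this geometry: 7 legs x 2r / L_side = 0.7.
print("estimated detection probability:", hits / n_trials)
```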
This paper outlines a demonstration of the work carried out in the SoCoRo project investigating how far a neuro-typical population recognises facial expressions on a non-naturalistic robot face that are designed to show approval and disapproval. RFID-tagged objects are presented to an Emys robot head (called Alyx) and Alyx reacts to each with a facial expression. Participants are asked to put the object in a box marked 'Like' or 'Dislike'. This study is being extended to include assessment of participants' Autism Quotient using a validated questionnaire as a step towards using a robot to help train high-functioning adults with an Autism Spectrum Disorder in social signal recognition.
Facial expression recognition is a challenging problem in the field of computer vision. In this paper, we propose a deep learning approach that can learn the joint low-level and high-level features of the human face to resolve this problem. Our deep neural networks utilize convolution and downsampling to extract the abstract and local features of the human face, and at the same time reconstruct the raw input images to learn global features as supplementary information. We also add an adjustable weight in the networks when combining the two kinds of features for the final classification. The experimental results show that the proposed method achieves good results, with an average recognition accuracy of 93.65% on the test datasets.
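A rough PyTorch sketch of the two-branch idea in this abstract (local conv features plus a reconstruction branch for global features, combined with an adjustable weight); layer sizes, the 48x48 input and the alpha placement are assumptions, not the authors' architecture.

```python
# Sketch: joint local-feature (conv) and global-feature (reconstruction) network.
import torch
import torch.nn as nn

class JointFeatureNet(nn.Module):
    def __init__(self, n_classes=7, alpha=0.5):
        super().__init__()
        self.alpha = alpha  # adjustable weight for the global branch
        self.conv = nn.Sequential(              # local/abstract features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.enc = nn.Linear(48 * 48, 128)      # global branch: autoencoder
        self.dec = nn.Linear(128, 48 * 48)      # reconstructs the raw input
        self.fc = nn.Linear(32 * 12 * 12 + 128, n_classes)

    def forward(self, x):
        local = self.conv(x).flatten(1)
        glob = torch.relu(self.enc(x.flatten(1)))
        recon = self.dec(glob)                  # train with a reconstruction loss
        feats = torch.cat([local, self.alpha * glob], dim=1)
        return self.fc(feats), recon

net = JointFeatureNet()
logits, recon = net(torch.randn(4, 1, 48, 48))
print(logits.shape, recon.shape)  # torch.Size([4, 7]) torch.Size([4, 2304])
```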
It is well known that online services resort to various cookies to track users through users' online service identifiers (IDs) - in other words, when users access online services, various "fingerprints" are left behind in cyberspace. As they roam around in the physical world while accessing online services via mobile devices, users also leave a series of "footprints" - i.e., hints about their physical locations - in the physical world. This poses a potent new threat to user privacy: one can potentially correlate the "fingerprints" left by users in cyberspace with the "footprints" left in the physical world to infer and reveal their physical-world private information, such as frequent user locations or mobility trajectories - we refer to this problem as user physical-world privacy leakage via user cyberspace privacy leakage. In this paper we address the following fundamental question: what kind - and how much - of user physical-world privacy might be leaked if one could get hold of such diverse network datasets, even without any physical location information. To conduct an in-depth investigation of this question, we utilize the network data collected via a DPI system at the routers of one of the largest Internet operators in Shanghai, China, over a duration of one month. We decompose the fundamental question into three problems: i) linkage of various online user IDs belonging to the same person via mobility pattern mining; ii) physical location classification via aggregate user mobility patterns over time; and iii) tracking user physical mobility. By developing novel and effective methods for solving each of these problems, we demonstrate that user physical-world privacy leakage via user cyberspace privacy leakage is not hypothetical, but indeed poses a real, potent threat to user privacy.
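A toy sketch of problem i), linking online IDs by mobility pattern similarity: compare location-visit histograms with cosine similarity. The cell abstraction, threshold and synthetic data are assumptions; the paper's mining method is certainly more sophisticated.

```python
# Sketch: link two online IDs by comparing their location-visit histograms.
import numpy as np

def visit_histogram(visits, n_cells):
    h = np.bincount(visits, minlength=n_cells).astype(float)
    return h / (h.sum() or 1.0)

def same_person_score(visits_a, visits_b, n_cells=100):
    a, b = visit_histogram(visits_a, n_cells), visit_histogram(visits_b, n_cells)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

rng = np.random.default_rng(3)
home_cells = rng.choice(100, size=5)                 # one user's frequent cells
id1 = rng.choice(home_cells, size=200)               # two IDs, same person
id2 = rng.choice(home_cells, size=150)
id3 = rng.choice(100, size=200)                      # unrelated user
print("same person:", same_person_score(id1, id2))   # high
print("different:  ", same_person_score(id1, id3))   # low
```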
The proliferation of Internet-of-Things (IoT) devices within homes raises many security and privacy concerns. Recent headlines highlight the lack of effective security mechanisms in IoT devices. Security threats in IoT arise not only from vulnerabilities in individual devices but also from the composition of devices in unanticipated ways and the ability of devices to interact through both cyber and physical channels. Existing approaches provide methods for monitoring cyber interactions between devices but fail to consider possible physical interactions. To overcome this challenge, it is essential that security assessments of IoT networks take a holistic view of the network and treat it as a "system of systems", in which security is defined not solely by the individual systems but also by the interactions and trust dependencies between systems. In this paper, we propose a way of modeling cyber and physical interactions between IoT devices of a given network. By verifying the cyber and physical interactions against user-defined policies, our model can identify unexpected chains of events that may be harmful. It can also be applied to determine the impact of the addition (or removal) of a device into an existing network with respect to dangerous device interactions. We demonstrate the viability of our approach by instantiating our model using Alloy, a language and tool for relational models. In our evaluation, we consider three realistic IoT use cases and demonstrate that our model is capable of identifying potentially dangerous device interactions. We also measure the performance of our approach with respect to the CPU runtime and memory consumption of the Alloy model finder, and show that it is acceptable for smart-home IoT networks.
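The paper itself uses Alloy; as a rough Python analogue of the underlying idea, here is a sketch that chains cyber and physical channels and flags paths that violate a user policy. The devices, channels and policy below are invented for illustration.

```python
# Sketch: find unexpected cross-device interaction chains (cyber + physical).
from collections import deque

# Edges: (source device, target device, channel). Physical channels such as
# heat count the same as cyber ones when chaining interactions.
edges = [
    ("smart_plug", "space_heater", "power"),
    ("space_heater", "thermostat", "heat"),       # physical channel
    ("thermostat", "window_actuator", "zigbee"),
]

def chains(start, goal):
    """All channel paths from start to goal over cyber+physical edges."""
    out, q = [], deque([(start, [])])
    while q:
        node, path = q.popleft()
        if node == goal and path:
            out.append(path)
            continue
        for s, t, ch in edges:
            if s == node and ch not in path:
                q.append((t, path + [ch]))
    return out

# Policy: the plug must never be able to influence the window actuator.
for path in chains("smart_plug", "window_actuator"):
    print("policy violation via channels:", " -> ".join(path))
```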
Long-term tracking is one of the most challenging problems in computer vision. During long-term tracking, the target object may suffer from scale changes, illumination changes, heavy occlusions, going out of view, etc. Most existing tracking methods fail to handle object invisibility, assuming instead that the object is always visible throughout the image sequence. In this paper, a novel long-term tracking method is proposed, which mainly addresses the problem of object invisibility. We combine a correlation filter based tracker with an online classifier, aiming to estimate the object state and re-detect the object after its invisibility. In addition, an adaptive updating scheme is proposed for the appearance model of the object, considering both visible and invisible situations. Quantitative and qualitative evaluations show that our algorithm outperforms the state-of-the-art methods on the 20 benchmark sequences with object invisibility. Furthermore, the proposed algorithm achieves competitive performance with the state-of-the-art trackers on the Object Tracking Benchmark, which covers various challenging aspects of object tracking.
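The abstract does not state the paper's exact confidence measure; a common one for correlation-filter trackers is the peak-to-sidelobe ratio (PSR), sketched below as one plausible way to decide when the object has become invisible and re-detection should fire. The threshold is illustrative.

```python
# Sketch: decide object visibility from a correlation-filter response map
# using the peak-to-sidelobe ratio (PSR); low PSR => treat object as invisible.
import numpy as np

def psr(response, exclude=5):
    """Peak-to-sidelobe ratio of a 2-D correlation response."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    side = response[mask]
    return (peak - side.mean()) / (side.std() + 1e-8)

rng = np.random.default_rng(4)
visible = rng.normal(0, 0.1, (64, 64)); visible[30, 30] = 5.0  # sharp peak
occluded = rng.normal(0, 0.1, (64, 64))                        # no clear peak

THRESH = 8.0  # illustrative threshold: below it, trigger re-detection
for name, r in [("visible", visible), ("occluded", occluded)]:
    v = psr(r)
    print(name, "PSR=%.1f" % v, "-> re-detect" if v < THRESH else "-> track")
```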
Internet of Things (IoT) is an integral part of application domains such as smart-home and digital healthcare. Various standard public key cryptography techniques (e.g., key exchange, public key encryption, signature) are available to provide fundamental security services for the IoT. However, despite their pervasiveness and well-proven security, they have also been shown to be highly energy-costly for embedded devices. Hence, it is a critical task to improve the energy efficiency of standard cryptographic services while preserving their desirable properties. In this paper, we exploit synergies among various cryptographic primitives, with algorithmic optimizations, to substantially reduce the energy consumption of standard cryptographic techniques on embedded devices. Our contributions are: (i) We harness special precomputation techniques, which have not been considered for some important cryptographic standards, to boost the performance of key exchange, integrated encryption, and hybrid constructions. (ii) We provide self-certification for these techniques to push their performance to the edge. (iii) We implemented our techniques and their counterparts on an 8-bit AVR ATmega 2560 and evaluated their performance. We used the microECC library and made the implementations on the NIST-recommended secp192 curve, due to its standardization. Our experiments confirmed significant improvements in battery life (up to 7x) while preserving the desirable properties of the standard techniques. Moreover, to the best of our knowledge, we provide the first open-source framework including such a set of optimizations on low-end devices.
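To illustrate the general flavor of fixed-base precomputation (not necessarily the authors' exact technique), here is a sketch on a deliberately tiny toy curve: the table of doublings of the base point is computed once offline, so each online scalar multiplication needs only additions. This is for illustration only and is in no way secure; the paper targets secp192.

```python
# Sketch: fixed-base precomputation for EC scalar multiplication on a TOY
# curve y^2 = x^3 + 2x + 3 over F_97 (illustration only, not secure).
p, a = 97, 2
G = (0, 10)  # on the curve: 10^2 = 100 = 3 = 0^3 + 2*0 + 3 (mod 97)

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

# Offline: precompute 2^i * G once; online scalar mults then need no doublings.
TABLE = [G]
for _ in range(7):
    TABLE.append(ec_add(TABLE[-1], TABLE[-1]))

def fixed_base_mul(k):
    """k*G using only additions of precomputed points."""
    R = None
    for i in range(8):
        if (k >> i) & 1:
            R = ec_add(R, TABLE[i])
    return R

print(fixed_base_mul(13))  # 13*G = 8G + 4G + 1G
```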
We address the problem of object tracking in an underwater acoustic sensor network in which distributed nodes measure the strength of the field generated by moving objects, encode the measurements into digital data packets, and transmit the packets to a fusion center in a random access manner. We allow for imperfect communication links, where information packets may be lost due to noise and collisions. The packets that are received correctly are used to estimate the objects' trajectories by employing an extended Kalman filter, where provisions are made to accommodate a randomly changing number of observations in each iteration. An adaptive rate control scheme is additionally applied to instruct the sensor nodes on how to adjust their transmission rate so as to improve the location estimation accuracy and the energy efficiency of the system. By focusing explicitly on the objects' locations, rather than working with a pre-specified grid of potential locations, we resolve the spatial quantization issues associated with sparse identification methods. Finally, we extend the method to address the possibility of objects entering and departing the observation area, thus improving the scalability of the system and relaxing the requirement for accurate knowledge of the objects' initial locations. Performance is analyzed in terms of the mean-squared localization error and the trade-offs imposed by the limited communication bandwidth.
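A minimal single-object sketch of the EKF structure described here, with an inverse-square field-strength measurement model and a random number of received packets per step (zero received packets means predict only). The field model, noise levels and sensor layout are assumptions, not the paper's.

```python
# Sketch: EKF tracking from field-strength packets with random packet loss.
import numpy as np

dt, A_field, meas_std = 1.0, 1e4, 0.5
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1.0]])     # constant-velocity model
Q = 0.01 * np.eye(4)

def h(state, sensor):                             # field strength ~ A / d^2
    d2 = (state[0] - sensor[0])**2 + (state[1] - sensor[1])**2 + 1.0
    return A_field / d2

def H_jac(state, sensor):
    d2 = (state[0] - sensor[0])**2 + (state[1] - sensor[1])**2 + 1.0
    g = -2 * A_field / d2**2
    return np.array([g * (state[0] - sensor[0]), g * (state[1] - sensor[1]), 0, 0])

sensors = [(10, 0), (0, 10), (10, 10)]
rng = np.random.default_rng(5)
true = np.array([0.0, 0.0, 1.0, 0.5])
x, P = true.copy(), np.eye(4)

for step in range(20):
    true = F @ true
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    for s in sensors:                             # sequential scalar updates
        if rng.random() < 0.4:
            continue                              # packet lost (noise/collision)
        z = h(true, s) + rng.normal(0, meas_std)
        Hj = H_jac(x, s)
        S = Hj @ P @ Hj + meas_std ** 2
        K = P @ Hj / S
        x = x + K * (z - h(x, s))
        P = P - np.outer(K, Hj) @ P
print("true pos:", true[:2], "estimate:", x[:2])
```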
This paper presents a quantitative study of adaptive filtering to cancel the EMG artifact from ECG signals. The proposed adaptive algorithm operates in real time; it adjusts its coefficients simultaneously with signal acquisition, minimizing a cost function: the summation of weighted least-square errors (LSE). The obtained results prove the success and effectiveness of the proposed algorithm. The best results were obtained with a forgetting factor of 0.99 and a regularization parameter of 0.02.
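The cost function described (exponentially weighted least squares) is the standard RLS criterion; a self-contained sketch with the paper's stated forgetting factor 0.99 and regularization 0.02, on synthetic ECG/EMG-like signals (the signal model and filter length are assumptions).

```python
# Sketch: RLS adaptive cancellation of an EMG-like artifact from an ECG-like signal.
import numpy as np

rng = np.random.default_rng(6)
n, taps = 2000, 8
t = np.arange(n) / 500.0
ecg = np.sin(2 * np.pi * 1.2 * t) ** 15              # crude ECG-like spikes
emg_ref = rng.normal(0, 1, n)                        # reference EMG channel
fir = np.array([0.5, -0.3, 0.2, 0.1, 0.05, 0.02, 0.01, 0.005])
artifact = np.convolve(emg_ref, fir)[:n]             # causal leakage into ECG lead
primary = ecg + artifact                             # contaminated ECG

lam, delta = 0.99, 0.02                              # forgetting factor, regularization
w = np.zeros(taps)
P = np.eye(taps) / delta                             # P(0) = delta^-1 * I
clean = np.zeros(n)
for i in range(taps - 1, n):
    u = emg_ref[i - taps + 1:i + 1][::-1]            # reference tap vector
    k = P @ u / (lam + u @ P @ u)                    # gain vector
    e = primary[i] - w @ u                           # error = cleaned ECG sample
    w = w + k * e
    P = (P - np.outer(k, u @ P)) / lam
    clean[i] = e
print("residual artifact power:", np.mean((clean[500:] - ecg[500:]) ** 2))
```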
We develop and validate Internet path measurement techniques to distinguish congestion experienced when a flow self-induces congestion in the path from when a flow is affected by an already congested path. One application of this technique is for speed tests, when the user is affected by congestion either in the last mile or in an interconnect link. This difference is important because in the latter case, the user is constrained by their service plan (i.e., what they are paying for), and in the former case, they are constrained by forces outside of their control. We exploit TCP congestion control dynamics to distinguish these cases for Internet paths that are predominantly TCP traffic. In TCP terms, we re-articulate the question: was a TCP flow bottlenecked by an already congested (possibly interconnect) link, or did it induce congestion in an otherwise idle (possibly last-mile) link? TCP congestion control affects the round-trip time (RTT) of packets within the flow (i.e., the flow RTT): an endpoint sends packets at higher throughput, increasing the occupancy of the bottleneck buffer, thereby increasing the RTT of packets in the flow. We show that two simple, statistical metrics derived from the flow RTT during the slow start period - its coefficient of variation, and the normalized difference between the maximum and minimum RTT - can robustly identify which type of congestion the flow encounters. We use extensive controlled experiments to demonstrate that our technique works with up to 90% accuracy. We also evaluate our techniques using two unique real-world datasets of TCP throughput measurements from Measurement Lab and the Ark platform. We find up to 99% accuracy in detecting self-induced congestion, and up to 85% accuracy in detecting external congestion. Our results can benefit regulators of interconnection markets, content providers trying to improve customer service, and users trying to understand whether poor performance is something they can fix by upgrading their service tier.
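The two metrics named in the abstract are easy to state concretely; a sketch with made-up RTT samples and purely illustrative decision thresholds (the paper's actual thresholds are not given here).

```python
# Sketch: the two slow-start RTT metrics described above, computed from
# per-packet RTT samples; thresholds are illustrative only.
import numpy as np

def congestion_metrics(slow_start_rtts):
    r = np.asarray(slow_start_rtts, dtype=float)
    cov = r.std() / r.mean()                      # coefficient of variation
    norm_diff = (r.max() - r.min()) / r.max()     # normalized max-min difference
    return cov, norm_diff

# Self-induced congestion: flow fills an idle buffer, so RTT grows steeply.
self_induced = [20, 22, 27, 35, 50, 75, 110, 160]
# External congestion: buffer already full, so RTT is high but flat.
external = [140, 145, 138, 150, 142, 147, 139, 144]

for name, rtts in [("self-induced", self_induced), ("external", external)]:
    cov, nd = congestion_metrics(rtts)
    verdict = "self-induced" if cov > 0.3 and nd > 0.5 else "external"
    print(f"{name}: CoV={cov:.2f} norm-diff={nd:.2f} -> {verdict}")
```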
Using generalized regression neural network (GRNN) analysis, five kinds of network intrusion behavior modes are clustered effectively. First, the intrusion data is divided into five categories using the fuzzy C-means clustering algorithm. Then, the samples that are closest to the center of each class in the clustering results are taken as the training samples for the generalized regression neural network, and the trained network outputs each individual's intrusion category. The experimental results show that the new algorithm classifies network intrusion modes with higher accuracy, which can provide more reliable data support for the prevention of network intrusion.
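A compact sketch of the two-stage pipeline on synthetic data: a small fuzzy C-means loop forms five clusters, then a GRNN-style kernel classifier is trained on the sample closest to each center. The data, fuzzifier m=2 and kernel width are assumptions; the GRNN here is the usual Gaussian-kernel form adapted to classification.

```python
# Sketch: fuzzy C-means clustering, then GRNN-style classification.
import numpy as np

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(c, 0.5, (60, 4)) for c in range(5)])  # 5 fake modes

def fcm(X, c=5, m=2.0, iters=50):
    U = rng.random((len(X), c)); U /= U.sum(1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-9
        e = 2 / (m - 1)
        U = (1.0 / d ** e) / (1.0 / d ** e).sum(1, keepdims=True)
    return centers, U

centers, U = fcm(X)
labels = U.argmax(1)
# Training set: the sample closest to each center, as in the abstract.
train_idx = [np.argmin(np.linalg.norm(X - c, axis=1)) for c in centers]
Xt, yt = X[train_idx], labels[train_idx]

def grnn_predict(x, sigma=1.0):
    w = np.exp(-np.linalg.norm(Xt - x, axis=1) ** 2 / (2 * sigma ** 2))
    return int(np.argmax([w[yt == k].sum() for k in range(5)]))

preds = np.array([grnn_predict(x) for x in X])
print("agreement with FCM labels:", (preds == labels).mean())
```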
There is a long-standing need for improved cybersecurity through automation of attack signature detection, classification, and response. In this paper, we present experimental test bed results from an implementation of autonomic control plane feedback based on the Observe, Orient, Decide, Act (OODA) framework. This test bed modeled the building blocks for a proposed zero trust cloud data center network. We present test results of trials in which identity management with automated threat response and packet-based authentication were combined with dynamic management of eight distinct network trust levels. The log parsing and orchestration software we created works alongside open-source log management tools to coordinate and integrate threat response from firewalls, authentication gateways, and other network devices. Threat response times are measured and shown to be a significant improvement over conventional methods.
Recent years have witnessed a trend of increasing reliance on distributed infrastructures. This has increased the number of reported incidents of security breaches compromising users' privacy, in which third parties massively collect, process and manage users' personal data. To address these security and privacy challenges, we combine hierarchical identity-based cryptographic mechanisms with emerging blockchain infrastructures and propose a blockchain-based data usage auditing architecture that ensures availability and accountability in a privacy-preserving fashion. Our approach relies on the use of auditable contracts deployed in blockchain infrastructures. Thus, it offers transparent and controlled data access, sharing and processing, so that unauthorized users or untrusted servers cannot process data without the client's authorization. Moreover, based on cryptographic mechanisms, our solution preserves the privacy of data owners and ensures secrecy of data shared with multiple service providers. It also provides auditing authorities with tamper-proof evidence of data usage compliance.
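The paper's contracts and hierarchical identity-based cryptography are not reproduced here; as a toy stand-in for the tamper-proof audit trail it mentions, a hash-chained log of data-usage events (field names invented).

```python
# Sketch: tamper-evident audit log of data-usage events (hash chain).
import hashlib, json, time

def record(chain, actor, action, data_id):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"actor": actor, "action": action, "data_id": data_id,
            "ts": time.time(), "prev": prev}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
record(log, "service-A", "read", "record-42")
record(log, "service-B", "process", "record-42")
print("log intact:", verify(log))
log[0]["action"] = "delete"           # tampering is detected
print("after tampering:", verify(log))
```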
With the rapid development of bulk power grids under extra-high voltage (EHV) AC/DC hybrid operation and the extensive access of distributed energy resources (DER), the operating characteristics of the power grid have become increasingly complicated. To cope with the severe new challenges to the safe operation of interconnected bulk power grids, this paper presents an in-depth analysis, from the perspectives of both management and technology, of the bulk power grid security defense system under the background of EHV and new energy resources. Supported by big data and cloud computing, the bulk power grid security defense system is divided into two parts. The first is the prevention and control of operational risks: power grid risks are eliminated and the influence of random faults is reduced through measures such as network planning, power-cut schemes, risk pre-warning, equipment status monitoring, voltage control, frequency control and adjustment of the operating mode. The second is fault recovery control: by updating the “three defense lines”, intelligent relay protection is used to deal with the challenges brought by the EHV AC/DC hybrid grid and new energy resources. The security defense system is thereby promoted from passive defense to an active power grid security defense system.
Recently, Jung et al. [1] proposed a data access privilege scheme and claimed that their scheme addresses data and identity privacy as well as multi-authority settings, and provides data access privileges for attribute-based encryption. In this paper, we show that this scheme, as well as its earlier and latest versions (i.e., [2] and [3] respectively), suffers from a number of weaknesses in terms of fine-grained access control, user and authority collusion attacks, user authorization, and user anonymity protection. We then propose a new scheme that overcomes these shortcomings. We also prove the security of our scheme against user collusion attacks, authority collusion attacks and chosen-plaintext attacks. Lastly, we show that the efficiency of our scheme is comparable with existing related schemes.
Multi-agent simulations are useful for exploring collective patterns of individual behavior in social, biological, economic, network, and physical systems. However, there is no provenance support for multi-agent models (MAMs) in a distributed setting. To this end, we introduce ProvMASS, a novel approach to capture provenance of MAMs in a distributed memory by combining inter-process identification, lightweight coordination of in-memory provenance storage, and adaptive provenance capture. ProvMASS is built on top of the Multi-Agent Spatial Simulation (MASS) library, a framework that combines multi-agent systems with large-scale fine-grained agent-based models, or MAMs. Unlike other environments supporting MAMs, MASS parallelizes simulations with distributed memory, where agents and spatial data are shared application resources. We evaluate our approach with provenance queries to support three use cases and performance measures. Initial results indicate that our approach can support various provenance queries for MAMs at reasonable performance overhead.
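A toy analogue of the provenance-capture idea in ProvMASS: tag each captured operation with a process identifier and timestamps in an in-memory store, then query it. The decorator, store and query below are invented illustrations, not the MASS library's API.

```python
# Sketch: lightweight provenance capture for agent operations, tagged with a
# process id (a toy analogue of inter-process identification).
import functools, os, time

PROV_STORE = []          # in-memory provenance store, one per process

def provenance(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        rec = {"proc": os.getpid(), "fn": fn.__name__,
               "args": repr(args), "t_start": time.time()}
        result = fn(*args, **kwargs)
        rec["t_end"] = time.time()
        PROV_STORE.append(rec)
        return result
    return wrapper

@provenance
def move_agent(agent_id, dx, dy):
    return (dx, dy)      # stand-in for an agent migration step

move_agent(7, 1, 0)
move_agent(7, 0, 1)
# Query: all operations applied to agent 7 in this process.
print([r for r in PROV_STORE if "(7," in r["args"]])
```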
Cloud computing is a revolution in IT technology that provides scalable, virtualized, on-demand resources to end users with greater flexibility, less maintenance and reduced infrastructure cost. These resources are supervised by different management organizations and provided over the Internet using known networking protocols, standards and formats. The underlying technologies and legacy protocols contain bugs and vulnerabilities that can open doors for intrusion by attackers. Attacks such as DDoS (Distributed Denial of Service) are among the most frequent, inflicting serious damage and affecting cloud performance. In a DDoS attack, the attacker usually uses innocent compromised computers (called zombies), taking advantage of known or unknown bugs and vulnerabilities, to send a large number of packets from these already-captured zombies to a server. This may occupy a major portion of the network bandwidth of the victim cloud infrastructure or consume much of the server's time. Thus, in this work, we designed a DDoS detection system based on the C4.5 algorithm to mitigate the DDoS threat. This algorithm, coupled with signature detection techniques, generates a decision tree to perform automatic, effective detection of DDoS flooding attack signatures. To validate our system, we selected other machine learning techniques and compared the obtained results.
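A minimal sketch of decision-tree DDoS detection on synthetic flow features. Note that scikit-learn's tree with the entropy criterion is a CART variant used here only as a stand-in for C4.5; the features and distributions are invented.

```python
# Sketch: decision-tree detection of DDoS flooding traffic (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(8)
# Features: [packets/s, SYN ratio, distinct source IPs, mean payload bytes].
normal = rng.normal([200, 0.1, 20, 600], [50, 0.05, 5, 150], (1000, 4))
flood  = rng.normal([8000, 0.9, 900, 60], [1500, 0.05, 200, 20], (1000, 4))
X = np.vstack([normal, flood])
y = np.array([0] * 1000 + [1] * 1000)   # 0 = normal, 1 = DDoS flood

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=5).fit(Xtr, ytr)
print("detection accuracy:", tree.score(Xte, yte))
```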
Cyber-Physical Systems (CPS) such as Unmanned Aerial Systems (UAS) sense and actuate their environment in pursuit of a mission. The attack surface of these remotely located, sensing and communicating devices is both large and exposed to adversarial actors, making mission assurance a challenging problem. While best-practice security policies should be followed, they are rarely enough to guarantee mission success, as not all components in the system may be trusted and the properties of the environment (e.g., the RF environment) may be under the control of the attacker. CPS must thus be built with a high degree of resilience to mitigate threats that security alone cannot alleviate. In this paper, we describe the Agile and Resilient Embedded Systems (ARES) methodology and metric set. The ARES methodology pursues cyber security and resilience (CSR) as high-level system properties to be developed in the context of the mission. An analytic process guides system developers in defining mission objectives, examining principal issues, applying CSR technologies, and understanding their interactions.
The development of radar technology, Synthetic Aperture Radar (SAR) and Unmanned Aerial Vehicles (UAV) requires communication facilities and infrastructure that support a variety of platforms and high image quality. In this paper, we obtain the basic configuration of a triangular array antenna using a corporate feeding line for a Circularly Polarized Synthetic Aperture Radar (CP-SAR) sensor embedded on a small UAV or drone, with a compact, small and simple configuration. The Method of Moments (MoM) is chosen for the numerical analysis to enable fast calculation of the unknown current on the patch antenna. The triangular array antenna consists of four simple equilateral-triangle patches, each with a truncated corner, and a resonant frequency of f = 1.25 GHz. A proximity-coupled, perturbation-segment, single-feed method is applied to generate the circularly polarized wave from the radiating patches. The corporate feeding line is implemented by combining several T-junctions to distribute the current from the input port to the radiating patches, reaching the 2×2 patches. The performance of this antenna, in particular the gain and axial ratio (Ar) at the resonant frequency, is 11.02 dBic and 2.47 dB, respectively. Furthermore, the two beams appearing at boresight in the elevation plane have similar values: the average beamwidths at 10 dBic gain and 3 dB Ar are about 20° and 70°, respectively.
With the development of the modern logistics industry, railway freight enterprises, as the main traditional logistics enterprises, face many problems in their service mode. In the era of big data, coordinated development and sharing of information resources have become requirements of the times for railway freight enterprises, while how to protect citizens' privacy has become one of the public's focal issues. To prevent the disclosure or abuse of citizens' private information, privacy must be preserved during the process of information opening and sharing. However, most existing privacy-preserving models cannot be used to resist attacks with continuously growing background knowledge. This paper presents a method of applying differential privacy to protect associated data, which can be shared as railway freight service association information. First, the original service data is sliced into shards of optimal length; then the Apriori algorithm is used to find candidate sets, and Laplace noise is added to them. Thus the citizens' private information can be protected even if the attacker has strong background knowledge. Finally, the associated data is shared with railway information resource partners. The steps and usefulness of the discussed privacy-preservation method are illustrated by an example.
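A toy sketch of the noisy-candidate-set step: count candidate 2-itemsets over shards, add Laplace noise calibrated to a sensitivity-1 count, and release only itemsets whose noisy support clears a threshold. The epsilon, threshold and shard contents are illustrative assumptions (and releasing many counts would compose in a full privacy analysis).

```python
# Sketch: differentially private release of candidate-itemset supports.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(9)
shards = [{"origin_A", "coal"}, {"origin_A", "coal", "express"},
          {"origin_B", "steel"}, {"origin_A", "coal"}] * 25  # toy shard data

def private_supports(shards, epsilon=0.5, min_support=20):
    items = sorted(set().union(*shards))
    noisy = {}
    for pair in combinations(items, 2):                  # candidate 2-itemsets
        count = sum(set(pair) <= s for s in shards)
        count += rng.laplace(0, 1.0 / epsilon)           # sensitivity-1 counts
        if count >= min_support:
            noisy[pair] = count
    return noisy

for itemset, support in private_supports(shards).items():
    print(itemset, round(support, 1))
```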