Biblio
Affective facial expression is a key feature of non-verbal behavior and is considered a sign of an internal emotional state. Emotion recognition plays an important role in social communication, both human-human and human-robot. This work aims at developing a framework able to recognise human emotions through facial expression for human-robot interaction. Simple features based on distances and angles between facial landmarks are extracted to feed a dynamic probabilistic classification framework. The public Karolinska Directed Emotional Faces (KDEF) dataset [12] is used to learn seven emotions (angry, fearful, disgusted, happy, sad, surprised, and neutral) performed by seventy subjects. Offline and on-the-fly tests were carried out: leave-one-out cross-validation on the dataset and on-the-fly tests during human-robot interactions. Preliminary results show that the proposed framework can correctly recognise human facial expressions, with potential to be used in human-robot interaction scenarios.
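The abstract does not spell out the exact geometric features, so the following is only a minimal sketch, assuming 2-D landmarks from a standard 68-point detector; the landmark indices and the particular distances and angles are illustrative assumptions, not the authors' actual feature set.

```python
# Minimal sketch of landmark-based geometric features, assuming 2-D facial
# landmarks (e.g. from a 68-point detector) are available as an (N, 2) array.
# Indices and feature choices are illustrative only.
import numpy as np

def pairwise_distance(landmarks: np.ndarray, i: int, j: int) -> float:
    """Euclidean distance between landmarks i and j."""
    return float(np.linalg.norm(landmarks[i] - landmarks[j]))

def angle_at(landmarks: np.ndarray, i: int, j: int, k: int) -> float:
    """Angle (radians) at landmark j formed by the segments j->i and j->k."""
    v1 = landmarks[i] - landmarks[j]
    v2 = landmarks[k] - landmarks[j]
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def feature_vector(landmarks: np.ndarray) -> np.ndarray:
    """Example feature vector: normalized mouth width, mouth opening,
    brow-eye gap, and mouth-corner angles (68-point index convention)."""
    face_scale = pairwise_distance(landmarks, 36, 45)  # inter-ocular distance
    feats = [
        pairwise_distance(landmarks, 48, 54) / face_scale,  # mouth width
        pairwise_distance(landmarks, 51, 57) / face_scale,  # mouth opening
        pairwise_distance(landmarks, 19, 37) / face_scale,  # brow-eye gap
        angle_at(landmarks, 51, 48, 57),                    # left mouth corner
        angle_at(landmarks, 51, 54, 57),                    # right mouth corner
    ]
    return np.asarray(feats, dtype=np.float32)
```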
The Internet of Vehicles (IoV) is a complex and dynamic mobile network system that enables information sharing between vehicles, their surrounding sensors, and clouds. While IoV opens new opportunities in various applications and services to provide safety on the road, it introduces new challenges for digital forensic investigations. The existing tools and procedures of digital forensics cannot cope with the highly distributed, decentralized, dynamic, and mobile infrastructure of the IoV. Forensic investigators face challenges in identifying, collecting, and analyzing the necessary pieces of evidence from the IoV environment. In this article, we propose Trust-IoV - a digital forensic framework for IoV systems that provides mechanisms to collect and store trustworthy evidence from the distributed infrastructure. Trust-IoV maintains a secure provenance of the evidence to ensure the integrity of the stored evidence and allows investigators to verify that integrity during an investigation. Our experimental results in a simulated environment suggest that Trust-IoV can operate with minimal overhead while ensuring the trustworthiness of evidence in a strong adversarial scenario.
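The abstract does not describe the provenance mechanism itself; the sketch below shows only a generic hash-chain approach to tamper-evident evidence provenance (each record links to the hash of its predecessor), which is one common way to make stored evidence verifiable. It is not Trust-IoV's actual scheme.

```python
# Generic hash-chain sketch of evidence provenance: each record is sealed with
# a hash that covers its predecessor, so later tampering breaks the chain.
# Illustrative only; not Trust-IoV's actual provenance mechanism.
import hashlib
import json
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    evidence_id: str
    payload_digest: str   # hash of the raw evidence blob
    prev_hash: str        # hash of the previous record in the chain
    record_hash: str = ""

    def seal(self) -> None:
        body = json.dumps(
            {"id": self.evidence_id, "payload": self.payload_digest,
             "prev": self.prev_hash}, sort_keys=True)
        self.record_hash = hashlib.sha256(body.encode()).hexdigest()

def append_record(chain, evidence_id: str, raw_evidence: bytes) -> ProvenanceRecord:
    """Append a new evidence record, linking it to the last record's hash."""
    prev = chain[-1].record_hash if chain else "0" * 64
    rec = ProvenanceRecord(evidence_id, hashlib.sha256(raw_evidence).hexdigest(), prev)
    rec.seal()
    chain.append(rec)
    return rec

def verify_chain(records) -> bool:
    """Recompute each record hash and check the links to detect tampering."""
    prev = "0" * 64
    for rec in records:
        expected = ProvenanceRecord(rec.evidence_id, rec.payload_digest, prev)
        expected.seal()
        if rec.prev_hash != prev or rec.record_hash != expected.record_hash:
            return False
        prev = rec.record_hash
    return True
```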
Barrier coverage has been widely adopted to prevent unauthorized invasion of important areas in sensor networks. As sensors are typically placed outdoors, they are susceptible to becoming faulty. Previous works assumed that faulty sensors are easy to recognize, e.g., they may stop functioning or output apparently deviant sensory data. In practice, however, it is extremely difficult to recognize faulty sensors as well as their invalid output. In this paper, we propose a novel fault-tolerant intrusion detection algorithm (TrusDet) based on trust management to address this challenging issue. TrusDet comprises three steps: i) sensor-level detection, ii) sink-level decision by collective voting, and iii) trust management and fault determination. In steps i) and ii), TrusDet divides the surveillance area into a set of fine-grained subareas and exploits the temporal and spatial correlation of sensory output among sensors in different subareas to yield more accurate and robust barrier coverage. In step iii), TrusDet builds a trust-management-based framework to determine the confidence level of a sensor being faulty. We implement TrusDet on HC-SR501 infrared sensors and demonstrate that it achieves the desired performance.
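As a rough illustration of steps ii) and iii), the sketch below combines trust-weighted majority voting per subarea with a simple reward/penalty trust update; the decision rule and the update constants are assumptions for illustration, not TrusDet's exact algorithm.

```python
# Sketch of trust-weighted collective voting over subareas plus a simple
# trust update. Assumes each sensor reports a binary intrusion decision for
# its subarea and carries a trust score in [0, 1]. Illustrative only.
from collections import defaultdict

def sink_decision(reports, trust, threshold=0.5):
    """reports: list of (sensor_id, subarea, detected: bool).
    Returns a per-subarea decision based on trust-weighted voting."""
    votes = defaultdict(lambda: [0.0, 0.0])  # subarea -> [weight_for, weight_total]
    for sensor_id, subarea, detected in reports:
        w = trust.get(sensor_id, 0.5)
        votes[subarea][1] += w
        if detected:
            votes[subarea][0] += w
    return {area: (wf / wt) >= threshold
            for area, (wf, wt) in votes.items() if wt > 0}

def update_trust(reports, decisions, trust, lr=0.1):
    """Reward sensors that agree with the collective decision, penalize the
    others; persistently low trust flags a sensor as likely faulty."""
    for sensor_id, subarea, detected in reports:
        agreed = detected == decisions.get(subarea, detected)
        t = trust.get(sensor_id, 0.5)
        trust[sensor_id] = min(1.0, t + lr) if agreed else max(0.0, t - lr)
    return trust
```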
Recently perceived vulnerabilities in public key infrastructures (PKIs) suggest that a semantic or cognitive definition of trust is essential for augmenting security through trust formulations. In this paper, we examine the meaning of trust in PKIs. Properly categorized trust can help in developing intelligent algorithms that adapt to the security and privacy requirements of clients. We delineate the different types of trust in a generic PKI model.
Remote Access Trojans (RATs) give remote attackers interactive control over a compromised machine. Unlike large-scale malware such as botnets, a RAT is controlled individually by a human operator interacting with the compromised machine remotely. The versatility of RATs makes them attractive to actors of all levels of sophistication: they have been used for espionage, information theft, voyeurism, and extortion. Despite their increasing use, there are still major gaps in our understanding of RATs and their operators, including motives, intentions, procedures, and weak points where defenses might be most effective. In this work we study the use of DarkComet, a popular commercial RAT. We collected 19,109 samples of DarkComet malware found in the wild and, over the course of two several-week-long experiments, ran as many samples as possible in our honeypot environment. By monitoring a sample's behavior in our system, we are able to reconstruct the sequence of operator actions, giving us a unique view into operator behavior. We report on the results of 2,747 interactive sessions captured in the course of the experiment. During these sessions operators frequently attempted to interact with victims via remote desktop, to capture video, audio, and keystrokes, and to exfiltrate files and credentials. To our knowledge, this is the first large-scale systematic study of RAT use.
This research-in-progress paper describes the cyber security measures undertaken in an ICT system for integrating electric storage technologies into the grid. It defines security requirements for a communications gateway, gives detailed information and hands-on configuration advice on node and communication-line security, data storage, and coping with backend M2M communication protocols, and examines privacy issues. The presented research paves the way for developing secure smart energy communication devices that help enhance energy efficiency. The described measures are implemented in an actual gateway device within the HORIZON 2020 project STORY, which aims at developing new ways to use storage and demonstrating them on six different demonstration sites.
Distributed secure data management in the Internet of Things (IoT) is characterized by authentication and privacy policies that preserve data integrity. Multi-phase security and privacy policies ensure confidentiality and trust between users and service providers. In this regard, we present a novel Two-phase Incentive-based Secure Key (TISK) system for distributed data management in IoT. The proposed system classifies the IoT user nodes and assigns low-level and high-level security keys for data transactions. Low-level secure keys are generic lightweight keys used by data collector nodes and data aggregator nodes for trusted transactions. The TISK phase-I Generic Service Manager (GSM-C) module verifies the IoT devices based on self-trust and server-trust incentive levels. High-level secure keys are dedicated special-purpose keys utilized by data manager nodes and data expert nodes for authorized transactions. The TISK phase-II Dedicated Service Manager (DSM-C) module verifies the certificates issued by the GSM-C module and further issues high-level secure keys to data manager nodes and data expert nodes for specific-purpose transactions. Simulation results indicate that the proposed TISK system reduces key complexity and key cost while ensuring distributed secure data management in the IoT network.
To build a resilient and secure microgrid in the face of growing cyber-attacks and cyber-mistakes, we present a software-defined networking (SDN)-based communication network architecture for microgrid operations. We leverage the global visibility, direct networking controllability, and programmability offered by SDN to investigate multiple security applications, including self-healing communication network management, real-time and uncertainty-aware communication network verification, and specification-based intrusion detection. We also develop a novel cyber-physical testing and evaluation platform that combines a power distribution system simulator (for microgrid energy services) with an SDN emulator and a distributed control environment (for microgrid communications). Experimental results demonstrate that the SDN-based communication architecture and applications can significantly enhance the resilience and security of microgrid operations against the realization of various cyber threats.
We present a testbed implementation for the development, evaluation and demonstration of security orchestration in a network function virtualization environment. As a specific scenario, we demonstrate how an intelligent response to DDoS and various other kinds of targeted attacks can be formulated such that these attacks and future variations can be mitigated. We utilise machine learning to characterise normal network traffic, attacks and responses, then utilise this information to orchestrate virtualized network functions around affected components to isolate these components and to capture, redirect and filter traffic (e.g. honeypotting) for additional analysis. This allows us to maintain a high level of network quality of service to given network functions and components despite adverse network conditions.
We develop and validate Internet path measurement techniques to distinguish congestion experienced when a flow self-induces congestion in the path from when a flow is affected by an already congested path. One application of this technique is for speed tests, where the user is affected by congestion either in the last mile or in an interconnect link. This difference is important because congestion in the last mile reflects the user's service plan (i.e., what they are paying for), whereas congestion in an interconnect link is caused by forces outside of their control. We exploit TCP congestion control dynamics to distinguish these cases for Internet paths that carry predominantly TCP traffic. In TCP terms, we re-articulate the question: was a TCP flow bottlenecked by an already congested (possibly interconnect) link, or did it induce congestion in an otherwise idle (possibly last-mile) link? TCP congestion control affects the round-trip time (RTT) of packets within the flow (i.e., the flow RTT): an endpoint sends packets at higher throughput, increasing the occupancy of the bottleneck buffer and thereby increasing the RTT of packets in the flow. We show that two simple statistical metrics derived from the flow RTT during the slow start period, its coefficient of variation and the normalized difference between the maximum and minimum RTT, can robustly identify which type of congestion the flow encounters. We use extensive controlled experiments to demonstrate that our technique works with up to 90% accuracy. We also evaluate our technique on two unique real-world datasets of TCP throughput measurements from Measurement Lab and the Ark platform, finding up to 99% accuracy in detecting self-induced congestion and up to 85% accuracy in detecting external congestion. Our results can benefit regulators of interconnection markets, content providers trying to improve customer service, and users trying to understand whether poor performance is something they can fix by upgrading their service tier.
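The two RTT statistics named above are straightforward to compute; the sketch below does so for a list of slow-start RTT samples. The decision rule and thresholds are placeholders to be calibrated against labeled data, since the abstract describes the metrics and the underlying buffer-filling mechanism but not the exact cutoffs.

```python
# Minimal sketch of the two slow-start RTT statistics described above: the
# coefficient of variation and the normalized max-min difference. Threshold
# values below are illustrative placeholders, not calibrated parameters.
import statistics

def rtt_metrics(slow_start_rtts):
    """Compute (coefficient of variation, normalized max-min spread)
    from RTT samples (e.g. in ms) taken during TCP slow start."""
    mean = statistics.fmean(slow_start_rtts)
    stdev = statistics.pstdev(slow_start_rtts)
    cov = stdev / mean
    norm_spread = (max(slow_start_rtts) - min(slow_start_rtts)) / max(slow_start_rtts)
    return cov, norm_spread

def classify_congestion(slow_start_rtts, cov_thresh=0.2, spread_thresh=0.5):
    """Heuristic: large RTT growth/variation during slow start suggests the
    flow itself filled an otherwise idle bottleneck (self-induced congestion);
    small variation suggests the path was already congested (external)."""
    cov, spread = rtt_metrics(slow_start_rtts)
    return "self-induced" if (cov > cov_thresh or spread > spread_thresh) else "external"
```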
Use of digital tokens - which certify the bearer's rights to some kind of product or service - is quite common nowadays for their convenience, ease of use, and cost-effectiveness. Many such digital tokens, however, are produced with software alone, making them vulnerable to forgery, including alteration and duplication. To better safeguard both the token owner's rights and the service provider's accountability, digital tokens should be as tamper-resistant as possible so that they can withstand physical attacks as well. In this paper, we present a rights management system that leverages tamper-resistant digital tokens created by hardware-software collaboration in our eTRON architecture. The system covers the complete life cycle of a digital token from generation to storage and redemption. Additionally, it provides a secure mechanism for transferring rights in a peer-to-peer manner over the Internet. The proposed system specifies protocols for permissible manipulations of digital tokens and provides a set of APIs for seamless application development. Access privileges to the tokens are strictly defined, and state-of-the-art asymmetric cryptography is used to ensure their confidentiality. Apart from the digital tokens being physically tamper-resistant, the protocols involved in the system are proven to be secure against attacks. Furthermore, an authentication mechanism is implemented that invariably precedes any operation involving the digital token in question. The proposed system presents clear security gains compared to existing systems that do not take tamper-resistance into account and to schemes that use symmetric-key cryptography.
Security Evaluation and Management (SEM) is a critically important process for protecting an Embedded System (ES) from various kinds of security exploits. In general, SEM processes face several challenges that limit their efficiency. Some are system-based challenges, such as the heterogeneity among system components and the system's size. Others are expert-based challenges, such as the possibility of mis-evaluation and the limited availability of experts. Many of these challenges were addressed by the Multi Metric (MM) framework, which relies on expert (subjective) evaluation for its basic evaluations. Despite its productivity, subjective evaluation has drawbacks (e.g., expert mis-evaluation) that foster the need to incorporate objective evaluations into the MM framework. In addition, the MM framework is system-centric; thus, when modelling a large and complex system with it, a guide is needed to indicate the changes required to reach the desired security requirements. This paper proposes extensions to the MM framework that incorporate objective evaluations and act as a guide for the changes needed to satisfy the desired security requirements.
Emerging cyber-physical systems (CPS) often require collecting end users' data to support data-informed decision making processes. There has been a long-standing argument as to the tradeoff between privacy and data utility. In this paper, we adopt a multiparametric programming approach to rigorously study conditions under which data utility has to be sacrificed to protect privacy and situations where free-lunch privacy can be achieved, i.e., data can be concealed without hurting the optimality of the decision making underlying the CPS. We formalize the concept of free-lunch privacy and establish various results on its existence, geometry, and efficient computation. We propose the free-lunch privacy mechanism, a pragmatic mechanism that exploits free-lunch privacy whenever it exists while guaranteeing optimal usage of data. We study the resilience of this mechanism against attacks that attempt to infer the parameters of a user's data-generating process. We close the paper with a case study on occupancy-adaptive smart home temperature control to demonstrate the efficacy of the mechanism.
Conventional cyber defenses require continual maintenance: virus, firmware, and software updates; costly functional impact tests; and dedicated staff within a security operations center. These conventional defenses require access to external sources for the latest updates. A whitelisted system, in contrast, is ideally a system that can sustain itself without external inputs. Cyber-Physical Systems (CPS) have two unique traits: digital commands are physically observable and verifiable, and the possible combinations of commands are limited and finite. These CPS traits, combined with a trust anchor that secures an unclonable digital identity (i.e., a digitally unclonable function [DUF] - Patent Application #15/183,454; CodeLock), offer an excellent opportunity to explore defenses built on a whitelisting approach called the “Trustworthy Design Architecture (TDA).” Significant research challenges remain in defining what constitutes a physically verifiable whitelist, as well as the criteria for cyber-physical traits that can serve as the unclonable identity. One goal of the project is to identify a set of physical and/or digital characteristics that can uniquely identify an endpoint. The measurements must be reliable, reproducible, and trustworthy. Given that adversaries naturally evolve with any defense, the adversary will aim to disrupt or spoof this process. To protect against such disruptions, we provide a unique system engineering technique that, when applied to CPSs (e.g., nuclear processing facilities, critical infrastructures), will sustain a secure operational state without ever needing external information or active inputs from cybersecurity subject-matter experts (i.e., virus updates, IDS scans, patch management, vulnerability updates). We do this by eliminating system dependencies on external sources for protection. Instead, all internal communication is actively sealed and protected with integrity, authenticity, and assurance checks that only cyber identities bound to the physical component can deliver. As CPSs continue to advance (i.e., IoTs, drones, ICSs), resilient, maintenance-free solutions are needed to neutralize or reduce cyber risks. TDA is a conceptual system engineering framework specifically designed for cyber-physical systems that can potentially be maintained and operated without the persistent need for vulnerability or security patch updates.
It is a well-known fact that access to sensitive information is nowadays performed through a three-tier architecture. Web applications have become a handy interface between users and data. As database-driven web applications are used more and more every day, they have become an attractive target for attackers aiming to access sensitive data. If an organization fails to deploy effective data protection systems, it may be open to various attacks. Governmental organizations, in particular, should think beyond traditional security policies in order to achieve proper data protection. It is therefore imperative to perform security testing and make sure that there are no holes in the system before an attack happens. One of the most common web application attacks is the insertion of an SQL query from the client side of the application, known as SQL Injection. Since an SQL Injection vulnerability can affect any website or web application that uses an SQL-based database, it is one of the oldest, most prevalent, and most dangerous web application vulnerabilities. To overcome SQL injection problems, different security systems need to be used. In this paper, we use three different scenarios for testing security systems and, using penetration testing, try to find out which is the best solution for protecting sensitive data within the government network of Kosovo.
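To make the attack class concrete, here is a minimal, self-contained illustration of an SQL injection against naive string concatenation and its standard mitigation with parameterized queries, using Python's sqlite3 and an invented users table; it is not taken from the paper's test scenarios.

```python
# Minimal illustration of SQL injection and its mitigation via parameterized
# queries, using sqlite3 as a stand-in database driver. Table and column
# names are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username: str, password: str) -> bool:
    # UNSAFE: user input is concatenated into the query, so an input such as
    # "' OR '1'='1" rewrites the WHERE clause and bypasses the check.
    query = ("SELECT 1 FROM users WHERE username = '" + username +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchone() is not None

def login_parameterized(username: str, password: str) -> bool:
    # SAFER: placeholders keep user input as data, never as SQL syntax.
    query = "SELECT 1 FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

print(login_vulnerable("alice", "' OR '1'='1"))      # True  -> injection succeeds
print(login_parameterized("alice", "' OR '1'='1"))   # False -> injection blocked
```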
Cloud computing is a solution to reduce the cost of IT by providing elastic access to shared resources. It also provides on-demand computing power and storage for devices at the edge networks with limited resources. However, the increasing number of connected devices brought by IoT architectures leads to higher network traffic and delay for cloud computing. The centralised architecture of cloud computing also makes the edge networks more susceptible to challenges in the core network. Fog computing is a solution to decrease network traffic and delay and to increase network resilience. In this paper, we study how fog computing may improve network resilience. We also conduct a simulation to study the effect of fog computing on network traffic and delay. We conclude that using fog computing yields better response times for interactive requests and makes the edge networks more resilient to challenges in the core network.
OpenFlow has recently emerged as a powerful paradigm to help build dynamic, adaptive, and agile networks. By decoupling the control plane from the data plane, OpenFlow allows network operators to program a centralized intelligence, the OpenFlow controller, to manage network-wide traffic flows to meet changing needs. However, from a security point of view, a buggy or even malicious controller could compromise the control logic, and then the entire network. Even worse, the recent Stuxnet attack on industrial control systems indicates a similar, severe threat to OpenFlow controllers from the commercial operating systems they run on. In this paper, we comprehensively study the attack vectors against the critical OpenFlow component, the controller, and propose a cross-layer diversity approach that enables OpenFlow controllers to detect attacks, corruptions, and failures, and then automatically continue correct execution. Case studies demonstrate that our approach can protect OpenFlow controllers from threats coming from compromised operating systems and from the controllers themselves.
In this paper, inspired by Gatys's recent work, we propose a novel approach that transforms photos into comics using deep convolutional neural networks (CNNs). While Gatys's method, which uses a pre-trained VGG network, generally works well for transferring artistic styles such as painting from a style image to a content image, for more minimalist styles such as comics it often fails to produce satisfactory results. To address this, we introduce a dedicated comic-style CNN trained to classify comic images and photos. This new network is effective in capturing various comic styles and thus helps to produce better comic stylization results. Even with a grayscale style image, Gatys's method can still produce colored output, which is not desirable for comics. We therefore develop a modified optimization framework that guarantees a grayscale image is synthesized. To avoid converging to poor local minima, we further initialize the output image with a grayscale version of the content image. Various examples show that our method synthesizes better comic images than the state-of-the-art method.
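For context, Gatys-style transfer represents style through Gram matrices of CNN feature maps; the sketch below shows that generic building block together with the grayscale initialization idea mentioned above, using NumPy arrays as stand-ins for network activations. It is a sketch of the general technique, not the authors' dedicated comic-style network or their modified optimization framework.

```python
# Generic building blocks referenced above: the Gram matrix used as a style
# representation in Gatys-style transfer, and initializing the output from a
# grayscale version of the content image. Illustrative sketch only.
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """features: CNN activations of shape (channels, height, width).
    Returns the (channels x channels) Gram matrix of channel correlations,
    normalized by the number of spatial positions."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def grayscale_init(content_rgb: np.ndarray) -> np.ndarray:
    """Initialize the output image from a grayscale version of the content
    image (replicated across channels) to bias the optimization toward
    uncolored, comic-like results."""
    gray = (0.299 * content_rgb[..., 0] + 0.587 * content_rgb[..., 1]
            + 0.114 * content_rgb[..., 2])
    return np.stack([gray, gray, gray], axis=-1)
```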
Clean-slate design of computing systems is an emerging topic for the continuing growth of warehouse-scale computers. A well-known custom design is rackscale (RS) computing, which treats a single rack as a computer consisting of a number of processors, storage devices, and accelerators customized to a target application. In RS, each user is expected to occupy one or more racks. However, new users frequently appear, and existing users often change their application scales and parameters, which may require different numbers of processors, storage devices, and accelerators in a rack. Reconfiguration of the interconnection network among these components is therefore potentially needed in RS. In this context, we propose the inter-rackscale (IRS) architecture, which disaggregates the various hardware resources into different racks according to their own areas. The heart of IRS is the use of free-space optics (FSO) for tightly coupled connections between processors, storage devices, and GPUs distributed in different racks, swapping the endpoints of FSO links to change network topologies. Through a large IRS system simulation, we show that by utilizing FSO links for interconnection between racks, the FSO-equipped IRS architecture can provide communication latency between heterogeneous resources comparable to that of the counterpart RS architecture. Using 3 FSO terminals per rack improves inter-CPU/SSD (GPU) communication by at least 87.34% over a Fat-tree topology and by at least 92.18% over a 2-D Torus. We also verify the advantages of IRS over RS in job scheduling performance.