Bibliography
Connected devices play an increasingly important role in our daily and business lives. In addition, various derivations of Cyber-Physical Systems (CPS) are reaching new business fields, such as smart healthcare and Industry 4.0. Although these systems bring many advantages for users by extending the overall functionality of existing systems, they come with several challenges, especially for system engineers and architects. One key challenge is achieving a sufficiently high level of security within the CPS environment, as sensitive data and safety-critical functions are often integral parts of CPS. As systems of systems (SoS), CPS exhibit a complexity, unpredictability, and heterogeneity that complicate both analyzing the overall level of security and detecting ongoing attacks. Security metrics and frameworks usually provide an effective tool for measuring the level of security of a given component or system. Although several comprehensive surveys exist, the effectiveness of the existing solutions in CPS environments is insufficiently investigated in the literature. In this work, we address this gap by benchmarking a carefully selected variety of existing security metrics in terms of their usability for CPS. Accordingly, we pinpoint critical CPS challenges and qualitatively assess the effectiveness of the existing metrics for CPS.
Pre-silicon hardware Trojan detection has been studied for years. The most popular benchmark circuits come from Trust-Hub, and their common feature is that the probability of activating the hardware Trojans is very low. This has led to a series of machine-learning-based hardware Trojan detection methods that try to find nets whose signal probability of being 0 or 1 is very low. On the other hand, it is assumed that if the probability of activating a hardware Trojan is high, the Trojan can easily be found through behavioural simulation or functional testing. This paper explores the "grey zone" between these two opposite scenarios: if the activation probability of a hardware Trojan is not low enough for machine learning to detect it, yet not high enough for behavioural simulation or functional testing to find it, it can escape detection. Experiments show the existence of such hardware Trojans, and this paper suggests a new set of hardware Trojan benchmark circuits for future study.
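To make the notion of activation probability concrete, the following minimal Java sketch (illustrative only, not taken from the paper) estimates, by random-input simulation, how often a hypothetical trigger net built as an AND of k input bits fires. Small k is easily caught by simulation, very large k is flagged by probability-based detectors, and intermediate k falls into the grey zone described above; the trigger width and sample count are assumptions.

```java
import java.util.Random;

/**
 * Sketch: estimates the signal probability of a hypothetical Trojan trigger
 * net by random-input simulation. A trigger that fires only when k specific
 * input bits are all 1 has activation probability about 2^-k, which is the
 * kind of rare net that probability-based detectors look for.
 */
public class TriggerProbability {
    public static void main(String[] args) {
        int k = 8;               // trigger width (assumption for illustration)
        int samples = 1_000_000; // number of random input vectors
        Random rng = new Random(42);
        int fired = 0;
        for (int i = 0; i < samples; i++) {
            boolean trigger = true;
            for (int b = 0; b < k; b++) {
                trigger &= rng.nextBoolean(); // AND of k random input bits
            }
            if (trigger) fired++;
        }
        double p = (double) fired / samples;
        System.out.printf("estimated activation probability: %.6f (expected %.6f)%n",
                          p, Math.pow(2, -k));
    }
}
```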
In this paper, we provide insights towards more robust automatic facial expression recognition in smart environments, based on a benchmark with three labeled facial expression databases. These databases were selected to cover desktop, 3D, and smart-environment application scenarios. This work is meant to provide a neutral comparison and guidelines for developers and researchers who want to integrate facial emotion recognition technologies into their applications and to understand their limitations as well as adaptation and enhancement strategies. We also introduce and compare three different metrics for finding the primary expression in a time window of a displayed emotion. In addition, we outline the limitations of facial emotion recognition in smart environments and non-frontal setups, together with possible enhancements. With this comparison and these enhancements, we hope to build a bridge from affective computing research and solution providers to application developers who want to enhance their applications with emotion-based user modeling.
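As an illustration of windowed primary-expression metrics, the sketch below implements one plausible variant, a simple majority vote over frame-level labels. The paper's three metrics are not spelled out here, so the method, labels, and tie-breaking are assumptions meant only to show the windowing idea.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch: derives the primary expression of a time window by majority vote
 * over per-frame classifier labels. Real metrics might instead weight by
 * classifier confidence or by frame recency.
 */
public class PrimaryExpression {
    static String primaryByMajority(List<String> frameLabels) {
        Map<String, Integer> counts = new HashMap<>();
        for (String label : frameLabels) {
            counts.merge(label, 1, Integer::sum); // tally each label
        }
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("neutral"); // fallback for an empty window (assumption)
    }

    public static void main(String[] args) {
        List<String> window = List.of("neutral", "happy", "happy", "surprise", "happy");
        System.out.println(primaryByMajority(window)); // -> happy
    }
}
```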
Dynamic software updating (DSU) is an extremely useful feature during software evolution. It can be used to reduce downtime costs, apply security enhancements, and support profiling and testing of new functionality. There are many studies and solutions addressing the diverse problems that DSU introduces, but there is a lack of research comparing the various approaches with respect to the changes they support and their demands on resources. In this paper, we compare currently available concepts for the Java programming language that deal with dynamically applied changes, and we measure the impact of those changes on computer resource demands.
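For readers unfamiliar with DSU in Java, the hedged sketch below shows the simplest baseline technique: reloading an updated class through a fresh class loader so that new instances run the new code without restarting the JVM. Real DSU systems (e.g., HotSwap-based tools) additionally handle state transfer and in-flight objects, which this sketch does not; the class name and classes directory are illustrative assumptions.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;

/**
 * Sketch: baseline dynamic update by loading a recompiled class file through
 * a fresh, isolated class loader. Old instances keep the old code; new
 * instances created through the new loader use the updated code.
 */
public class HotReload {
    public static void main(String[] args) throws Exception {
        // Directory holding the recompiled .class files (assumption).
        URL classesDir = Path.of("build/updated-classes").toUri().toURL();
        try (URLClassLoader loader =
                 new URLClassLoader(new URL[]{classesDir}, null)) {
            // Load the updated version, bypassing the already-loaded old one.
            Class<?> updated = loader.loadClass("com.example.Service");
            Runnable service =
                (Runnable) updated.getDeclaredConstructor().newInstance();
            service.run(); // executes the new implementation
        }
    }
}
```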
As the number of on-chip components increases, Network-on-Chip (NoC) interconnection architectures have become a promising way to solve the resulting complex on-chip communication problems. However, providing a suitable test base to measure and verify the functionality of any NoC is compulsory. The Universal Verification Methodology (UVM) was introduced as a standardized and reusable methodology for verifying integrated circuit designs. In this research, a scalable and reconfigurable verification and benchmark environment for NoC is proposed.
Over the last few years, researchers have proposed a multitude of automated bug-detection approaches that mine a class of bugs that we call API misuses. Evaluations on a variety of software products show both the omnipresence of such misuses and the ability of the approaches to detect them. This work presents MuBench, a dataset of 89 API misuses that we collected from 33 real-world projects and a survey. Using the dataset, we empirically analyze the prevalence of API misuses compared to other types of bugs, finding that they are rare but almost always cause crashes. Furthermore, we discuss how to use the dataset to benchmark and compare API-misuse detectors.
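To illustrate what an API misuse looks like, here is a hypothetical Java example of a pattern that misuse detectors typically target (it is not taken from the MuBench dataset): calling Iterator.next() without a hasNext() guard, which crashes on an empty collection, consistent with the finding that misuses are rare but almost always cause crashes.

```java
import java.util.Iterator;
import java.util.List;

/**
 * Hypothetical API misuse: Iterator.next() called without checking
 * hasNext() first, so an empty collection raises NoSuchElementException.
 */
public class IteratorMisuse {
    static String firstElement(List<String> items) {
        Iterator<String> it = items.iterator();
        return it.next(); // misuse: no hasNext() guard, throws on empty list
    }

    static String firstElementFixed(List<String> items) {
        Iterator<String> it = items.iterator();
        return it.hasNext() ? it.next() : null; // correct usage pattern
    }

    public static void main(String[] args) {
        System.out.println(firstElementFixed(List.of())); // null, no crash
        System.out.println(firstElement(List.of()));      // NoSuchElementException
    }
}
```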