Biblio
Networked embedded systems (which include IoT, CPS, etc.) are vulnerable. Even though we know how to secure these systems, the heterogeneity of both the systems themselves and their security policies remains a major problem. Designers face increasingly sophisticated attacks, yet they are not always security experts and must trade off competing design criteria. In this paper we propose CLASA (Cross-Layer Agent Security Architecture), a generic, integrated, interoperable, decentralized and modular architecture that relies on cross-layering.
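The abstract describes CLASA only at this high level. As a rough, hypothetical illustration of the cross-layer agent idea (none of these names or structures come from the paper), per-layer security agents could share events over a common bus so that a detection at one layer can trigger reactions at the others:

```python
# Hypothetical sketch of cross-layer security agents sharing events.
# All names are illustrative; the paper does not specify an API.
from collections import defaultdict

class EventBus:
    """Simple pub/sub bus that decentralized agents use to share security events."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

class LayerAgent:
    """One security agent per protocol layer (e.g., MAC, network, application)."""
    def __init__(self, layer, bus):
        self.layer, self.bus = layer, bus
        bus.subscribe("security-event", self.on_event)

    def report(self, description):
        self.bus.publish("security-event", {"layer": self.layer, "event": description})

    def on_event(self, event):
        if event["layer"] != self.layer:  # react only to other layers' alerts
            print(f"[{self.layer}] reacting to {event['layer']} event: {event['event']}")

bus = EventBus()
agents = [LayerAgent(l, bus) for l in ("MAC", "network", "application")]
agents[0].report("jamming suspected")  # a MAC-layer detection propagates upward
```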
Verifying the identity of nodes within a wireless ad hoc mesh network, and the authenticity of their messages, in ways that are sufficiently secure yet power-efficient is a long-standing challenge. This paper shows how the more recent concepts of self-sovereign identity management can be applied to Internet-of-Things mesh networks, using LoRaWAN as an example and applying Sovrin's decentralized identifiers and verifiable credentials in combination with Schnorr signatures to secure communication, with a focus on simplex and broadcast connections. Besides the concept and system architecture, the paper discusses an ESP32-based implementation using SX1276/SX1278 LoRa chips and the adaptations made to the lmic- and MbedTLS-based software stack, and evaluates performance in terms of data overhead, time-on-air impact, and power consumption.
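For reference, the Schnorr signature scheme named in the abstract fits in a few lines. The following toy Python version works over a tiny multiplicative group purely for illustration; the parameters are cryptographically useless, and the paper's implementation would use real group parameters via its MbedTLS-based stack:

```python
import hashlib, secrets

# Toy Schnorr group: g = 4 has prime order q = 11 modulo p = 23.
# Illustration only; never use parameters this small in practice.
p, q, g = 23, 11, 4

def H(R, msg):
    """Hash the commitment and message to a challenge e in Z_q."""
    return int.from_bytes(hashlib.sha256(str(R).encode() + msg).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1      # private key in [1, q-1]
    return x, pow(g, x, p)                # (private key, public key y = g^x)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1      # fresh per-signature nonce
    R = pow(g, k, p)                      # commitment R = g^k
    e = H(R, msg)                         # challenge
    s = (k + e * x) % q                   # response
    return R, s

def verify(y, msg, sig):
    R, s = sig
    e = H(R, msg)
    return pow(g, s, p) == (R * pow(y, e, p)) % p   # g^s == R * y^e

x, y = keygen()
sig = sign(x, b"broadcast beacon")
assert verify(y, b"broadcast beacon", sig)
```

Since verification needs only the sender's public key, this fits the paper's focus on simplex and broadcast links where no handshake is possible.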
The new generation of digital services is natively conceived as an ordered set of Virtual Network Functions, deployed across boundaries and organizations. In this context, security threats, variable network conditions, limited computational and memory capabilities, and software vulnerabilities may significantly weaken the whole service chain, making it very difficult to combat the newest kinds of attacks. It is therefore extremely important to conceive a flexible (and standard-compliant) framework able to attest the trustworthiness and reliability of each function of a Service Function Chain. At the time of this writing, and to the best of the authors' knowledge, the scientific literature has addressed these problems largely in isolation. To bridge this gap, this paper proposes a novel methodology tailored to the ETSI NFV framework. On one side, Software-Defined Controllers continuously monitor the properties and performance indicators, taken from the networking domains, of each Virtual Network Function available in the architecture. On the other side, a high-level orchestrator combines, on demand, suitable Virtual Network Functions into a Service Function Chain, based on user requests, targeted security requirements, and measured reliability levels. The paper concludes by illustrating the functionalities of the proposed architecture through a use case.
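As a loose sketch of the on-demand composition step (field names and the selection rule are assumptions, not taken from the paper), the orchestrator could filter candidate VNFs by the reliability scores the controllers report and pick the most trustworthy one per requested function type:

```python
# Hypothetical chain-composition step; the paper's orchestrator is richer.
from dataclasses import dataclass

@dataclass
class VNF:
    name: str
    kind: str           # e.g., "firewall", "dpi", "nat"
    reliability: float  # score measured by the SDN controllers, in [0, 1]

def compose_chain(catalog, requested_kinds, min_reliability):
    """Build an ordered chain, one VNF per requested kind, meeting the target."""
    chain = []
    for kind in requested_kinds:
        candidates = [v for v in catalog
                      if v.kind == kind and v.reliability >= min_reliability]
        if not candidates:
            raise RuntimeError(f"no sufficiently trustworthy VNF for {kind!r}")
        chain.append(max(candidates, key=lambda v: v.reliability))
    return chain

catalog = [VNF("fw-a", "firewall", 0.97), VNF("fw-b", "firewall", 0.80),
           VNF("dpi-a", "dpi", 0.92)]
print([v.name for v in compose_chain(catalog, ["firewall", "dpi"], 0.9)])
# -> ['fw-a', 'dpi-a']
```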
The electrical power system is the backbone of our nation's critical infrastructure. It has been designed to withstand single component failures based on a set of reliability metrics which have proven acceptable during normal operating conditions. However, recent years have seen an increasing frequency of extreme weather events, many of which resulted in widespread long-term power outages, showing that reliability metrics alone do not provide adequate energy security. As a result, researchers have focused their efforts on resilience metrics to ensure efficient operation of power systems during extreme events. A resilient system has the ability to resist, adapt, and recover from disruptions; resilience has therefore emerged as a promising concept for the challenges currently faced by power distribution systems. In this work, we propose an operational resilience metric for modern power distribution systems based on the aggregation of system assets' adaptive capacity in real and reactive power. The metric indicates the magnitude and duration of a disturbance the system can withstand. We demonstrate the metric in a case study under normal operation and during a power contingency on a microgrid. In the future, this information can be used by operators to make more informed, resilience-aware decisions in an effort to prevent power outages.
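A minimal sketch of what aggregating adaptive capacity might look like, assuming capacity is simply the real- and reactive-power headroom summed over assets (the paper defines its own formulation, which this does not claim to reproduce):

```python
# Assumed aggregation rule: adaptive capacity = dispatchable headroom per asset,
# summed separately for real power (P) and reactive power (Q).
def adaptive_capacity(assets):
    p_headroom = sum(a["p_rated"] - a["p_out"] for a in assets)
    q_headroom = sum(a["q_rated"] - a["q_out"] for a in assets)
    return p_headroom, q_headroom  # kW / kvar the system can still absorb

assets = [
    {"name": "diesel-gen", "p_rated": 500, "p_out": 350, "q_rated": 300, "q_out": 120},
    {"name": "battery",    "p_rated": 250, "p_out":  50, "q_rated": 150, "q_out":  20},
]
p_cap, q_cap = adaptive_capacity(assets)
print(f"system can withstand a disturbance of up to {p_cap} kW / {q_cap} kvar")
```

Tracking these headrooms over time would also bound the duration of a disturbance the system can ride through, which is the second dimension the metric reports.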
The purpose of the General Data Protection Regulation (GDPR) is to provide improved privacy protection. Any app that controls personal data from users needs to be GDPR-compliant. However, GDPR lists general rules rather than exact step-by-step guidelines for developing an app that fulfills the requirements, so existing apps may contain GDPR compliance violations that pose severe privacy threats to their users. In this paper, we take mobile health applications (mHealth apps) as a lens to examine the status quo of GDPR compliance in Android apps. We first propose an automated system, named HPDROID, to bridge the semantic gap between the general rules of GDPR and app implementations by identifying the data practices declared in the app privacy policy and the data-relevant behaviors in the app code. Then, based on HPDROID, we detect three kinds of GDPR compliance violations: incomplete privacy policies, inconsistent data collection, and insecure data transmission. We perform an empirical evaluation of 796 mHealth apps. The results reveal that 189 (23.7%) of them do not provide complete privacy policies. Moreover, 59 apps collect sensitive data through various means, and 46 (77.9%) of them exhibit at least one inconsistent collection behavior. Even worse, among the 59 apps, only 8 attempt to secure the transmission of collected data, yet all of them contain at least one encryption or SSL misuse. Our work exposes severe privacy issues to raise awareness of privacy protection among app users and developers.
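The inconsistency check at the core of such an analysis can be pictured as a set comparison between declared and observed data types. The snippet below is a toy stand-in for HPDROID's actual pipeline, whose policy parsing and code analysis would produce these sets:

```python
# Toy consistency check between a privacy policy and observed app behavior.
# The two sets stand in for the outputs of policy parsing and code analysis.
declared_in_policy = {"heart_rate", "step_count", "email"}
collected_in_code  = {"heart_rate", "step_count", "location", "device_id"}

undeclared = collected_in_code - declared_in_policy  # collected but not disclosed
unused     = declared_in_policy - collected_in_code  # disclosed but never collected

if undeclared:
    print("potential GDPR violation, undeclared collection of:", sorted(undeclared))
# -> potential GDPR violation, undeclared collection of: ['device_id', 'location']
```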
The current evaluation of API recommendation systems mainly focuses on correctness, which is calculated by matching results with ground-truth APIs. However, this measurement can be skewed when a result contains more than one API. In practice, some APIs implement basic functionalities (e.g., printing and log generation); they can be invoked everywhere, and they may contribute less to the given requirement than functionally related APIs do. To study the impact of such correct-but-useless APIs, we measure them in terms of utility. Our study is conducted on more than 5,000 matched results generated by two specification-based API recommendation techniques. The results show that the matched APIs are heavily overlapped: 10% of the APIs compose more than 80% of the matched results. These 10% of APIs are all correct, but few of them implement the required functionality. We further propose a heuristic approach to measure utility and conduct an online evaluation with 15 developers. Their reports confirm that matched results with higher utility scores usually involve more programming effort than those with lower scores.
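One plausible heuristic for such a utility score, offered only as an assumption since the abstract does not give the paper's formula, is an IDF-style weight that penalizes APIs appearing in nearly every matched result:

```python
# Assumed utility heuristic: down-weight APIs that occur in almost every
# matched result, since ubiquitous APIs (printing, logging) contribute little
# to the required functionality.
import math
from collections import Counter

def utility_scores(matched_results):
    n = len(matched_results)
    freq = Counter(api for result in matched_results for api in set(result))
    # Rare, task-specific APIs score high; println-like APIs score near zero.
    return {api: math.log(n / count) for api, count in freq.items()}

results = [["println", "HttpClient.send"],
           ["println", "Files.readAllBytes"],
           ["println", "HttpClient.send", "URI.create"]]
for api, score in sorted(utility_scores(results).items(), key=lambda kv: -kv[1]):
    print(f"{api}: {score:.2f}")   # println scores 0.00, appearing everywhere
```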