Biblio
Smartphones have become ubiquitous in our everyday lives, providing diverse functionalities via the millions of applications (apps) that are readily available. To achieve these functionalities, apps need to access and utilize potentially sensitive data stored on the user's device. This can pose a serious threat to users' security and privacy when apps come from malicious or underskilled developers. While application marketplaces, like the Google Play Store and the Apple App Store, provide signals like ratings, user reviews, and number of downloads to distinguish benign from risky apps, studies have shown that these metrics are not adequately effective. The security and privacy health of an application should also be considered to generate a more reliable and transparent trustworthiness score. To automate the trustworthiness assessment of mobile applications, we introduce the Trust4App framework, which not only considers the publicly available factors mentioned above but also takes into account the Security and Privacy (S&P) health of an application. Additionally, it considers the S&P posture of the user and provides a holistic, personalized trustworthiness score. While existing automatic trustworthiness frameworks only consider trustworthiness indicators (e.g., permission usage, privacy leaks) individually, Trust4App is, to the best of our knowledge, the first framework to combine these indicators. We also implement a proof-of-concept realization of our framework and demonstrate that Trust4App provides a more comprehensive, intuitive and actionable trustworthiness assessment than existing approaches.
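A minimal sketch of the kind of indicator combination such a framework might use, assuming normalized indicator scores and user-supplied weights that encode the S&P posture; the indicator names, weights, and weighted-average rule below are illustrative assumptions, not Trust4App's actual scoring model:

```python
# Illustrative only: indicator names, weights, and the aggregation rule
# are assumptions, not the Trust4App scoring model.
def trust_score(indicators: dict, user_weights: dict) -> float:
    """Weighted average of indicator scores, all normalized to [0, 1]."""
    total = sum(user_weights.get(k, 1.0) for k in indicators)
    return sum(v * user_weights.get(k, 1.0) for k, v in indicators.items()) / total

app = {"rating": 0.86, "downloads": 0.70, "permission_usage": 0.40, "privacy_leaks": 0.25}
privacy_conscious = {"permission_usage": 3.0, "privacy_leaks": 3.0}  # user's S&P posture
print(f"personalized trust score: {trust_score(app, privacy_conscious):.2f}")
```

The same app thus scores lower for a privacy-conscious user than for one who weights all indicators equally, which is the "personalized" aspect of the assessment.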
Cyber-physical systems are the key innovation driver for many domains such as automotive, avionics, industrial process control, and factory automation. However, their interconnection potentially gives adversaries easy access to sensitive data, code, and configurations. If attackers gain control, material damage or even harm to people must be expected. To counteract data theft, system manipulation, and cyber-attacks, security mechanisms must be embedded in the cyber-physical system. Adding hardware security in the form of the standardized Trusted Platform Module (TPM) is a promising approach. At the same time, traditional dependability features such as safety, availability, and reliability have to be maintained. To determine the right balance between security and dependability, it is essential to understand their interferences. This paper supports developers in identifying the implications of using TPMs on the dependability of their systems. We highlight potential consequences of adding TPMs to cyber-physical systems by considering the resulting safety, reliability, and availability. Furthermore, we discuss the potential of enhancing the dependability of TPM services by applying traditional redundancy techniques.
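As a small worked example of how traditional redundancy interacts with TPM dependability, the sketch below computes the availability of a service backed by n TPMs in parallel; the availability figure is an illustrative assumption, not a measurement from the paper:

```python
# Illustrative only: a = 0.99 is an assumed per-TPM availability, and
# independence of failures is assumed for the parallel-redundancy formula.
def parallel_availability(a: float, n: int) -> float:
    """Service is up if at least one of n redundant TPMs is up."""
    return 1.0 - (1.0 - a) ** n

for n in (1, 2, 3):
    print(f"{n} TPM(s): availability = {parallel_availability(0.99, n):.6f}")
```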
The fifth generation of cellular networks (5G) will enable use cases in which security is more critical than ever before (e.g., autonomous vehicles and critical IoT devices). Unfortunately, the new networks are being built in the knowledge that security problems cannot be solved in the short term. Far from reinventing the wheel, one of our goals is to allow security software developers to implement and test their reactive solutions for the capillary network of 5G devices. Therefore, in this paper a solution for analysing proximity-based attacks in 5G environments is modelled and tested using OMNeT++. The solution, named CRAT, is able to decouple the security analysis from the hardware of the device, with the aim of extending the analysis of proximity-based attacks to different 5G use cases. We follow a high-level approach in which devices can take the roles of victim, offender, and guardian, following the principles of routine activity theory.
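A toy sketch of the routine-activity-theory roles mentioned here: a proximity attack is possible only when an offender is in range of a victim and no capable guardian covers the victim. Positions, ranges, and the exposure rule are illustrative assumptions, not CRAT's model:

```python
# Illustrative only: coordinates, ranges, and the exposure condition
# are assumptions for demonstrating the three roles.
import math

def in_range(a, b, r):
    return math.dist(a, b) <= r

devices = {"victim": (0, 0), "offender": (3, 4), "guardian": (1, 1)}
attack_range, guard_range = 6.0, 2.0
exposed = (in_range(devices["offender"], devices["victim"], attack_range)
           and not in_range(devices["guardian"], devices["victim"], guard_range))
print("victim exposed to proximity attack:", exposed)
```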
In recent years, there has been progress in applying information technology to industrial control systems (ICS), which is expected to lower the development cost of control devices and systems. On the other hand, security threats are becoming a serious problem; in 2017, a command injection issue in a data logger was reported. In this paper, we focus on risk assessment in the security design of data loggers used in industrial control systems. Our aim is to provide a risk assessment method optimized for control devices and systems, in which threats can be prioritized more precisely so that work resources (time and budget) can be assigned to the most important threats. We discuss problems with applying the automotive-security guideline JASO TP15002 to ICS risk assessment. Consequently, we propose a three-phase risk assessment method with a novel Risk Scoring System (RSS) for quantitative risk assessment, RSS-CWSS. The idea behind this method is to apply the CWSS scoring system to RSS by fixing the values of some CWSS metrics, considering what designers can evaluate during the concept phase. Our case study with an ICS employing a data logger shows that RSS-CWSS offers better risk-score dispersion than the TP15002-specified RSS.
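The core idea can be sketched as follows: score each threat CWSS-style while pinning the metrics a designer cannot evaluate at the concept phase to fixed values. The subscore structure and factor values below are illustrative assumptions, not the actual CWSS weight tables or the paper's exact formula:

```python
# Illustrative only: the subscore decomposition and all numbers are
# assumptions standing in for the CWSS metric groups and weights.
FIXED = {"internal_control_effectiveness": 1.0}  # unknown at concept phase -> fixed

def rss_cwss(base_impact: float, attack_surface: float, environment: float) -> float:
    """Base impact (0-100) scaled by two 0-1 multipliers, CWSS-style."""
    return base_impact * FIXED["internal_control_effectiveness"] * attack_surface * environment

threats = {"command injection on data logger": (90, 0.9, 0.8),
           "physical tampering": (70, 0.3, 0.5)}
for name, (b, s, e) in sorted(threats.items(), key=lambda t: -rss_cwss(*t[1])):
    print(f"{rss_cwss(b, s, e):6.1f}  {name}")
```

Fixing the unknowable metrics keeps the remaining ones discriminative, which is what produces the better risk-score dispersion reported in the case study.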
The analysis of security-related event logs is an important step in the investigation of cyber-attacks. It allows tracing malicious activities and lets a security operator find out what has happened. However, since IT landscapes are growing in size and diversity, the amount of events and their highly different representations are becoming a Big Data challenge. Unfortunately, current solutions for the analysis of security-related events, so-called Security Information and Event Management (SIEM) systems, are not able to keep up with the load. In this work, we propose a distributed SIEM platform that makes use of highly efficient distributed normalization and persists event data into an in-memory database. We implement the normalization on common distribution frameworks, i.e., Spark, Storm, Trident, and Heron, and compare their performance with our custom-built distribution solution. Additionally, different tuning options are introduced and their speed advantages are presented. Finally, we show how writing into an in-memory database can be tuned to achieve optimal persistence speed. Using the proposed approach, we are able to not only fully normalize but also persist more than 20 billion events per day with relatively small client hardware. We are therefore confident that our approach can handle the event load of even very large IT landscapes.
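The normalization step itself can be illustrated with a minimal sketch: heterogeneous raw events are mapped into one common schema so they can be persisted and queried uniformly. The field names and regex below are illustrative; the paper's normalizers and their distribution over Spark, Storm, Trident, and Heron are not reproduced here:

```python
# Illustrative only: the schema, field names, and regex are assumptions;
# in a cluster this map function would run per partition of the event stream.
import re

SSH_FAIL = re.compile(r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+)")

def normalize(raw: str) -> dict | None:
    m = SSH_FAIL.search(raw)
    if m:
        return {"event_type": "auth_failure", "user": m["user"], "src_ip": m["src_ip"]}
    return None  # unparsed events would go to a fallback normalizer

events = ["Jan  1 sshd[42]: Failed password for root from 10.0.0.5 port 2222 ssh2"]
print([normalize(e) for e in events])
```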
Cloud computing is the major paradigm in today's IT world, offering security management, high performance, flexibility, and scalability. Customers valuing these features can benefit further if they use a cloud environment built on an HPC fabric architecture. However, security is still a major concern, not only on the software side but also on the hardware side. Multiple studies have shown that malicious users can affect regular customers through the hardware if they are co-located on the same physical system. Therefore, solving these security concerns in the HPC fabric architecture would clearly make fabric vendors leaders in this area. In this paper, we propose an autonomic HPC fabric architecture that leverages both resilient computing capabilities and adaptive anomaly analysis for further security.
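As an illustration of the kind of adaptive anomaly analysis such a fabric could run per node, the sketch below flags metric samples whose z-score against a sliding baseline exceeds a threshold; the window size, threshold, and metric stream are illustrative assumptions, not the paper's design:

```python
# Illustrative only: window size, threshold, and readings are assumptions.
from collections import deque
import statistics

window, THRESH = deque(maxlen=50), 3.0

def is_anomalous(sample: float) -> bool:
    anomalous = False
    if len(window) >= 10:
        mu, sigma = statistics.mean(window), statistics.pstdev(window)
        anomalous = sigma > 0 and abs(sample - mu) / sigma > THRESH
    window.append(sample)  # the baseline adapts as the workload shifts
    return anomalous

readings = [1.0, 1.1] * 15 + [9.0]  # a sudden spike in, e.g., cache-miss rate
print([r for r in readings if is_anomalous(r)])
```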
Aggregate Computing is a promising paradigm for coordinating large numbers of possibly situated devices, typical of scenarios related to the Internet of Things, smart cities, drone coordination, and mass urban events. Currently, little work has been devoted to studying and improving security in aggregate programs, and existing works focus solely on application-level countermeasures. Those security systems work under the assumption that the underlying computational model is respected; however, so-called Byzantine behaviour violates that assumption. In this paper, we discuss how Byzantine behaviours can hinder an aggregate program and even exploit application-level protections to create greater disruption. We discuss how blockchain technology can mitigate these attacks by enforcing behaviours consistent with the expected operational semantics, with no impact on the application logic.
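A toy sketch of the mitigation idea: if devices commit each round's state to a hash chain, a Byzantine device that rewrites history produces a chain that no longer verifies. This reduces the blockchain to its hash-chain core and omits consensus entirely; it is not the paper's design:

```python
# Illustrative only: round states and the verification scheme are
# assumptions; a real blockchain adds consensus on top of the chaining.
import hashlib
import json

def link(prev_hash: str, state: dict) -> str:
    return hashlib.sha256((prev_hash + json.dumps(state, sort_keys=True)).encode()).hexdigest()

chain = ["genesis"]
for rnd in range(3):
    chain.append(link(chain[-1], {"round": rnd, "value": 42}))

def verify(states) -> bool:
    h = "genesis"
    for s in states:
        h = link(h, s)
    return h == chain[-1]

good = [{"round": r, "value": 42} for r in range(3)]
bad = [dict(s) for s in good]
bad[0]["value"] = 666  # Byzantine rewrite of an earlier round
print(verify(good), verify(bad))  # True False
```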
Autonomous systems are gaining momentum in various application domains, such as autonomous vehicles, autonomous transport robotics, and self-adaptation in smart homes. Product liability regulations impose high standards on manufacturers of such systems with respect to dependability (safety, security, and privacy). Today's conventional engineering methods are not adequate for providing dependability guarantees in a cost-efficient manner; for example, road tests in the automotive industry accumulate millions of miles before a system can be considered sufficiently safe. System engineers will no longer be able to test and formally verify autonomous systems during development time in order to guarantee the dependability requirements in advance. In this vision paper, we introduce a new holistic software systems engineering approach for autonomous systems, which integrates development-time methods as well as operation-time techniques. With this approach, we aim to give users a transparent view of the confidence level of the autonomous system in use with respect to its dependability requirements. We present results already obtained and point out research goals to be addressed in the future.
Self-Adaptive Systems (SAS) are revolutionizing many aspects of our society. From server clusters to autonomous vehicles, SAS are becoming more ubiquitous and essential to our world. Security is frequently a priority for these systems, as many SAS conduct mission-critical operations or work with sensitive information. Fortunately, security is increasingly recognized as an indispensable aspect of virtually all computing systems, in all phases of software development. Yet despite this growing prominence, from computing education to vulnerability detection systems, security remains just one concern among many in creating good software: however critical it is, it is a quality attribute of a SAS like reliability, stability, or adaptability.
At a time when all it takes to open a Twitter account is a mobile phone, authenticating information encountered on social media becomes very complex, especially when we lack measures to verify digital identities in the first place. Because the platform supports anonymity, fake news generated by dubious sources has been observed to travel much faster and farther than real news. Hence, we need valid measures to identify authors of misinformation to avert these consequences. Researchers have proposed different authorship attribution techniques for this kind of problem. However, because tweets are limited to 280 characters, finding a suitable authorship attribution technique is a challenge. This research aims to classify authors of tweets by comparing machine learning methods such as logistic regression and naive Bayes. The stages of this application are fetching of tweets, pre-processing, feature extraction, and developing a machine learning model for classification. This paper illustrates the text classification process for authorship using machine learning techniques. In total, 46,895 tweets were used as training and testing data, and unique features specific to Twitter were extracted. The pre-processing phase included removal of short texts, removal of stop words and punctuation, and tokenizing and stemming of texts. This approach transforms the pre-processed data into a set of feature vectors in Python. Logistic regression and naive Bayes algorithms were applied to the feature vectors for training and testing the classifier. The logistic regression classifier gave the highest accuracy, 91.1%, compared to 89.8% for the naive Bayes classifier.
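The described pipeline can be sketched with scikit-learn, assuming TF-IDF features as a stand-in for the Twitter-specific features used in the paper; the toy tweets and author labels below are illustrative, not the 46,895-tweet dataset:

```python
# Illustrative only: toy data and TF-IDF features stand in for the
# paper's dataset and Twitter-specific feature extraction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = ["great game tonight", "stocks rally again", "love this team", "market dips hard"]
authors = ["a", "b", "a", "b"]

X_train, X_test, y_train, y_test = train_test_split(
    tweets, authors, test_size=0.5, random_state=0, stratify=authors)
for clf in (LogisticRegression(max_iter=1000), MultinomialNB()):
    model = make_pipeline(TfidfVectorizer(), clf).fit(X_train, y_train)
    print(type(clf).__name__, "accuracy:", model.score(X_test, y_test))
```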
Parfait [1] is a static analysis tool originally developed to find implementation defects in C/C++ systems code. Parfait focuses on providing both high precision (a low false-positive rate) and scalability to systems with millions of lines of code (typically requiring 10 minutes of analysis time per million lines). Parfait has since been extended to detect security vulnerabilities in applications code, supporting the Java EE and PL/SQL server stack. In this abstract we describe some of the challenges we encountered in this process, including differences observed between the applications being analysed, the solutions that enable us to analyse a variety of applications, and a summary of the challenges that remain.
In traditional steganographic schemes, the payload is assigned equally to the three RGB channels of a true-color image. In fact, the security of color image steganography depends not only on the data-embedding algorithm but also on how the payload is partitioned among the channels. How to exploit inter-channel correlations to allocate the payload for better performance is still an open issue in color image steganography. In this paper, a novel channel-dependent payload partition strategy based on amplifying channel modification probabilities is proposed, so as to adaptively assign the embedding capacity among the RGB channels. The modification probabilities of the three corresponding pixels in the RGB channels are increased simultaneously, so that the embedding impacts are clustered, in order to improve the empirical steganographic security against channel co-occurrence detection. Experimental results show that color image steganographic schemes incorporating the proposed strategy effectively concentrate the embedding changes in textured regions and achieve better performance in resisting modern color image steganalysis.
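One way to make the channel-dependent idea concrete is a sketch that allocates payload shares proportionally to each channel's texture (here, plain variance as a stand-in for an embedding-cost model); this proportional rule is an illustrative assumption, not the paper's strategy of amplifying modification probabilities:

```python
# Illustrative only: a random stand-in cover image and per-channel
# variance replace a real cover and a real embedding-cost model.
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)  # stand-in cover image

total_payload_bits = 3000
texture = np.array([img[..., c].var() for c in range(3)])
shares = texture / texture.sum()  # more textured channel -> more payload
for c, bits in zip("RGB", (shares * total_payload_bits).astype(int)):
    print(f"channel {c}: {bits} bits")
```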
The popularization of the Internet of Things (IoT) has brought widespread concern about IoT security, especially in the face of several recent security incidents involving IoT devices. Due to the resource-constrained nature of many IoT devices, security offloading has been proposed to provide good-enough security for IoT with minimum overhead on the devices. In this paper, we investigate the inevitable risk associated with security offloading: the unprotected and unmonitored transmission from IoT devices to the offloaded security mechanisms. An important challenge in modeling this security risk is the dynamic nature of IoT, due to demand fluctuations and infrastructure instability. We propose a stochastic model to capture both the expected and worst-case security risks of an IoT system. We then propose a framework to efficiently find the optimal robust deployment of security mechanisms in IoT. Results from extensive simulations demonstrate the superior performance and efficiency of our approach compared to several other algorithms.
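The two risk notions can be illustrated with a toy discrete distribution over network states; the states, probabilities, and per-state risk values below are illustrative assumptions, not the paper's stochastic model:

```python
# Illustrative only: states, probabilities, and risk values are assumptions.
states = {  # state: (probability, risk of the unprotected device-to-offload hop)
    "normal": (0.90, 0.10),
    "congested": (0.08, 0.45),
    "link_failure": (0.02, 1.00),
}

expected_risk = sum(p * r for p, r in states.values())
worst_case_risk = max(r for _, r in states.values())
print(f"expected risk:   {expected_risk:.3f}")
print(f"worst-case risk: {worst_case_risk:.3f}")
# A robust deployment would be chosen to keep both quantities low
# across all candidate placements of the security mechanisms.
```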
This talk will cover two topics, namely, modeling and design of Moving Target Defense (MTD), and DIFT games for modeling Advanced Persistent Threats (APTs). We will first present a game-theoretic approach to characterizing the trade-off between resource efficiency and defense effectiveness in decoy- and randomization-based MTD. We will then address the game formulation for APTs. APTs are mounted by intelligent and resourceful adversaries who gain access to a targeted system and gather information over an extended period of time. APTs consist of multiple stages, including initial system compromise, privilege escalation, and data exfiltration, each of which involves strategic interaction between the APT and the targeted system. While this interaction can be viewed as a game, the stealthiness, adaptiveness, and unpredictability of APTs imply that the information structure of the game and the strategies of the APT are not readily available. Our approach to modeling APTs is based on the insight that the persistent nature of APTs creates information flows in the system that can be monitored. One monitoring mechanism is Dynamic Information Flow Tracking (DIFT), which taints and tracks malicious information flows through a system and inspects the flows at designated traps. Since tainting all flows in the system would incur significant memory and storage overhead, efficient tagging policies are needed to maximize the probability of detecting the APT while minimizing resource costs. In this work, we develop a multi-stage stochastic game framework for modeling the interaction between an APT and a DIFT, as well as designing an efficient DIFT-based defense. Our model is grounded in APT data gathered using the Refinable Attack Investigation (RAIN) flow-tracking framework. We present the current state of our formulation, insights that it provides on designing effective defenses against APTs, and directions for future work.
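The tagging-policy trade-off can be illustrated with a toy budgeted selection of flows to taint; the flows, probabilities, costs, greedy rule, and independence assumption below are illustrative, not the multi-stage stochastic game formulation:

```python
# Illustrative only: flow list, probabilities, costs, and the greedy
# knapsack-style rule are assumptions; detection events are assumed
# independent across tainted flows.
flows = [  # (flow id, prob. the APT uses this flow, tainting cost)
    ("proc->file", 0.40, 3), ("net->proc", 0.35, 2),
    ("file->net", 0.15, 1), ("ipc", 0.10, 1),
]
budget = 4  # memory/storage units available for tainting

chosen, spent = [], 0
for fid, p, cost in sorted(flows, key=lambda f: f[1] / f[2], reverse=True):
    if spent + cost <= budget:
        chosen.append(fid)
        spent += cost

miss = 1.0
for fid, p, _ in flows:
    if fid in chosen:
        miss *= 1 - p
print("taint:", chosen, "-> detection probability:", round(1 - miss, 3))
```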