Biblio
In this paper, an industrial testbed built from commercial off-the-shelf equipment is proposed and used to study the weaknesses of an industrial Ethernet protocol, PROFINET. The investigation is based on the operating principles of PROFINET and the functionality of industrial control systems.
Like any other software engineering activity, assessing the security of a software system entails prioritizing resources and minimizing risks. Techniques ranging from manual inspection to automated static and dynamic analyses are commonly employed to identify security vulnerabilities prior to the release of the software. However, none of these techniques is perfect: static analysis is prone to producing many false positives and false negatives, while dynamic analysis and manual inspection are unwieldy in terms of both time and cost. This research aims to improve these techniques by mining relevant information from vulnerabilities found in the app markets. The approach relies on the fact that many modern software systems, in particular mobile software, are developed using rich application development frameworks (ADFs), allowing us to raise the level of abstraction for detecting vulnerabilities and thereby making it possible to classify the types of vulnerabilities that are encountered in a given category of application. By coupling this type of information with the severity of the vulnerabilities, we are able to improve the efficiency of static and dynamic analyses and focus the manual effort on the riskiest vulnerabilities.
Online privacy policies notify users of a website how their personal information is collected, processed, and stored. Against the background of rising privacy concerns, privacy policies seem to represent an influential instrument for increasing customer trust and loyalty. In practice, however, consumers seem to read privacy policies only in rare cases, possibly reflecting the common assumption that policies are hard to comprehend. By designing and implementing an automated extraction and readability analysis toolset that embodies a diversity of established readability measures, we present the first large-scale study that provides current empirical evidence on the readability of nearly 50,000 privacy policies of popular English-language websites. The results empirically confirm that, on average, current privacy policies are still hard to read. Furthermore, this study presents new theoretical insights for readability research, in particular on the extent to which practical readability measures are correlated. Specifically, it shows the redundancy of several well-established readability metrics such as SMOG, RIX, LIX, GFI, FKG, ARI, and FRES, thus easing future metric selection and comparisons between readability studies, as well as calling for research towards a readability measures framework. Moreover, a more sophisticated privacy policy extractor and analyzer as well as a solid policy text corpus for further research are provided.
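For orientation, a minimal sketch of how two of the metrics named above are computed, using their standard published formulas; the naive vowel-group syllable counter is our own simplification, not part of the study's toolset:

    import re

    def count_syllables(word):
        # Naive heuristic (our assumption): count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def readability(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        w, s = len(words), sentences
        # Flesch Reading Ease Score (FRES): higher means easier to read.
        fres = 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w)
        # Flesch-Kincaid Grade (FKG): approximate U.S. school grade level.
        fkg = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59
        return fres, fkg

    print(readability("We collect your data. We may share it with partners."))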
Population protocols are a well-established model of computation by anonymous, identical finite-state agents. A protocol is well-specified if from every initial configuration, all fair executions of the protocol reach a common consensus. The central verification question for population protocols is the well-specification problem: deciding whether a given protocol is well-specified. Esparza et al. have recently shown that this problem is decidable, but with very high complexity: it is at least as hard as the Petri net reachability problem, which is EXPSPACE-hard and for which only algorithms of non-primitive recursive complexity are currently known. In this paper we introduce the class WS3 of well-specified strongly-silent protocols and prove that it is suitable for automatic verification. More precisely, we show that WS3 has the same computational power as general well-specified protocols and captures standard protocols from the literature. Moreover, we show that the membership problem for WS3 reduces to solving boolean combinations of linear constraints over N. This allowed us to develop the first software able to automatically prove well-specification for all of the infinitely many possible inputs.
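To make the model concrete, here is a minimal simulation sketch of a textbook, trivially well-specified protocol (our own illustrative example, not from the paper): agents hold opinion 0 or 1, and the single transition (1,0) -> (1,1) spreads opinion 1, so under any fair scheduler all agents converge to 1 if and only if some agent starts with 1:

    import random

    def run_protocol(config, steps=100000):
        # config: list of agent states (0 or 1).
        # Transition rule: (1, 0) -> (1, 1); every other pairing is a no-op.
        agents = list(config)
        for _ in range(steps):
            i, j = random.sample(range(len(agents)), 2)
            if agents[i] == 1 and agents[j] == 0:
                agents[j] = 1
        return agents

    # From any initial configuration containing a 1, fair executions
    # reach the all-1 consensus; otherwise the all-0 consensus holds.
    print(run_protocol([1, 0, 0, 0, 0]))  # reaches [1, 1, 1, 1, 1] (w.h.p. here)
    print(run_protocol([0, 0, 0, 0, 0]))  # stays at [0, 0, 0, 0, 0]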
Ensemble waveform analysis is used to calculate signal-to-noise ratio (SNR) and other recording characteristics from micromagnetically modeled heat-assisted magnetic recording waveforms and from waveforms measured at both drive and spin-stand level. Windowing functions provide the breakdown between transition and remanence SNRs. In addition, channel bit density (CBD) can be extracted from the ensemble waveforms using the di-bit extraction method. Trends in transition SNR, remanence SNR, and CBD as a function of ambient temperature at constant track width showed good agreement between model and measurement. Both model and drive-level measurement show degradation in SNR at higher ambient temperatures, which may be due to changes in the down-track profile at the track edges compared with the track center. CBD as a function of cross-track position is also calculated for both modeling and spin-stand measurements. The CBD widening at high cross-track offset, observed in both measurement and model, is directly related to the radius of curvature of the written transitions observed in the model and to the thermal profiles used.
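A minimal sketch of the ensemble-averaging idea behind such SNR estimates (our own simplified illustration, assuming repeated reads of the same track; the actual windowing and di-bit extraction steps are more involved):

    import numpy as np

    def ensemble_snr_db(waveforms):
        # waveforms: 2-D array, one row per repeated read of the same track.
        reads = np.asarray(waveforms, dtype=float)
        signal = reads.mean(axis=0)   # ensemble average estimates the signal
        noise = reads - signal        # per-read residuals estimate the noise
        p_signal = np.mean(signal ** 2)
        p_noise = np.mean(noise ** 2)
        return 10.0 * np.log10(p_signal / p_noise)

    # Synthetic example: a square-wave "recording" plus white noise.
    rng = np.random.default_rng(0)
    track = np.sign(np.sin(np.linspace(0, 20 * np.pi, 1000)))
    reads = track + 0.3 * rng.standard_normal((50, 1000))
    print(f"SNR = {ensemble_snr_db(reads):.1f} dB")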
The SCADA infrastructure is a key component of power grid operations. Securing the SCADA infrastructure against cyber intrusions is thus vital for a well-functioning power grid. However, the task remains a particular challenge, not least because not all available security mechanisms are easily deployable in these reliability-critical and complex, multi-vendor environments that host modern systems alongside legacy ones to support a range of sensitive power grid operations. This paper examines how effective a few countermeasures are likely to be in SCADA environments, including those that are commonly considered out of bounds. The results show that granular network segmentation is a particularly effective countermeasure, followed by frequent patching of systems (which unfortunately remains difficult to date). The results also show that the enforcement of a password policy and a restrictive network configuration, including whitelisting of devices, contribute to increased security, though they work best in combination with granular network segmentation.
The continuous advance in recent cloud-based computer networks has generated a number of security challenges associated with intrusions in network systems. With the exponential increase in the volume of network traffic data, involving humans in such detection systems is time-consuming and non-trivial. Moreover, network traffic data tends to be highly dimensional, comprising numerous features and attributes, which makes classification challenging and susceptible to the curse of dimensionality. Given such scenarios, the need arises for dimensionality reduction and feature selection, combined with machine-learning techniques, in the classification of such data. As a contribution, this paper employs data mining techniques in a cloud-based environment by selecting the attributes and features with the least importance, in terms of weight, for the classification. The standard practice is to select features with the best weights while ignoring those with the least; in this study, we instead ask whether we can make predictions using the features with the least weights. The motivation is that adversaries use stealth to hide their activities from the obvious: can we predict stealthy adversary activity from the least-observed attributes? Specifically, we employ information gain to select the attributes with the lowest weights and then apply machine learning to classify whether a combination, in this case of source and destination ports, is attacked or not. Our preliminary results show that even when the source and destination port attributes are used in combination with the least-weighted features, it is possible to classify such network traffic data and predict whether an attack will occur.
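A minimal sketch of the inverted selection step (assuming scikit-learn and synthetic data in place of the real traffic captures; mutual information is used here as the information-gain score, and the specific classifier is our own choice for illustration):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for labeled network traffic records.
    X, y = make_classification(n_samples=2000, n_features=30,
                               n_informative=10, random_state=0)

    # Rank features by information gain and keep the k LOWEST-scoring ones,
    # inverting the usual practice of keeping the highest-scoring features.
    scores = mutual_info_classif(X, y, random_state=0)
    k = 5
    lowest = np.argsort(scores)[:k]
    X_low = X[:, lowest]

    clf = RandomForestClassifier(random_state=0)
    acc = cross_val_score(clf, X_low, y, cv=5).mean()
    print(f"Accuracy using the {k} least-informative features: {acc:.2f}")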
Nowadays the application of integrated management systems (IMS) attracts the attention of top management in various organizations. However, an important problem remains: running security audits in an IMS and realizing complex, full-scale checks against different ISO standards while substantially reducing the resources required.
Detection and prevention of data breaches in corporate networks is one of the most important security problems of today's world. The techniques and applications proposed as solutions are not successful when attackers attempt to steal data using steganography. Steganography is the art of storing data in a carrier file, called the cover, such as a picture, sound, or video file. The concealed data cannot be directly recognized in the cover. Steganalysis is the process of revealing the presence of messages embedded in such files. There are many statistical and signature-based steganalysis algorithms. In this work, the detection of steganographic images with steganalysis techniques is reviewed, and a system has been developed that automatically detects steganographic images in network traffic using open-source tools.
Today, we witness the emergence of smart environments, where devices are able to connect independently without human intervention. Mobile ad hoc networks (MANETs) are an example of smart environments that are widely deployed in public spaces. They offer great services and features compared with wired systems. However, these networks are more sensitive to malicious attacks because of the lack of infrastructure and the self-organizing nature of devices. Thus, communication between nodes is much more exposed to various security risks than in other networks. In this paper, we present a synthetic study of security concepts for MANETs, and then introduce a contribution based on evaluating link quality, using the ETX metric, to enhance network availability.
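For reference, a minimal sketch of the ETX (expected transmission count) computation, following its standard definition from the literature; the probe delivery ratios are illustrative values:

    def etx(df, dr):
        # df: forward delivery ratio (fraction of probes the neighbor receives)
        # dr: reverse delivery ratio (fraction of probes received back)
        # ETX = 1 / (df * dr): expected number of transmissions, including
        # retransmissions, needed to deliver a packet over the link.
        if df == 0 or dr == 0:
            return float("inf")
        return 1.0 / (df * dr)

    # Example: 90% of probes arrive forward, 80% arrive back.
    print(etx(0.9, 0.8))  # ~1.39 expected transmissions per packet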
In recent years, behavioral biometrics have become a popular approach to support continuous authentication systems. Most generally, a continuous authentication system can make two types of errors: false rejects and false accepts. Accordingly, the most commonly reported metrics for evaluating systems are the False Reject Rate (FRR) and False Accept Rate (FAR). However, most papers only report the mean of these measures, with little attention paid to their distribution. This is problematic, as systematic errors allow attackers to perpetually escape detection, while random errors are less severe. Using 16 biometric datasets, we show that these systematic errors are very common in the wild. We show that some biometrics (such as eye movements) are particularly prone to systematic errors, while others (such as touchscreen inputs) show more even error distributions. Our results also show that the inclusion of some distinctive features lowers average error rates but significantly increases the prevalence of systematic errors. As such, blind optimization of the mean Equal Error Rate (EER), through feature engineering or selection, can sometimes lead to lower security. Following this result we propose the Gini Coefficient (GC) as an additional metric to accurately capture different error distributions. We demonstrate the usefulness of this measure both to compare different systems and to guide researchers during feature selection. In addition to the selection of features and classifiers, some non-functional machine learning methodologies also affect error rates. The most notable examples are the selection of training data and the attacker model used to develop the negative class. 13 of the 25 papers we analyzed either include impostor data in the negative class or randomly sample training data from the entire dataset, and a further 6 give no information on the methodology used. Using real-world data, we show that these two decisions lead to significant underestimation of error rates, by 63% and 81%, respectively. This is an alarming result, as it suggests that researchers are either unaware of the magnitude of these effects or might even be purposefully attempting to over-optimize their EER without actually improving the system.
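A minimal sketch of the proposed metric (our own implementation of the standard Gini coefficient, applied here to hypothetical per-user error rates): values near 0 indicate errors spread evenly across users (random errors), while values near 1 indicate errors concentrated on a few users (systematic errors).

    import numpy as np

    def gini(values):
        # Mean absolute difference between all pairs, normalized by 2 * mean.
        x = np.asarray(values, dtype=float)
        diffs = np.abs(x[:, None] - x[None, :]).mean()
        return diffs / (2.0 * x.mean())

    even = [0.05, 0.06, 0.05, 0.04, 0.05]        # errors spread across users
    systematic = [0.00, 0.00, 0.24, 0.01, 0.00]  # errors hit one user
    print(f"even:       GC = {gini(even):.2f}")
    print(f"systematic: GC = {gini(systematic):.2f}")

Note that both lists have the same mean error rate, so a mean-only metric such as the EER cannot distinguish them, which is precisely the gap the GC is meant to fill.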
The Internet of Things (IoT) is characterized by heterogeneous devices that interact with each other on a collaborative basis to fulfill a common goal. In this scenario, some of the deployed devices are expected to be constrained in terms of memory usage, power consumption, and processing resources. To address the specific properties and constraints of such networks, a complete stack of standardized protocols has been developed, among them the Routing Protocol for Low-Power and Lossy Networks (RPL). However, this protocol is exposed to a large variety of attacks from inside the network itself. To address this exposure, this paper focuses on the design and integration of a novel link-reliability and trust-aware model into the RPL protocol. Our approach aims to ensure trust among entities and to provide QoS guarantees during the construction and maintenance of the network routing topology. Our model targets both node and link trust and follows a multidimensional approach to enable an accurate trust value computation for IoT entities. To prove the efficiency of our proposal, it has been implemented and tested successfully within an IoT environment, and a set of experiments has been conducted to show the high accuracy of our system.
Hardware Trojan (HT) detection methods based on side-channel analysis suffer severely from process variations. In order to suppress the effect of these variations, we devise a method that smartly selects two highly correlated paths for each interconnect (edge) suspected of hosting an HT. The first path is the shortest one passing through the suspected edge, and the second is a path highly correlated with the first. The delay ratio of these paths enables the detection of HT-inserted circuits. Test results reveal that the method enables the detection of even minimally invasive Trojans despite both inter-die and intra-die variations with spatial correlations.
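A minimal sketch of the underlying idea (our own simplified simulation; actual path selection and delay measurement are far more involved): because the delays of two spatially correlated paths scale together under process variation, their ratio stays nearly constant across dies, while a Trojan loads only the suspected path and shifts that ratio.

    import random

    def measured_ratio(trojan_delay=0.0):
        # Both paths share a die-wide scaling factor (the correlation),
        # plus small independent intra-die noise (values are illustrative).
        die_factor = random.gauss(1.0, 0.10)  # inter-die variation
        d1 = 5.0 * die_factor * random.gauss(1.0, 0.02) + trojan_delay
        d2 = 4.0 * die_factor * random.gauss(1.0, 0.02)
        return d1 / d2

    clean = [measured_ratio() for _ in range(1000)]
    infected = [measured_ratio(trojan_delay=0.3) for _ in range(1000)]
    avg = lambda xs: sum(xs) / len(xs)
    print(f"clean ratio    ~ {avg(clean):.3f}")     # ~1.25 on every die
    print(f"infected ratio ~ {avg(infected):.3f}")  # visibly shifted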
Cross-VM attacks have emerged as a major threat on commercial clouds. These attacks commonly exploit hardware-level leakages on shared physical servers. A co-located machine can readily feel the presence of a co-located instance with a heavy computational load through performance degradation due to contention on shared resources. Shared cache architectures such as the last-level cache (LLC) have become a popular leakage source for mounting cross-VM attacks. By exploiting LLC leakages, researchers have already shown that it is possible to recover fine-grained information such as cryptographic keys from popular software libraries. This makes it essential to verify implementations that handle sensitive data across their many versions and numerous target platforms, a task too complicated, error-prone, and costly to be handled by humans. Here we propose a machine learning based technique to classify applications according to their cache access profiles. We show that, with minimal and simple manual processing steps, feature vectors can be used to train models using support vector machines to classify the applications with a high degree of success. The profiling and training steps are completely automated and do not require any inspection or study of the code to be classified. In native execution, we achieve a successful classification rate as high as 98% (L1 cache) and 78% (LLC) over 40 benchmark applications in the Phoronix suite with mild training. In the cross-VM setting on the noisy Amazon EC2, the success rate drops to 60% for a suite of 25 applications. With this initial study we demonstrate that it is possible to train meaningful models to successfully predict applications running in co-located instances.
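A minimal sketch of the classification step (assuming scikit-learn, with synthetic feature vectors standing in for real cache-access traces; feature extraction from actual cache measurements is the part the paper automates):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in: each row is a feature vector derived from an
    # application's cache access profile; each class is one application.
    X, y = make_classification(n_samples=1200, n_features=64,
                               n_informative=24, n_classes=4,
                               n_clusters_per_class=1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Support vector machine with an RBF kernel, after standardization.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X_tr, y_tr)
    print(f"classification accuracy: {model.score(X_te, y_te):.2f}")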
In the past, the security of building automation depended solely on the security of the devices inside or tightly connected to the building. In recent years, more and more devices have come to rely on some kind of cloud service as a back-end, or are supplied to the user by a provider. The number of building automation systems connected to the Internet for management, control, and data storage also increases every year. These developments give rise to new threats to building automation. As the Internet of Things (IoT) and building automation intertwine more and more, these threats also apply to IoT installations. This paper presents new attack vectors and new threats using the threat model of Meyer et al. [1].
We present an object tracking framework that fuses multiple unstable video-based methods and supports automatic tracker initialization and termination. To evaluate our system, we collected a large dataset of hand-annotated 5-minute traffic surveillance videos, which we are releasing to the community. To the best of our knowledge, this is the first publicly available dataset of such long videos, providing a diverse range of real-world object variation, scale change, interaction, resolutions, and illumination conditions. In our comprehensive evaluation using this dataset, we show that our automatic object tracking system often outperforms state-of-the-art trackers, even when these are provided with proper manual initialization. We also demonstrate tracking throughput improvements of 5× or more over the competition.
This work introduces concepts and algorithms, along with a case study validating them, to enhance event detection, pattern recognition, and anomaly identification in real-life video surveillance. The motivation for the work lies in the observation that human behavioral patterns in general continuously evolve and adapt with time, rather than being static. First, limitations in existing work with respect to this phenomenon are identified. Accordingly, the notion and algorithms of Dynamic Clustering are introduced to overcome these drawbacks. Correspondingly, we propose the concept of maintaining two separate sets of data in parallel, namely the Normal Plane and the Anomaly Plane, to achieve the task of learning continuously. The practicability of the proposed algorithms in a real-life scenario is demonstrated through a case study. From the analysis presented in this work, it is evident that a more comprehensive analysis, closely following human perception, can be accomplished by incorporating the proposed notions and algorithms into video surveillance systems.
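A minimal sketch of the two-plane idea (our own simplified reading of the concept; the one-dimensional features, threshold, and incremental centroid updates are illustrative assumptions):

    # Online clustering with a Normal Plane and an Anomaly Plane.
    THRESHOLD = 2.0

    normal_plane = []   # list of [centroid, count] for learned normal behavior
    anomaly_plane = []  # same structure, maintained in parallel for anomalies

    def observe(x):
        # Route x to the normal plane if it is near a known normal cluster
        # (or if nothing has been learned yet); otherwise to the anomaly
        # plane. Both planes keep evolving, so recurring "anomalies" can
        # later be recognized as patterns rather than one-off outliers.
        if not normal_plane or min(abs(x - c) for c, _ in normal_plane) <= THRESHOLD:
            plane = normal_plane
        else:
            plane = anomaly_plane
        for cluster in plane:
            if abs(x - cluster[0]) <= THRESHOLD:
                cluster[1] += 1
                cluster[0] += (x - cluster[0]) / cluster[1]  # drift with data
                return
        plane.append([x, 1])

    for x in [1.0, 1.2, 0.9, 9.5, 1.1, 9.7, 9.6]:
        observe(x)
    print("normal plane: ", normal_plane)
    print("anomaly plane:", anomaly_plane)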
In recent years, a wide range of wearable IoT healthcare applications have been developed and deployed. The rapid increase in wearable devices allows the transfer of patients' personal information between different devices; at the same time, the personal health and wellness information of patients can be tracked and attacked. There are many techniques for protecting patient information in medical and wearable devices. In this research, a comparative study of the complexity of cyber security architectures and their application in the IoT healthcare industry has been carried out. The objective of the study is to protect the healthcare industry from cyber attacks, focusing on IoT-based healthcare devices. The design has been implemented on a Xilinx Zynq-7000, targeting the XC7Z030-3fbg676 FPGA device.
ICT systems have become an integral part of business and life. At the same time, these systems have become extremely complex. Such systems contain numerous vulnerabilities waiting to be exploited by potential threat actors. pwnPr3d is a novel modelling approach that performs automated architectural analysis with the objective of measuring the cyber security of the modelled architecture. Its integrated modelling language allows users to model software and hardware components with a great level of detail. To illustrate this capability, we present in this paper the metamodel of UNIX, operating systems being at the core of all software and every IT system. After describing the main UNIX constituents and how they have been modelled, we illustrate how the modelled OS integrates with pwnPr3d's rationale by modelling the spread of self-replicating malware inspired by WannaCry.