Biblio
Over the years cybercriminals have misused the Domain Name System (DNS) - a critical component of the Internet - for profit. Despite this persisting trend, little empirical information exists about the security of Top-Level Domains (TLDs) and the overall 'health' of the DNS ecosystem. In this paper, we present security metrics for this ecosystem and measure their operational values using three representative phishing and malware datasets. We benchmark entire TLDs against the rest of the market. We explicitly distinguish these metrics from the idea of measuring security performance, because the measured values are driven by multiple factors, not just by the performance of the particular market player. We consider two types of security metrics: occurrence of abuse and persistence of abuse. In conjunction, they provide a good understanding of the overall health of a TLD. We demonstrate that attackers abuse a variety of free services with good reputation, affecting not only the reputation of those services but of entire TLDs. We find that, when normalized by size, old TLDs like .com host more bad content than new generic TLDs. We propose a statistical regression model to analyze how different properties of TLD intermediaries relate to abuse counts. We find that, next to TLD size, abuse is positively associated with domain pricing (i.e., registries that provide free domain registrations witness more abuse). Last but not least, we observe a negative relation between the DNSSEC deployment rate and the count of phishing domains.
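For illustration, the kind of regression described above can be sketched as a count model. In the following minimal example, the column names, data, and negative binomial family are our assumptions, not the paper's exact specification:

```python
# Hedged sketch (column names, data, and the negative binomial family are
# assumptions, not the paper's exact model): relate per-TLD abuse counts to
# TLD size, free registrations, and DNSSEC deployment.
import pandas as pd
import statsmodels.api as sm

tlds = pd.DataFrame({
    "abuse_count":  [120, 3, 45, 800, 10, 260],           # phishing/malware domains
    "log_size":     [16.1, 9.2, 12.5, 18.3, 10.8, 14.0],  # log(registered domains)
    "free_domains": [0, 0, 1, 0, 1, 1],                   # registry offers free registrations
    "dnssec_rate":  [0.02, 0.40, 0.05, 0.01, 0.10, 0.03], # share of signed domains
})

X = sm.add_constant(tlds[["log_size", "free_domains", "dnssec_rate"]])
model = sm.GLM(tlds["abuse_count"], X,
               family=sm.families.NegativeBinomial()).fit()
# Per the paper's findings, one would expect positive coefficients on size and
# free_domains and a negative coefficient on dnssec_rate (for phishing).
print(model.summary())
```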
By embracing the cloud computing paradigm, enterprises are able to boost their agility and productivity whilst realising significant cost savings. However, many enterprises are reluctant to adopt cloud services for supporting their critical operations due to security and privacy concerns. One way to alleviate these concerns is to devise policies that infuse suitable security controls into cloud services. This work proposes a class of ontologically-expressed rules, the so-called axiomatic rules, that aim at ensuring the correctness of these policies by harnessing the various knowledge artefacts that they embody. It also articulates a framework for the expression of policies, one which provides ontological templates for modelling the knowledge artefacts encoded in the policies; these templates form the basis for the proposed axiomatic rules.
Semiconductor design houses are increasingly dependent on third-party vendors to procure intellectual property (IP) and meet time-to-market constraints. However, these third-party IPs cannot be trusted, as hardware Trojans can be maliciously inserted into them by untrusted vendors. While different approaches have been proposed to detect Trojans in third-party IPs, their limitations have not been extensively studied. In this paper, we analyze the limitations of state-of-the-art Trojan detection techniques and demonstrate with experimental results how to defeat these detection mechanisms. We then propose a Trojan detection framework based on information flow security (IFS) verification. Our framework detects violations of IFS policies caused by Trojans without the need for white-box knowledge of the IP. We experimentally validate the efficacy of our proposed technique by accurately identifying Trojans in the Trust-Hub benchmarks. We also demonstrate that our technique does not share the limitations of previously proposed Trojan detection techniques.
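As a toy illustration of the IFS idea (the gate-level netlist, taint rules, and policy below are our simplification, not the paper's framework), one can taint the key inputs of a design and check whether taint reaches a wire the policy forbids:

```python
# Toy taint propagation over a made-up netlist (our simplification of IFS
# checking, not the paper's framework). Policy: no information flow from the
# key to an unexpected observable wire. Real IFS tools declassify legitimate
# flows (e.g. key -> ciphertext through the cipher rounds).
def propagate_taint(netlist, tainted):
    """netlist: list of (output_wire, input_wires); taint crosses every gate."""
    changed = True
    while changed:
        changed = False
        for out, ins in netlist:
            if out not in tainted and any(i in tainted for i in ins):
                tainted.add(out)
                changed = True
    return tainted

design = [("s1", ["key0", "pt0"]),        # legitimate datapath
          ("ct0", ["s1"]),
          ("leak", ["key0", "trigger"])]  # hypothetical Trojan side channel
tainted = propagate_taint(design, {"key0"})
print("IFS policy violated:", "leak" in tainted)   # True: key reaches 'leak'
```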
Tamper detection circuits provide the first and most important defensive wall in protecting electronic modules containing security data. A widely used procedure is to cover the entire module with a foil containing a fine conductive mesh, which detects intrusion attempts. Detection circuits are further classified as passive or active. Passive circuits have the advantage of low power consumption; however, they are unable to detect small variations in the conductive mesh parameters. Since modern attack tools have the upper hand over the passive method, the most efficient way to protect security modules is to use active circuits. Active tamper detection circuits typically probe the conductive mesh with short pulses, analyzing its response in terms of delay and shape. The method proposed in this paper generates short pulses at one end of the mesh and analyzes the response at the other end. Apart from measuring pulse delay, the analysis includes a frequency-domain characterization of the system, determining whether there has been an intrusion by comparing the response to a reference (untampered) spectrum. The novelty of this design is the combined analysis, in the time and frequency domains, of the small variations in the mesh's characteristic parameters.
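A minimal numerical sketch of the combined check (the signal shapes, tolerances, and normalization are illustrative assumptions, not the paper's circuit): compare the measured pulse delay and the normalized response spectrum against stored references:

```python
# Minimal sketch of the combined time/frequency check (signal shapes,
# tolerances, and normalization are illustrative assumptions).
import numpy as np

def spectrum(response, n_bins=64):
    """Normalized magnitude spectrum of the sampled mesh response to a pulse."""
    mag = np.abs(np.fft.rfft(response, n=2 * n_bins))[:n_bins]
    return mag / (np.linalg.norm(mag) + 1e-12)   # normalize out amplitude drift

def is_tampered(response, ref_spec, delay_ref, delay_meas,
                spec_tol=0.05, delay_tol=2e-9):
    """Flag intrusion if either the pulse delay or the spectral shape deviates."""
    spec_dev = np.linalg.norm(spectrum(response) - ref_spec)
    return spec_dev > spec_tol or abs(delay_meas - delay_ref) > delay_tol

t = np.linspace(0, 1e-6, 128)
ref_resp = np.exp(-t / 2e-7)                      # idealized untampered response
ref_spec = spectrum(ref_resp)
print(is_tampered(0.9 * ref_resp, ref_spec, 5e-9, 5e-9))  # pure gain drift: False
```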
A Distributed Denial of Service (DDoS) attack is an attempt to make an online service, a network, or even an entire organization unavailable by saturating it with traffic from multiple sources. DDoS attacks are among the most common and most devastating threats that network defenders have to watch out for, and they are becoming bigger, more frequent, and more sophisticated. Volumetric attacks are the most common type: a DDoS attack is considered volumetric, or high-rate, when it generates a large number of packets or a high volume of traffic within a short period of time. High-rate attacks are well known and have received much attention in the past decade; however, although several detection and mitigation strategies have been designed and implemented, high-rate attacks still halt the normal operation of information technology infrastructures across the Internet whenever the protection mechanisms cannot cope with the aggregated capacity the perpetrators have put together. With this in mind, the present paper proposes and tests a distributed and collaborative architecture for online high-rate DDoS attack detection and mitigation, based on an in-memory distributed graph data structure and unsupervised machine learning algorithms that leverage real-time streaming data and analytics. We have successfully tested our proposed mechanism using a real-world DDoS attack dataset at its original rate, so as to reproduce the conditions of an actual large-scale attack.
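The unsupervised, per-window flavour of such detection can be sketched as follows (the feature set, clustering choice, and decision rule are our assumptions for illustration; the paper's architecture is distributed and graph-based):

```python
# Hedged per-window sketch (features, clustering choice, and decision rule are
# our assumptions; the paper's architecture is distributed and graph-based).
import numpy as np
from sklearn.cluster import KMeans

def detect_high_rate(window):
    """window: rows of (src_id, pkt_count, byte_count) aggregated per time window."""
    feats = np.log1p(window[:, 1:].astype(float))        # volume features
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
    # The cluster with the larger mean volume is the suspected attack traffic.
    hot = int(feats[labels == 1].mean() > feats[labels == 0].mean())
    return window[labels == hot, 0]                      # offending source ids

window = np.array([[1, 100, 8000], [2, 90, 7500], [3, 50000, 4000000]])
print(detect_high_rate(window))                          # -> [3]
```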
Information and communication technologies are extensively used to monitor and control electric microgrids. Although such innovations enhance the self-healing, resilience, and efficiency of the energy infrastructure, they also introduce emerging security threats that pose a critical challenge. In the context of microgrids, cyber vulnerabilities may be exploited by malicious users to manipulate system parameters, meter measurements, and price information. In particular, malware may be used to gain direct access to monitoring and control devices in order to destabilize the microgrid ecosystem. In this paper, we use a sandbox to analyze the vulnerability to malware of the embedded smart devices involved, by monitoring potential malicious behaviors at different abstraction levels. The CoSSMic project serves as a relevant case study in this direction.
This paper combines the FMEA and N2 (N-squared) approaches to create a methodology for determining the risks associated with the components of an underwater system. The methodology is based on defining the risk level of each of the components and interfaces that belong to a complex underwater system. As far as the authors know, this approach has not been reported before. The information resulting from the two procedures is combined to find the system's critical elements and the interfaces most affected by each failure mode. Finally, a calculation is performed to determine the severity level of each failure mode based on the system's critical elements.
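A compact illustration of the FMEA side of such a methodology (the scales, names, and interface weighting below are assumptions; the paper's actual severity calculation is not reproduced here) is a Risk Priority Number per failure mode, biased upward when a failure propagates across many N2 interfaces:

```python
# Illustrative RPN scoring (1-10 scales, names, and the interface weighting
# are assumptions; the paper's severity calculation is not reproduced).
def rpn(severity, occurrence, detection, affected_interfaces=0):
    """Classic FMEA Risk Priority Number, biased upward for failure modes
    that propagate across many N2 interfaces."""
    return severity * occurrence * detection * (1 + affected_interfaces)

failure_modes = {
    "hull_seal_leak":       rpn(9, 3, 4, affected_interfaces=2),
    "thruster_dropout":     rpn(6, 5, 2, affected_interfaces=1),
    "telemetry_corruption": rpn(4, 6, 7),
}
for mode, score in sorted(failure_modes.items(), key=lambda kv: -kv[1]):
    print(mode, score)      # highest-scoring modes point at critical elements
```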
Wearables, such as Fitbit, Apple Watch, and Microsoft Band, with their rich collection of sensors, facilitate the tracking of healthcare- and wellness-related metrics. However, the assessment of the physiological metrics collected by these devices could also be useful in identifying the user of the wearable, e.g., to detect unauthorized use or to correctly associate the data to a user if wearables are shared among multiple users. Further, researchers and healthcare providers often rely on these smart wearables to monitor research subjects and patients in their natural environments over extended periods of time. Here, it is important to associate the sensed data with the corresponding user and to detect if a device is being used by an unauthorized individual, to ensure study compliance. Existing one-time authentication approaches using credentials (e.g., passwords, certificates) or trait-based biometrics (e.g., face, fingerprints, iris, voice) might fail, since such credentials can easily be shared among users. In this paper, we present a continuous and reliable wearable-user authentication mechanism using coarse-grain minute-level physical activity (step counts) and physiological data (heart rate, calorie burn, and metabolic equivalent of task). From our analysis of 421 Fitbit users from a two-year-long health study, we are able to statistically distinguish nearly 100% of the subject pairs and to identify subjects with an average accuracy of 92.97%.
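The core identification step can be sketched as follows (the data here is synthetic and the feature distributions invented; the study uses real minute-level Fitbit records from 421 users):

```python
# Synthetic sketch of the identification step (feature distributions invented;
# the study uses real minute-level Fitbit records).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Per-minute features: [step_count, heart_rate, calorie_burn, MET], two users.
user_a = rng.normal([20, 70, 1.5, 1.3], [15, 6, 0.5, 0.2], size=(500, 4))
user_b = rng.normal([35, 62, 1.9, 1.6], [20, 5, 0.6, 0.3], size=(500, 4))
X = np.vstack([user_a, user_b])
y = np.array([0] * 500 + [1] * 500)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # pairwise distinguishability
```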
Wearable Internet-of-Things (WIoT) environments have demonstrated great potential in a broad range of applications in healthcare and well-being. Security is essential for WIoT environments: lack of security in WIoTs not only harms user privacy, but may also harm the user's safety. Though devices in the WIoT can be attacked in many ways, in this paper we focus on adversaries who mount what we call sensor-hijacking attacks, which prevent the constituent medical devices from accurately collecting and reporting the user's health state (e.g., reporting old or wrong physiological measurements). We outline some of our experiences in implementing a data-driven security solution for detecting sensor-hijacking attacks on a secure WIoT base station called the Amulet. Given the limited capabilities (computation, memory, battery power) of the Amulet platform, implementing such a security solution is quite challenging and presents several trade-offs between detection accuracy and resource requirements. We conclude the paper with a list of insights into the capabilities that constrained WIoT platforms should provide to developers in order to ease the inclusion of data-driven security primitives in such systems.
ICT systems have become an integral part of business and life. At the same time, these systems have become extremely complex, and numerous vulnerabilities exist in them, waiting to be exploited by potential threat actors. pwnPr3d is a novel modelling approach that performs automated architectural analysis with the objective of measuring the cyber security of the modelled architecture. Its integrated modelling language allows users to model software and hardware components in great detail. To illustrate this capability, we present in this paper a metamodel of UNIX, operating systems being at the core of every software stack and IT system. After describing the main UNIX constituents and how they have been modelled, we illustrate how the modelled OS integrates within pwnPr3d's rationale by modelling the spreading of a self-replicating malware inspired by WannaCry.
We propose a method for the comparative evaluation of the cryptographic strength of the asymmetric RSA algorithm and the existing GOST R 34.10-2001 standard. We describe the fundamental design ratios; the method is based on the computing capacity required for cryptanalysis and on forecasts for the development of computing technology.
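One standard way to make such strengths comparable (not necessarily the paper's exact design ratios) is to express both algorithms in equivalent symmetric-security bits: the GNFS heuristic cost for factoring RSA moduli versus the generic square-root cost of the elliptic-curve discrete logarithm underlying GOST R 34.10-2001:

```python
# Hedged comparison in equivalent symmetric-security bits (a standard yardstick,
# not necessarily the paper's exact design ratios).
import math

def rsa_security_bits(modulus_bits):
    """log2 of the GNFS heuristic running time L_n[1/3, (64/9)^(1/3)]."""
    n = modulus_bits * math.log(2)          # natural log of the modulus
    c = (64 / 9) ** (1 / 3)
    return c * n ** (1 / 3) * math.log(n) ** (2 / 3) / math.log(2)

def gost_security_bits(curve_bits):
    """Generic attacks on the EC discrete log cost about sqrt(group order)."""
    return curve_bits / 2

for bits in (1024, 2048, 3072):
    print(f"RSA-{bits}: ~{rsa_security_bits(bits):.0f} bits")
print(f"GOST R 34.10-2001 (256-bit curve): ~{gost_security_bits(256):.0f} bits")
```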
This paper presents a novel feature learning model for cyber security tasks. We propose to use autoencoders (AEs), as a generative model, to learn latent representations of different feature sets. We show how well the AE is capable of automatically learning a reasonable notion of semantic similarity among input features. Specifically, the AE accepts a feature vector, obtained from cyber security phenomena, and extracts a code vector that captures the semantic similarity between the feature vectors. This similarity is embedded in an abstract latent representation. Because the AE is trained in an unsupervised fashion, a major part of this success comes from the appropriate original feature set used in this paper. The AE can also provide more discriminative features than other feature engineering approaches. Furthermore, the scheme can reduce the dimensionality of the features, thereby significantly minimising the memory requirements. We selected two different cyber security tasks: network-based anomaly intrusion detection and malware classification. We have analysed the proposed scheme with various classifiers using publicly available datasets for network anomaly intrusion detection and malware classification. Several appropriate evaluation metrics show improvement compared to prior results.
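A minimal autoencoder sketch in PyTorch (layer sizes, the 40-feature input, and the training budget are illustrative, not the paper's configuration); the encoder's output serves as the learned latent feature set for a downstream classifier:

```python
# Minimal PyTorch autoencoder sketch (layer sizes, the 40-feature input, and
# the training budget are illustrative, not the paper's configuration).
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, n_features, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                     nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

X = torch.randn(1024, 40)                   # stand-in for e.g. NSL-KDD features
model = AE(40)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                         # unsupervised reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), X)
    loss.backward()
    opt.step()
codes = model.encoder(X).detach()           # latent features for a classifier
```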
We show that a class of statistical properties of distributions, which includes such practically relevant properties as entropy, the number of distinct elements, and distance metrics between pairs of distributions, can be estimated given a sublinear-sized sample. Specifically, given a sample consisting of independent draws from any distribution over at most k distinct elements, these properties can be estimated accurately using a sample of size O(k / log k). For these estimation tasks, this performance is optimal, to constant factors. Complementing these theoretical results, we also demonstrate that our estimators perform exceptionally well, in practice, for a variety of estimation tasks, on a variety of natural distributions, for a wide range of parameters. The key step in our approach is to first use the sample to characterize the "unseen" portion of the distribution, effectively reconstructing this portion of the distribution as accurately as if one had a sample larger by a logarithmic factor. This goes beyond such tools as the Good-Turing frequency estimation scheme, which estimates the total probability mass of the unobserved portion of the distribution: we seek to estimate the shape of the unobserved portion. This work can be seen as introducing a robust, general, and theoretically principled framework that, for many practical applications, essentially amplifies the sample size by a logarithmic factor; we expect that it may be fruitfully used as a component within larger machine learning and statistical analysis systems.
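For contrast with the paper's approach, the Good-Turing baseline it goes beyond is easy to state in code: the total probability mass of the unseen elements is estimated by the fraction of sample elements seen exactly once:

```python
# The Good-Turing baseline (standard estimator): the probability mass of the
# unseen portion is estimated by the fraction of elements seen exactly once.
from collections import Counter

def good_turing_missing_mass(sample):
    """P(next draw is a previously unseen element) ~= (# singletons) / n."""
    counts = Counter(sample)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(sample)

sample = list("abracadabra")                 # a:5 b:2 r:2 c:1 d:1
print(good_turing_missing_mass(sample))      # 2/11 ~= 0.18
```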
Detecting rootkits in the Windows operating system is an important part of information security monitoring and audit systems. Methods of hidden process detection are analyzed. Software is developed that implements four methods of hidden process detection in user mode (a PID-based method, a descriptor-based method, a system-call-based method, and an opened-windows-based method) for use in monitoring and audit systems.
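The PID-based method can be illustrated with a cross-view comparison (a hedged, Windows-only sketch; psutil stands in for the standard enumeration API, and access-denied PIDs are simply skipped): brute-force OpenProcess over the PID space and diff the result against the enumerated list:

```python
# Hedged Windows-only sketch of the PID-based method: brute-force OpenProcess
# over the PID space and diff against the enumerated process list (psutil is
# our stand-in for the standard enumeration; protected PIDs are simply missed).
import ctypes
import psutil

k32 = ctypes.windll.kernel32
PROCESS_QUERY_LIMITED_INFORMATION = 0x1000

def bruteforce_pids(max_pid=65536):
    """Direct view: try to open every possible PID."""
    found = set()
    for pid in range(4, max_pid, 4):        # Windows PIDs are multiples of 4
        handle = k32.OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, False, pid)
        if handle:
            found.add(pid)
            k32.CloseHandle(handle)
    return found

hidden = bruteforce_pids() - set(psutil.pids())   # API view via enumeration
print("PIDs reachable directly but absent from the API list:", sorted(hidden))
```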
This paper presents an integrated Analog Delay Line (ADL) for analog RF signal processing. The design is inspired by a Bucket Brigade Device (BBD) structure: it transfers charges from a sampled input signal stage after stage, and thus belongs to the family of Charge-Coupled Devices (CCDs). The ADL is fully differential with Common Mode (CM) control. The 28nm Fully Depleted Silicon On Insulator (FDSOI) technology from STMicroelectronics is used for the design. The reported results come from simulations using Cadence Spectre.
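The bucket-brigade principle itself reduces to a shift register of charge packets; a toy discrete-time illustration (pure Python, with no circuit-level fidelity):

```python
# Toy discrete-time illustration of the bucket-brigade principle (pure Python,
# no circuit-level fidelity): each clock tick, every stage hands its charge
# packet to the next, so the output is the input delayed by N stages.
def bbd_step(stages, new_sample):
    return [new_sample] + stages[:-1]

stages = [0.0] * 8                           # 8-stage delay line
for sample in [1.0, 0.5, 0.25] + [0.0] * 9:
    stages = bbd_step(stages, sample)
    print(stages[-1])                        # pulse emerges after 8 ticks
```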
Information shared on Twitter is ever increasing, and recipients are overwhelmed by the number of tweets they receive, many of which are of no interest. Filters that estimate the interest of each incoming post can alleviate this problem, for example by allowing users to sort incoming posts by predicted interest (e.g., "top stories" vs. "most recent" in Facebook). Global and personal filters have been used to detect interesting posts in social networks. Global filters are trained on large collections of posts and reactions to posts (e.g., retweets), aiming to predict how interesting a post is for a broad audience. In contrast, personal filters are trained on posts received by a particular user and the reactions of that user. Personal filters can provide recommendations tailored to a particular user's interests, which may not coincide with the interests of the majority of users that global filters are trained to predict. On the other hand, global filters are typically trained on much larger datasets than personal filters. Hence, global filters may work better in practice, especially for new users, for whom personal filters may have very few training instances (the "cold start" problem). Following Uysal and Croft, we devised a hybrid approach that combines the strengths of both global and personal filters. As in global filters, we train a single system on a large, multi-user collection of tweets. Each tweet, however, is represented as a feature vector that includes a number of user-specific features.
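The hybrid representation can be sketched as follows (all feature names are invented for illustration; the paper defines its own feature set): a single global model consumes vectors that mix tweet-level and user-specific features:

```python
# Sketch of the hybrid representation (all feature names invented; the paper
# defines its own feature set): one global model over vectors that mix
# tweet-level and user-specific features.
import math
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(tweet, user):
    return {
        "len": len(tweet["text"]),
        "has_url": "http" in tweet["text"],
        "author_followers_log": math.log1p(tweet["author_followers"]),
        # user-specific features personalize the single global model:
        "u_retweeted_author_before": tweet["author"] in user["retweeted_authors"],
        "u_topic_overlap": len(user["topics"] & tweet["topics"]),
    }

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
# model.fit([features(t, u) for t, u in train_pairs], labels)  # one model, all users
```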
Smart Internet of Things (IoT) applications will rely on advanced IoT platforms that not only provide access to IoT sensors and actuators, but also provide access to cloud services and data analytics. Future IoT platforms should thus provide both connectivity and intelligence. One approach to connecting IoT devices and IoT networks to cloud networks and services is to use network federation mechanisms over the Internet to create network slices across heterogeneous platforms. Network slices also need to be protected from potential external and internal threats. In this paper we describe an approach for enforcing global security policies in federated cloud and IoT networks. Our approach allows a global security policy to be defined in the form of a single service manifest and enforced across all federation network segments. It relies on network function virtualisation (NFV) and service function chaining (SFC) to enforce the security policy. The approach is illustrated with two case studies: one for a user who wishes to securely access IoT devices, and another in which an IoT infrastructure administrator wishes to securely access remote cloud and data analytics services.
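To make the single-manifest idea concrete, a hypothetical manifest might map each federation segment to an ordered chain of virtual security functions (the structure below is invented for illustration; the paper defines its own manifest format):

```python
# Hypothetical single service manifest (structure invented for illustration;
# the paper defines its own manifest format): one global policy rendered as an
# ordered chain of virtual security functions per federation segment.
manifest = {
    "slice": "iot-to-cloud-analytics",
    "policy": [
        {"segment": "iot-network",     "chain": ["firewall", "ids"]},
        {"segment": "federation-link", "chain": ["vpn-gateway"]},
        {"segment": "cloud-network",   "chain": ["firewall", "dpi"]},
    ],
}

def render_sfc(manifest, segment):
    """Ordered service function chain to instantiate (via NFV) in a segment."""
    return next(p["chain"] for p in manifest["policy"] if p["segment"] == segment)

print(render_sfc(manifest, "iot-network"))   # ['firewall', 'ids']
```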
The Bonseyes EU H2020 collaborative project aims to develop a platform consisting of a Data Marketplace, a Deep Learning Toolbox, and Developer Reference Platforms for organizations wanting to adopt Artificial Intelligence. The project focuses on using artificial intelligence in low-power Internet of Things (IoT) devices ("edge computing"), embedded computing systems, and data center servers ("cloud computing"). It will bring about orders-of-magnitude improvements in efficiency, performance, reliability, security, and productivity in the design and programming of systems of artificial intelligence that incorporate Smart Cyber-Physical Systems (CPS). In addition, it will solve a causality problem for organizations that lack access to Data and Models. Its open software architecture will facilitate adoption of the whole concept on a wider scale. To evaluate the effectiveness and technical feasibility, and to quantify the real-world improvements in efficiency, security, performance, effort, and cost of adding AI to products and services using the Bonseyes platform, four complementary demonstrators will be built. The Bonseyes platform capabilities are intended to align with the European FI-PPP activities and to take advantage of its flagship project, FIWARE. This paper provides a description of the project's motivation, goals, and preliminary work.
The notion of style is pivotal to literature: the choice of a certain writing style moulds and enhances the overall character of a book. Stylometry uses statistical methods to analyze literary style. This work aims to build a recommendation system based on the similarity in stylometric cues of various authors; the problem at hand is closely related to the authorship attribution problem. We follow a supervised approach, using an initial corpus of books labelled with their respective authors as the training set, and generate recommendations based on the misclassified books. The resulting book-similarity results are substantiated by domain experts.
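A minimal sketch of the misclassification-as-similarity idea (the character n-gram features, classifier choice, and toy corpus are our assumptions): a book attributed to author A by the stylometric classifier reads like A, so it can be recommended to A's readers:

```python
# Hedged sketch of recommendation-via-misclassification (character n-gram
# features, classifier choice, and toy corpus are our assumptions).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

books = ["...full text of book one...", "...book two...", "...book three..."]
authors = ["Austen", "Dickens", "Austen"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # stylometric cues
    LinearSVC(),
)
clf.fit(books, authors)

def stylistic_neighbour(new_book_text):
    """A book attributed to author A reads like A: recommend it to A's readers."""
    return clf.predict([new_book_text])[0]
```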
Data deduplication provides many storage benefits, but it raises security and privacy issues, since users' sensitive data are at risk from both inside and outside attacks. Traditional encryption, which provides data confidentiality, is incompatible with data deduplication: it requires different users to encrypt their data with their own keys, so identical data copies of different users lead to different ciphertexts, making deduplication impossible. Convergent encryption has been proposed to enforce data confidentiality while making deduplication feasible. It encrypts/decrypts a data copy with a convergent key, which is obtained by computing the cryptographic hash of the content of the data copy. After key generation and encryption, the user retains the keys and sends the ciphertext to the cloud.
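A minimal sketch of convergent encryption (the library choice and the deterministic nonce derivation are our assumptions, not part of the scheme's definition): because the key is the hash of the content, identical plaintexts yield identical ciphertexts, which is exactly what lets the cloud deduplicate:

```python
# Minimal convergent-encryption sketch (library choice and the deterministic
# nonce derivation are our assumptions): key = H(content), so identical
# plaintexts yield identical ciphertexts, which is what enables deduplication.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(data: bytes):
    key = hashlib.sha256(data).digest()                     # convergent key K = H(M)
    nonce = hashlib.sha256(b"nonce" + data).digest()[:12]   # deterministic nonce
    ct = AESGCM(key).encrypt(nonce, data, None)
    return key, nonce + ct               # user keeps the key, uploads the blob

def convergent_decrypt(key: bytes, blob: bytes):
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

k1, c1 = convergent_encrypt(b"same file contents")
k2, c2 = convergent_encrypt(b"same file contents")
assert c1 == c2                          # equal blobs -> the cloud deduplicates
print(convergent_decrypt(k1, c1))
```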
In this paper, we propose a new blockchain-based message and revocation accountability system called Blackchain. Combining a distributed ledger with existing mechanisms for security in V2X communication systems, we design a distributed event data recorder (EDR) that satisfies traditional accountability requirements by providing a compressed global state. Unlike previous approaches, our distributed ledger solution provides an accountable revocation mechanism without requiring trust in a single misbehavior authority, instead allowing a collaborative and transparent decision-making process through Blackchain. This makes Blackchain an attractive alternative to existing solutions for revocation in a Security Credential Management System (SCMS), which suffer from the traditional disadvantages of PKIs, notably including centralized trust. Our proposal becomes scalable through the use of hierarchical consensus: individual vehicles dynamically create clusters, which provide their consensus decisions as input to road-side units (RSUs), which in turn publish their results to misbehavior authorities. The misbehavior authority, traditionally a single entity in the SCMS responsible for the integrity of the entire V2X network, is thus replaced by a set of authorities that transparently perform revocations, whose results are then published in the global Blackchain state. This state can be used to prevent the issuance of certificates to previously malicious users, and the transparency implied by a global system state also prevents the authorities themselves from misbehaving.
The Subscriber Identity Module (SIM) is the backbone of modern mobile communication. A SIM can be used to store a range of sensitive user information, such as contacts, SMS messages, and banking information (some banking applications store user credentials on the SIM). Unfortunately, the current SIM model has a major weakness: when a mobile device is lost, an adversary can simply steal the user's SIM and use it. He/she can then extract the user's sensitive information stored on the SIM. Moreover, the adversary can pose as the user and communicate with the contacts stored on the SIM, which opens the avenue to a large number of social engineering techniques. Additionally, if the user has provided his/her number as a recovery option for some accounts, the adversary can gain access to them. The current way to deal with a stolen SIM is to contact the service provider and report the theft; the provider then blocks the services on the SIM, but the adversary still has access to the data stored on it. Therefore, a secure scheme is required to ensure that only legitimate users are able to access and use their SIM.
Bitcoin, a peer-to-peer payment system and digital currency, is often involved in illicit activities such as scamming, ransomware attacks, illegal goods trading, and thievery. At the time of writing, the Bitcoin ecosystem has not yet been mapped, and as such there is no estimate of the share of illicit activities. This paper provides the first estimation of the portion of cyber-criminal entities in the Bitcoin ecosystem. Our dataset consists of 854 observations categorised into 12 classes (of which 5 are cybercrime-related) and a total of 100,000 uncategorised observations. The dataset was obtained from a data provider who applied three types of clustering of Bitcoin transactions to categorise entities: co-spend, intelligence-based, and behaviour-based. Thirteen supervised learning classifiers were then tested, of which four prevailed, with cross-validation accuracies of 77.38%, 76.47%, 78.46%, and 80.76%, respectively. From the top four classifiers, the Bagging and Gradient Boosting classifiers were selected based on their weighted-average and per-class precision on the cybercrime-related categories. Both models were used to classify the 100,000 uncategorised entities; measured by number of entities, the share of cybercrime-related entities is 29.81% according to Bagging and 10.95% according to Gradient Boosting. With regard to the number of addresses and current coins held by these entities, the results are 5.79% and 10.02% according to Bagging, and 3.16% and 1.45% according to Gradient Boosting.
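The classification stage can be sketched with scikit-learn (the synthetic features below stand in for the paper's clustered transaction data; the figures will not match the abstract's):

```python
# Sketch of the classification stage (synthetic features stand in for the
# paper's clustered Bitcoin transaction data; figures will not match).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=854, n_features=20, n_informative=10,
                           n_classes=12, random_state=0)
for clf in (BaggingClassifier(random_state=0),
            GradientBoostingClassifier(random_state=0)):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, f"{acc:.2%}")

uncategorised = np.random.default_rng(0).normal(size=(1000, 20))
labels = GradientBoostingClassifier(random_state=0).fit(X, y).predict(uncategorised)
```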