Bibliography
The Universally Composable (UC) framework provides the strongest security notion for designing fully trusted cryptographic protocols, yet applying UC security to the design of RFID mutual authentication protocols is very challenging. In this paper, we formulate the necessary conditions for achieving UC-secure RFID mutual authentication protocols that can be fully trusted in arbitrary environments, and point out the inadequacy of some existing schemes under the UC framework. We define the ideal functionality for RFID mutual authentication and propose the first UC-secure RFID mutual authentication protocol based on public key encryption and certain trusted third parties that can be modeled as functionalities. We prove the security of our protocol under the strongest adversary model, which allows corruption of both tags and readers. We also present two (public) key update protocols for the case of multiple readers: one uses Message Authentication Codes (MACs) and the other uses trusted certificates in a Public Key Infrastructure (PKI). Furthermore, we address the relations between our UC framework and the zero-knowledge privacy model proposed by Deng et al. [1].
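For concreteness, the following is a minimal sketch of the kind of public-key building block such a protocol rests on: challenge-response mutual authentication in which the tag's reply is encrypted under the reader's public key. It is a generic illustration, not the paper's UC-secure construction; the message layout, tag identifier, and use of Python's cryptography package are assumptions.

```python
# A minimal, generic sketch of nonce-based mutual authentication with
# public-key encryption. This is NOT the paper's UC-secure construction;
# identifiers and message layout are illustrative assumptions.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

reader_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
reader_pk = reader_sk.public_key()

# Reader -> Tag: a fresh challenge nonce.
n_reader = os.urandom(16)

# Tag -> Reader: encrypt (tag id, reader nonce, tag nonce) under the
# reader's public key, hiding the tag's identity from eavesdroppers.
tag_id, n_tag = b"tag-042", os.urandom(16)
response = reader_pk.encrypt(tag_id + n_reader + n_tag, OAEP)

# Reader: decrypt and check its own nonce to authenticate the tag; it would
# then echo n_tag back (omitted here) so the tag can authenticate the reader.
plain = reader_sk.decrypt(response, OAEP)
assert plain[7:23] == n_reader          # tag is live in this session
print("reader authenticated tag:", plain[:7].decode())
```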
User privacy is an important issue in blockchain-based transaction systems. Bitcoin, one of the most widely used blockchain-based transaction systems, fails to provide enough protection for users' privacy. Many subsequent studies focus on establishing systems that hide the linkage between the identities (pseudonyms) of users and the transactions they carry out, in order to provide a high level of anonymity; examples include Zerocoin and Zerocash. It thus becomes an interesting question whether such new transaction systems really do provide enough protection for users' privacy. In this paper, we propose a novel and effective approach for de-anonymizing these transaction systems by leveraging information in the system that is not directly related, such as the number of transactions made by each identity and the timestamps of sending and receiving. Combining probabilistic analysis with optimization tools, we establish a model which allows us to determine, among all possible ways of linking transactions to identities, the one that is most likely to be true. Subsequent transaction graph analysis can then be carried out, leading to the de-anonymization of the system. To solve the model, we provide exact algorithms based on mixed integer linear programming. Our research also establishes interesting relationships between the de-anonymization problem and other problems studied in the theoretical computer science literature, e.g., the graph matching problem and the scheduling problem.
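The paper's exact MILP is not reproduced in the abstract; the sketch below shows only the underlying idea in its simplest relaxation, the classic assignment problem: given likelihoods that identity i produced transaction cluster j (derived, e.g., from transaction counts and timestamps), pick the most probable one-to-one linking. The likelihood matrix here is a random placeholder.

```python
# Linking-as-assignment sketch: maximize the product of likelihoods by
# minimizing the sum of negative log-likelihoods (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
L = rng.random((5, 5))                 # placeholder: L[i, j] = P(identity i -> cluster j)
cost = -np.log(L + 1e-12)              # product of likelihoods -> sum of -log
ident, txn = linear_sum_assignment(cost)
for i, j in zip(ident, txn):
    print(f"identity {i} -> transaction cluster {j} (likelihood {L[i, j]:.2f})")
```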
A Distributed Denial of Service (DDoS) attack is an attempt to make an online service, a network, or even an entire organization unavailable by saturating it with traffic from multiple sources. DDoS attacks are among the most common and most devastating threats that network defenders have to watch out for, and they are becoming bigger, more frequent, and more sophisticated. Volumetric attacks are the most common type of DDoS attack: an attack is considered volumetric, or high-rate, when it generates a large number of packets or a high volume of traffic within a short period of time. High-rate attacks are well known and have received much attention in the past decade; however, although several detection and mitigation strategies have been designed and implemented, high-rate attacks still halt the normal operation of information technology infrastructures across the Internet whenever the protection mechanisms cannot cope with the aggregated capacity that the perpetrators have put together. With this in mind, the present paper proposes and tests a distributed and collaborative architecture for online high-rate DDoS attack detection and mitigation, based on an in-memory distributed graph data structure and unsupervised machine learning algorithms that leverage real-time streaming data and analytics. We have successfully tested the proposed mechanism using a real-world DDoS attack dataset replayed at its original rate, in order to reproduce the conditions of an actual large-scale attack.
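The distributed graph structure and the paper's specific algorithms are not reproduced here; the sketch below shows only the per-window unsupervised scoring idea on synthetic packet rates, using MiniBatchKMeans as a stand-in streaming-friendly clusterer.

```python
# Hedged sketch: flag high-rate windows in a packet-count stream by
# clustering per-window rates and labeling the higher-rate cluster.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(1)
normal = rng.poisson(1_000, size=(500, 1))     # packets per 1-second window
attack = rng.poisson(50_000, size=(20, 1))     # volumetric burst
stream = np.vstack([normal, attack]).astype(float)

km = MiniBatchKMeans(n_clusters=2, random_state=0).fit(stream)
hot = np.argmax(km.cluster_centers_.ravel())   # the higher-rate cluster
flags = km.predict(stream) == hot
print(f"windows flagged as attack: {flags.sum()} (expected ~20)")
```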
This paper presents an approach to formalizing and enforcing a class of use privacy properties in data-driven systems. In contrast to prior work, we focus on use restrictions on proxies (i.e., strong predictors) of protected information types. Our definition relates proxy use to intermediate computations that occur in a program and identifies two essential properties that characterize this behavior: 1) its result is strongly associated with the protected information type in question, and 2) it is likely to causally affect the final output of the program. For a specific instantiation of this definition, we present a program analysis technique that detects instances of proxy use in a model and provides a witness that identifies which parts of the corresponding program exhibit the behavior. Recognizing that not all instances of proxy use of a protected information type are inappropriate, we make use of a normative judgment oracle that makes this inappropriateness determination for a given witness. Our repair algorithm uses the witness of an inappropriate proxy use to transform the model into one that provably does not exhibit proxy use, while avoiding changes that unduly affect classification accuracy. Using a corpus of social datasets, our evaluation shows that these algorithms are able to detect proxy use instances that would be difficult to find using existing techniques, and subsequently remove them while maintaining acceptable classification performance.
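The two ingredients of the definition, association and causal influence, can be illustrated on a toy model; the paper gives a formal program analysis, not the sampling test below, and all variable names are hypothetical.

```python
# Toy illustration of proxy use: (1) association of an intermediate value
# with the protected attribute, (2) causal influence of that intermediate
# on the final decision, estimated by intervening on (randomizing) it.
import numpy as np

rng = np.random.default_rng(2)
protected = rng.integers(0, 2, size=2_000)           # protected information type
zip_code = protected + rng.normal(0, 0.3, 2_000)     # proxy: strongly associated
income = rng.normal(0, 1, 2_000)                     # legitimate feature

def decide(zip_f, inc):                              # the program's final output
    return (0.8 * zip_f + 0.5 * inc > 0.6).astype(int)

# (1) association of the intermediate with the protected type
assoc = abs(np.corrcoef(zip_code, protected)[0, 1])

# (2) influence: how often randomizing the intermediate flips the output
y = decide(zip_code, income)
y_cut = decide(rng.permutation(zip_code), income)    # intervention
influence = np.mean(y != y_cut)

print(f"association={assoc:.2f}, influence={influence:.2f} -> proxy use likely")
```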
Distributed Denial of Service (DDoS) attacks are a popular and inexpensive form of cyber attack. Application layer DDoS attacks utilize legitimate application layer requests to overwhelm a web server, and they are a major threat to Internet applications and web services. The main goal of these attacks is to make services unavailable to legitimate users by overwhelming the resources of a web server. They appear valid at the connection and protocol level, which makes them difficult to detect. In this paper, we propose a detection method for application layer DDoS attacks based on user behavior anomaly detection. We extract instances of user behavior requesting resources from HTTP web server logs, and apply the Principal Component Analysis (PCA) subspace anomaly detection method to detect anomalous behavior instances. Web server logs from a server hosting a student resource portal were collected as experimental data, and we also generated nine different HTTP DDoS attacks through penetration testing. Our results on the collected data show that PCA subspace anomaly detection on user behavior data can detect application layer DDoS attacks, even if they attempt to mimic normal user behavior to some degree.
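A minimal sketch of PCA-subspace anomaly detection as described: fit PCA on normal behavior instances, then threshold the squared prediction error (the residual, or Q-statistic) of new instances. Feature extraction from HTTP logs is assumed to have already produced the rows of X; the data here is synthetic.

```python
# PCA-subspace detection: large residual energy outside the principal
# subspace marks an anomalous behavior instance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X_train = rng.normal(0, 1, (1_000, 10))              # normal user behaviors
pca = PCA(n_components=4).fit(X_train)

def spe(X):                                          # squared prediction error
    X_hat = pca.inverse_transform(pca.transform(X))
    return ((X - X_hat) ** 2).sum(axis=1)

threshold = np.percentile(spe(X_train), 99)          # simple empirical control limit
X_new = rng.normal(0, 1, (5, 10)) + 4                # anomalous behavior instances
print(spe(X_new) > threshold)                        # -> flagged as attack-like
```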
To assure the cyber security of an enterprise, a SIEM (Security Information and Event Management) system is typically in place to normalize security events from different preventive technologies and flag alerts. Analysts in the security operation center (SOC) investigate the alerts to decide whether they are truly malicious. However, the number of alerts is generally overwhelming, with the majority of them being false positives, and exceeds the SOC's capacity to handle them all. Because of this, potential malicious attacks and compromised hosts may be missed. Machine learning is a viable approach to reducing the false positive rate and improving the productivity of SOC analysts. In this paper, we develop a user-centric machine learning framework for the cyber security operation center in a real enterprise environment. We discuss the typical data sources in a SOC, their workflow, and how to leverage and process these data sets to build an effective machine learning system. The paper is targeted towards two groups of readers. The first group is data scientists or machine learning researchers who do not have cyber security domain knowledge but want to build machine learning systems for security operations centers. The second group is cyber security practitioners who have deep knowledge and expertise in cyber security but do not have machine learning experience and wish to build such a system themselves. Throughout the paper, we use the system we built in the Symantec SOC production environment as an example to demonstrate the complete steps from data collection, label creation, feature engineering, machine learning algorithm selection, and model performance evaluation to risk score generation.
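The end-to-end shape of such a pipeline can be sketched as follows: per-user features aggregated from alerts, a supervised model trained on analyst verdicts, and a risk score per user. The feature names, labels, and model choice are illustrative assumptions, not Symantec's production system.

```python
# Hedged sketch of a user-centric SOC pipeline producing risk scores.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
# columns: alert count, distinct signatures, off-hours ratio, failed logins
X = rng.random((500, 4))
y = (X[:, 0] + X[:, 3] + rng.normal(0, 0.2, 500) > 1.2).astype(int)  # analyst labels

model = GradientBoostingClassifier().fit(X, y)
risk_scores = model.predict_proba(X)[:, 1]           # per-user risk in [0, 1]
top = np.argsort(risk_scores)[::-1][:5]
print("users to triage first:", top, risk_scores[top].round(2))
```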
VANET is a new technology on which future intelligent transport systems are based; its purpose is to develop the vehicular environment and make it more comfortable, and it provides more safety for drivers and cars on the road. We therefore have to make this technology as secure as possible against many threats. As VANET is a subclass of MANET, it has inherited many of its security problems, though with a different architecture, and denial-of-service (DoS) attacks are one of them. In this paper, we focus on DoS attacks that prevent users from receiving the right information at the right moment. We analyze the behavior and effects of DoS attacks on the network using different mathematical models in order to find an efficient solution.
The spread of Internet of Things (IoT) botnets like those built with the Mirai malware was successful enough to power some of the most powerful DDoS attacks seen thus far on the Internet. Two such attacks occurred on October 21, 2016 and September 20, 2016. Since an estimated three billion IoT devices are currently connected to the Internet, these attacks highlight the need to understand the spread of IoT worms like Mirai and the vulnerability they create for the Internet. In this work, we describe the spread of IoT worms using a proposed model known as the IoT Botnet with Attack Information (IoT-BAI) model, which is a variation of the Susceptible-Exposed-Infected-Recovered-Susceptible (SEIRS) epidemic model [14]. The IoT-BAI model shows that it may be possible to mitigate the frequency of IoT botnet attacks with improved user information, which may positively affect user behavior. The model also shows that new hosts entering the IoT population on a daily basis can increase vulnerability to attack. Models like IoT-BAI could be used to predict user behavior after significant network events such as a major botnet attack.
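Since IoT-BAI is described as a variation of SEIRS, the standard SEIRS dynamics are worth writing out; the sketch below integrates them with scipy. The parameter values and initial population are illustrative, and the model's extra "attack information" coupling is not reproduced.

```python
# Standard SEIRS epidemic dynamics (the base model IoT-BAI varies).
import numpy as np
from scipy.integrate import odeint

beta, sigma, gamma, xi = 0.4, 0.2, 0.1, 0.05   # infect / incubate / recover / lose immunity

def seirs(y, t):
    S, E, I, R = y
    N = S + E + I + R
    return [-beta * S * I / N + xi * R,
            beta * S * I / N - sigma * E,
            sigma * E - gamma * I,
            gamma * I - xi * R]

t = np.linspace(0, 365, 366)                   # one year, daily resolution
S, E, I, R = odeint(seirs, [3e9 - 1e4, 0, 1e4, 0], t).T
print(f"peak infected IoT devices: {I.max():.3e} on day {I.argmax()}")
```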
The increasing volume of malicious content in social networks requires automated methods to detect and eliminate such content. This paper describes a supervised machine learning classification model that has been built to detect the distribution of malicious content in online social networks (OSNs). Multisource features have been used to detect social network posts that contain malicious Uniform Resource Locators (URLs). These URLs could direct users to websites that contain malicious content, drive-by download attacks, phishing, spam, or scams. For the data collection stage, the Twitter streaming application programming interface (API) was used, and VirusTotal was used for labelling the dataset. A random forest classification model was used with a combination of features derived from a range of sources. The random forest model without any tuning or feature selection produced a recall of 0.89. After further investigation and applying parameter tuning and feature selection methods, however, we were able to improve the classifier's recall to 0.92.
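The classification stage itself is standard; a sketch of it follows, with the actual multisource features (Twitter API fields, VirusTotal labels) replaced by a synthetic placeholder, and evaluated on recall as in the paper.

```python
# Random forest over multisource URL/post features, scored on recall.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(5)
X = rng.random((2_000, 12))                       # placeholder multisource features
y = (X[:, 0] + X[:, 5] > 1.1).astype(int)         # 1 = malicious URL (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(f"recall: {recall_score(y_te, clf.predict(X_te)):.2f}")
```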
This paper proposes a design method for a support tool for the detection and diagnosis of failures in discrete event systems (DES). The design of the diagnoser goes through three phases: an identification phase, which finds the paths and temporal parameters of the model describing both normal and faulty operation; a detection phase, provided by comparing and monitoring operation times; and a localization phase based on combining the temporal evolution of the parameters with a threshold-exceedance technique. Our contribution lies in applying this technique in the presence of sensor and actuator faults arising simultaneously. The proposed approach is validated on a filling system through simulation.
VisorFlow aims to monitor the flow of information between processes without requiring modifications to the operating system kernel or its userspace. VisorFlow runs in a privileged Xen domain and monitors the system calls executing in other domains running either Linux or Windows. VisorFlow uses its observations to prevent confidential information from leaving a local network. We describe the design and implementation of VisorFlow, describe how we used VisorFlow to confine naïve users and malicious insiders during the 2017 Cyber-Defense Exercise, and provide performance measurements. We have released VisorFlow and its companion library, libguestrace, as open-source software.
In-vehicle networks like Controller Area Network (CAN), FlexRay, and Ethernet are now subject to serious security threats in which unauthorized entities can take control of the whole vehicle, posing very serious risks including accidents. Security features like encryption and message authentication are being implemented in vehicle networks to counteract these issues. This paper proposes a set of novel validation techniques to ensure that vehicle network security is foolproof. Security validation against requirements, and security validation using white box, black box, and grey box approaches are put forward. Test system architecture, validation of message authentication, decoding patterns from vehicle network data, using diagnostics as a security loophole, V2V/V2X loopholes, and gateway module security testing are considered in detail. The aim of this paper is to put forward a set of tools and methods for finding and reporting any security loopholes in an in-vehicle network security implementation.
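For the "validation of message authentication" step, a test-side check might look like the following sketch: verifying a truncated AES-CMAC over a CAN-style payload with a freshness counter. The key, frame layout, and 4-byte truncation are illustrative assumptions (truncated MACs are common on bandwidth-limited buses); this is not the paper's test system.

```python
# Test-side validation of truncated AES-CMAC message authentication.
from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers import algorithms

KEY = bytes(16)                                 # placeholder 128-bit key

def mac(payload: bytes, freshness: int) -> bytes:
    c = CMAC(algorithms.AES(KEY))
    c.update(payload + freshness.to_bytes(4, "big"))
    return c.finalize()[:4]                     # truncated tag as sent on the bus

frame_payload, counter = b"\x01\x10\xff\x00", 42
sent_tag = mac(frame_payload, counter)

# Validation checks: the genuine frame verifies; tampering or replay must not.
assert mac(frame_payload, counter) == sent_tag
assert mac(b"\x01\x10\xff\x01", counter) != sent_tag     # bit-flip detected
assert mac(frame_payload, counter + 1) != sent_tag       # stale counter detected
print("message authentication checks passed")
```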
Delegated authorization protocols have become widespread in Web applications and services: popular providers that manage people's identity information and personal data allow their users to delegate third-party Web services to access their data. In this paper, we analyze the risks related to untrusted providers not behaving correctly, and we solve this problem by proposing the first verifiable delegated authorization protocol that allows third-party services to verify the correctness of user data returned by the provider. The contribution of the paper is twofold: we show how delegated authorization can be cryptographically enforced through authenticated data structure protocols, and we extend the standard OAuth2 protocol to support efficient and verifiable delegated authorization, including database updates and privilege revocation.
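The core verifiability idea behind authenticated data structures can be sketched with a Merkle tree: the provider returns a record plus a proof, and the third-party service checks it against a root the user has vouched for. The paper's OAuth2 extension and its update/revocation machinery are omitted; record contents are placeholders.

```python
# Merkle-tree sketch of an authenticated data structure for verifiable
# delegated data: verify(record, proof, root) detects a lying provider.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify(leaf, proof, root):
    """proof: list of (sibling_hash, sibling_is_left) pairs, leaf to root."""
    node = h(leaf)
    for sib, is_left in proof:
        node = h(sib + node) if is_left else h(node + sib)
    return node == root

records = [b"email=a@x.org", b"dob=1990-01-01", b"plan=pro", b"country=IT"]
root = merkle_root(records)
proof = [(h(records[1]), False), (h(h(records[2]) + h(records[3])), False)]
print(verify(records[0], proof, root))             # True: record is authentic
```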
In the verifiable database (VDB) model, a computationally weak client (the database owner) delegates his database management to a database service provider on the cloud, which is considered an untrusted third party, while users can query the data and verify the integrity of query results. Since the process can be computationally costly and has limited support for sophisticated query types such as aggregated queries, we propose in this paper a framework that helps bridge the gap between security and practicality trade-offs. The proposed framework remodels the verifiable database problem as a Stackelberg security game. In the new model, the database owner creates and uploads to the database service provider the database and its authentication structure (AS). The game is then played between the defender (the verifier, a party trusted by the database owner who runs scheduled randomized verifications using a Stackelberg mixed strategy) and the database service provider. The idea is to randomize the verification schedule in an optimized way that grants the optimal payoff to the verifier while making it extremely hard for the database service provider or any attacker to figure out which part of the database will be verified next. We have implemented the proposed model and compared its performance with a uniform randomization model. Simulation results show that the proposed model outperforms the uniform randomization model. Furthermore, we have evaluated the efficiency of the proposed model against different cost metrics.
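To give the flavor of an optimized randomized schedule, the sketch below solves a simplified maximin (zero-sum) version of the verification game as a linear program and samples checks from the resulting mixed strategy. The paper's Stackelberg formulation and payoffs are richer; the values and penalties here are illustrative assumptions.

```python
# Maximin randomized verification schedule over database partitions.
import numpy as np
from scipy.optimize import linprog

v = np.array([5.0, 3.0, 1.0])        # damage if partition j is tampered unseen
n = len(v)
# D[i, j]: defender payoff when verifying partition i while attacker hits j
D = np.where(np.eye(n, dtype=bool), 1.0, -v[None, :])

# maximize u subject to x^T D[:, j] >= u for all j, sum(x) = 1, x >= 0
c = np.zeros(n + 1); c[-1] = -1.0                      # minimize -u
A_ub = np.hstack([-D.T, np.ones((n, 1))])              # u - x^T D[:, j] <= 0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
              A_eq=[[1.0] * n + [0.0]], b_eq=[1.0],
              bounds=[(0, 1)] * n + [(None, None)])
x = res.x[:n]
print("verification probabilities:", x.round(3))       # skewed toward valuable data
schedule = np.random.default_rng(0).choice(n, size=10, p=x / x.sum())
print("next 10 randomized checks:", schedule)
```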
Verifying attacks against cyber physical systems can be a costly and time-consuming process. By using a simulated environment, attacks can be verified quickly and accurately. By combining the simulation of a cyber physical system with a hybrid attack graph, the effects of a series of exploits can be accurately analysed. Furthermore, the use of a simulated environment to verify attacks may uncover new information about the nature of the attacks.
This paper describes a novel aerospace electronic component risk assessment methodology and supporting virtual laboratory structure designed to augment existing supply chain management practices and aid in Microelectronics Trust Assurance. This toolkit and methodology applies structure to the unclear and evolving risk assessment problem, allowing quantification of key risks affecting both advanced and obsolete systems that rely on semiconductor technologies. The impacts of logistics & supply chain risk, technology & counterfeit risk, and faulty component risk on trusted and non-trusted procurement options are quantified. The benefits of component testing on part reliability are assessed and incorporated into counterfeit mitigation calculations. This toolkit and methodology seeks to assist acquisition staff by providing actionable decision data regarding the increasing threat of counterfeit components by assessing the risks faced by systems, identifying mitigation strategies to reduce this risk, and resolving these risks through the optimal test and procurement path based on the component criticality risk tolerance of the program.
A visible nearest neighbor (VNN) query returns the k nearest objects that are visible to a query point, and is used to support various applications such as route planning, target monitoring, and antenna placement. Most previous work on VNN queries focused on static objects; however, with the proliferation of wireless communications and advances in positioning technology for mobile equipment, efficiently searching for VNNs among moving objects is required. In this paper, we treat objects as moving continuously when indexing them, and study the visible nearest neighbor query for moving objects (MVNN). Assuming that the objects are represented as trajectories given by linear functions of time, we propose a scheme which indexes the moving objects with a time-parameterized R-tree (TPR-tree) and the obstacles with an R-tree. The paper offers four heuristics for visibility and space pruning. New algorithms, Post-pruning and United-pruning, are developed for efficiently solving MVNN queries with all four heuristics. The effectiveness and efficiency of our solutions are verified by extensive experiments over synthetic datasets on a real road network.
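A toy sketch of the query's two primitives follows, assuming linear motion p(t) = p0 + v·t and obstacles reduced to line segments; the TPR-tree/R-tree indexing and the four pruning heuristics are deliberately omitted, and the geometry is a made-up example.

```python
# Brute-force MVNN primitives: linear motion plus a segment-based
# visibility test (an object is hidden if the sight line crosses a wall).
import numpy as np

def seg_intersect(p, q, a, b):
    """True if segment pq properly intersects segment ab (2-D)."""
    def cross(o, u, w):
        return (u[0]-o[0])*(w[1]-o[1]) - (u[1]-o[1])*(w[0]-o[0])
    return (cross(p, q, a) * cross(p, q, b) < 0 and
            cross(a, b, p) * cross(a, b, q) < 0)

query = np.array([0.0, 0.0])
obstacles = [(np.array([1.0, -1.0]), np.array([1.0, 1.0]))]   # a wall at x = 1
objs = {"o1": (np.array([2.0, 0.0]), np.array([0.0, 0.5])),   # (p0, velocity)
        "o2": (np.array([0.0, 3.0]), np.array([0.0, 0.1]))}

def vnn(t):
    best = None
    for name, (p0, vel) in objs.items():
        pos = p0 + vel * t
        if any(seg_intersect(query, pos, a, b) for a, b in obstacles):
            continue                          # hidden behind an obstacle
        d = np.linalg.norm(pos - query)
        if best is None or d < best[1]:
            best = (name, d)
    return best

print(vnn(0.0), vnn(5.0))   # the nearest visible object changes over time
```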
NEtwork MObility (NEMO) has recently gained a lot of attention from a number of standardization and research committees. Although the NEMO Basic Support Protocol (NEMO-BSP) seems suitable in the context of Intelligent Transport Systems (ITS), it has several shortcomings, such as packet loss and lack of security, since it is a host-based mobility scheme. Therefore, in order to improve handoff performance and overcome these limitations, schemes adapting Proxy MIPv6 to NEMO have appeared. However, most of them do not deal with the handover of Visiting Mobile Nodes (VMNs) located below the Mobile Router (MR). Thus, this paper proposes a Visiting Mobile Node Authentication Protocol for Proxy MIPv6-based NEtwork MObility which ensures strong authentication between entities. To evaluate the security of our proposal, we have used the AVISPA/SPAN software, which confirms that the proposed protocol is a safe scheme.
Vehicular Ad Hoc Networks (VANETs) enable vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications that bring many benefits and conveniences, improving road safety and driving comfort in future transportation systems. The Sybil attack is considered one of the riskiest threats in VANETs, since a Sybil attacker can generate multiple fake identities with false messages to severely impair the normal functioning of safety-related applications. In this paper, we propose Voiceprint, a novel Sybil attack detection method based on the Received Signal Strength Indicator (RSSI), to conduct widely applicable, lightweight, and fully distributed detection in VANETs. To avoid the inaccurate position estimation based on predefined radio propagation models in previous RSSI-based detection methods, Voiceprint treats RSSI time series as vehicular speech and compares the similarity among all received time series. Voiceprint does not rely on any predefined radio propagation model and conducts independent detection without the support of centralized infrastructure. It achieves a more accurate detection rate in different dynamic environments. Extensive simulations and real-world experiments demonstrate that Voiceprint is an effective method considering cost, complexity, and performance.
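The core intuition is that messages from one physical Sybil attacker share a propagation path, so their RSSI "voices" are near-duplicates. The sketch below stands in a simple Pearson correlation for the paper's time-series similarity measure; the series, threshold, and identities are synthetic assumptions.

```python
# Flag identity pairs whose RSSI time series are suspiciously similar.
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(200)
path = 10 * np.sin(t / 15) - 60                   # shared physical propagation
rssi = {
    "veh_A": path + rng.normal(0, 0.5, 200),      # Sybil identity 1
    "veh_B": path + rng.normal(0, 0.5, 200),      # Sybil identity 2 (same radio)
    "veh_C": 8 * np.cos(t / 9) - 65 + rng.normal(0, 0.5, 200),  # honest vehicle
}

THRESH = 0.95
ids = list(rssi)
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        r = np.corrcoef(rssi[ids[i]], rssi[ids[j]])[0, 1]
        if r > THRESH:
            print(f"{ids[i]} / {ids[j]}: similarity {r:.3f} -> suspected Sybil pair")
```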
Security in virtualised environments is becoming increasingly important for institutions, not only for a firm's own on-site servers and network but also for data and sites hosted in the cloud. Today, security is either handled globally by the cloud provider, or each customer needs to invest in its own security infrastructure. This paper proposes a Virtual Security Operation Center (VSOC) that allows users to collect, analyse, and visualize security-related data from multiple sources. For instance, a user can forward log data from its firewalls, applications, and routers in order to check for anomalies and other suspicious activities. The security analytics provided by the VSOC are comparable to those of commercial security incident and event management (SIEM) solutions, but are deployed as a cloud-based solution with the additional benefit of using big data processing tools to handle large volumes of data. This allows us to detect more complex attacks that cannot be detected with today's signature-based (i.e., rule-based) SIEM solutions.
This paper considers a framework of electrical cyber-physical systems (ECPSs) in which each bus and branch in a power grid is equipped with a controller and a sensor. Measuring the damage of cyber attacks in terms of transmission lines being cut off, three solution approaches are proposed to assess and deal with the damage caused by faults or cyber attacks. Splitting incidents are treated as a special case of cascading failure propagation. A new simulation platform is built for simulating the protection procedure of ECPSs under faults. The vulnerability of ECPSs under faults is analyzed through experimental results on the IEEE 39-bus system.
Decoy routing is an emerging approach for censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. Recent studies on decoy routing consider all decoy routing systems, regardless of their specific designs, to be susceptible to a fundamental attack in which the censors re-route traffic around decoy ASes, thereby preventing censored users from using such systems. In this paper, we propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than all previous designs. Unlike previous designs, our new architecture operates decoy routers only on the downstream traffic of the censored users; we therefore call it downstream-only decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. Designing a downstream-only decoy routing system is a challenging engineering problem, since decoy routers do not intercept the upstream traffic of censored users. We design the first downstream-only decoy routing system, called Waterfall, by devising unique covert communication mechanisms. We also use various techniques to make our Waterfall implementation resistant to traffic analysis attacks. We believe that downstream-only decoy routing is a significant step towards making decoy routing systems practical, because a downstream-only decoy routing system can be deployed using a significantly smaller number of volunteer ASes for a given target resistance to rerouting attacks. For instance, we show that a Waterfall implementation with only a single decoy AS is as resistant to routing attacks (against China) as a traditional decoy system (e.g., Telex) with 53 decoy ASes.
As increasingly more enterprises deploy cloud file-sharing services, a new channel opens for potential insider threats to company data and IP. In this paper, we introduce a two-stage machine learning system to detect such anomalies. In the first stage, we project the access logs of cloud file-sharing services onto relationship graphs and use three complementary graph-based unsupervised learning methods, OddBall, PageRank, and Local Outlier Factor (LOF), to generate outlier indicators. In the second stage, we ensemble the outlier indicators, introduce the discrete wavelet transform (DWT) method, and propose a procedure that uses wavelet coefficients with the Haar wavelet function to identify outliers indicating insider threat. The proposed system has been deployed in a real business environment and has demonstrated its effectiveness through selected case studies.
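Two of the named building blocks can be sketched on synthetic data: LOF as one stage-one indicator, and a Haar DWT (via the PyWavelets package) flagging large detail coefficients in an indicator series. OddBall, PageRank, and the exact ensembling procedure are not reproduced.

```python
# Stage 1: LOF over per-user features; Stage 2: Haar DWT over an
# indicator series, flagging abrupt changes via large detail coefficients.
import numpy as np
import pywt
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(7)
X = rng.normal(0, 1, (300, 3))                 # per-user graph-derived features
X[0] += 6                                      # one insider-like outlier
stage1 = -LocalOutlierFactor(n_neighbors=20).fit_predict(X)   # +1 marks outliers
print("stage-1 outlier users:", np.where(stage1 == 1)[0])

indicator = rng.normal(0, 1, 64)
indicator[40] = 9                              # abrupt anomaly in the series
cA, cD = pywt.dwt(indicator, "haar")           # approximation / detail coefficients
print("stage-2 flagged positions:", np.where(np.abs(cD) > 3)[0] * 2)
```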
Measuring fidgeting is an important goal for the psychology of mind-wandering and for human computer interaction (HCI). Previous work measuring the movement of the head, torso, and thigh during HCI has shown that engaging screen content leads to non-instrumental movement inhibition (NIMI). Camera-based methods for measuring wrist movements are limited by occlusions. Here, we used the high-pass-filtered magnitude of wearable tri-axial accelerometer recordings during 2-minute passive HCI stimuli as a surrogate for movement of the wrists and ankles. With 24 seated, healthy volunteers experiencing HCI, this metric showed that the wrists moved significantly more than the ankles. We found that NIMI could be detected in the wrists and ankles; it distinguished extremes of interest and boredom via restlessness. We conclude that both free-willed and forced screen engagement can elicit NIMI of the wrists and ankles.
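The movement metric itself is simple to sketch: high-pass filter each accelerometer axis to remove gravity and slow posture shifts, then take the vector magnitude. The 0.5 Hz cutoff, sample rate, and synthetic signal below are illustrative assumptions, not the paper's exact processing parameters.

```python
# High-pass-filtered magnitude of a tri-axial accelerometer as a
# surrogate for fidget-scale wrist/ankle movement.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50.0                                       # assumed sample rate (Hz)
t = np.arange(0, 120, 1 / fs)                   # a 2-minute HCI stimulus
acc = np.stack([                                # synthetic wrist recording
    0.05 * np.sin(2 * np.pi * 3 * t),           # fidgeting around 3 Hz
    np.zeros_like(t),
    9.81 + 0.01 * np.sin(2 * np.pi * 0.1 * t),  # gravity + slow posture drift
])

b, a = butter(4, 0.5 / (fs / 2), btype="high")  # 0.5 Hz high-pass (assumed)
hp = filtfilt(b, a, acc, axis=1)                # remove gravity/posture per axis
movement = np.linalg.norm(hp, axis=0)           # magnitude surrogate for movement
print(f"mean movement metric: {movement.mean():.4f} m/s^2")
```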
Smart energy meters record electricity consumption and generation at fine-grained intervals, and are among the most widely deployed sensors in the world. Energy data embeds detailed information about a building's energy-efficiency, as well as the behavior of its occupants, which academia and industry are actively working to extract. In many cases, either inadvertently or by design, these third parties only have access to anonymous energy data without an associated location. The location of energy data is highly useful and highly sensitive information: it can provide important contextual information to improve big data analytics or interpret their results, but it can also enable third parties to link private behavior derived from energy data with a particular location. In this paper, we present Weatherman, which leverages a suite of analytics techniques to localize the source of anonymous energy data. Our key insight is that energy consumption data, as well as wind and solar generation data, largely correlates with weather, e.g., temperature, wind speed, and cloud cover, and that every location on Earth has a distinct weather signature that uniquely identifies it. Weatherman represents a serious privacy threat, but also a potentially useful tool for researchers working with anonymous smart meter data. We evaluate Weatherman's potential in both areas by localizing data from over one hundred smart meters using a weather database that includes data from over 35,000 locations. Our results show that Weatherman localizes coarse (one-hour resolution) energy consumption, wind, and solar data to within 16.68 km, 9.84 km, and 5.12 km, respectively, on average. This is more accurate than prior work on localizing anonymous solar data using solar signatures, despite using much coarser-resolution data.
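The key insight reduces to a ranking problem: correlate the anonymous trace against the weather trace of each candidate location and pick the best fit. The sketch below uses synthetic hourly temperatures for three hypothetical locations in place of the paper's 35,000-location weather database.

```python
# Rank candidate locations by correlation between an anonymous
# consumption trace and each location's temperature trace.
import numpy as np

rng = np.random.default_rng(8)
hours = 24 * 30
candidate_temps = {                             # hourly temperature per location
    loc: (15 + 10 * np.sin(np.arange(hours) / 24 * 2 * np.pi + phase)
          + rng.normal(0, 1, hours))
    for loc, phase in [("Boston", 0.0), ("Austin", 1.1), ("Seattle", 2.3)]
}
true_temp = candidate_temps["Austin"]
consumption = (2.0 + 0.08 * np.maximum(true_temp - 20, 0)   # cooling load
               + rng.normal(0, 0.05, hours))                # tracks temperature

scores = {loc: np.corrcoef(consumption, temp)[0, 1]
          for loc, temp in candidate_temps.items()}
print(max(scores, key=scores.get), scores)      # best-matching location wins
```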