Biblio

Found 16998 results

2018-04-11
Wang, J. K., Peng, Chunyi.  2017.  Analysis of Time Delay Attacks Against Power Grid Stability. Proceedings of the 2nd Workshop on Cyber-Physical Security and Resilience in Smart Grids. :67–72.

The modern power grid, as a critical national infrastructure, is operated as a cyber-physical system. While the Wide-Area Monitoring, Protection and Control Systems (WAMPCS) in the power grid ensure stable dynamical responses by allowing real-time remote control and measurement collection from across the power grid, they also expose the power grid to potential cyber-attacks. In this paper, we analyze the effects of Time Delay Attacks (TDAs), which disturb the stability of the power grid simply by delaying the transfer of measurements and control demands over the grid's cyber infrastructure. Unlike existing work, which simulates the impact of TDAs under specific scenarios, we develop a generic analytical framework to derive the conditions under which TDAs are effective. In particular, we propose the concepts of TDA margin, TDA boundary, and TDA surface to define the insecure zones where TDAs are able to destabilize the grid. The proposed concepts and analytical results are exemplified in the context of Load Frequency Control (LFC), but can be generalized to other power control applications.

Bhalachandra, Sridutt, Porterfield, Allan, Olivier, Stephen L., Prins, Jan F., Fowler, Robert J..  2017.  Improving Energy Efficiency in Memory-Constrained Applications Using Core-Specific Power Control. Proceedings of the 5th International Workshop on Energy Efficient Supercomputing. :6:1–6:8.

Power is increasingly the limiting factor in High Performance Computing (HPC) at Exascale and will continue to influence future advancements in supercomputing. Recent processors equipped with on-board hardware counters allow real-time monitoring of operating conditions such as energy and temperature, in addition to performance measures such as instructions retired and memory accesses. An experimental memory study on modern CPU architectures, Intel Sandy Bridge and Haswell, identifies a metric, TORo_core, that detects bandwidth saturation and increased latency. TORo_core is used to construct a dynamic policy, applied at coarse and fine-grained levels, to modulate per-core power controls on Haswell machines. The coarse and fine-grained applications of the dynamic policy show best-case energy savings of 32.1% and 19.5%, respectively, with a 2% slowdown in both cases. On average for six MPI applications, the fine-grained dynamic policy speeds execution by 1% while the coarse-grained application results in a 3% slowdown. Energy savings through frequency reduction not only provide cost advantages, they also reduce resource contention and create additional thermal headroom for non-throttled cores, improving performance.
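
Since the abstract describes the policy only at a high level, here is a minimal Python sketch of the kind of decision the dynamic policy makes: map each core's bandwidth-saturation reading to a frequency cap. The function name, threshold, and frequency values are illustrative assumptions; the paper's implementation works on hardware counters and per-core power controls on Haswell, which are not shown here.

def plan_core_frequencies(saturation_by_core, threshold=0.8,
                          low_khz=1_200_000, high_khz=2_300_000):
    # Cores whose saturation metric (e.g. a TORo_core-like reading) exceeds the
    # threshold are assumed memory-bound and get the lower frequency cap;
    # the rest keep the higher cap.
    return {core: (low_khz if saturation > threshold else high_khz)
            for core, saturation in saturation_by_core.items()}

# Example: cores 0 and 2 look bandwidth-saturated, so only they are slowed down.
print(plan_core_frequencies({0: 0.93, 1: 0.41, 2: 0.88, 3: 0.12}))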

Cui, T., Yu, H., Hao, F..  2017.  Security Control for Linear Systems Subject to Denial-of-Service Attacks. 2017 36th Chinese Control Conference (CCC). :7673–7678.

This paper studies the stability of event-triggered control systems subject to Denial-of-Service (DoS) attacks. An improved method is provided to increase the frequency and duration of DoS attacks under which closed-loop stability is not destroyed. A two-mode switching control method is adopted to maintain stability of event-triggered control systems in the presence of attacks. Moreover, this paper reveals the relationship between the robustness of systems against DoS attacks and the lower bound of the inter-event times; namely, enlarging the inter-execution time contributes to enhancing the robustness of the systems against DoS attacks. Finally, some simulations are presented to illustrate the efficiency and feasibility of the obtained results.

Spanos, Georgios, Angelis, Lefteris, Toloudis, Dimitrios.  2017.  Assessment of Vulnerability Severity Using Text Mining. Proceedings of the 21st Pan-Hellenic Conference on Informatics. :49:1–49:6.

Software vulnerabilities are closely associated with information systems security, a major and critical field in today's technology. Vulnerabilities constitute a constant and increasing threat for various aspects of everyday life, especially for safety and the economy, since the social impact of the problems they cause is complicated and often unpredictable. Although there is an entire research branch in software engineering that deals with the identification and elimination of vulnerabilities, the growing complexity of software products and the variability of software production procedures are factors contributing to the ongoing occurrence of vulnerabilities. Hence, another area that is being developed in parallel focuses on the study and management of the vulnerabilities that have already been reported and registered in databases. The information contained in such databases includes a textual description and a number of metrics related to each vulnerability. The purpose of this paper is to investigate to what extent the assessment of vulnerability severity can be inferred directly from the corresponding textual description, or in other words, to examine the informative power of the description with respect to the vulnerability severity. For this purpose, text mining techniques, i.e., text analysis and three different classification methods (decision trees, neural networks and support vector machines), were employed. The application of text mining to a sample of 70,678 vulnerabilities from a public data source shows that the description itself is a reliable and highly accurate source of information for vulnerability prioritization.
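
As a rough illustration of the text-mining pipeline described above, the following Python/scikit-learn sketch trains one of the three compared classifier types (a linear support vector machine) on TF-IDF vectors of vulnerability descriptions. The four descriptions and severity labels are placeholders; the paper evaluates 70,678 real entries.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder descriptions and severity labels standing in for database entries.
descriptions = [
    "Buffer overflow allows remote attackers to execute arbitrary code",
    "Cross-site scripting allows injection of arbitrary web script",
    "Information disclosure through verbose error messages",
    "SQL injection allows remote attackers to execute arbitrary SQL commands",
]
severity = ["high", "medium", "low", "high"]

# Turn the free-text descriptions into TF-IDF vectors and fit a linear SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(descriptions, severity)

print(model.predict(["Heap overflow enables remote code execution"]))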

Wang, Wenhao, Chen, Guoxing, Pan, Xiaorui, Zhang, Yinqian, Wang, XiaoFeng, Bindschaedler, Vincent, Tang, Haixu, Gunter, Carl A..  2017.  Leaky Cauldron on the Dark Land: Understanding Memory Side-Channel Hazards in SGX. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :2421–2434.

Side-channel risks of Intel SGX have recently attracted great attention. Under the spotlight is the newly discovered page-fault attack, in which an OS-level adversary induces page faults to observe the page-level access patterns of a protected process running in an SGX enclave. With almost all proposed defenses focusing on this attack, little is known about whether such efforts indeed raise the bar for the adversary, whether a simple variation of the attack renders all protection ineffective, not to mention an in-depth understanding of other attack surfaces in the SGX system. In this paper, we report the first step toward systematic analyses of side-channel threats that SGX faces, focusing on the risks associated with its memory management. Our research identifies 8 potential attack vectors, ranging from the TLB to DRAM modules. More importantly, we highlight the common misunderstandings about SGX memory side channels, demonstrating that highly frequent AEXs can be avoided when recovering an EdDSA secret key through a new page channel, and that fine-grained monitoring of enclave programs (at the level of 64B) can be done by combining both cache and cross-enclave DRAM channels. Our findings reveal the gap between the ongoing security research on SGX and its side-channel weaknesses, redefine the side-channel threat model for secure enclaves, and can provoke a discussion on when to use such a system and how to use it securely.

Gulmezoglu, Berk, Eisenbarth, Thomas, Sunar, Berk.  2017.  Cache-Based Application Detection in the Cloud Using Machine Learning. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :288–300.

Cross-VM attacks have emerged as a major threat on commercial clouds. These attacks commonly exploit hardware-level leakages on shared physical servers. A co-located machine can readily feel the presence of a co-located instance with a heavy computational load through performance degradation due to contention on shared resources. Shared cache architectures such as the last-level cache (LLC) have become a popular leakage source for mounting cross-VM attacks. By exploiting LLC leakages, researchers have already shown that it is possible to recover fine-grained information such as cryptographic keys from popular software libraries. This makes it essential to verify implementations that handle sensitive data across the many versions and numerous target platforms, a task too complicated, error-prone and costly to be handled by human beings. Here we propose a machine learning based technique to classify applications according to their cache access profiles. We show that with minimal and simple manual processing steps, feature vectors can be used to train models using support vector machines to classify the applications with a high degree of success. The profiling and training steps are completely automated and do not require any inspection or study of the code to be classified. In native execution, we achieve a successful classification rate as high as 98% (L1 cache) and 78% (LLC) over 40 benchmark applications in the Phoronix suite with mild training. In the cross-VM setting on the noisy Amazon EC2, the success rate drops to 60% for a suite of 25 applications. With this initial study we demonstrate that it is possible to train meaningful models to successfully predict applications running in co-located instances.

Huang, Yunfan, Yang, Haomiao, Nie, Mengxi, Wu, Honggang.  2017.  Image Feature Extraction with Homomorphic Encryption on Integer Vector. Proceedings of the 2017 International Conference on Machine Learning and Soft Computing. :111–116.

With the amount of user-contributed image data increasing, it is a potential threat to users that anyone may gain access to their private information. To reduce the possibility of the loss of real information, this paper combines a homomorphic encryption scheme with image feature extraction to provide a guarantee for users' privacy. In this paper, the whole system model mainly consists of three parts: the social network service provider (SP), the interested party (IP) and the applications. Except for the image preprocessing phase, the main operations of feature extraction are conducted in the ciphertext domain, which means only the SP has access to the users' private data. The extraction algorithm is used to obtain a multi-dimensional histogram descriptor as the image feature for each image. As a result, the histogram descriptor can be extracted correctly in the encrypted domain in an acceptable time. Besides, the extracted feature can represent the image effectively because of its relatively high accuracy. Additionally, many different applications can be conducted using the encrypted features because of the support of our encryption scheme.

Kim, Y. S., Son, C. W., Lee, S. I..  2017.  A Method of Cyber Security Vulnerability Test for the DPPS and PMAS Test-Bed. 2017 17th International Conference on Control, Automation and Systems (ICCAS). :1749–1752.

Vulnerability analysis is an important procedure in a cyber security evaluation process. There are two types of vulnerability analysis: an interview with the facility manager and vulnerability scanning with a software tool. It is difficult to use a vulnerability scanning tool on an operating nuclear plant control system because of the possibility of adverse effects on the system. The purpose of this paper is to suggest a method of cyber security vulnerability testing using the DPPS and PMAS test-bed. Based on the functions of the test-bed, possible threats and vulnerabilities in terms of cyber security were analyzed. Attack trees and test scenarios could be established with consideration of the attack vectors. It is expected that this method can be helpful to implement adequate security controls and to verify whether the security controls have an adverse impact on the inherent functions of the systems.

Meyer, D., Haase, J., Eckert, M., Klauer, B..  2017.  New Attack Vectors for Building Automation and IoT. IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society. :8126–8131.

In the past, the security of building automation depended solely on the security of the devices inside or tightly connected to the building. In recent years, more devices have emerged that use some kind of cloud service as a back-end, or providers supply some kind of device to the user. Also, the number of building automation systems connected to the Internet for management, control, and data storage increases every year. These developments give rise to new threats to building automation. As the Internet of Things (IoT) and building automation intertwine more and more, these threats are also valid for IoT installations. The paper presents new attack vectors and new threats using the threat model of Meyer et al. [1].

Ghanem, K., Aparicio-Navarro, F. J., Kyriakopoulos, K. G., Lambotharan, S., Chambers, J. A..  2017.  Support Vector Machine for Network Intrusion and Cyber-Attack Detection. 2017 Sensor Signal Processing for Defence Conference (SSPD). :1–5.

Cyber-security threats are a growing concern in networked environments. The development of Intrusion Detection Systems (IDSs) is fundamental in order to provide an extra level of security. We have developed an unsupervised anomaly-based IDS that uses statistical techniques to conduct the detection process. Despite providing many advantages, anomaly-based IDSs tend to generate a high number of false alarms. Machine Learning (ML) techniques have gained wide interest in intrusion detection tasks. In this work, the Support Vector Machine (SVM) is considered as an ML technique that could complement the performance of our IDS, providing a second line of detection to reduce the number of false alarms, or as an alternative detection technique. We assess the performance of our IDS against one-class and two-class SVMs, using linear and non-linear forms. The results we present show that the linear two-class SVM generates highly accurate results, that the accuracy of the linear one-class SVM is very comparable, and that it does not need training datasets associated with malicious data. Similarly, the results show that our IDS could benefit from the use of ML techniques to increase its accuracy when analysing datasets comprising non-homogeneous features.
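
A minimal sketch of the comparison described above, using scikit-learn: a two-class linear SVM trained on both benign and malicious feature vectors, against a one-class linear SVM trained on benign traffic only. The synthetic Gaussian feature vectors are placeholders for the statistical network features used by the IDS.

import numpy as np
from sklearn.svm import SVC, OneClassSVM

rng = np.random.default_rng(0)
benign = rng.normal(loc=0.0, scale=1.0, size=(200, 5))      # placeholder benign features
malicious = rng.normal(loc=4.0, scale=1.0, size=(40, 5))    # placeholder attack features

# Two-class linear SVM: requires labelled examples of both classes.
X = np.vstack([benign, malicious])
y = np.array([0] * len(benign) + [1] * len(malicious))
two_class = SVC(kernel="linear").fit(X, y)

# One-class linear SVM: trained on benign traffic only; -1 means anomaly.
one_class = OneClassSVM(kernel="linear", nu=0.05).fit(benign)

probe = rng.normal(loc=4.0, scale=1.0, size=(1, 5))          # unseen attack-like sample
print("two-class flags attack:", bool(two_class.predict(probe)[0]))
print("one-class flags anomaly:", one_class.predict(probe)[0] == -1)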

Gebhardt, D., Parikh, K., Dzieciuch, I., Walton, M., Hoang, N. A. V..  2017.  Hunting for Naval Mines with Deep Neural Networks. OCEANS 2017 - Anchorage. :1–5.

Explosive naval mines pose a threat to ocean and sea faring vessels, both military and civilian. This work applies deep neural network (DNN) methods to the problem of detecting mine-like objects (MLOs) on the seafloor in side-scan sonar imagery. We explored how the DNN depth, memory requirements, calculation requirements, and training data distribution affect detection efficacy. A visualization technique (class activation map) was incorporated that aids a user in interpreting the model's behavior. We found that modest DNN model sizes yielded better accuracy (98%) than very simple DNN models (93%) and a support vector machine (78%). The largest DNN models achieved a less than 1% efficacy increase at the cost of a 17x increase in trainable parameter count and computation requirements. In contrast to DNNs popularized for many-class image recognition tasks, the models for this task require far fewer computational resources (0.3% of parameters), and are suitable for embedded use within an autonomous unmanned underwater vehicle.

Arumugam, T., Scott-Hayward, S..  2017.  Demonstrating State-Based Security Protection Mechanisms in Software Defined Networks. 2017 8th International Conference on the Network of the Future (NOF). :123–125.

The deployment of Software Defined Networking (SDN) and Network Functions Virtualization (NFV) technologies is increasing, with security as a recognized application driving adoption. However, despite the potential with SDN/NFV for automated and adaptive network security services, the controller interaction presents both a performance and scalability challenge, and a threat vector. To overcome the performance issue, stateful data-plane designs have been proposed. However, these solutions do not offer protection from SDN-specific attacks linked to necessary control functions such as link reconfiguration and switch identification. In this work, we leverage the OpenState framework to introduce state-based SDN security protection mechanisms. The extensions required for this design are presented with respect to an SDN configuration-based attack. The demonstration shows the ability of the SDN Configuration (CFG) security protection mechanism to support legitimate relocation requests and to protect against malicious connection attempts.

Deliu, I., Leichter, C., Franke, K..  2017.  Extracting Cyber Threat Intelligence from Hacker Forums: Support Vector Machines versus Convolutional Neural Networks. 2017 IEEE International Conference on Big Data (Big Data). :3648–3656.

Hacker forums and other social platforms may contain vital information about cyber security threats. But using manual analysis to extract relevant threat information from these sources is a time-consuming and error-prone process that requires a significant allocation of resources. In this paper, we explore the potential of Machine Learning methods to rapidly sift through hacker forums for relevant threat intelligence. Utilizing text data from a real hacker forum, we compared the text classification performance of Convolutional Neural Network methods against more traditional Machine Learning approaches. We found that traditional machine learning methods, such as Support Vector Machines, can yield high levels of performance that are on par with Convolutional Neural Network algorithms.

Muñoz-González, Luis, Biggio, Battista, Demontis, Ambra, Paudice, Andrea, Wongrassamee, Vasin, Lupu, Emil C., Roli, Fabio.  2017.  Towards Poisoning of Deep Learning Algorithms with Back-Gradient Optimization. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. :27–38.

A number of online services nowadays rely upon machine learning to extract valuable information from data collected in the wild. This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process. To date, these attacks have been devised only against a limited class of binary learning algorithms, due to the inherent complexity of the gradient-based procedure used to optimize the poisoning points (a.k.a. adversarial training examples). In this work, we first extend the definition of poisoning attacks to multiclass problems. We then propose a novel poisoning algorithm based on the idea of back-gradient optimization, i.e., computing the gradient of interest through automatic differentiation, while also reversing the learning procedure to drastically reduce the attack complexity. Compared to current poisoning strategies, our approach is able to target a wider class of learning algorithms, trained with gradient-based procedures, including neural networks and deep learning architectures. We empirically evaluate its effectiveness on several application examples, including spam filtering, malware detection, and handwritten digit recognition. We finally show that, similarly to adversarial test examples, adversarial training examples can also be transferred across different learning algorithms.

Chen, Lingwei, Hou, Shifu, Ye, Yanfang.  2017.  SecureDroid: Enhancing Security of Machine Learning-Based Detection Against Adversarial Android Malware Attacks. Proceedings of the 33rd Annual Computer Security Applications Conference. :362–372.

With smart phones being indispensable in people's everyday lives, Android malware has posed serious threats to their security, making its detection of utmost concern. To protect legitimate users from the evolving Android malware attacks, machine learning-based systems have been successfully deployed and offer unparalleled flexibility in automatic Android malware detection. In these systems, based on different feature representations, various kinds of classifiers are constructed to detect Android malware. Unfortunately, as classifiers become more widely deployed, the incentive for defeating them increases. In this paper, we explore the security of machine learning in Android malware detection on the basis of a learning-based classifier that takes as input a set of features extracted from Android applications (apps). We consider the different importance of features, according to their contributions to the classification problem as well as their manipulation costs, and present a novel feature selection method (named SecCLS) to make the classifier harder to evade. To improve system security while not compromising detection accuracy, we further propose an ensemble learning approach (named SecENS) that aggregates the individual classifiers constructed using our proposed feature selection method SecCLS. Accordingly, we develop a system called SecureDroid which integrates our proposed methods (i.e., SecCLS and SecENS) to enhance the security of machine learning-based Android malware detection. Comprehensive experiments on real sample collections from the Comodo Cloud Security Center are conducted to validate the effectiveness of SecureDroid against adversarial Android malware attacks by comparison with other alternative defense methods. Our proposed secure-learning paradigm can also be readily applied to other malware detection tasks.

Zuo, Pengfei, Hua, Yu, Wang, Cong, Xia, Wen, Cao, Shunde, Zhou, Yukun, Sun, Yuanyuan.  2017.  Mitigating Traffic-Based Side Channel Attacks in Bandwidth-Efficient Cloud Storage. Proceedings of the 2017 Symposium on Cloud Computing. :638–638.

Data deduplication [3] is able to effectively identify and eliminate redundant data and only maintain a single copy of files and chunks. Hence, it is widely used in cloud storage systems to save the users' network bandwidth for uploading data. However, the occurrence of deduplication can be easily identified by monitoring and analyzing network traffic, which leads to the risk of user privacy leakage. The attacker can carry out a very dangerous side channel attack, i.e., the learn-the-remaining-information (LRI) attack, to reveal users' privacy information by exploiting the side channel of network traffic in deduplication [1]. In the LRI attack, the attacker knows a large part of the target file in the cloud and tries to learn the remaining unknown parts via uploading all possible versions of the file's content. For example, the attacker knows all the contents of the target file X except the sensitive information θ. To learn the sensitive information, the attacker needs to upload m files with all possible values of θ, respectively. If a file X_d with the value θ_d is deduplicated and the other files are not, the attacker knows that θ = θ_d. In the threat model of the LRI attack, we consider a general cloud storage service model that includes two entities, i.e., the user and the cloud storage server. The attack is launched by users who aim to steal the privacy information of other users [1]. The attacker can act as a user via its own account or use multiple accounts to disguise as multiple users. The cloud storage server communicates with the users through the Internet. The connections from the clients to the cloud storage server are encrypted by the SSL or TLS protocol. Hence, the attacker can monitor and measure the amount of network traffic between the client and server but cannot intercept and analyze the contents of the transmitted data due to the encryption. The attacker can then perform sophisticated traffic analysis with sufficient computing resources. We propose a simple yet effective scheme, called the randomized redundant chunk scheme (RRCS), to significantly mitigate the risk of the LRI attack while maintaining the high bandwidth efficiency of deduplication. The basic idea behind RRCS is to add randomized redundant chunks to mix up the real deduplication states of files used for the LRI attack, which effectively obfuscates the view of the attacker, who attempts to exploit the side channel of network traffic for the LRI attack. RRCS includes three key function modules: range generation (RG), secure bounds setting (SBS), and security-irrelevant redundancy elimination (SRE). When uploading the random-number redundant chunks, RRCS first uses RG to generate a fixed range [0, λN] (λ ∈ (0,1]), in which the number of added redundant chunks is randomly chosen, where N is the total number of chunks in a file and λ is a system parameter. However, the fixed range may cause a security issue. SBS is used to deal with the bounds of the fixed range to avoid the security issue. There may exist security-irrelevant redundant chunks in RRCS. SRE reduces the security-irrelevant redundant chunks to improve the deduplication efficiency. The design details are presented in our technical report [5]. Our security analysis demonstrates that RRCS can significantly reduce the risk of the LRI attack [5].
We examine the performance of RRCS using three real-world trace-based datasets, i.e., Fslhomes [2], MacOS [2], and Onefull [4], and compare RRCS with the randomized threshold scheme (RTS) [1]. Our experimental results show that source-based deduplication eliminates 100% of data redundancy but offers no security guarantee. File-level (chunk-level) RTS eliminates only 8.1% – 16.8% (9.8% – 20.3%) of redundancy, since it only eliminates the redundancy of the files (chunks) that have many copies. RRCS with λ = 0.5 eliminates 76.1% – 78.0% of redundancy and RRCS with λ = 1 eliminates 47.9% – 53.6% of redundancy.
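
A much-simplified Python sketch of the core RRCS idea: for each upload, the number of redundant (already-stored) chunks to re-upload is drawn uniformly from [0, λN], so the observed traffic no longer reflects the true deduplication state. The SBS and SRE modules from the paper are omitted, and the chunk names are placeholders.

import random

def redundant_chunk_count(n_chunks, lam=0.5, rng=random):
    # Draw the number of redundant chunks uniformly from the range [0, lam * N].
    return rng.randint(0, int(lam * n_chunks))

def chunks_to_upload(duplicate_chunks, unique_chunks, lam=0.5):
    # Always upload the chunks the server does not have, plus a random number of
    # redundant chunks, to mix up the deduplication state visible in the traffic.
    n_total = len(duplicate_chunks) + len(unique_chunks)
    k = min(redundant_chunk_count(n_total, lam), len(duplicate_chunks))
    return list(unique_chunks) + random.sample(list(duplicate_chunks), k)

# Example: 6 of 10 chunks are already stored server-side, yet some are re-uploaded anyway.
dup = [f"chunk{i}" for i in range(6)]
new = [f"chunk{i}" for i in range(6, 10)]
print(chunks_to_upload(dup, new, lam=0.5))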

Kramer, Sean, Zhang, Zhiming, Dofe, Jaya, Yu, Qiaoyan.  2017.  Mitigating Control Flow Attacks in Embedded Systems with Novel Built-in Secure Register Bank. Proceedings of the on Great Lakes Symposium on VLSI 2017. :483–486.

Embedded systems are prone to security attacks due to their limited resources available for self-protection and the unsafe languages typically used for application programming. Attacks targeting control flow are among the most common exploits against embedded systems. We propose a hardware-level, effective, and low-overhead countermeasure to mitigate these types of attacks. In the proposed method, a Built-in Secure Register Bank (BSRB) is introduced into the processor micro-architecture to store the return addresses of subroutines. An inconsistency in the return addresses directs the processor to select a clean copy to resume the normal control flow and mitigate the security threat. The proposed countermeasure is inaccessible to the programmer and does not require any compiler support, thus achieving better flexibility than software-based countermeasures. Experimental results show that the proposed method only increases area and power by 3.8% and 4.4%, respectively, over the baseline OpenRISC processor.

Putra, Guntur Dharma, Sulistyo, Selo.  2017.  Trust Based Approach in Adjacent Vehicles to Mitigate Sybil Attacks in VANET. Proceedings of the 2017 International Conference on Software and E-Business. :117–122.

Vehicular Ad-Hoc Network (VANET) is a form of Peer-to-Peer (P2P) wireless communication between vehicles, which is characterized by high mobility. In practice, VANET can be utilized to provide connections via multi-hop communication between vehicles to deliver traffic information seamlessly, such as traffic jams and traffic accidents, without the need for dedicated centralized infrastructure. Although dedicated infrastructure may also be involved in VANET, such as Road Side Units (RSUs), most of the time VANET relies solely on Vehicle-to-Vehicle (V2V) communication, which makes it vulnerable to several potential attacks in P2P-based communication, as there are no trusted authorities that provide authentication and security. One of the potential threats is a Sybil attack, wherein an adversary uses a considerable number of forged identities to illegitimately infuse false or biased information that may mislead a system into making decisions benefiting the adversary. Avoiding Sybil attacks in VANET is a difficult problem, as there are typically no trusted authorities that provide cryptographic assurance of Sybil resilience. This paper presents a technique to detect and mitigate Sybil attacks that requires no dedicated infrastructure, utilizing only V2V communication. The proposed method works on the underlying assumption that the mobility of vehicles under high vehicle density, together with the limited transmission power of the adversary, creates unique groups of vehicle neighbors at a given time point; these groups can be analyzed statistically, providing a temporal and spatial analysis to distinguish real from impersonated vehicle identities. The proposed method also covers mitigation procedures to create a trust model and to inform neighboring vehicles about the detected tampered identities in a secure way utilizing Diffie-Hellman key distribution. This paper also discusses the benefits and drawbacks of the proposed approach under sparse road conditions and other potential threats.
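
The neighbor-group idea can be illustrated with a small Python sketch: if two claimed identities report nearly identical neighbor sets across many sampling instants, they are likely broadcast by the same physical transmitter and one of them is flagged as a suspected Sybil identity. The Jaccard measure and the threshold are illustrative assumptions rather than the exact statistics used in the paper.

def jaccard(a, b):
    # Overlap between two neighbor sets observed at the same instant.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def suspected_sybil(neighbors_a, neighbors_b, threshold=0.9):
    # neighbors_* : one neighbor set per sampling instant for each claimed identity.
    scores = [jaccard(a, b) for a, b in zip(neighbors_a, neighbors_b)]
    return sum(scores) / len(scores) >= threshold

# Identities that always see the same neighbors are suspicious; distinct vehicles diverge.
veh_a = [{"v1", "v2", "v3"}, {"v2", "v3", "v4"}]
veh_b = [{"v1", "v2", "v3"}, {"v2", "v3", "v4"}]
veh_c = [{"v7", "v8"}, {"v8", "v9"}]
print(suspected_sybil(veh_a, veh_b))   # True  -> likely a Sybil pair
print(suspected_sybil(veh_a, veh_c))   # False -> plausibly distinct vehicles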

Bronte, Robert, Shahriar, Hossain, Haddad, Hisham M..  2017.  Mitigating Distributed Denial of Service Attacks at the Application Layer. Proceedings of the Symposium on Applied Computing. :693–696.

Distributed Denial of Service (DDoS) attacks on web applications have been a persistent threat. Existing approaches for mitigating application-layer DDoS attacks have limitations such as low detection rates and the inability to detect attacks targeting resource files. In this work, we propose an Application-layer DDoS (App-DDoS) attack detection framework by leveraging the concepts of Term Frequency (TF)-Inverse Document Frequency (IDF) and Latent Semantic Indexing (LSI). The approach involves analyzing web server logs to identify popular pages using TF-IDF; building a normal resource access profile; generating queries of accessed resources; and applying the LSI technique to determine the similarity between a given session and known good sessions. A high level of dissimilarity triggers a DDoS attack warning. We apply the proposed approach to traffic generated from three PHP applications. The initial results suggest that the proposed approach can identify ongoing DDoS attacks against web applications.
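
A minimal Python/scikit-learn sketch of the pipeline outlined above: resource-access sequences from web-server logs are turned into TF-IDF vectors, projected into a latent semantic space with truncated SVD (an LSI implementation), and a new session is compared against known-good sessions by cosine similarity, with low similarity triggering a warning. The sessions and the threshold are toy placeholders, not the paper's data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Each "document" is the sequence of resources requested within one session.
good_sessions = [
    "index.php products.php product.php cart.php checkout.php",
    "index.php blog.php post.php comments.php",
    "index.php products.php search.php cart.php",
]
new_session = "style.css style.css style.css style.css style.css"   # repetitive resource hits

vectorizer = TfidfVectorizer()
tfidf_good = vectorizer.fit_transform(good_sessions)
lsi = TruncatedSVD(n_components=2, random_state=0)     # LSI projection
lsi_good = lsi.fit_transform(tfidf_good)
lsi_new = lsi.transform(vectorizer.transform([new_session]))

similarity = cosine_similarity(lsi_new, lsi_good).max()
if similarity < 0.3:                                   # illustrative dissimilarity threshold
    print("App-DDoS warning: session deviates from the normal access profile", similarity)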

Prabadevi, B., Jeyanthi, N..  2017.  A Mitigation System for ARP Cache Poisoning Attacks. Proceedings of the Second International Conference on Internet of Things and Cloud Computing. :20:1–20:7.

Though the telecommunication protocol ARP provides the most prominent service for data transmission in the network, by providing the physical-layer address for any host's network-layer address, its stateless nature remains one of the most well-known opportunities for the attacker community and an ultimate threat to the hosts in the network. ARP cache poisoning results in numerous attacks, of which the most noteworthy are MITM, host impersonation and DoS attacks. This paper reviews various recent mitigation methods and proposes a novel mitigation system for ARP cache poisoning attacks. The proposed system works as follows: for any ARP request or reply message, a time stamp is generated. When it is received or sent by a host, the host performs cross-layer inspection and IP-MAC pair matching against the ARP table entry. If the ARP table entry matches and cross-layer consistency is ensured, then an ARP reply with the time stamp is sent. If in both cases the packet is evaluated to be bogus, then the IP-MAC pair is added to the untrusted list and further packet inspection is done to ensure no attack has been deployed onto the network. The time is also noted for each entry made into the ARP table, which makes ARP stateful. The system is evaluated based on criteria specified by the researchers.
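
A simplified Python sketch of the per-message check described above: the advertised IP-MAC pair is compared against the local ARP table, a consistent binding refreshes its time stamp (which is what makes the table stateful), and a conflicting binding moves the pair onto an untrusted list. Cross-layer inspection is not shown, and the function and table names are illustrative, not the authors' implementation.

import time

arp_table = {}        # ip -> (mac, time stamp of the last trusted binding)
untrusted = set()     # (ip, mac) pairs flagged as potentially spoofed

def handle_arp_message(ip, mac):
    # Validate an incoming ARP request/reply against the stateful ARP table.
    now = time.time()
    known = arp_table.get(ip)
    if known is None or known[0] == mac:
        arp_table[ip] = (mac, now)     # first sighting or consistent binding: record/refresh
        return "accepted"
    untrusted.add((ip, mac))           # conflicting binding: flag for further inspection
    return "flagged"

print(handle_arp_message("10.0.0.5", "aa:bb:cc:dd:ee:01"))   # accepted
print(handle_arp_message("10.0.0.5", "aa:bb:cc:dd:ee:02"))   # flagged (possible poisoning)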

Siby, Sandra, Maiti, Rajib Ranjan, Tippenhauer, Nils Ole.  2017.  IoTScanner: Detecting Privacy Threats in IoT Neighborhoods. Proceedings of the 3rd ACM International Workshop on IoT Privacy, Trust, and Security. :23–30.

In the context of the emerging Internet of Things (IoT), a proliferation of wireless connectivity can be expected. That ubiquitous wireless communication will be hard to centrally manage and control, and can be expected to be opaque to end users. As a result, owners and users of physical space are threatened to lose control over their digital environments. In this work, we propose the idea of an IoTScanner. The IoTScanner integrates a range of radios to allow local reconnaissance of existing wireless infrastructure and participating nodes. It enumerates such devices, identifies connection patterns, and provides valuable insights for technical support and home users alike. Using our IoTScanner, we investigate metrics that could be used to classify devices and identify privacy threats in an IoT neighborhood.

Villalobos, J. J., Rodero, Ivan, Parashar, Manish.  2017.  An Unsupervised Approach for Online Detection and Mitigation of High-Rate DDoS Attacks Based on an In-Memory Distributed Graph Using Streaming Data and Analytics. Proceedings of the Fourth IEEE/ACM International Conference on Big Data Computing, Applications and Technologies. :103–112.

A Distributed Denial of Service (DDoS) attack is an attempt to make an online service, a network, or even an entire organization unavailable by saturating it with traffic from multiple sources. DDoS attacks are among the most common and most devastating threats that network defenders have to watch out for. DDoS attacks are becoming bigger, more frequent, and more sophisticated. Volumetric attacks are the most common types of DDoS attacks. A DDoS attack is considered volumetric, or high-rate, when within a short period of time it generates a large number of packets or a high volume of traffic. High-rate attacks are well known and have received much attention in the past decade; however, although several detection and mitigation strategies have been designed and implemented, high-rate attacks still halt the normal operation of information technology infrastructures across the Internet when the protection mechanisms are not able to cope with the aggregated capacity that the perpetrators have put together. With this in mind, the present paper proposes and tests a distributed and collaborative architecture for online high-rate DDoS attack detection and mitigation based on an in-memory distributed graph data structure and unsupervised machine learning algorithms that leverage real-time streaming data and analytics. We have successfully tested our proposed mechanism using a real-world DDoS attack dataset at its original rate in order to reproduce the conditions of an actual large-scale attack.
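
The full architecture (a distributed in-memory graph plus unsupervised learning over streaming data) is beyond a short snippet, but the underlying notion of a high-rate source can be illustrated with a single-node Python sketch that counts packets per source over a sliding time window and flags sources exceeding a threshold. The window length and threshold are illustrative assumptions, not values from the paper.

from collections import defaultdict, deque

WINDOW_SECONDS = 10      # sliding window length (illustrative)
THRESHOLD = 5000         # packets per window considered high-rate (illustrative)

packet_times = defaultdict(deque)   # source ip -> time stamps of packets inside the window

def observe_packet(src_ip, ts):
    # Record one packet and report whether this source currently looks high-rate.
    window = packet_times[src_ip]
    window.append(ts)
    while window and window[0] < ts - WINDOW_SECONDS:   # drop time stamps that aged out
        window.popleft()
    return len(window) > THRESHOLD

# Example: a burst of 6000 packets within one second from a single source gets flagged.
flagged = any(observe_packet("203.0.113.9", t / 6000.0) for t in range(6000))
print("high-rate source detected:", flagged)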

Meyer, Philipp, Hiesgen, Raphael, Schmidt, Thomas C., Nawrocki, Marcin, Wählisch, Matthias.  2017.  Towards Distributed Threat Intelligence in Real-Time. Proceedings of the SIGCOMM Posters and Demos. :76–78.

In this demo, we address the problem of detecting anomalies on the Internet backbone in near real-time. Many of today's incidents may only become visible from inspecting multiple data sources and by considering multiple vantage points simultaneously. We present a setup based on the distributed forensic platform VAST that was extended to import various data streams from passive measurements and incident reporting at multiple locations, and perform an effective correlation analysis shortly after the data becomes exposed to our queries.

Gascon, Hugo, Grobauer, Bernd, Schreck, Thomas, Rist, Lukas, Arp, Daniel, Rieck, Konrad.  2017.  Mining Attributed Graphs for Threat Intelligence. Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy. :15–22.

Understanding and fending off attack campaigns against organizations, companies and individuals, has become a global struggle. As today's threat actors become more determined and organized, isolated efforts to detect and reveal threats are no longer effective. Although challenging, this situation can be significantly changed if information about security incidents is collected, shared and analyzed across organizations. To this end, different exchange data formats such as STIX, CyBOX, or IODEF have been recently proposed and numerous CERTs are adopting these threat intelligence standards to share tactical and technical threat insights. However, managing, analyzing and correlating the vast amount of data available from different sources to identify relevant attack patterns still remains an open problem. In this paper we present Mantis, a platform for threat intelligence that enables the unified analysis of different standards and the correlation of threat data through a novel type-agnostic similarity algorithm based on attributed graphs. Its unified representation allows the security analyst to discover similar and related threats by linking patterns shared between seemingly unrelated attack campaigns through queries of different complexity. We evaluate the performance of Mantis as an information retrieval system for threat intelligence in different experiments. In an evaluation with over 14,000 CyBOX objects, the platform enables retrieving relevant threat reports with a mean average precision of 80%, given only a single object from an incident, such as a file or an HTTP request. We further illustrate the performance of this analysis in two case studies with the attack campaigns Stuxnet and Regin.

Assadi, Sepehr, Khanna, Sanjeev.  2017.  Randomized Composable Coresets for Matching and Vertex Cover. Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures. :3–12.

A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say k, machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal (requiring essentially no interaction between the machines) protocol. Some well-known examples of techniques for creating summaries include sampling, linear sketching, and composable coresets. These techniques have been successfully used to design communication efficient solutions for many fundamental graph problems. However, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a modest approximation factor of polylog(n) requires using representative summaries of size Ω̃(n^2), i.e., essentially no better summary exists than each machine simply sending its entire input graph. The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines. We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size Õ(n) that yield an Õ(1)-approximate solution (here and throughout, the Õ(·) notation suppresses polylog(n) factors, where n is the number of vertices in the graph). In other words, a small subgraph of the input graph at each machine can be identified as its representative summary and the final answer is then obtained by simply running any maximum matching or minimum vertex cover algorithm on these combined subgraphs. This results in an Õ(1)-approximation simultaneous protocol for these problems with Õ(nk) total communication when the input is randomly partitioned across k machines. We also prove our results are optimal in a very strong sense: we not only rule out the existence of smaller randomized composable coresets for these problems but in fact show that our Õ(nk) bound for total communication is optimal for any simultaneous communication protocol (i.e., not only for randomized coresets) for these two problems. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantee in one or two rounds of communication, improving the previous best known round complexity for these problems.