Bibliography
The task of attack attribution, i.e., identifying the entity responsible for an attack, is complicated and usually requires the involvement of an experienced security expert. Prior attempts to automate attack attribution apply various machine learning techniques to features extracted from the malware's code and behavior in order to identify other similar malware whose authors are known. However, the same malware can be reused by multiple actors, and the actor who performed an attack using a given malware may differ from the malware's author. Moreover, information collected during an incident may contain many clues about the identity of the attacker in addition to the malware used. In this paper, we propose a method of attack attribution based on textual analysis of threat intelligence reports, using state-of-the-art algorithms and models from the fields of machine learning and natural language processing (NLP). We have developed a new text representation algorithm which captures the context of the words and requires minimal feature engineering. Our approach relies on a vector space representation of incident reports derived from a small collection of labeled reports and a large corpus of general security literature. Both datasets have been made available to the research community. Experimental results show that the proposed representation can attribute attacks more accurately than the baseline representations. In addition, we show how the proposed approach can be used to identify novel, previously unseen threat actors and to identify similarities between known threat actors.
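A minimal sketch of this kind of text-based attribution pipeline, using an off-the-shelf TF-IDF vector space representation and a linear classifier rather than the paper's proposed context-aware representation; the incident reports and actor labels below are invented for illustration:

```python
# Hedged sketch: attributing incident reports to known threat actors from text.
# This is NOT the paper's proposed representation; it is a simplified baseline
# using TF-IDF vectors and a linear classifier, with hypothetical report data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled incident reports: (text, threat-actor label).
reports = [
    ("spear-phishing lure delivered macro dropper, C2 over DNS tunneling", "APT-A"),
    ("watering-hole compromise, custom backdoor signed with stolen cert",  "APT-B"),
    ("credential dumping followed by lateral movement via PsExec",         "APT-A"),
    ("supply-chain implant in installer, exfiltration over HTTPS",         "APT-B"),
]
texts, labels = zip(*reports)

# Vector-space representation of reports + classifier for attribution.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

new_report = "macro dropper in phishing attachment, DNS-based command and control"
print(model.predict([new_report]))  # predicted threat actor for the new incident
```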
The quantity of Internet of Things (IoT) devices in the marketplace, and their lack of security, is staggering. The interconnectedness of IoT devices has increased the attack surface available to hackers. "White worm" technology has the potential to combat infiltrating malware. Before white worm technology becomes viable, its capabilities must be constrained to specific devices and limited to non-harmful actions. This paper addresses the current problem, international research, and the conflicting interests of individuals, businesses, and governments regarding white worm technology. We propose a new perspective on utilizing white worm technology to protect vulnerable IoT devices while overcoming its challenges.
Vehicles are becoming increasingly connected to the outside world. We can connect our devices to the vehicle's infotainment system, and Internet access is being added as a feature. Therefore, security is a major concern, as the attack surface has become much larger than before. Consequently, attackers are creating malware that can infect vehicles and perform life-threatening activities. For example, malware can compromise vehicle ECUs (electronic control units) and cause unexpected consequences. Hence, ensuring the security of connected vehicle software and networks is extremely important to gain consumer confidence and foster the growth of this emerging market. In this paper, we propose a characterization of vehicle malware and a security architecture to protect vehicles from such malware. The architecture uses multiple computational platforms and virtualization to limit the attack surface. A real-time operating system controls critical vehicle functionalities, and multiple other operating systems handle non-critical functionalities (infotainment, telematics, etc.). The security architecture also describes groups of components for the operating systems to prevent malicious activities and perform policing (monitor, detect, and control). We believe this work will help automakers guard their systems against malware and provide a clear guideline for future research.
Identifying cyberattack vectors on cyber supply chains (CSC) in the event of cyberattacks is very important for mitigating cybercrime effectively on Cyber-Physical Systems (CPS). However, in the cyber security domain, the invisible nature of cybercrime makes it difficult and challenging to predict the threat probability and impact of cyberattacks. Although the cybercrime phenomenon, its risks, and its threats involve a great deal of unpredictability, uncertainty, and fuzziness, cyberattack detection should be practical, methodical, and reasonable to implement. We explore Bayesian Belief Networks (BBN), a knowledge representation technique from artificial intelligence, so that probabilistic inference can be formally applied in the cyber security domain. The aim of this paper is to use Bayesian Belief Networks to detect cyberattacks on the CSC in the CPS domain. We model cyberattacks as a directed acyclic graph (DAG) to determine attack propagation. Further, we use a smart grid case study to demonstrate the applicability of the attack and its cascading effects. The results show that BBN could be adapted to determine uncertainties in the event of cyberattacks in the CSC domain.
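A toy illustration of the BBN idea, assuming a three-node DAG (vendor compromise -> firmware implant -> smart-grid outage) with made-up probabilities, and computing a posterior by plain enumeration rather than with the paper's actual model:

```python
# Hedged sketch: a toy Bayesian Belief Network over a cyber supply chain,
# with illustrative probabilities, showing DAG-style attack propagation
# (vendor compromise -> firmware implant -> smart-grid outage).
from itertools import product

# Prior and conditional probability tables (all values are made up).
P_vendor = {True: 0.10, False: 0.90}                  # P(vendor compromised)
P_implant = {True: {True: 0.70, False: 0.30},         # P(implant | vendor)
             False: {True: 0.02, False: 0.98}}
P_outage = {True: {True: 0.60, False: 0.40},          # P(outage | implant)
            False: {True: 0.05, False: 0.95}}

def joint(vendor, implant, outage):
    return P_vendor[vendor] * P_implant[vendor][implant] * P_outage[implant][outage]

# Posterior P(vendor compromised | grid outage observed), by full enumeration.
num = sum(joint(True, i, True) for i in (True, False))
den = sum(joint(v, i, True) for v, i in product((True, False), repeat=2))
print(f"P(vendor compromised | outage) = {num / den:.3f}")
```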
IoT devices introduce unprecedented threats into home and professional networks. Because they fail to adhere to security best practices, they are broadly exploited by malicious actors to build botnets or steal sensitive information. Their adoption challenges established security standards, as classic security measures are often inappropriate for securing them. This is even more problematic in sensitive environments, where the presence of insecure IoT devices can be exploited to bypass strict security policies. In this paper, we demonstrate an attack against a highly secured network using a Bluetooth smart bulb. This attack allows a malicious actor to take advantage of a smart bulb to exfiltrate data from an air-gapped network.
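A rough sketch of the kind of covert optical channel such an attack could use, assuming a simple 2-bits-per-frame encoding of exfiltrated bytes into bulb brightness levels; the framing and rate are invented, and the real attack drives an actual Bluetooth bulb and reads it back with a light sensor:

```python
# Hedged sketch: encoding exfiltrated bytes as smart-bulb brightness levels.
# The 2-bit-per-frame scheme below is illustrative, not the paper's protocol.
secret = b"key"

def to_brightness_frames(data, levels=4):
    """Split each byte into 2-bit symbols and map them to brightness percentages."""
    frames = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            symbol = (byte >> shift) & 0b11          # 2 bits per frame
            frames.append(int(symbol * 100 / (levels - 1)))
    return frames

def from_brightness_frames(frames, levels=4):
    """Inverse mapping used by the attacker's optical receiver."""
    symbols = [round(f * (levels - 1) / 100) for f in frames]
    data = bytearray()
    for i in range(0, len(symbols), 4):
        byte = 0
        for s in symbols[i:i + 4]:
            byte = (byte << 2) | s
        data.append(byte)
    return bytes(data)

frames = to_brightness_frames(secret)
print(frames)                           # brightness schedule sent to the bulb
print(from_brightness_frames(frames))   # b'key' recovered by the receiver
```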
Securing Cyber-Physical Systems (CPS) against cyberattacks is challenging due to the wide range of possible attacks, from stealthy ones that seek to manipulate, drop, or delay control and measurement signals to malware that infects the host machines controlling the physical process. This has prompted the research community to address this problem by developing targeted methods that protect and check the run-time operation of the CPS. Since protecting signals and checking for errors incur performance penalties, these measures must be performed within the delay bounds dictated by the control loop. Due to the large number of potential checks that can be performed, coupled with varying degrees of effectiveness in detecting a wide range of attacks, strategic assignment of these checks in the control loop is a critical endeavor. To that end, this paper presents a coherent runtime framework, which we coin BLOC, for orchestrating the CPS with check blocks to secure them against cyberattacks. BLOC capitalizes on game-theoretic techniques to enable the defender to find an optimal randomized use of check blocks that secures the CPS while respecting the control-loop constraints. We develop a Stackelberg game model for stateless blocks and a Markov game model for stateful ones, and we derive optimal policies that minimize the worst-case damage from rational adversaries. We validate our models through extensive simulations as well as a real implementation for an HVAC system.
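A simplified sketch of the randomized check-block selection problem, posed here as a zero-sum matrix game and solved as a minimax linear program with SciPy; the damage matrix is hypothetical, and this ignores the Stackelberg/Markov structure and control-loop constraints of BLOC itself:

```python
# Hedged sketch: choosing a randomized check-block assignment as a zero-sum game,
# solved as a minimax LP. The damage matrix is illustrative only.
import numpy as np
from scipy.optimize import linprog

# damage[i][j]: expected damage when the defender runs check block i
# and the attacker launches attack j (made-up numbers).
damage = np.array([
    [1.0, 8.0, 5.0],   # check block 0
    [7.0, 2.0, 6.0],   # check block 1
    [4.0, 6.0, 1.0],   # check block 2
])
n_checks, n_attacks = damage.shape

# Variables: x_0..x_{n-1} (probability of running each check) and v (worst-case damage).
c = np.r_[np.zeros(n_checks), 1.0]                   # minimize v
A_ub = np.c_[damage.T, -np.ones(n_attacks)]          # sum_i damage[i][j]*x_i <= v, for all j
b_ub = np.zeros(n_attacks)
A_eq = np.r_[np.ones(n_checks), 0.0].reshape(1, -1)  # probabilities sum to 1
b_eq = [1.0]
bounds = [(0, None)] * n_checks + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("check probabilities:", np.round(res.x[:n_checks], 3))
print("worst-case expected damage:", round(res.x[-1], 3))
```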
The dependability of Cyber-Physical Systems (CPS) lies solely in the secure and reliable functionality of their backbone, the computing platform. The security of this platform is threatened not only by vulnerabilities in the software peripherals, but also by vulnerabilities in the hardware internals. Such threats can arise from malicious modifications to the integrated circuits (ICs) that make up the computing hardware, which can disable the system, leak information, or produce malfunctions. Such modifications to computing hardware are made possible by the globalization of the IC industry, where a computing chip can be manufactured anywhere in the world. In the complex computing environment of CPS, such modifications can be stealthier and harder to detect. Under such circumstances, the design of these malicious modifications, and eventually their detection, will be tied to the functionality and operation of the CPS. It is therefore imperative to address such threats by incorporating security awareness into computing hardware design in a comprehensive manner that takes the entire system into consideration. In this paper, we present a study of the influence of hardware Trojans on closed-loop systems, which form the basis of CPS, and establish threat models. Using these models, we perform a case study on a critical CPS application, a gas pipeline-based SCADA system. Through this process, we establish a completely virtual simulation platform along with a hardware-in-the-loop simulation platform for implementation and testing.
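A toy closed-loop simulation, with an invented first-order plant and gains, showing how a hardware Trojan that biases sensor readings after a trigger time shifts the controlled variable away from its setpoint; the paper's gas-pipeline SCADA testbed is far more detailed:

```python
# Hedged sketch: a hardware Trojan in the sensor path of a toy control loop.
# Plant model, gains, and trigger time are all illustrative.
import numpy as np

setpoint, kp = 10.0, 0.5                 # pressure setpoint and proportional gain
x = 0.0                                  # plant state (e.g., pipeline pressure)
trojan_trigger, trojan_bias = 50, -3.0   # Trojan activates at step 50, biases readings

history = []
for t in range(100):
    measured = x + (trojan_bias if t >= trojan_trigger else 0.0)  # corrupted sensor
    u = kp * (setpoint - measured)                                # controller output
    x = 0.9 * x + u                                               # first-order plant update
    history.append(x)

print("pre-Trojan steady state :", round(np.mean(history[40:50]), 2))
print("post-Trojan steady state:", round(np.mean(history[90:]), 2))  # overshoots setpoint
```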
The recent malware outbreaks have shown that existing end-point security solutions are not robust enough to keep systems from being compromised. Techniques such as code obfuscation, combined with one or more zero-days, are used by malware developers to evade security systems. Such malware is used for large-scale attacks involving Advanced Persistent Threats (APTs), botnets, cryptojacking, etc. Cryptojacking poses a severe threat to various organizations and individuals. We summarize multiple methods available for the detection of such malware.
This paper presents TrustSign, a novel, trusted automatic malware signature generation method based on high-level deep features transferred from a VGG-19 neural network model pre-trained on the ImageNet dataset. While traditional automatic malware signature generation techniques rely on static or dynamic analysis of the malware's executable, our method overcomes the limitations associated with these techniques by producing signatures based on the presence of the malicious process in volatile memory. Signatures generated using TrustSign accurately represent the real malware behavior during runtime. By leveraging the cloud's virtualization technology, TrustSign analyzes the malicious process in a trusted manner, since the malware is unaware of and cannot interfere with the inspection procedure. Additionally, by removing the dependency on the malware's executable, our method is capable of signing fileless malware. Thus, we focus our research on in-browser cryptojacking attacks, which current antivirus solutions have difficulty detecting. However, TrustSign is not limited to cryptojacking attacks, as our evaluation included various ransomware samples. TrustSign's signature generation process does not require feature engineering or any additional model training, and it is done in a completely unsupervised manner, obviating the need for a human expert. Therefore, our method has the advantage of dramatically reducing signature generation and distribution time. The results of our experimental evaluation demonstrate TrustSign's ability to generate signatures invariant to the process state over time. By using the signatures generated by TrustSign as input to various supervised classifiers, we achieved 99.5% classification accuracy.
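A rough sketch of VGG-19 deep-feature extraction in the spirit of TrustSign, assuming the memory dump is rendered as a 224x224 image and that the penultimate fully-connected activations serve as the signature; the dump bytes below are a random placeholder, and the exact preprocessing and layer choice are assumptions, not the paper's pipeline:

```python
# Hedged sketch: TrustSign-style signature from ImageNet-pretrained VGG-19 features.
# The dump loading, image size, and layer choice are assumptions.
import numpy as np
import torch
from torchvision import models

# Hypothetical volatile-memory dump of a process, rendered as a 224x224 image.
dump = np.random.randint(0, 256, size=224 * 224, dtype=np.uint8)  # placeholder bytes
img = (torch.tensor(dump, dtype=torch.float32)
       .reshape(1, 1, 224, 224).repeat(1, 3, 1, 1) / 255.0)

# Older torchvision versions use vgg19(pretrained=True) instead of the weights enum.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()

with torch.no_grad():
    feats = vgg.avgpool(vgg.features(img)).flatten(1)   # conv features -> 1 x 25088
    for layer in list(vgg.classifier.children())[:-1]:   # stop before final 1000-way layer
        feats = layer(feats)

signature = feats.squeeze(0).numpy()  # 4096-d deep-feature signature for this process
print(signature.shape)
```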
An emerging Internet business is residential proxy (RESIP) as a service, in which a provider utilizes hosts within residential networks (in contrast to those running in a datacenter) to relay its customers' traffic, in an attempt to avoid server-side blocking and detection. Despite the prominent role these services could play in the underground business world, little has been done to understand whether they are indeed involved in cybercrime and how they operate, due to the challenges in identifying their RESIPs, not to mention any in-depth analysis of them. In this paper, we report the first study on RESIPs, which sheds light on the behaviors and the ecosystem of these elusive gray services. Our research employed an infiltration framework, including our own clients for RESIP services and the servers they visited, to detect 6 million RESIP IPs across 230+ countries and 52K+ ISPs. The observed addresses were analyzed, and the hosts behind them were further fingerprinted using a new profiling system. Our effort led to several surprising findings about RESIP services that were previously unknown. Despite the providers' claim that the proxy hosts join willingly, many proxies run on likely compromised hosts, including IoT devices. Through cross-matching the hosts we discovered with labeled PUP (potentially unwanted program) logs provided by a leading IT company, we uncovered various illicit operations performed by RESIP hosts, including illegal promotion, fast fluxing, phishing, malware hosting, and others. We also reverse engineered RESIP services' internal infrastructures and uncovered their potential rebranding and reselling behaviors. Our research takes the first step toward understanding this new Internet service, contributing to effective control of its security risks.
The Dark Web, a conglomerate of services hidden from search engines and regular users, is used by cyber criminals to offer all kinds of illegal services and goods. Multiple Dark Web offerings are highly relevant for the cyber security domain in anticipating and preventing attacks, such as information about zero-day exploits, stolen datasets with login information, or botnets available for hire. In this work, we analyze and discuss the challenges related to information gathering in the Dark Web for cyber security intelligence purposes. To facilitate information collection and the analysis of large amounts of unstructured data, we present BlackWidow, a highly automated modular system that monitors Dark Web services and fuses the collected data in a single analytics framework. BlackWidow relies on a Docker-based microservice architecture which permits the combination of both preexisting and customized machine learning tools. BlackWidow represents all data and the corresponding relationships extracted from posts in a large knowledge graph, which is made available to its security analyst users for search and interactive visual exploration. Using BlackWidow, we conduct a study of seven popular services on the Deep and Dark Web across three different languages with almost 100,000 users. Within less than two days of monitoring time, BlackWidow managed to collect years of relevant information in the areas of cyber security and fraud monitoring. We show that BlackWidow can infer relationships between authors and forums and detect trends for cybersecurity-related topics. Finally, we discuss exemplary case studies surrounding leaked data and preparation for malicious activity.
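A minimal sketch of this knowledge-graph representation, assuming hypothetical (author, forum, topic) records and using networkx to surface authors linked through shared forums or topics; BlackWidow's actual schema and analytics are considerably richer:

```python
# Hedged sketch: a tiny knowledge graph over made-up Dark Web forum records,
# inferring author-to-author relationships through shared forums/topics.
import networkx as nx

posts = [  # (author, forum, topic) -- hypothetical extracted records
    ("alice", "market-A", "stolen credentials"),
    ("bob",   "market-A", "stolen credentials"),
    ("bob",   "forum-B",  "zero-day exploit"),
    ("carol", "forum-B",  "zero-day exploit"),
]

G = nx.Graph()
for author, forum, topic in posts:
    G.add_node(author, kind="author")
    G.add_node(forum, kind="forum")
    G.add_node(topic, kind="topic")
    G.add_edge(author, forum)
    G.add_edge(author, topic)

# Authors indirectly connected via shared forums or topics (distance 2 in the graph).
for a in [n for n, d in G.nodes(data=True) if d["kind"] == "author"]:
    related = {n for n in nx.single_source_shortest_path_length(G, a, cutoff=2)
               if G.nodes[n]["kind"] == "author" and n != a}
    print(a, "->", sorted(related))
```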
Cyber criminals make wide use of the dark web and its illegal functionalities, contributing to a worldwide crisis. More than half of the criminal and terrorist activities conducted through the dark web involve cryptocurrency transactions, the sale of human organs, red rooms, child pornography, arms deals, drug deals, assassins and hackers for hire, hacking software and malware programs, etc. Law enforcement agencies such as the FBI, NSA, Interpol, Mossad, and FSB continuously conduct surveillance programs through the dark web to trace mass criminals and terrorists and to stop crimes and terror activities. This paper is about dark web marketing and surveillance programs. In depth, the research discusses how to access the dark web securely and how law enforcement agencies track down users exhibiting terror-related behaviours and activities. Moreover, the paper discusses dark web sites where users can obtain jihadist services and anonymous markets, including safety precautions.
Machine learning is a major area of artificial intelligence that enables computers to learn without being explicitly programmed. As machine learning is widely used to make decisions automatically, attackers have a strong incentive to manipulate the predictions generated by machine learning models. In this paper, we study the different types of attacks on machine learning models and their countermeasures. Our research found many security threats in various algorithms, such as the K-nearest-neighbors (KNN) classifier, random forest, AdaBoost, support vector machine (SVM), and decision tree; we revisit existing security threats and examine the possible countermeasures during the training and prediction phases of a machine learning model. There are two types of attacks on a machine learning model: causative attacks, which occur during the training phase, and exploratory attacks, which occur during the prediction phase. We also discuss countermeasures for machine learning models, namely data sanitization, algorithm robustness enhancement, and privacy-preserving techniques.
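A small sketch of a causative (label-flipping) attack against a KNN classifier and a simple data-sanitization countermeasure, using a synthetic scikit-learn dataset; the sanitization rule here (drop training points whose label disagrees with their neighbors' majority) is one illustrative choice, not the specific defenses surveyed in the paper:

```python
# Hedged sketch: label-flipping (causative) attack on KNN plus a crude
# data-sanitization defense, on a synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Causative attack: flip 30% of the training labels before the model is trained.
rng = np.random.default_rng(0)
poisoned = y_tr.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

def accuracy(labels):
    return KNeighborsClassifier(n_neighbors=5).fit(X_tr, labels).score(X_te, y_te)

# Data sanitization: drop training points whose label disagrees with the
# majority label of their nearest neighbors in the (poisoned) training set.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, poisoned)
keep = knn.predict(X_tr) == poisoned

print("clean training accuracy    :", round(accuracy(y_tr), 3))
print("poisoned training accuracy :", round(accuracy(poisoned), 3))
sanitized_acc = (KNeighborsClassifier(n_neighbors=5)
                 .fit(X_tr[keep], poisoned[keep]).score(X_te, y_te))
print("after sanitization         :", round(sanitized_acc, 3))
```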