Biblio
This paper presents TrustSign, a novel, trusted automatic malware signature generation method based on high-level deep features transferred from a VGG-19 neural network model pre-trained on the ImageNet dataset. While traditional automatic malware signature generation techniques rely on static or dynamic analysis of the malware's executable, our method overcomes the limitations associated with these techniques by producing signatures based on the presence of the malicious process in volatile memory. Signatures generated using TrustSign closely represent the malware's actual runtime behavior. By leveraging the cloud's virtualization technology, TrustSign analyzes the malicious process in a trusted manner, since the malware is unaware of, and cannot interfere with, the inspection procedure. Additionally, by removing the dependency on the malware's executable, our method is capable of signing fileless malware. We therefore focus our research on in-browser cryptojacking attacks, which current antivirus solutions have difficulty detecting. However, TrustSign is not limited to cryptojacking attacks, as our evaluation included various ransomware samples. TrustSign's signature generation process does not require feature engineering or any additional model training, and it is done in a completely unsupervised manner, obviating the need for a human expert. Therefore, our method has the advantage of dramatically reducing signature generation and distribution time. The results of our experimental evaluation demonstrate TrustSign's ability to generate signatures invariant to the process state over time. By using the signatures generated by TrustSign as input for various supervised classifiers, we achieved 99.5% classification accuracy.
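The abstract describes signatures that are deep-feature vectors later matched by classifiers. A minimal stdlib-only sketch of the matching step, under the assumption that a signature is simply a fixed-length float vector compared by cosine similarity; the 4-dimensional vectors, labels, and threshold below are illustrative stand-ins, not TrustSign's actual pipeline:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_signature(candidate, known_signatures, threshold=0.9):
    # Return the label of the closest known signature if it clears
    # the similarity threshold, else None (unknown process).
    best_label, best_sim = None, threshold
    for label, sig in known_signatures.items():
        sim = cosine_similarity(candidate, sig)
        if sim >= best_sim:
            best_label, best_sim = label, sim
    return best_label

# Toy 4-dimensional "deep feature" signatures, standing in for the
# much longer vectors a VGG-style network would actually produce.
known = {
    "cryptojacker": [0.9, 0.1, 0.8, 0.2],
    "ransomware":   [0.1, 0.9, 0.2, 0.8],
}
probe = [0.88, 0.12, 0.79, 0.21]
print(match_signature(probe, known))  # cryptojacker
```

A nearest-neighbor match like this is only one of the "various supervised classifiers" the abstract mentions; any classifier taking the feature vector as input would fit the same role.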
The Dark Web, a conglomerate of services hidden from search engines and regular users, is used by cyber criminals to offer all kinds of illegal services and goods. Multiple Dark Web offerings are highly relevant for the cyber security domain in anticipating and preventing attacks, such as information about zero-day exploits, stolen datasets with login information, or botnets available for hire. In this work, we analyze and discuss the challenges related to information gathering in the Dark Web for cyber security intelligence purposes. To facilitate information collection and the analysis of large amounts of unstructured data, we present BlackWidow, a highly automated modular system that monitors Dark Web services and fuses the collected data in a single analytics framework. BlackWidow relies on a Docker-based microservice architecture which permits the combination of both preexisting and customized machine learning tools. BlackWidow represents all data extracted from posts, together with the corresponding relationships, in a large knowledge graph, which is made available to its security analyst users for search and interactive visual exploration. Using BlackWidow, we conduct a study of seven popular services on the Deep and Dark Web across three different languages with almost 100,000 users. Within less than two days of monitoring time, BlackWidow managed to collect years of relevant information in the areas of cyber security and fraud monitoring. We show that BlackWidow can infer relationships between authors and forums and detect trends for cybersecurity-related topics. Finally, we discuss exemplary case studies surrounding leaked data and preparation for malicious activity.
With the growing scale of Cyber-Physical Systems (CPSs), it is challenging to maintain their stability under all operating conditions. How to reduce downtime and locate failures becomes a core issue in system design. In this paper, we employ a hierarchical contract-based resilience framework to guarantee the stability of CPSs. In this framework, we use Assume-Guarantee (A-G) contracts to monitor the non-functional properties of individual components (e.g., power and latency), and hierarchically compose such contracts to deduce information about faults at the system level. The hierarchical contracts enable rapid fault detection in large-scale CPSs. However, due to the vast number of components in a CPS, manually designing numerous contracts and the hierarchy becomes challenging. To address this issue, we propose a technique to automatically decompose a root contract into multiple lower-level contracts depending on I/O dependencies between components. We then formulate a multi-objective optimization problem to search for the optimal parameters of each lower-level contract. This enables automatic contract refinement that takes the communication overhead between components into consideration. Finally, we use a case study from the manufacturing domain to experimentally demonstrate the benefits of the proposed framework.
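The hierarchical A-G monitoring idea can be sketched in a few lines: a contract pairs an assumption with a guarantee over an observation, and composing contracts lets a violation be localized to the lowest offending level. The contract names, thresholds, and observation fields below are invented for illustration and are not taken from the paper's case study:

```python
class Contract:
    """A minimal Assume-Guarantee (A-G) contract: whenever the
    assumption holds on an observation, the guarantee must hold too."""
    def __init__(self, name, assume, guarantee, children=()):
        self.name = name
        self.assume = assume
        self.guarantee = guarantee
        self.children = list(children)

    def violations(self, obs):
        # Collect names of contracts whose guarantee fails while their
        # assumption is satisfied; recursing into sub-contracts
        # localizes the fault at the lowest level of the hierarchy.
        faults = []
        for child in self.children:
            faults.extend(child.violations(obs))
        if self.assume(obs) and not self.guarantee(obs):
            faults.append(self.name)
        return faults

# Leaf contracts on non-functional properties of two components.
power = Contract("power",
                 assume=lambda o: o["load"] <= 1.0,
                 guarantee=lambda o: o["power_w"] <= 5.0)
latency = Contract("latency",
                   assume=lambda o: o["load"] <= 1.0,
                   guarantee=lambda o: o["latency_ms"] <= 20.0)
# Root contract composed from the leaves.
system = Contract("system",
                  assume=lambda o: True,
                  guarantee=lambda o: o["power_w"] <= 5.0 and o["latency_ms"] <= 20.0,
                  children=[power, latency])

obs = {"load": 0.8, "power_w": 4.2, "latency_ms": 35.0}
print(system.violations(obs))  # ['latency', 'system'] -> fault localized to latency
```

The paper's contribution goes further, decomposing a root contract automatically and tuning the leaf thresholds via multi-objective optimization; this sketch only shows the monitoring and composition mechanics.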
The term system of systems (SoS) refers to a setup in which a number of independent systems collaborate to create a value that none of them can achieve independently. The complexity of an SoS is higher than that of its constituent systems, which brings challenges in analyzing its critical properties, such as security. An SoS can be seen as a set of connected systems or services that needs to be adequately protected. Communication between such systems or services can be considered a service itself, and it is paramount for the establishment of an SoS as it enables connections, dependencies, and cooperation. Given that reliable and predictable communication contributes directly to the correct functioning of an SoS, communication as a service is one of the main assets to consider. Protecting it from malicious adversaries should be one of the highest priorities in SoS design and operation. This study aims to investigate the attack propagation problem in terms of service guarantees through decomposition into sub-services enriched with preconditions and postconditions at the service level. Such analysis is required as a prerequisite for an efficient SoS risk assessment at the design stage of the SoS development life cycle, to protect it from high-impact attacks capable of affecting the safety of systems and of the humans using them.
ASA systems (firewalls, IDSs, IPSs) are likely to become communication bottlenecks as network bandwidths grow. To alleviate this issue, we propose an application-aware mechanism based on Deep Packet Inspection (DPI) that bypasses selected traffic around firewalls. Internet video sharing services have gained importance and expanded their share of the multimedia market. Internet video must meet strict quality-of-service (QoS) criteria to deliver a level of quality viable and comparable to broadcast television. However, since Internet video relies on packet communication, it is subject to delays, transmission failures, data loss, and bandwidth restrictions that may have a catastrophic effect on multimedia quality.
In the past few years, the collection and transmission of visual information has increased significantly across various applications. Smart vehicles, service robotic platforms, and surveillance cameras for smart city applications collect large amounts of visual data. Preserving the privacy of the people appearing in this data is an important factor in the storage, processing, sharing, and transmission of visual data across the Internet of Robotic Things (IoRT). In this paper, a novel anonymisation method for information security and privacy preservation of visual data in the sharing layer of the Web of Robotic Things (WoRT) is proposed. The proposed framework uses deep-neural-network-based semantic segmentation to preserve privacy in video data based on the access level of the applications and users. The data is anonymised for applications with a lower access level, while applications with a higher legal access level can analyze and annotate the complete data. The experimental results show that the proposed method, while giving the required access to authorities for legal smart city surveillance applications, is capable of preserving the privacy of the people appearing in the data.
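The access-level gating described above reduces to a simple rule once a segmentation mask is available: blank out person pixels unless the requester's level is high enough. A stdlib-only sketch with a toy 2x2 frame; the mask values, pixel encoding, and `required_level` threshold are assumptions for illustration (the paper itself obtains the mask from a deep semantic segmentation network):

```python
def anonymise(frame, mask, access_level, required_level=2):
    """Blank out pixels labelled as 'person' in the segmentation mask
    unless the requesting application has a sufficient access level.
    frame: 2D list of pixel values; mask: 2D list where 1 = person."""
    if access_level >= required_level:
        return frame  # legally privileged applications see full data
    return [
        [0 if mask[r][c] == 1 else frame[r][c]
         for c in range(len(frame[0]))]
        for r in range(len(frame))
    ]

frame = [[10, 20], [30, 40]]
mask  = [[1, 0], [0, 1]]   # toy person-segmentation mask
print(anonymise(frame, mask, access_level=0))  # [[0, 20], [30, 0]]
print(anonymise(frame, mask, access_level=2))  # [[10, 20], [30, 40]]
```

In practice one would blur or pixelate the masked region rather than zero it, and apply the rule per video frame; the zeroing here just keeps the example short.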
The Internet of Things (IoT) is connecting the world in a way humanity has never seen before. With applications in healthcare, agriculture, transportation, and more, IoT devices help in bridging the gap between the physical and the virtual worlds. These devices usually carry sensitive data which requires security and protection in transit and at rest. However, their limited power and energy budgets make it harder and more challenging to implement security protocols, especially Public-Key Cryptosystems (PKC). In this paper, we present a hardware/software co-design for an Elliptic-Curve Cryptography (ECC) PKC suitable for lightweight devices. We present the implementation results for our design on an edge node used for indoor localization in a healthcare facility.
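The core primitive an ECC co-design accelerates is scalar point multiplication. A toy software sketch over a deliberately tiny curve (y² = x³ + 2x + 3 over GF(97), far too small for real security, chosen only so the arithmetic is checkable by hand); the paper's actual curve, field, and hardware partitioning are not reproduced here:

```python
# Toy short-Weierstrass curve y^2 = x^3 + 2x + 3 over GF(97).
P_MOD, A = 97, 2

def point_add(p1, p2):
    # Group law on the curve; None represents the point at infinity.
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p1 == p2:  # doubling uses the tangent slope
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:         # addition uses the chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, point):
    # Double-and-add: the building block of ECC key generation, and
    # the operation a hardware accelerator would take over.
    result, addend = None, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

G = (0, 10)  # on the curve: 10^2 = 100 = 0^3 + 2*0 + 3 (mod 97)
pub = scalar_mult(5, G)  # public key for toy private key k = 5
print(pub)  # (88, 56)
```

In a hardware/software co-design, the field arithmetic (the modular multiplications and inversions above) is typically what moves into hardware, while key management stays in software.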
In this work, we present a new hybrid cryptography method based on two hard problems: (1) the discrete logarithm problem on an elliptic curve defined over a finite local ring, and (2) the closest vector problem in a lattice together with the conjugate problem on square matrices. First, we perform a Diffie-Hellman-style key exchange. The encryption of a message is then done with a bad basis of a lattice.
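To illustrate the key-exchange step only, here is the classic Diffie-Hellman pattern over the multiplicative group mod a prime, standing in for the paper's elliptic-curve variant over a finite local ring (a deliberate substitution, since the ring construction is not given in the abstract); the modulus and base are toy-sized, hypothetical parameters:

```python
import secrets

# Toy Diffie-Hellman parameters: small prime modulus and public base.
# Real deployments use much larger, standardized parameters.
P, G = 2087, 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1   # random private exponent
    return priv, pow(G, priv, P)          # (private, public = G^priv mod P)

a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# Each party combines its own private key with the other's public key;
# both arrive at G^(a*b) mod P.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
print(shared_a == shared_b)  # True
```

In the paper's scheme the exponentiation is replaced by scalar multiplication on the elliptic curve, and the derived secret feeds the lattice-based encryption step; neither of those components is sketched here.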
Security challenges present in Machine-to-Machine Communication (M2M-C) and the big data paradigm are fundamentally different from conventional network security challenges. In M2M-C paradigms, "Trust" is a vital constituent of security solutions that address security threats, and for such solutions it is important to quantify and evaluate the amount of trust in the information and its source. In this work, we focus on a Machine Learning (ML) Based Trust (MLBT) evaluation model for detecting malicious activities in a vehicular-based M2M-C (VBM2M-C) network. In particular, we present an Entropy Based Feature Engineering (EBFE) coupled Extreme Gradient Boosting (XGBoost) model which is optimized with the Binary Particle Swarm Optimization technique. Based on three performance metrics, i.e., Accuracy Rate (AR), True Positive Rate (TPR), and False Positive Rate (FPR), the effectiveness of the proposed method is evaluated in comparison to state-of-the-art ensemble models, such as XGBoost and Random Forest. The simulation results demonstrate the superiority of the proposed model, with approximately 10% improvement in accuracy, TPR, and FPR at an attacker density of 30%, compared with the state-of-the-art algorithms.
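The entropy-based feature engineering the abstract names typically reduces a node's behavior to the Shannon entropy of some observed sequence. A stdlib-only sketch; the message-type labels are invented examples, and the paper's actual feature set is not specified in the abstract:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Entropy-based feature: the Shannon entropy (in bits) of a
    sequence of observed values, e.g. message types reported by a
    node in an M2M-C network."""
    counts = Counter(values)
    n = len(values)
    # Equivalent to -sum(p * log2 p); written with log2(n/c) so the
    # single-symbol case yields exactly 0.0.
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# A well-behaved node varies its messages; a flooding attacker repeats one.
benign   = ["beacon", "cam", "denm", "beacon", "cam", "denm", "cam", "beacon"]
attacker = ["beacon"] * 8
print(round(shannon_entropy(benign), 3))   # 1.561
print(shannon_entropy(attacker))           # 0.0 -> suspiciously uniform
```

Features like this would then be fed to the XGBoost classifier, with BPSO searching over its hyperparameters; those two stages are standard library-backed steps not reproduced here.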
Memory corruption vulnerabilities have been around for decades and rank among the most prevalent vulnerabilities in embedded systems. Yet this constrained environment poses unique design and implementation challenges that significantly complicate the adoption of common hardening techniques. Combined with the irregular and involved nature of embedded patch management, this results in prolonged vulnerability exposure windows and vulnerabilities that are relatively easy to exploit. Considering the sensitive and critical nature of many embedded systems, this situation merits significant improvement. In this work, we present the first quantitative study of exploit mitigation adoption in 42 embedded operating systems, showing the embedded world to significantly lag behind the general-purpose world. To improve the security of deeply embedded systems, we subsequently present μArmor, an approach to address some of the key gaps identified in our quantitative analysis. μArmor raises the bar for exploitation of embedded memory corruption vulnerabilities, while being adoptable in the short term without incurring prohibitive extra performance or storage costs.
Techniques applied in response to detrimental digital incidents vary in many respects according to their attributes. Models of techniques exist in current research but are typically restricted to some subset with regard to the discipline of the incident. An enormous collection of techniques is actually available for use, yet there is no single model representing all of them. There is no current categorisation of digital forensic reactive techniques that classifies techniques according to the attribute of function, nor has there been an attempt to classify techniques in a manner that goes beyond a subset. In this paper, an ontology that depicts digital forensic reactive techniques classified by function is presented. The ontology itself contains additional information for each technique, useful for merging into a cognate system where the relationship between techniques and other facets of the digital investigative process can be defined. A number of existing techniques were collected and described according to their function - a verb. The function then guided the placement and classification of the techniques in the ontology according to the ontology development process. The ontology contributes to a knowledge base for digital forensics - essentially useful as a resource for the various people operating in the field. The benefit of this is that the information can be queried, assumptions can be made explicit, and there is a one-stop shop for digital forensic reactive techniques with their place in the investigation detailed.
With the rapidly increasing number of Internet of Things (IoT) devices and their extensive integration into people's daily lives, the security of those devices is of primary importance. Nonetheless, many IoT devices suffer from the absence, or the poor application, of security concepts, which leads to severe vulnerabilities in those devices. To achieve early detection of potential vulnerabilities, network scanner tools are frequently used. However, most of those tools are highly specialized; thus, multiple tools and a meaningful correlation of their results are required to obtain an adequate listing of identified network vulnerabilities. To simplify this process, we propose a modular framework for automated network reconnaissance and vulnerability indication in IP-based networks. It allows integrating a diverse set of tools as either scanning tools or analysis tools. Moreover, the framework enables result aggregation across different modules and allows information sharing between modules, facilitating the development of advanced analysis modules. Additionally, intermediate scanning and analysis data is stored, enabling a historical view of derived information and also allowing users to retrace decision-making processes. We show the framework's modular capabilities by implementing one scanner module and three analysis modules. The automated process is then evaluated using an exemplary scenario with common IP-based IoT components.
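The scanner/analysis split with a shared result store can be sketched compactly. The module names, the single-host scan result, and the Telnet heuristic below are hypothetical stand-ins for the framework's real modules, which the abstract does not enumerate:

```python
class Framework:
    """Minimal sketch of a modular scan-and-analyse pipeline: scanner
    modules write findings into a shared store, analysis modules read
    it, so later modules can build on earlier results."""
    def __init__(self):
        self.store = {}          # shared intermediate results
        self.scanners, self.analysers = [], []

    def register(self, module, kind):
        (self.scanners if kind == "scanner" else self.analysers).append(module)

    def run(self):
        for scan in self.scanners:
            self.store.update(scan())       # aggregate scanner output
        findings = []
        for analyse in self.analysers:
            findings.extend(analyse(self.store))
        return findings

def port_scanner():
    # Hypothetical scanner output; a real module would probe the network.
    return {"192.0.2.10": {"open_ports": [23, 80]}}

def telnet_indicator(store):
    # Flag hosts exposing Telnet, a common weakness in IoT devices.
    return [host for host, info in store.items() if 23 in info["open_ports"]]

fw = Framework()
fw.register(port_scanner, "scanner")
fw.register(telnet_indicator, "analysis")
print(fw.run())  # ['192.0.2.10']
```

Persisting `store` between runs would give the historical view the abstract describes; here it lives only in memory to keep the sketch short.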