Biblio
Social Internet of Things (SIoT) is an extension of the Internet of Things (IoT) that converges with social networking concepts to create social networks of interconnected smart objects. This convergence enriches both paradigms, resulting in new ecosystems. While IoT follows two interaction paradigms, human-to-human (H2H) and thing-to-thing (T2T), SIoT adds human-to-thing (H2T) interactions. SIoT enables smart “social objects” that intelligently mimic the social behavior of humans in daily life. These social objects are equipped with social functionalities capable of discovering other social objects in their surroundings and establishing social relationships. They crawl the social network of objects in search of services and information of interest. The notion of trust and trustworthiness in the social communities formed in SIoT is still new and in an early stage of investigation. In this paper, our contributions are threefold. First, we present the fundamentals of SIoT and trust concepts in SIoT, clarifying the similarities and differences between IoT and SIoT. Second, we categorize the trust management solutions proposed for SIoT in the literature over the last six years and provide a comprehensive review. We then compare the state-of-the-art trust management schemes devised for SIoT through a comparative analysis in terms of the trust management process. Third, we identify and discuss the challenges and requirements of the emerging new wave of SIoT, and highlight the challenges in developing trust and evaluating trustworthiness among interacting social objects.
Computer networks and rapid advances in information technology form a critical infrastructure for the network transactions of business entities. Information exchange and data access through such infrastructure are scrutinized by adversaries for vulnerabilities that lead to cyber-attacks. This paper presents an agent-based system model to conceptualize and extract the explicit and latent structure of complex enterprise systems, as well as human interactions within the system, to determine common vulnerabilities of the entity. The model captures emergent behavior resulting from the interactions of multiple network agents, including workstations; regular, administrator, and third-party users; external and internal attacks; defense mechanisms for the network setting; and many other parameters. A risk-based approach to modelling the cybersecurity of a business entity is used to derive the rate of attacks. A neural network model generalizes the type of attack based on network traffic features, allowing dynamic state changes. Rules of engagement that generate self-organizing behavior are leveraged to appoint a defense mechanism suited to the attack state of the model. The effectiveness of the model is depicted by a time-state chart showing the number of affected assets for the different types of attacks triggered by the entity risk, and the time it takes to revert to the normal state. The model also associates a relevant cost with each incident occurrence, motivating the need for enhanced security solutions.
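To make the risk-driven attack/recovery dynamics concrete, the following is a minimal agent-based sketch in Python. All names (Workstation, the risk-derived attack probability, recovery_time) are illustrative assumptions, not the paper's implementation; the paper's neural network classifier and rules of engagement are not reproduced here.

    # Minimal agent-based sketch: healthy nodes are attacked at a
    # risk-derived rate, and compromised nodes revert after a delay.
    import random

    class Workstation:
        def __init__(self):
            self.compromised = False
            self.recover_at = None

    def simulate(n_nodes=50, risk=0.02, recovery_time=5, steps=100, seed=1):
        """Each step, every healthy node is attacked with probability
        `risk`; compromised nodes revert to normal after `recovery_time`
        steps, mimicking the model's time-state chart."""
        random.seed(seed)
        nodes = [Workstation() for _ in range(n_nodes)]
        affected_over_time = []
        for t in range(steps):
            for node in nodes:
                if node.compromised and t >= node.recover_at:
                    node.compromised = False          # defense restores normal state
                elif not node.compromised and random.random() < risk:
                    node.compromised = True           # successful attack
                    node.recover_at = t + recovery_time
            affected_over_time.append(sum(n.compromised for n in nodes))
        return affected_over_time

    print(simulate()[:10])  # number of affected assets per time step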
Gartner, a large research and advisory company, anticipates that by 2024, 80% of security operations centers (SOCs) will use machine learning (ML) based solutions to enhance their operations (https://www.ciodive.com/news/how-data-science-tools-can-lighten-the-load-for-cybersecurity-teams/572209/). In light of such widespread adoption, it is vital for the research community to identify and address usability concerns. This work presents the results of the first in situ usability assessment of ML-based tools. With the support of the US Navy, we leveraged the National Cyber Range (a large, air-gapped cyber testbed equipped with state-of-the-art network and user emulation capabilities) to study six US Naval SOC analysts' usage of two tools. Our analysis identified several serious usability issues, including multiple violations of established usability heuristics for user interface design. We also discovered that analysts lacked a clear mental model of how these tools generate scores, resulting in mistrust and/or misuse of the tools themselves. Surprisingly, we found no correlation between analysts' level of education or years of experience and their performance with either tool, suggesting that other factors, such as prior background knowledge or personality, play a significant role in ML-based tool usage. Our findings demonstrate that ML-based security tool vendors must put a renewed focus on working with analysts, both experienced and inexperienced, to ensure that their systems are usable and useful in real-world security operations settings.
Adversary emulation is an offensive exercise that provides a comprehensive assessment of a system’s resilience against cyber attacks. However, adversary emulation is typically a manual process, making it costly and hard to deploy in cyber-physical systems (CPS) with complex dynamics, vulnerabilities, and operational uncertainties. In this paper, we develop an automated, domain-aware approach to adversary emulation for CPS. We formulate a Markov Decision Process (MDP) model to determine an optimal attack sequence over a hybrid attack graph with cyber (discrete) and physical (continuous) components and related physical dynamics. We apply model-based and model-free reinforcement learning (RL) methods to solve the discrete-continuous MDP in a tractable fashion. As a baseline, we also develop a greedy attack algorithm and compare it with the RL procedures. We summarize our findings through a numerical study on sensor deception attacks in buildings to compare the performance and solution quality of the proposed algorithms.
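The greedy baseline mentioned above can be illustrated with a toy hybrid attack graph. The sketch below is a hedged Python illustration: the action names, costs, impacts, and prerequisites are hypothetical, and the paper's MDP/RL formulation and physical dynamics are not reproduced.

    # Greedy attack-sequence selection over a toy hybrid attack graph.
    # action -> (cost, expected_impact, prerequisites)
    actions = {
        "phish_operator": (1.0, 2.0, set()),
        "pivot_to_plc":   (2.0, 3.0, {"phish_operator"}),
        "spoof_sensor":   (1.5, 5.0, {"pivot_to_plc"}),  # physical-layer effect
    }

    def greedy_attack(actions, budget):
        """Repeatedly take the feasible action with the best impact/cost ratio."""
        done, sequence = set(), []
        while True:
            feasible = [a for a, (c, i, pre) in actions.items()
                        if a not in done and pre <= done and c <= budget]
            if not feasible:
                return sequence
            best = max(feasible, key=lambda a: actions[a][1] / actions[a][0])
            cost, impact, _ = actions[best]
            budget -= cost
            done.add(best)
            sequence.append(best)

    print(greedy_attack(actions, budget=5.0))
    # -> ['phish_operator', 'pivot_to_plc', 'spoof_sensor']

An RL agent solving the corresponding MDP would instead learn a policy over states (sets of achieved footholds plus continuous physical state), which can outperform this myopic heuristic when early low-impact actions unlock high-impact ones.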
The current trend among IoT users is toward consuming services and data externally, since voluminous processing demands resourceful machines. Rather than relying on a cloud with poor connectivity or limited bandwidth, IoT users prefer cloudlet-based fog computing. However, the choice of cloudlet depends solely on its trust and reliability. In practice, even though a cloudlet possesses a required trusted platform module (TPM), we argue that the presence of a TPM is not enough to make the cloudlet trustworthy, as the TPM supports only the primitive security of the bootstrap. Besides uncertainty in security, other uncertain network conditions (e.g. network bandwidth, latency, and the expected time to complete a service request for cloud-based services) may also prevail for the cloudlets. Therefore, to evaluate the trust value of multiple cloudlets under uncertainty, this paper proposes an empirical process for trust evaluation, followed by a measure of the trust-based reputation of cloudlets using computational intelligence techniques, namely fuzzy logic and ant colony optimization (ACO). In the process, fuzzy logic-based inference and membership evaluation of trust are presented. In addition, ACO and its pheromone communication across different colonies are modeled with multiple cloudlets. Finally, a measure of affinity, or popular trust, and the reputation of the cloudlets is also proposed. The computationally intelligent approaches are investigated in terms of performance in the context of applications spanning multiple cloudlets. The contribution is thus directed toward building a trusted cloudlet-based fog platform.
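The two computational-intelligence ingredients named above can be sketched briefly. The Python below shows a generic triangular fuzzy membership for a cloudlet's trust score and a basic ACO-style pheromone update; the membership shapes, evaporation rate, and deposit rule are illustrative assumptions, not the paper's calibrated model.

    # Fuzzy trust memberships and a toy pheromone-based reputation update.
    def triangular(x, a, b, c):
        """Triangular fuzzy membership with feet a, c and peak b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def trust_memberships(score):
        # degrees of membership in low/medium/high trust for score in [0, 1]
        return {"low":    triangular(score, -0.01, 0.0, 0.5),
                "medium": triangular(score, 0.0, 0.5, 1.0),
                "high":   triangular(score, 0.5, 1.0, 1.01)}

    def update_pheromone(tau, visits, rho=0.1, q=1.0):
        """Evaporate, then deposit pheromone on cloudlets the ants selected;
        tau approximates the colony-wide reputation of each cloudlet."""
        return {c: (1 - rho) * tau[c] + q * visits.get(c, 0) for c in tau}

    tau = {"cloudlet_A": 1.0, "cloudlet_B": 1.0}
    tau = update_pheromone(tau, visits={"cloudlet_A": 3})  # A served 3 ants
    print(trust_memberships(0.8), tau)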
Superconducting technology is being seriously explored for certain applications. We propose a new clean-slate method to derive fault models from large numbers of simulation results. For this technology, our method identifies completely new fault models – overflow, pulse-escape, and pattern-sensitive – in addition to the well-known stuck-at faults.
In recent years, cyberattack techniques have grown more sophisticated by the day. Even when defense measures are taken, it is difficult to prevent cyberattacks completely; defenders are largely confined to reacting to cyber criminals. To address this situation, it is necessary to predict cyberattacks and take appropriate measures in advance, and the use of intelligence is key to making this possible. In general, many malicious hackers share information and tools that can be used for attacks on the dark web or in specific communities. We therefore assume that a great deal of intelligence, including this illegal content, exists in cyberspace. Such threat intelligence is expected to enable detecting attacks in advance and developing active defenses. However, this intelligence is currently extracted manually. To do this more efficiently, we apply machine learning to forum posts on the dark web, with the aim of extracting posts that contain threat information. In this way, we expect that threat information in cyberspace can be detected in a timely manner, so that optimal preventive measures can be taken in advance.
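A minimal version of this kind of supervised post-filtering pipeline is sketched below with scikit-learn. The tiny labeled sample is fabricated for illustration; the paper's actual features, model, and crawled corpus are not reproduced.

    # Toy threat-post classifier: TF-IDF features + logistic regression.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    posts = ["selling fresh exploit kit for bank portals",
             "zero-day RCE in popular CMS, PoC included",
             "looking for gaming buddies this weekend",
             "anyone recommend a good pizza place"]
    labels = [1, 1, 0, 0]   # 1 = contains threat information, 0 = benign

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(posts, labels)

    print(model.predict(["fresh dump of RCE exploits for sale"]))  # likely [1]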
In the last couple of years, the move to cyberspace has provided a more fertile environment for ransomware criminals than ever before. Notably, since the introduction of WannaCry, numerous ransomware detection solutions have been proposed. However, ransomware incident reports show that most organizations impacted by ransomware were running state-of-the-art ransomware detection tools. Hence, an alternative solution is urgently required, as the existing detection models are not sufficient to spot emerging ransomware threats. With this motivation, our work proposes "DeepGuard," a novel concept of modeling user behavior for ransomware detection. The main idea is to log the file-interaction pattern of typical user activity and pass it through a deep generative autoencoder architecture to recreate the input. With sufficient training data, the model learns to reconstruct typical user activity (the input) with minimal reconstruction error. Hence, by applying the three-sigma limit rule to the model's output, DeepGuard can distinguish ransomware activity from user activity. Experimental results show that DeepGuard effectively detects a variety of ransomware classes with minimal false-positive rates. Overall, modeling attack detection on user behavior gives the proposed strategy deep visibility into various ransomware families.
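The core mechanism (autoencoder reconstruction error plus the three-sigma limit rule) can be sketched compactly in PyTorch. The feature encoding, dimensions, and architecture below are stand-in assumptions, not DeepGuard's exact design.

    # Train an autoencoder on benign activity features; flag inputs whose
    # reconstruction error exceeds mean + 3 * std (three-sigma rule).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    normal = torch.rand(500, 8)                 # stand-in user-activity features

    auto = nn.Sequential(nn.Linear(8, 3), nn.ReLU(), nn.Linear(3, 8))
    opt = torch.optim.Adam(auto.parameters(), lr=1e-2)
    for _ in range(300):                        # learn to reconstruct benign input
        opt.zero_grad()
        loss = nn.functional.mse_loss(auto(normal), normal)
        loss.backward()
        opt.step()

    with torch.no_grad():
        err = ((auto(normal) - normal) ** 2).mean(dim=1)
    threshold = err.mean() + 3 * err.std()      # three-sigma limit

    def is_ransomware(batch):
        with torch.no_grad():
            e = ((auto(batch) - batch) ** 2).mean(dim=1)
        return e > threshold                    # True -> anomalous activity

    print(is_ransomware(torch.ones(2, 8) * 5))  # out-of-distribution -> flagged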
Machine-to-Machine (M2M) communication is an essential subset of the Internet of Things (IoT). Secure access to communication network systems by M2M devices requires the support of a secure and efficient anonymous authentication protocol. The Direct Anonymous Attestation (DAA) scheme in trusted computing is a verified security protocol. However, the existing defense system uses a static architecture, which is not effective against continuous probing and attack by an adversary; the “mimic defense” strategy, by contrast, is characterized by active defense. Therefore, in this paper, we propose a Mimic-DAA scheme that incorporates mimic defense to establish an active defense scheme. Multiple heterogeneous, redundant executors form a DAA verifier, and their scheduling is optimized so that the behavior of the DAA verifier cannot be predicted by analysis. The proposed Mimic-DAA thus forms a security mechanism for active defense, effectively safeguarding the unpredictability, anonymity, and system-wide security of M2M communication networks. In comparison with existing DAA schemes, the proposed scheme improves security while maintaining comparable computational complexity.
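The structure of such a scheme (random scheduling over heterogeneous, redundant executors with a vote on the outcome) can be illustrated as follows. The verifier functions are placeholders, not a real DAA implementation, and the scheduling policy is a simplification of what the paper optimizes.

    # Toy mimic-defense verifier: randomly scheduled redundant executors
    # check the same attestation, and the majority vote decides.
    import random
    from collections import Counter

    def verifier_a(msg): return hash(("a", msg)) % 97 != 0   # three toy,
    def verifier_b(msg): return hash(("b", msg)) % 97 != 0   # heterogeneous
    def verifier_c(msg): return hash(("c", msg)) % 97 != 0   # implementations

    POOL = [verifier_a, verifier_b, verifier_c]

    def mimic_verify(msg, k=3):
        """Randomly schedule k executors so an attacker cannot predict which
        implementations run, then take a majority vote on the outcome."""
        chosen = random.sample(POOL, k)
        votes = Counter(v(msg) for v in chosen)
        return votes.most_common(1)[0][0]

    print(mimic_verify("attestation-blob"))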
Statistical structure learning (SSL)-based approaches have been employed in recent years to detect different types of anomalies in a variety of cyber-physical systems (CPS). Although these approaches outperform conventional methods in the literature, their computational complexity, need for a large number of measurements, and centralized computations have limited their applicability to large-scale networks. In this work, we propose a distributed, multi-agent maximum likelihood (ML) approach to detect anomalies in smart grid applications, aiming to reduce computational complexity as well as preserve data privacy among different players in the network. The proposed multi-agent detector breaks the original ML problem into several local (smaller) ML optimization problems coupled by the alternating direction method of multipliers (ADMM). These local ML problems are then solved by their corresponding agents, eventually resulting in the construction of the global solution (the network's information matrix). The numerical results obtained from two IEEE test (power transmission) systems confirm the accuracy and efficiency of the proposed approach for anomaly detection.
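The ADMM coupling can be written down concretely. A standard consensus-ADMM template is shown below as a hedged reconstruction (the paper's exact decomposition, regularizers, and penalty parameter may differ): each agent i maximizes a local log-likelihood \ell_i over its own measurements while agreeing with a global information-matrix estimate Z,

    \Theta_i^{k+1} = \operatorname*{arg\,min}_{\Theta_i}
        \Big( -\ell_i(\Theta_i)
              + \tfrac{\rho}{2}\,\lVert \Theta_i - Z^k + U_i^k \rVert_F^2 \Big),
    Z^{k+1} = \frac{1}{N} \sum_{i=1}^{N} \big( \Theta_i^{k+1} + U_i^k \big),
    U_i^{k+1} = U_i^k + \Theta_i^{k+1} - Z^{k+1},

where \Theta_i is agent i's local estimate, U_i the scaled dual variable, and \rho > 0 the penalty parameter. Only the local estimates (not raw measurements) are exchanged, which is what preserves data privacy among the agents.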
In this research, we examine and develop an expert system that automates crime category classification and threat level assessment, using information collected by crawling the dark web. We constructed a bag of words from 250 posts on the dark web and developed an expert system that takes term frequencies as input and classifies posts into six criminal categories (drugs, stolen credit cards, passwords, counterfeit products, child pornography, and others) and three threat levels (high, middle, low). Contrary to prior expectations, our simple and explainable expert system performs competitively with other existing systems. In short, our experiment with 1,500 posts on the dark web shows a 76.4% recall rate for six-way criminal category classification and an 83% recall rate for three-way threat level discrimination on 100 randomly sampled posts.
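A minimal term-frequency expert system in this spirit is sketched below. The keyword lists and the hit-count threshold are invented for illustration; the paper's 250-post vocabulary and its actual rules are not reproduced.

    # Toy expert system: score categories by counts of indicative terms,
    # then map the best score to a threat level by a simple rule.
    CATEGORY_TERMS = {
        "drugs":       {"mdma", "gram", "shipped", "stealth"},
        "credit_card": {"cvv", "dumps", "fullz", "track2"},
        "passwords":   {"combo", "credentials", "leak", "login"},
    }

    def classify(post, threshold=2):
        words = post.lower().split()
        scores = {cat: sum(words.count(t) for t in terms)
                  for cat, terms in CATEGORY_TERMS.items()}
        best = max(scores, key=scores.get)
        level = "high" if scores[best] >= threshold else "low"
        return (best if scores[best] > 0 else "others", level, scores)

    print(classify("fresh cvv and track2 dumps for sale"))
    # -> ('credit_card', 'high', ...)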
Cyber threat intelligence (CTI) necessitates automated monitoring of dark web platforms (e.g., Dark Net Markets and carding shops) on a large scale. While there are existing methods for collecting data from the surface web, large-scale dark web data collection is commonly hindered by anti-crawling measures, of which text-based CAPTCHA is the most prohibitive. Text-based CAPTCHA requires the user to recognize a combination of hard-to-read characters. Dark web CAPTCHA patterns are intentionally designed with additional background noise and variable character length to prevent automated CAPTCHA breaking. Existing CAPTCHA breaking methods cannot remedy these challenges and are therefore not applicable to the dark web. In this study, we propose a novel framework for breaking text-based CAPTCHA on the dark web. The proposed framework utilizes a Generative Adversarial Network (GAN) to counteract dark web-specific background noise and leverages an enhanced character segmentation algorithm. Our proposed method was evaluated on both benchmark and dark web CAPTCHA testbeds, significantly outperforming state-of-the-art baseline methods on all datasets and achieving a success rate above 92.08% on the dark web testbeds. Our research enables the CTI community to develop advanced capabilities for large-scale dark web monitoring.
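One classical ingredient of such pipelines, character segmentation, can be sketched with a vertical-projection baseline. This is a generic illustration only; it is not the paper's enhanced segmentation algorithm, and the GAN-based denoising stage is not shown.

    # Segment a binarized CAPTCHA by splitting at runs of empty columns.
    import numpy as np

    def segment_columns(binary_img, min_gap=2):
        """Split a binarized CAPTCHA (1 = ink) at near-empty columns."""
        ink = binary_img.sum(axis=0)            # ink pixels per column
        segments, start = [], None
        for x, count in enumerate(ink):
            if count > 0 and start is None:
                start = x                       # character begins
            elif count == 0 and start is not None:
                if x - start >= min_gap:
                    segments.append((start, x)) # character ends
                start = None
        if start is not None:
            segments.append((start, len(ink)))
        return segments                         # (left, right) bounds per char

    img = np.zeros((10, 20), dtype=int)
    img[:, 2:5] = 1                             # fake character 1
    img[:, 8:12] = 1                            # fake character 2
    print(segment_columns(img))                 # -> [(2, 5), (8, 12)]

Heavy background noise defeats exactly this kind of projection heuristic, which is why a denoising stage (the paper's GAN) has to precede segmentation.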
Digital identity is the key element of digital transformation, representing any real-world entity in digital form. To ensure a successful digital future, an effective digital identity is paramount, especially as demand for digital services increases. Several Identity Management (IDM) systems have been developed to handle identity effectively; nonetheless, existing IDM systems have limitations concerning identity and its management, such as sovereignty, storage and access control, security, privacy, and safeguarding, all of which require further improvement. Self-Sovereign Identity (SSI) is an emerging IDM system that incorporates several required features to ensure that identity is sovereign, secure, reliable, and generic. As SSI is still evolving, it is essential to analyse its various features to determine its effectiveness in coping with the dynamic requirements of identity and its current challenges. This paper proposes a set of governing principles of SSI for analysing any SSI ecosystem and its effectiveness. Based on these governing principles, it then performs a comparative analysis of the two most popular SSI ecosystems, uPort and Sovrin, to present their effectiveness and limitations.
Personally identifiable information (PII) has become a major target of cyber-attacks, causing severe losses to data breach victims. To protect data breach victims, researchers focus on collecting exposed PII to assess privacy risk and identify at-risk individuals. However, existing studies mostly rely on exposed PII collected from either the dark web or the surface web. Due to the wide exposure of PII on both the dark web and surface web, collecting from only the dark web or the surface web could result in an underestimation of privacy risk. Despite its research and practical value, jointly collecting PII from both sources is a non-trivial task. In this paper, we summarize our effort to systematically identify, collect, and monitor a total of 1,212,004,819 exposed PII records across both the dark web and surface web. Our effort resulted in 5.8 million stolen SSNs, 845,000 stolen credit/debit cards, and 1.2 billion stolen account credentials. From the surface web, we identified and collected over 1.3 million PII records of the victims whose PII is exposed on the dark web. To the best of our knowledge, this is the largest academic collection of exposed PII, which, if properly anonymized, enables various privacy research inquiries, including assessing privacy risk and identifying at-risk populations.