Bibliography
Software vulnerabilities are weaknesses in software systems that can have serious consequences when exploited, including unauthorized access, data breaches, and financial losses. Due to the nature of the software industry, companies are under increasing pressure to deploy software as quickly as possible, leading to a large number of undetected software vulnerabilities. Static code analysis, with the support of Static Analysis Tools (SATs), can generate security alerts that highlight potential vulnerabilities in an application's source code. Software Metrics (SMs) have also been used to predict software vulnerabilities, usually with the support of Machine Learning (ML) classification algorithms. Several datasets are available to support the development of improved software vulnerability detection techniques, but they share the same shortcomings: they are either outdated or rely on a single type of information. In this paper, we present a methodology for collecting software vulnerabilities from known vulnerability databases and enhancing them with static information (namely SAT alerts and SMs). The proposed methodology aims to define a mechanism capable of more easily updating the collected data.
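As a rough illustration of such an enrichment step (this is not the paper's pipeline; the file paths, tool names, and fields are invented), vulnerability records can be left-joined with per-file SAT alert counts and software metrics, so the dataset can be regenerated whenever new vulnerability records are published:

```python
# Minimal sketch, assuming per-file CVE labels and two hypothetical SATs.
import pandas as pd

# Vulnerability records collected from a public vulnerability database.
vulns = pd.DataFrame({
    "file": ["src/auth.c", "src/net.c"],
    "cve": ["CVE-2021-0001", "CVE-2021-0002"],
    "category": ["overflow", "injection"],
})

# Per-file alert counts from two SATs and a few software metrics.
sat_alerts = pd.DataFrame({
    "file": ["src/auth.c", "src/net.c", "src/ui.c"],
    "satA_alerts": [4, 1, 0],
    "satB_alerts": [2, 3, 1],
})
metrics = pd.DataFrame({
    "file": ["src/auth.c", "src/net.c", "src/ui.c"],
    "loc": [420, 310, 150],
    "cyclomatic": [35, 22, 8],
})

# Left-join on file path: files without a CVE become the non-vulnerable class.
dataset = (sat_alerts.merge(metrics, on="file")
                     .merge(vulns, on="file", how="left"))
dataset["vulnerable"] = dataset["cve"].notna()
print(dataset)
```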
Algorithms for unsupervised anomaly detection have proven effective and flexible; however, one must first determine at what class ratio the autoencoder begins to treat a given class as anomalous. For this reason, we propose a study of autoencoder efficiency as a function of the ratio of anomalous to non-anomalous classes. The emergence of high-speed networks in electric power systems creates a tight interaction between the cyber infrastructure and the physical infrastructure and makes the power system susceptible to cyber penetration and attacks. To address this problem, this paper proposes an innovative approach to developing a specification-based intrusion detection framework that leverages the information provided by components in a contemporary power system. An autoencoder is used to encode the causal relations among the available information to create patterns with temporal state transitions, which are used as features in the proposed intrusion detection. This allows the proposed method to detect anomalies and cyber attacks.
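A minimal sketch of the underlying mechanism, reconstruction-error anomaly detection with an autoencoder, may help here; the data, network size, and threshold rule are illustrative assumptions, not the paper's setup:

```python
# Train an autoencoder on (mostly) normal data and flag inputs whose
# reconstruction error exceeds a percentile-based threshold.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(1000, 16))    # "normal" traffic features
anomalous = rng.normal(4.0, 1.0, size=(50, 16))   # a rarer, shifted class

# Autoencoder: input == target, narrow hidden layer forces compression.
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
ae.fit(normal, normal)

def recon_error(x):
    return np.mean((ae.predict(x) - x) ** 2, axis=1)

# Threshold taken from the training-error distribution (assumed rule).
threshold = np.percentile(recon_error(normal), 99)
print("fraction of anomalies flagged:", np.mean(recon_error(anomalous) > threshold))
```

Re-running this experiment while varying the anomalous-to-normal ratio in the training set is one way to observe at what contamination level detection starts to degrade, which is the question the study above poses.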
In today's technological world, pervasive computing plays an important role in data computing and communication. Pervasive computing provides a mobile environment for decentralized computational services anywhere, at any time, and in any context or location. It is flexible, making portable devices and the computing that surrounds us part of our daily life. Devices such as laptops, smartphones, PDAs, and other portable devices can constitute the pervasive environment. These devices in pervasive environments are deployed worldwide and can receive various communications, including audio-visual services. Users and systems in this pervasive environment face the challenges of user trust, data privacy, and user and device node identity. To give a feasible answer to these challenges, this paper proposes an Efficient Security Model (ESM) for dynamic learning in the pervasive computing environment that addresses both trustworthy and untrustworthy attackers. The ESM is also compared with existing generic models and provides a better accuracy rate than they do.
With the rapid development of IoT in recent years, IoT devices are increasingly being used as endpoints of supply chains. Since the majority of data is now stored and shared over the network, information security is an important issue for secure supply chain management. In response to cyber security breaches and threats, there has been much research and development on the secure storage and transfer of data over the network. However, there is relatively little research on the security of endpoints such as the IoT devices linked into the supply chain network. In addition, it is difficult to ensure the reliability of IoT devices themselves due to their lack of resources such as CPU power and storage. Ensuring this reliability is essential when IoT is integrated into the supply chain. Thus, in order to secure the supply chain, we need to improve the reliability of its IoT endpoints. In this work, we examine the use of IoT gateways, client certificates, and an Identity Provider (IdP) as methods to compensate for the lack of IoT resources. The results of our qualitative evaluation demonstrate that the IdP method is the most effective.
Cybersecurity has become an emerging challenge for business information management and critical infrastructure protection in recent years. Artificial Intelligence (AI) has been widely used in different fields, but it is still relatively new in the area of Cyber-Physical Systems (CPS) security. In this paper, we provide an approach based on Machine Learning (ML) to intelligent threat recognition to enable run-time risk assessment for superior situation awareness in CPS security monitoring. With the aim of classifying malicious activity, several machine learning methods, such as k-nearest neighbours (kNN), Naïve Bayes (NB), Support Vector Machine (SVM), Decision Tree (DT) and Random Forest (RF), have been applied and compared using two different publicly available real-world testbeds. The results show that RF achieved the best classification performance. When used in reference industrial applications, the approach notifies security control room operators of threats only when the classification confidence is above a threshold, reducing the stress on security managers and effectively supporting their decisions.
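A brief sketch of the two ideas combined above, comparing the five classifiers and gating alerts on prediction confidence, is shown below; the synthetic data and the 0.8 threshold are assumptions standing in for the testbeds and the deployment-specific value:

```python
# Compare kNN, NB, SVM, DT, RF on an imbalanced binary task, then raise
# alerts only when the best model's confidence exceeds a threshold.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "kNN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "SVM": SVC(probability=True),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "F1:", round(f1_score(y_te, model.predict(X_te)), 3))

# Confidence-gated alerting: only notify operators above the threshold.
confident_alerts = models["RF"].predict_proba(X_te)[:, 1] > 0.8
print("alerts raised:", confident_alerts.sum(), "of", len(X_te))
```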
Software developers can use diverse techniques and tools to reduce the number of vulnerabilities, but the effectiveness of existing solutions in real projects is questionable. For example, Static Analysis Tools (SATs) report potential vulnerabilities by analyzing code patterns, and Software Metrics (SMs) can be used to predict vulnerabilities based on high-level characteristics of the code. In theory, both approaches can be applied from the early stages of the development process, but it is well known that they fail to detect critical vulnerabilities and raise a large number of false alarms. This paper studies the hypothesis of using Machine Learning (ML) to combine alerts from SATs with SMs to predict vulnerabilities in a large software project (under development for many years). In practice, we use four ML algorithms, alerts from two SATs, and a large number of SMs to predict whether a source code file is vulnerable or not (binary classification) and to predict the vulnerability category (multiclass classification). Results show that one can achieve either high precision or high recall, but not both at the same time. To understand the reason, we analyze and compare snippets of source code, demonstrating that vulnerable and non-vulnerable files share similar characteristics, making it hard to distinguish vulnerable from non-vulnerable code based on SAT alerts and SMs.
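The feature-fusion step and the precision/recall tension reported above can be pictured with a small sketch; the alert counts, metrics, and labels below are synthetic stand-ins, not the project's data:

```python
# Fuse SAT alert counts with software metrics into one per-file feature
# matrix, then sweep the decision threshold to trade precision for recall.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
sat_features = rng.poisson(2.0, size=(n, 2))     # alert counts from two SATs
metric_features = rng.normal(size=(n, 5))        # e.g. LOC, complexity, coupling
X = np.hstack([sat_features, metric_features])   # per-file feature fusion

# Simulated labels loosely related to the features (minority "vulnerable").
logits = sat_features.sum(axis=1) + metric_features[:, 0] + rng.normal(0, 2, n)
y = logits > 8

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
clf = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

# Sweeping the threshold exposes the precision/recall trade-off.
for t in (0.2, 0.5, 0.8):
    pred = proba > t
    print(f"t={t}: precision={precision_score(y_te, pred, zero_division=0):.2f} "
          f"recall={recall_score(y_te, pred):.2f}")
```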
To address the incomplete dynamic verification of power grid security and stability control strategies and the high cost of testing, a reliability test method for power grid security control systems based on a BP neural network and dynamic whole-group simulation is proposed. First, fault simulation results from real-time digital simulation system (RTDS) software are taken as the data source, and dynamic test data are obtained with the help of existing communication resources such as the dispatching data network, wireless virtual private networks, and the global positioning system. Second, the most important test items are selected with the minimum-redundancy maximum-relevance (mRMR) algorithm and combined into a feature set, and a BP neural network model is then used to predict the test results. Finally, the dynamic remote test platform is exercised through dynamic whole-group simulation of the security and stability control system. Compared with traditional test methods, the proposed method reduces the test cost by more than 50%. Experimental results show that the proposed method can effectively complete the reliability test of a power grid security control system based on dynamic group simulation while reducing the test cost.
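The two learning steps can be sketched as follows. This is not the paper's implementation: the data is synthetic, mRMR is approximated greedily with mutual information for relevance and feature-feature correlation for redundancy, and "BP neural network" is realized as a standard backpropagation-trained MLP:

```python
# Greedy mRMR-style selection of test items, then an MLP predicting outcomes.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))              # candidate test items (features)
y = (X[:, 0] + X[:, 3] > 0).astype(int)     # simulated pass/fail outcome

relevance = mutual_info_classif(X, y, random_state=0)
selected = [int(np.argmax(relevance))]
while len(selected) < 4:                    # assumed budget of 4 test items
    redundancy = np.abs(np.corrcoef(X.T))[:, selected].mean(axis=1)
    score = relevance - redundancy          # relevance minus redundancy
    score[selected] = -np.inf               # never re-pick a selected item
    selected.append(int(np.argmax(score)))

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
mlp.fit(X[:, selected], y)
print("selected items:", selected,
      "training accuracy:", mlp.score(X[:, selected], y))
```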
The Internet-of-Things (IoT) paradigm at large continues to be compromised, hindering the privacy, dependability, security, and safety of our nations. While the operational security communities (i.e., CERTs, SOCs, CSIRTs, etc.) continue to develop capabilities for monitoring cyberspace, IoT-centric tools remain in their infancy. To this end, we address this gap by developing an actionable Cyber Threat Intelligence (CTI) feed related to Internet-scale infected IoT devices. The feed analyzes, in near real time, 3.6 TB of daily streaming passive measurements (≈1M packets per second) by applying a custom-developed learning methodology to distinguish between compromised IoT devices and non-IoT nodes, in addition to labeling the device type and vendor. The feed is augmented with third-party information to provide context. We report on the operation, analysis, and shortcomings of the feed during an initial deployment period. We make the CTI feed available for ingestion through a public, authenticated API and a front-end platform.
In today's digital era, data is central to every phase of work, and secure storage and processing of data is a need in every application field. Data must be tamper-resistant because of the possibility of alteration, and it can be represented and stored in heterogeneous formats. Information that is vital to a particular organization is a potential target of attack. With the rapid increase in cybercrime, attackers act maliciously to alter such data, which has a serious impact on the forensic evidence required for provenance. It is therefore necessary to maintain the reliability and provenance of digital evidence as it travels through the various stages of a forensic investigation. In this setting, there is a forensic chain in which a generated report passes through several levels or intermediaries, such as a pathology laboratory, a doctor, and the police department. Blockchain technology is well suited to building a transparent system in which forensic evidence is immutable: it enables the transfer of assets or evidence reports in a transparent environment without a central authority. In this paper, a blockchain-based secure system for forensic evidence is proposed and implemented on the Ethereum platform. Tampering with forensic evidence can be easily traced at any stage by anyone in the forensic chain, and the implementation on Ethereum provides security enhancement with high integrity, traceability, and immutability.
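The tamper-evidence property at the heart of this design can be illustrated with a plain hash chain; this is a conceptual sketch, not the paper's Ethereum contract (on Ethereum, the same linkage would live in contract storage), and the handlers and reports are invented:

```python
# Each record hashes its content plus the previous record's hash, so any
# alteration breaks the chain at the stage where it happened.
import hashlib, json, time

def add_record(chain, handler, report):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"handler": handler, "report": report, "prev": prev, "ts": time.time()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return f"tampered at stage {i} ({rec['handler']})"
        if i and rec["prev"] != chain[i - 1]["hash"]:
            return f"broken link at stage {i}"
    return "chain intact"

chain = []
add_record(chain, "pathology lab", "sample A: blood analysis")
add_record(chain, "doctor", "reviewed, consistent with report")
add_record(chain, "police dept", "filed as evidence #12")
chain[1]["report"] = "altered text"     # simulate tampering by an intermediary
print(verify(chain))                    # -> tampered at stage 1 (doctor)
```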
Cyber security is a topic of increasing relevance for industrial networks. The higher intensity and more intelligent use of data pushed by smart technology (Industry 4.0), together with greater integration between the operational technology (production) and information technology (business) parts of the network, have considerably raised the level of vulnerability. At the same time, many industrial facilities still use serial networks as their underlying communication system, and these are notoriously limited from a cyber security perspective, since the protection mechanisms available for TCP/IP communication do not apply. An attacker gaining access to a serial network can therefore easily control the industrial components, potentially causing catastrophic incidents and jeopardizing assets and human lives. This study proposes a framework that acts as an anomaly detection system (ADS) for industrial serial networks. It has three ingredients: an unsupervised K-means component to analyse message content, a knowledge-based Expert System component to analyse message metadata, and a voting process to generate alerts for security incidents, anomalous states, and faults. The framework was evaluated using a simulator of Profibus-DP, a serial bus network. Results on the simulated traffic were promising: 99.90% accuracy, 99.64% precision, and 99.28% F1-score. They indicate the feasibility of applying the framework to serial-based industrial networks.
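A toy version of the three-ingredient design might look like the sketch below; the message features, metadata fields, rules, and voting policy are illustrative assumptions, not Profibus-DP specifics:

```python
# K-means on message content + rule-based checks on metadata + a vote.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
content = rng.normal(size=(300, 6))       # numeric features of message payloads
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(content)

# Content vote: a message far from its cluster centre looks anomalous.
dist = np.linalg.norm(content - kmeans.cluster_centers_[kmeans.labels_], axis=1)
content_vote = dist > np.percentile(dist, 95)

# Expert-system vote: hand-written rules over message metadata.
metadata = [{"src": int(rng.integers(0, 8)), "len": int(rng.integers(4, 64))}
            for _ in range(300)]
rule_vote = np.array([m["src"] not in {1, 2, 3} or m["len"] > 48
                      for m in metadata])

# Voting: agreement raises a security alert; disagreement is reported as an
# anomalous state or possible fault rather than an incident.
alerts = content_vote & rule_vote
print("alerts:", alerts.sum(),
      "anomalous/fault reports:", (content_vote ^ rule_vote).sum())
```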
This paper addresses security and risk management of hardware and embedded systems across several applications. Three companies are involved in the research. The first is an energy technology company that aims to leverage electric-vehicle batteries through vehicle-to-grid (V2G) services in order to provide energy storage for electric grids. The second is a defense contracting company that provides acquisition support for the DoD's Conventional Prompt Global Strike (CPGS) program. These systems need protection in their production and supply chains, as well as throughout their system life cycles. The third is a company that deals with trust and security in advanced logistics systems generally. The rise of interconnected devices has led to growth in systems security issues such as privacy, authentication, and secure storage of data. A risk analysis via scenario-based preferences is aided by a literature review and industry experts. The analysis is divided into sections: Criteria, Initiatives, C-I Assessment, Emergent Conditions (EC), Criteria-Scenario (C-S) Relevance, and EC Grouping. System success criteria, research initiatives, and risks to the system are compiled. In the C-I Assessment, a rating is assigned to signify the degree to which criteria are addressed by initiatives, including research and development, government programs, industry resources, security countermeasures, education and training, etc. To understand the risks of emergent conditions, a list of potential scenarios is developed across innovations, environments, missions, populations and workforce behaviors, obsolescence, adversaries, etc. The C-S Relevance rates how the scenarios affect the relevance of the success criteria, including cost, schedule, security, return on investment, and cascading effects. The Emergent Condition Grouping (ECG) collates the emergent conditions with the scenarios. The generated results focus on ranking the initiatives by their ability to negate the effects of emergent conditions, and on producing a disruption score that compares a potential scenario's impact to the ranking of initiatives. The results presented in this paper are applicable to the testing and evaluation of security and risk for a variety of embedded smart devices and should be of interest to developers, owners, and operators of critical infrastructure systems.
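One plausible numeric reading of the C-I assessment and disruption scoring is sketched below; the criteria, initiatives, ratings, weights, and the rank-shift disruption measure are all invented for illustration, not taken from the paper:

```python
# Rank initiatives by weighted C-I ratings, then re-rank under a scenario
# that reweights the criteria and score the disruption as total rank shift.
import numpy as np

criteria = ["cost", "schedule", "security", "ROI"]
initiatives = ["R&D", "training", "countermeasures"]

# C-I assessment: degree (0-1) to which each initiative addresses a criterion.
ci = np.array([[0.2, 0.5, 0.9, 0.4],
               [0.6, 0.3, 0.7, 0.5],
               [0.4, 0.2, 0.9, 0.3]])

baseline_w = np.array([0.25, 0.25, 0.25, 0.25])   # equal criterion weights
baseline_rank = np.argsort(-(ci @ baseline_w))

# A scenario (e.g. a new adversary) makes security dominate the weighting.
scenario_w = np.array([0.1, 0.1, 0.7, 0.1])
scenario_rank = np.argsort(-(ci @ scenario_w))
disruption = np.sum(np.abs(np.argsort(baseline_rank) - np.argsort(scenario_rank)))

print("baseline:", [initiatives[i] for i in baseline_rank])
print("scenario:", [initiatives[i] for i in scenario_rank])
print("disruption score:", disruption)
```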
A human-swarm cooperative system, which mixes multiple robots and a human supervisor to form a mission team, has been widely used for emergent scenarios such as criminal tracking and victim assistance. These scenarios involve human safety and require a robot team to transition quickly from its current task to a new emergent task. This sudden mission change makes robot motion adjustment difficult and increases the risk of performance degradation in the swarm. Trust in human-human collaboration reflects a general expectation of the collaboration; based on this trust, humans mutually adjust their behaviors for better teamwork. Inspired by this, a trust-aware reflective control method (Trust-R) was developed in this research for a robot swarm to understand the collaborative mission and calibrate its motions accordingly for better emergency response. Typical emergent tasks in social security, "transition between area inspection tasks" and "response to an emergent target (car accident)", together with eight fault-related situations, were designed to simulate robot deployments. A human user study with 50 volunteers was conducted to model trust and assess swarm performance. Trust-R's effectiveness in supporting a robot team for emergency response was validated by improved task performance and increased trust scores.
Cyber ranges have proven effective for cyber security training. Nevertheless, the existing literature on cyber ranges does not, to the best of our knowledge, cover the field of 5G security training. 5G networks, though, represent a significant field for modern cyber security, introducing a novel threat landscape. In parallel, the demand for skilled cyber security specialists is high and still rising. It is therefore of utmost importance to provide every means to experts aiming to increase their level of preparedness for an unwanted event. The EU-funded SPIDER project proposes an innovative Cyber Range as a Service (CRaaS) platform for 5G cyber security testing and training. This paper presents the evaluation framework followed by SPIDER for the extraction of user requirements. To validate the defined user requirements, SPIDER used questionnaires that included both closed- and open-format questions and were circulated among the personnel of telecommunication providers, vendors, security service providers, managers, engineers, cyber security personnel, and researchers. Here, we present a selected set of the most critical questions and the responses received. From the conducted analysis we reach some important conclusions regarding the 5G testing and training capabilities that a cyber range should offer, in addition to an analysis of the differing perceptions of cyber security and 5G experts.
Companies like Netflix increasingly use the cloud to deploy their business processes. Those processes often involve partnerships with other companies and can be modeled as workflows in which the owner of the data at risk interacts with contractors to carry out a sequence of tasks on the data to be secured. In practice, access control is an essential building block for deploying these secured workflows. This component is generally managed by administrators using high-level policies meant to represent the requirements and restrictions placed on the workflow. Handling access control with a high-level scheme has the benefit of separating the problem of specification, i.e. defining the desired behavior of the system, from the problem of implementation, i.e. enforcing this desired behavior. However, translating such high-level policies into a deployed implementation can be error-prone. Even though semi-automatic and automatic tools have been proposed to assist this translation, policy verification remains highly challenging in practice. In this paper, our aim is to define and propose structures that assist the detection and correction of errors introduced on the ground by a faulty translation or corrupted deployment. In particular, we investigate structures with formal foundations able to naturally model policies. Metagraphs, a generalized graph-theoretic structure, fulfill those requirements: using them enables high-level policies to be compared with their implementation. In practice, we consider Rego, a language used by companies like Netflix and Plex for their release processes, as a valuable representative of the most common policy languages. We propose a suite of tools transforming and checking policies as metagraphs, and use them in a global framework to show how policy verification can be achieved with such structures. Finally, we evaluate the performance of our verification method.
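The core comparison idea can be sketched compactly: a metagraph edge maps a set of source elements to a set of target elements, so both the high-level policy and its deployed implementation (e.g. one compiled from Rego) can be reduced to edge sets and diffed. The policies below are invented examples, not the paper's tooling:

```python
# Represent policies as metagraph edges and diff specification vs deployment.
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    sources: frozenset   # e.g. roles or attributes
    targets: frozenset   # e.g. tasks or resources

def edge(srcs, tgts):
    return Edge(frozenset(srcs), frozenset(tgts))

high_level = {
    edge({"analyst"}, {"read_report"}),
    edge({"admin"}, {"read_report", "delete_report"}),
}
implementation = {
    edge({"analyst"}, {"read_report"}),
    edge({"admin"}, {"read_report"}),         # delete rule lost in translation
    edge({"guest"}, {"read_report"}),         # extra grant added on the ground
}

print("missing from deployment:", high_level - implementation)
print("not in the policy:", implementation - high_level)
```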
Experience shows that even with a well-intentioned user at the keyboard, a motivated attacker can compromise a computer system at a layer below or adjacent to the shallow forms of authentication that are now accepted as commonplace [3]. Therefore, rather than asking "Can we trust the person behind the keyboard?", a still better question might be: "Can we trust the computer system underneath?". An emerging technology for gaining trust in a remote computing system is remote attestation: the activity of making a claim about properties of a target by supplying evidence to an appraiser over a network [2]. Although many existing approaches to remote attestation wisely adopt a layered architecture, where the bottom layers measure the layers above, the dependencies between components remain static and the measurement orderings fixed. For modern computing environments with diverse topologies, we can no longer fix a target architecture any more than we can fix a protocol to measure that architecture. Copland [1] is a domain-specific language and formal framework that provides a vocabulary for specifying the goals of layered attestation protocols. It also provides a reference semantics that characterizes system measurement events and evidence handling, a foundation for comparing protocol alternatives. The aim of this work is to refine the Copland semantics to a more fine-grained notion of attestation manager execution: a high-privilege thread of control responsible for invoking attestation services and bundling evidence results. This refinement consists of two cooperating components, the Copland Compiler and the Attestation Virtual Machine (AVM). The Copland Compiler translates a Copland protocol description into a sequence of primitive attestation instructions to be executed in the AVM. When combined with advances in virtualization, trusted hardware, and high-assurance system software components such as compilers, file systems, and OS kernels, a formally verified remote attestation infrastructure creates exciting opportunities for building system-level security arguments.
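The compiler/VM split can be pictured with a toy interpreter. This is a drastic simplification of Copland with an invented term syntax, instruction set, and evidence representation; real Copland terms and their evidence semantics are far richer:

```python
# Compile a protocol term to primitive attestation instructions, then have a
# small "attestation virtual machine" execute them while bundling evidence.
import hashlib

# Protocol term: measure a target, then sign the accumulated evidence.
term = ("seq", ("measure", "kernel"), ("sign",))

def compile_term(t):
    if t[0] == "seq":
        return [ins for sub in t[1:] for ins in compile_term(sub)]
    if t[0] == "measure":
        return [("MEAS", t[1])]
    if t[0] == "sign":
        return [("SIGN",)]
    raise ValueError(f"unknown term {t!r}")

def avm_run(instructions):
    evidence = []                              # evidence bundle built in order
    for ins in instructions:
        if ins[0] == "MEAS":                   # invoke an attestation service
            evidence.append(("meas", ins[1], f"hash-of-{ins[1]}"))
        elif ins[0] == "SIGN":                 # seal evidence collected so far
            digest = hashlib.sha256(repr(evidence).encode()).hexdigest()
            evidence = [("signed", digest)]
    return evidence

print(avm_run(compile_term(term)))
```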