Biblio
Organizations are exposed to various cyber-attacks. When a component is exploited, the overall computed damage is affected by the number of components the network includes. This work focuses on estimating the Target Distribution characteristic of an attacked network. According to existing security assessment models, Target Distribution is assessed using ordinal values based on users' intuitive knowledge. This work aims to define a formula that enables measuring the attacked components' distribution quantitatively. The proposed formula is based on the real-time configuration of the system. Using the proposed measure, firms can quantify damages, allocate appropriate budgets to actual risks, and build their configuration while taking into consideration the risks affected by components' distribution. The formula is demonstrated as part of a security continuous monitoring system.
In this work, we propose a design flow for the automatic generation of hardware sandboxes for IP security in trusted system-on-chips (SoCs). Our tool CAPSL, the Component Authentication Process for Sandboxed Layouts, is capable of detecting trojan activation and nullifying possible damage to a system at run-time, avoiding complex pre-fabrication and pre-deployment testing for trojans. Our approach captures the behavioral properties of non-trusted IPs, typically third-party or commercial off-the-shelf (COTS) components, with the formalism of interface automata and the Property Specification Language's sequential extended regular expressions (SERE). Using the concept of hardware sandboxing, we translate the property specifications to checker automata and partition an untrusted sector of the system, with included virtualized resources and controllers, to isolate sandbox-system interactions upon deviation from the behavioral checkers. Our design flow is verified with benchmarks from Trust-Hub.org, which show 100% trojan detection with reduced checker overhead compared to other run-time verification techniques.
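As a rough illustration of the checker idea described above (a behavioral specification compiled into an automaton that isolates the IP on any deviation), the following Python sketch hand-builds a tiny finite-state checker. The event names, permitted sequence, and isolation action are hypothetical and are not taken from CAPSL, SERE, or the Trust-Hub benchmarks; the real design operates in RTL hardware, not software.

    # Minimal sketch of a run-time checker automaton: accept only the permitted
    # interface sequence and "isolate" the IP on any deviation.
    # Event names ("req", "grant", "done") and the allowed sequence are hypothetical.

    ALLOWED = {            # state -> {event: next_state}
        "idle": {"req": "wait"},
        "wait": {"grant": "busy"},
        "busy": {"done": "idle"},
    }

    def run_checker(events):
        """Return True if the trace conforms; flag isolation (False) otherwise."""
        state = "idle"
        for ev in events:
            nxt = ALLOWED.get(state, {}).get(ev)
            if nxt is None:                       # deviation from the behavioral checker
                print(f"deviation at event '{ev}' in state '{state}': isolating IP")
                return False
            state = nxt
        return True

    if __name__ == "__main__":
        print(run_checker(["req", "grant", "done"]))   # True: conforming trace
        print(run_checker(["req", "done"]))            # False: trojan-like deviation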
Cloud computing is a revolution in IT that provides scalable, virtualized on-demand resources to end users with greater flexibility, less maintenance, and reduced infrastructure cost. These resources are supervised by different management organizations and provided over the Internet using known networking protocols, standards, and formats. The underlying technologies and legacy protocols contain bugs and vulnerabilities that can open doors for intrusion by attackers. Attacks such as DDoS (Distributed Denial of Service) are among the most frequent; they inflict serious damage and affect cloud performance. In a DDoS attack, the attacker usually uses innocent compromised computers (called zombies), taking advantage of known or unknown bugs and vulnerabilities, to send a large number of packets from these already-captured zombies to a server. This may occupy a major portion of the network bandwidth of the victim cloud infrastructure or consume much of the server's time. Thus, in this work, we designed a DDoS detection system based on the C4.5 algorithm to mitigate the DDoS threat. This algorithm, coupled with signature detection techniques, generates a decision tree to perform automatic, effective detection of DDoS flooding attacks. To validate our system, we selected other machine learning techniques and compared the obtained results.
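To make the decision-tree step concrete, the sketch below trains a tree on a few toy traffic-flow features and classifies a new flow. scikit-learn's DecisionTreeClassifier (CART with an entropy criterion) is used here as a stand-in for C4.5, which it approximates but does not replicate; the feature names and samples are hypothetical, not the paper's dataset.

    # Minimal sketch of decision-tree-based DDoS flood detection (C4.5 stand-in).
    from sklearn.tree import DecisionTreeClassifier

    # Each flow: [packets_per_sec, bytes_per_packet, distinct_src_ips, syn_ratio]
    X = [
        [50,    900,   3, 0.10],   # benign
        [80,    700,   5, 0.15],   # benign
        [9000,   60, 400, 0.95],   # DDoS flood
        [12000,  64, 800, 0.97],   # DDoS flood
    ]
    y = [0, 0, 1, 1]               # 0 = normal, 1 = DDoS

    clf = DecisionTreeClassifier(criterion="entropy", max_depth=3)
    clf.fit(X, y)

    print(clf.predict([[10000, 60, 500, 0.96]]))   # -> [1]: flagged as DDoS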
Quantitative risk assessment is a critical first step in the risk management and assured design of networked computer systems. It is challenging to evaluate the marginal probabilities of target states/conditions when using a probabilistic attack graph to represent all possible attack paths and the probabilistic cause-consequence relations among nodes. The brute-force approach has exponential complexity, and the belief propagation method gives only an approximation when the corresponding factor graph has cycles. To improve the approximation accuracy, a region-based method is adopted, which clusters highly dependent nodes into regions and passes messages among regions. Experiments are conducted to compare the performance of the different methods.
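The brute-force baseline mentioned above can be illustrated directly: enumerate every joint state of the attack graph and sum the probability mass in which the target is compromised, which is why the cost grows as 2^n in the number of nodes. The sketch below does this for a hypothetical three-node graph; the structure, priors, and conditional probabilities are illustrative only.

    # Minimal sketch of exact (brute-force) marginal computation on a tiny
    # probabilistic attack graph: parents a, b -> target c.
    from itertools import product

    prior = {"a": 0.3, "b": 0.2}                       # P(node compromised), hypothetical
    cpt_c = {(0, 0): 0.01, (0, 1): 0.4, (1, 0): 0.5, (1, 1): 0.9}   # P(c=1 | a, b)

    def marginal_c():
        total = 0.0
        for a, b, c in product((0, 1), repeat=3):      # enumerate all 2^3 joint states
            p = (prior["a"] if a else 1 - prior["a"]) * \
                (prior["b"] if b else 1 - prior["b"])
            pc = cpt_c[(a, b)]
            p *= pc if c else 1 - pc
            if c:
                total += p
        return total

    print(f"P(target compromised) = {marginal_c():.4f}")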
Today, advances in communication technology permit a malware author to introduce code obfuscation techniques, for example Application Programming Interface (API) hooking, to make detecting the footprints of their code more difficult. Signature-based models such as antivirus software are not effective against such attacks. In this paper, an API graph-based model is proposed with the objective of detecting hook attacks during malicious code execution. The proposed model incorporates techniques such as graph generation, graph partitioning, and graph comparison to distinguish legitimate system calls from malicious ones. The simulation results confirm that the proposed model outperforms existing approaches.
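A bare-bones version of the graph-generation and graph-comparison steps is sketched below: build a directed graph of observed API-call transitions and score it against a graph built from legitimate traces using edge-set (Jaccard) similarity. The API names, traces, and threshold are hypothetical and the paper's actual graph partitioning and comparison method is not reproduced here.

    # Minimal sketch: API-call transition graphs compared by Jaccard similarity.
    def call_graph(trace):
        """Set of (caller, callee) edges from a sequence of API calls."""
        return {(a, b) for a, b in zip(trace, trace[1:])}

    def similarity(g1, g2):
        return len(g1 & g2) / len(g1 | g2) if g1 | g2 else 1.0

    legit = call_graph(["OpenFile", "ReadFile", "CloseHandle"])
    suspect = call_graph(["OpenFile", "SetWindowsHookEx", "WriteProcessMemory",
                          "ReadFile", "CloseHandle"])

    THRESHOLD = 0.6                               # hypothetical decision boundary
    score = similarity(legit, suspect)
    print(f"similarity = {score:.2f} ->",
          "malicious" if score < THRESHOLD else "legitimate")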
The Deep Web, a hidden and encrypted network that lies beneath the surface web, has today become a social hub for criminals who carry out their activity through cyberspace, with much of this crime being conducted and hosted on the Deep Web. This research paper is an effort to bring forth various techniques and ways in which an internet user can stay safe online and protect his privacy through anonymity, and to understand how users' data and private information are phished and what the risks of sharing personal information on social media are.
The article addresses enterprise information protection within the Internet of Things concept. The aim of the research is to develop a set of arrangements to ensure secure operation of an enterprise IPv6 network. The object of research is the enterprise IPv6 network. The subject of research is modern switching equipment as a tool to ensure network protection. The research tasks are to prioritize the functioning of switches in production and corporate enterprise networks, to develop a network host protection algorithm, and to test the developed algorithm on the Cisco Packet Tracer 7 software emulator. The result of the research is a proposed approach to IPv6 network security based on an analysis of the functionality of modern switches: a developed and tested enterprise network host protection algorithm for the IPv6 protocol with automated control of the network's SLAAC configuration, and a set of arrangements for resisting attacks on the default enterprise gateway using ACL, VLAN, SEND, and RA Guard security technologies, which allows creating a sufficiently high level of network security.
The implementation of RFID technology in computer systems gives access to quality information on location or object tracking in real time, thereby improving workflow and leading to safer, faster, and better business decisions. This paper discusses the quantitative quality indicators of a computer system supported by RFID technology applied to monitoring objects (pallets, packages, and people) marked with RFID tags. The results of the analysis of these quantitative quality indicators are presented in tables.
Unmanned systems are increasing in number, while their manning requirements remain the same. To decrease manpower demands, machine learning techniques and autonomy are gaining traction and visibility. One barrier is human perception and understanding of autonomy. Machine learning techniques can result in “black box” algorithms that may yield high fitness, but poor comprehension by operators. However, Interactive Machine Learning (IML), a method to incorporate human input over the course of algorithm development by using neuro-evolutionary machine-learning techniques, may offer a solution. IML is evaluated here for its impact on developing autonomous team behaviors in an area search task. Initial findings show that IML-generated search plans were chosen over plans generated using a non-interactive ML technique, even though the participants trusted them slightly less. Further, participants discriminated each of the two types of plans from each other with a high degree of accuracy, suggesting the IML approach imparts behavioral characteristics into algorithms, making them more recognizable. Together the results lay the foundation for exploring how to team humans successfully with ML behavior.
Used by both information systems designers and security personnel, the Attack Tree method provides a graphical analysis of the ways in which an entity (a computer system or network, an entire organization, etc.) can be attacked and indicates the countermeasures that can be taken to prevent attackers from reaching their objective. In this paper, we built an Attack Tree focused on the goal "compromising the security of a Web platform", considering the most common vulnerabilities of the WordPress platform identified by CVE (Common Vulnerabilities and Exposures), a global reference system for recording information regarding computer security threats. Finally, based on the likelihood of the attacks, we made a quantitative analysis of the probability that the security of the Web platform can be compromised.
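The quantitative step can be illustrated with the usual bottom-up propagation of leaf likelihoods: OR nodes combine independent alternative sub-attacks, AND nodes require all children to succeed. The sketch below uses a hypothetical tree and probabilities, not the WordPress/CVE tree built in the paper.

    # Minimal sketch of propagating leaf likelihoods up an attack tree.
    def p_attack(node):
        if "p" in node:                               # leaf: estimated likelihood
            return node["p"]
        child_ps = [p_attack(c) for c in node["children"]]
        if node["gate"] == "AND":                     # all sub-attacks must succeed
            prob = 1.0
            for p in child_ps:
                prob *= p
            return prob
        prob_none = 1.0                               # OR: at least one succeeds
        for p in child_ps:
            prob_none *= (1 - p)
        return 1 - prob_none

    tree = {"gate": "OR", "children": [
        {"p": 0.15},                                              # e.g. exploit an unpatched plugin
        {"gate": "AND", "children": [{"p": 0.4}, {"p": 0.3}]},    # e.g. phish admin AND bypass 2FA
    ]}
    print(f"P(platform compromised) = {p_attack(tree):.3f}")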
The urgent task of organizing confidential computations in critical informatization objects on the basis of domestic TPM (Trusted Platform Module) technologies is considered. Corresponding recommendations and architectural concepts are proposed for a special hardware TPM module built into a computing platform that realizes a so-called ``root of trust''. As a result, confidential computations can be organized on the basis of a domestic electronic component base.
The main issue with big data in the cloud is that processing and use always have to be performed by a third party. It is very important for data owners or clients to trust and to have a guarantee of privacy for the information stored in the cloud or analyzed as big data. The privacy models studied in previous research showed that privacy infringement for big data happens because of limitations, the privacy guarantee rate, or the dissemination of accurate data obtainable in the data set. In addition, there are various privacy models. In order to determine the best and most appropriate model to be applied in the future, one that also guarantees big data privacy, further research and study are necessary. In the next part, we survey some of these privacy models in order to determine the advantages and disadvantages of each model in privacy assurance for big data in the cloud. The present study also proposes a combined Diff-Anonym algorithm (k-anonymity and differential privacy models) to provide data anonymity while keeping a balance between the ambiguity of private data and the clarity of general data.
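The two ingredients named above can be illustrated in a few lines: generalize quasi-identifiers until every group has at least k records, then answer aggregate queries through the Laplace mechanism. This is only an illustration of combining k-anonymity with differential privacy, not the paper's Diff-Anonym algorithm; the field names, k, and epsilon are hypothetical.

    # Minimal sketch: k-anonymity-style generalization + Laplace noise on a count query.
    import numpy as np
    from collections import Counter

    records = [{"age": 34, "zip": "10024", "disease": "flu"},
               {"age": 36, "zip": "10027", "disease": "flu"},
               {"age": 52, "zip": "10301", "disease": "cold"},
               {"age": 57, "zip": "10306", "disease": "cold"}]

    def generalize(rec):
        """Coarsen quasi-identifiers: 10-year age bands, 3-digit ZIP prefix."""
        return (rec["age"] // 10 * 10, rec["zip"][:3])

    K = 2                                                      # k-anonymity parameter
    groups = Counter(generalize(r) for r in records)
    assert all(size >= K for size in groups.values()), "generalization too fine for k"

    def dp_count(true_count, epsilon=0.5, sensitivity=1.0):
        """Laplace mechanism: add noise with scale sensitivity / epsilon."""
        return true_count + np.random.laplace(0.0, sensitivity / epsilon)

    print(dict(groups))
    print("noisy flu count:",
          round(dp_count(sum(r["disease"] == "flu" for r in records)), 2))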
This work presents the proof of concept implementation for the first hardware-based design of Moving Target Defense over IPv6 (MT6D) in full Register Transfer Level (RTL) logic, with future sights on an embedded Application-Specific Integrated Circuit (ASIC) implementation. Contributions are an IEEE 802.3 Ethernet stream-based in-line network packet processor with a specialized Complex Instruction Set Computer (CISC) instruction set architecture, RTL-based Network Time Protocol v4 synchronization, and a modular crypto engine. Traditional static network addressing allows attackers the incredible advantage of taking time to plan and execute attacks against a network. To counter this, MT6D provides a network host obfuscation technique that offers network-based keyed access to specific hosts without altering existing network infrastructure, and is an excellent technique for protecting the Internet of Things, IPv6 over Low Power Wireless Personal Area Networks, and high-value globally routable IPv6 interfaces. This is done by cryptographically altering IPv6 network addresses every few seconds in a synchronous manner at all endpoints. A border gateway device can be used to intercept select packets to unobtrusively perform this action. Software-driven implementations have posed many challenges, namely constant code maintenance to remain compliant with all library and kernel dependencies, the need for a host computing platform, and less than optimal throughput. This work seeks to overcome these challenges in a lightweight system to be developed for practical wide deployment.
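The address-rotation idea can be sketched in software for illustration: each rotation interval, both endpoints derive the same fresh IPv6 interface identifier from a keyed hash of the host identity and the current time slot, so no address exchange is needed. The prefix, key, rotation period, and host identifier below are hypothetical, and this Python sketch stands in for, but does not represent, the RTL design described above.

    # Minimal sketch of time-synchronized IPv6 address rotation (MT6D-style).
    import hashlib
    import hmac
    import ipaddress
    import time

    PREFIX = ipaddress.IPv6Network("2001:db8:1234:5678::/64")   # documentation prefix
    KEY = b"shared-secret"                                       # hypothetical shared key
    ROTATION_SECONDS = 10

    def mt6d_address(host_id: bytes, t: float) -> ipaddress.IPv6Address:
        slot = int(t // ROTATION_SECONDS)                        # current time slot
        digest = hmac.new(KEY, host_id + slot.to_bytes(8, "big"),
                          hashlib.sha256).digest()
        iid = int.from_bytes(digest[:8], "big")                  # low 64 bits = interface ID
        return ipaddress.IPv6Address(int(PREFIX.network_address) | iid)

    print(mt6d_address(b"host-A", time.time()))                  # changes every 10 s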
Given the growing sophistication of cyber attacks, designing a perfectly secure system is not generally possible. Quantitative security metrics are thus needed to measure and compare the relative security of proposed security designs and policies. Since the investigation of security breaches has shown a strong impact of human errors, ignoring the human user in computing these metrics can lead to misleading results. Despite this, and although security researchers have long observed the impact of human behavior on system security, few improvements have been made in designing systems that are resilient to the uncertainties in how humans interact with a cyber system. In this work, we develop an approach for including models of user behavior, emanating from the fields of social sciences and psychology, in the modeling of systems intended to be secure. We then illustrate how one of these models, namely general deterrence theory, can be used to study the effectiveness of the password security requirements policy and the frequency of security audits in a typical organization. Finally, we discuss the many challenges that arise when adopting such a modeling approach, and then present our recommendations for future work.
Computer security has become an increasingly important topic in the computer and communication industry, since it is essential to supporting critical business processes and to protecting personal and sensitive information. Computer security means preserving the security attributes (confidentiality, integrity, and availability) of computer systems, which face threats such as denial-of-service (DoS) attacks, viruses, and intrusion. To ensure high computer security, intrusion tolerance techniques based on fault-tolerant schemes have been widely applied. This paper presents a quantitative performance evaluation of a virtual machine (VM) based intrusion tolerant system. Concretely, two security measures are derived: MTTSF (mean time to security failure) and the effective traffic intensity. The mathematical analysis is achieved by using Laplace-Stieltjes transforms in the analysis of an M/G/1 queueing system.
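For readers unfamiliar with the queueing model invoked above, the standard M/G/1 quantities are summarized below (traffic intensity and the Pollaczek-Khinchine mean waiting time); the paper's Laplace-Stieltjes-transform derivation of MTTSF builds on this kind of analysis but is not reproduced here.

    % Standard M/G/1 background (not the paper's MTTSF derivation):
    % arrivals at rate \lambda, service time S with mean E[S] and second moment E[S^2].
    \rho = \lambda\,\mathbb{E}[S]
    \qquad\text{(traffic intensity; the queue is stable when } \rho < 1\text{)}
    \qquad
    \mathbb{E}[W] = \frac{\lambda\,\mathbb{E}[S^2]}{2\,(1-\rho)}
    \qquad\text{(Pollaczek-Khinchine mean waiting time)}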
Computer systems face the threat of deliberate security intrusions due to malicious attacks that exploit security holes or vulnerabilities. In practice, these security holes or vulnerabilities remain in the system and applications even if developers carefully execute system testing. Thus it is necessary and important to develop mechanisms to prevent and/or tolerate security intrusions. As a result, computer systems are often evaluated with confidentiality, integrity, and availability (CIA) criteria from the viewpoint of security, and security is treated as a QoS (Quality of Service) attribute on par with other QoS attributes such as capacity and performance. In this paper, we present a method for quantifying a security attribute called the mean time to security failure (MTTSF) of a VM-based intrusion tolerant system based on queueing theory.
This research paper identifies security issues, especially energy-based security attacks, and enhances the security of the system. It is essential to consider the security of the system under development in the initial phases of the Software Development Life Cycle (SDLC), as billions of dollars are lost owing to security flaws in software caused by an improper or absent security process. Security breaches that occur in software systems are innumerable. The scientific literature proposes many solutions to overcome security issues, but all of these security mechanisms are reactive in nature. In this paper, a new security solution is proposed that is proactive in nature, especially for energy-based denial-of-service attacks, which have been frequent in the recent past. The proposed solution is based on the energy consumed by the system, measured in units known as energy points.
Machine learning has been widely used and has achieved considerable results in various research areas. On the other hand, machine learning becomes a big threat when malicious attackers make use of it for the wrong purposes. Self-evolving botnets have been considered as such a threat in the past. Self-evolving botnets autonomously predict vulnerabilities by running machine learning on the computing resources of zombie computers. Furthermore, they evolve based on the vulnerability, and thus have high infectivity. In this paper, we consider several Markov chain models to counter the spreading of self-evolving botnets. Through simulation experiments, this paper shows the behaviors of these models.
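A toy example of the kind of spreading model mentioned above is sketched below: a discrete-time chain in which each host is either susceptible or infected, with an infection probability that grows with the number of infected hosts and a fixed cleanup probability. All parameters are hypothetical and the paper's specific Markov chain models are not reproduced here.

    # Minimal sketch of a discrete-time Markov chain for botnet spreading.
    import random

    N = 1000            # hosts in the network (hypothetical)
    BETA = 0.0004       # per-infected-peer infection probability per step
    GAMMA = 0.05        # cleanup (recovery) probability per step
    STEPS = 50

    infected = 5
    for t in range(STEPS):
        p_infect = 1 - (1 - BETA) ** infected            # at least one infected peer succeeds
        new_inf = sum(random.random() < p_infect for _ in range(N - infected))
        cleaned = sum(random.random() < GAMMA for _ in range(infected))
        infected = max(0, min(N, infected + new_inf - cleaned))

    print(f"infected hosts after {STEPS} steps: {infected}")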
In this paper we analyse possibilities of application of post-quantum code based signature schemes for message authentication purposes. An error-correcting code based digital signature algorithm is presented. There also shown results of computer simulation for this algorithm in case of Reed-Solomon codes and the estimated efficiency of its software implementation. We consider perspectives of error-correcting codes for message authentication and outline further research directions.
With the popularization and development of network knowledge, network intruders are increasing and attack modes are constantly being updated. Intrusion detection is a kind of active defense technology that can extract key information from the network system and quickly judge and protect against internal or external network intrusions; it provides real-time protection against internal attacks, external attacks, and misuse, and plays an important role in ensuring network security. However, with the diversification of intrusion techniques, traditional intrusion detection systems cannot meet the requirements of current network security, so the implementation of intrusion detection needs to be diversified. In this context, we apply neural network technology to the network intrusion detection system to solve this problem. In this paper, on the basis of intrusion detection methods, we analyze the development history and present situation of intrusion detection technology, and summarize the intrusion detection system overview and architecture. The neural network intrusion detection process is divided into data acquisition, data analysis, preprocessing, intrusion behavior detection, and testing.
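The preprocessing-train-test portion of that pipeline can be illustrated with a small neural network classifier, as in the sketch below. The flow features and toy samples are hypothetical; a real system would use a full intrusion dataset and far more data, and the paper's own architecture is not reproduced.

    # Minimal sketch: preprocess flow features, train a small neural network, test it.
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler

    # [duration_s, bytes_sent, bytes_received, failed_logins]
    X = [[1.2, 300, 4500, 0], [0.8, 250, 3900, 0],      # normal traffic
         [0.1, 20, 0, 8],     [0.2, 35, 10, 12]]        # intrusion attempts
    y = [0, 0, 1, 1]

    scaler = StandardScaler().fit(X)                     # preprocessing step
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(scaler.transform(X), y)

    print(clf.predict(scaler.transform([[0.15, 25, 5, 10]])))   # -> [1]: intrusion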
This paper presents an approach of vTPM (virtual Trusted Platform Module) Dynamic Trust Extension (DTE) to satisfy the requirements of frequent migrations. With DTE, a vTPM is a delegation of the capability of signing attestation data from the underlying pTPM (physical TPM), with a valid time token issued by an Authentication Server (AS). DTE maintains a strong association between a vTPM and its underlying pTPM, and keeps vTPM and pTPM clearly distinguishable because of the different security strength of the two types of TPM. In DTE, there is no need for a vTPM to re-acquire Identity Key (IK) certificates after migration, and the pTPM can revoke trust in real time. Furthermore, DTE can provide forward security. The performance measurements of its prototype show that DTE is feasible.
Ransomware attacks are increasing; this form of malware bypasses many technical solutions by leveraging social engineering methods, which means established methods of perimeter defence need to be supplemented with additional systems. Honeypots are bogus computer resources deployed by network administrators to act as decoy computers and detect any illicit access. This study investigated whether a honeypot folder could be created and monitored for changes, and determined a suitable method to detect changes to this area. The research investigated methods to implement a honeypot to detect ransomware activity and selected two options: the File Screening service of the Microsoft File Server Resource Manager feature, and EventSentry to manipulate the Windows Security logs. The research developed a staged response to attacks on the system along with thresholds at which they were triggered. The research ascertained that witness tripwire files offer limited value, as there is no way to influence the malware to access the area containing the monitored files.
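For illustration only, a honeypot folder can be monitored with a simple polling loop: snapshot the decoy directory, then treat any modification, creation, or deletion as a ransomware indicator. The folder path, poll interval, and response below are hypothetical; the study itself used File Server Resource Manager and EventSentry rather than a script like this.

    # Minimal sketch: poll a decoy (honeypot) folder and alert on any change.
    import os
    import time

    HONEYPOT_DIR = r"C:\Share\_DoNotOpen"      # decoy folder, hypothetical path
    POLL_SECONDS = 5

    def snapshot(path):
        """Map each entry in the decoy folder to its last-modified time."""
        return {name: os.stat(os.path.join(path, name)).st_mtime
                for name in os.listdir(path)}

    baseline = snapshot(HONEYPOT_DIR)
    while True:
        time.sleep(POLL_SECONDS)
        if snapshot(HONEYPOT_DIR) != baseline:   # any change to the decoy area
            print("ALERT: honeypot folder touched -- possible ransomware activity")
            break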