Bibliography
Routing security is critical to the overall security of Mobile Ad Hoc Networks (MANETs). Various attacks can be mounted while a routing path is being established between source and destination: adversaries attempt to deceive the source node into granting them the privilege of data transmission and then launch malicious behaviors such as passive or active attacks. Due to the characteristics of MANETs, e.g., dynamic topology, open medium, distributed cooperation, and constrained capability, it is difficult to verify the behavior of nodes and detect malicious nodes without revealing any private information. In this paper, we present PVad, an approach that conducts privacy-preserving verification in the route discovery phase of MANETs. PVad discovers the existing communication rules by association-rule mining rather than specifying the rules in advance. It consists of two phases: a reasoning phase that deduces the expected log data of the peers, and a verification phase that uses a Merkle Hash Tree to verify the correctness of the derived information without revealing any private data of the nodes on the expected routing paths. Without deploying any special nodes to assist the verification, PVad can detect multiple malicious nodes by itself. To show that our approach helps guarantee the security of MANETs, we conduct experiments in NS3 as well as a real router environment, improving detection accuracy by 4% on average compared to our former work.
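PVad's verification construction is not reproduced here; the following is a minimal sketch of the kind of Merkle Hash Tree commitment the abstract describes, assuming SHA-256 and an illustrative encoding of forwarding-log entries (the entry format and function names are not from the paper).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute a Merkle root over hashed leaves, duplicating the last
    node whenever a level has odd length."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A verifier can compare the root derived from the expected log entries
# against the root a peer commits to, without seeing raw entries.
expected_logs = [b"fwd:A->B", b"fwd:B->C", b"fwd:C->D"]
committed_root = merkle_root(expected_logs)
assert merkle_root(expected_logs) == committed_root
```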
Wireless sensor networks are responsible for sensing, gathering, and processing information about objects in the network coverage area. Basic data fusion technology generally provides no data privacy protection mechanism, yet such a mechanism is usually indispensable in applications such as health care, military reconnaissance, and smart homes. In this paper, we consider privacy, confidentiality, and the accuracy of fusion results, and propose a privacy-preserving data fusion algorithm, PPND. The algorithm exploits the characteristics of data fusion and uses random numbers pre-distributed to the nodes to meet the privacy protection requirements of the original data. Theoretical analysis shows how difficult it is for a malicious attacker to steal node privacy under PPND. TOSSIM simulation results also show that, compared with the TAG and SMART algorithms, PPND performs well in terms of data traffic and fusion accuracy.
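The paper's exact masking scheme is not given in the abstract; a minimal sketch of additive perturbation with pre-distributed random masks, in the SMART style the abstract compares against, might look as follows (node names, mask values, and the assumption that the sink holds the aggregate mask are all illustrative).

```python
# Hypothetical setup: each sensor holds a pre-distributed random mask;
# the sink holds the sum of all masks, distributed before deployment.
readings = {"s1": 36.6, "s2": 37.1, "s3": 36.9}       # private values
masks    = {"s1": 812.0, "s2": -143.5, "s3": 271.25}  # pre-distributed

# Each node reports only its masked reading.
reports = {n: readings[n] + masks[n] for n in readings}

# The sink removes the aggregate mask to recover the exact sum,
# without ever seeing an individual reading in the clear.
mask_sum = sum(masks.values())
fused_sum = sum(reports.values()) - mask_sum
assert abs(fused_sum - sum(readings.values())) < 1e-9
```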
Cloud computing presents unlimited prospects for the Information Technology (IT) industry and business enterprises alike. Yet rapid advancement brings a dark underbelly of new vulnerabilities and challenges that unfold with alarming regularity. Although cloud technology provides a ubiquitous environment that lets business enterprises operate across disparate locations, the platform's security effectiveness is called into question by threats that can bring everything subscribing to the cloud to a halt. The advantages of cloud platforms nevertheless far outweigh the drawbacks, and studying new challenges helps overcome those drawbacks. One such emerging security threat is a ransomware attack on the cloud, which threatens to hold systems and data on the cloud network to ransom, with widespread damaging implications. This provides huge scope for IT security specialists to sharpen their skill sets against this new challenge. This paper covers the broad cloud architecture, currently inherent cloud threat mechanisms, the ransomware vulnerabilities posed, and suggested methods for mitigating them.
This paper proposes a prototype of a level-3 autonomous vehicle using a Raspberry Pi, capable of detecting nearby vehicles with an IR sensor. We make a first attempt to analyze autonomous vehicles at the microscopic level, focusing on each vehicle and its communication with nearby vehicles and roadside units. Two sets of passive and active experiments on a pair of prototypes were run, demonstrating the interconnectivity of the developed prototype. Several sensors were incorporated into a System-on-Chip-based emulation to further demonstrate the feasibility of the proposed model.
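The paper's sensor wiring is not specified in the abstract; as a minimal sketch of polling a digital IR obstacle sensor on a Raspberry Pi, assuming the common RPi.GPIO library, a hypothetical pin assignment, and the usual active-low behaviour of such modules:

```python
# Requires the RPi.GPIO package; IR_PIN and the active-low behaviour
# are assumptions about the sensor module, not details from the paper.
import time
import RPi.GPIO as GPIO

IR_PIN = 17  # hypothetical wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(IR_PIN, GPIO.IN)

try:
    while True:
        # Many IR obstacle modules pull their output LOW on detection.
        if GPIO.input(IR_PIN) == GPIO.LOW:
            print("nearby vehicle detected")
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```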
Over the last decade, a globalization of the software industry took place, which facilitated the sharing and reuse of code across existing project boundaries. At the same time, such global reuse also introduces new challenges to the software engineering community, with not only components but also their problems and vulnerabilities now being shared. For example, vulnerabilities found in APIs no longer affect only individual projects but might spread across projects and even global software ecosystem borders. Tracing these vulnerabilities at a global scale becomes an inherently difficult task, since many of the existing resources required for such analysis still rely on proprietary knowledge representations. In this research, we introduce an ontology-based knowledge modeling approach that can eliminate such information silos. More specifically, we focus on linking security knowledge with other software knowledge to improve traceability and trust in software products (APIs). Our approach takes advantage of the Semantic Web and its reasoning services to trace and assess the impact of security vulnerabilities across project boundaries. We present a case study to illustrate the applicability and flexibility of our ontological modeling approach by tracing vulnerabilities across project and resource boundaries.
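The paper's ontology is not reproduced in the abstract; the sketch below illustrates the general idea of cross-project vulnerability tracing with a Semantic Web toolkit, using the rdflib library and an entirely made-up vocabulary (the `sec:` namespace, property names, and instance IRIs are illustrative, not the paper's model).

```python
from rdflib import Graph, Namespace, RDF

# Hypothetical vocabulary standing in for the paper's knowledge model.
SEC = Namespace("http://example.org/sec#")

g = Graph()
g.add((SEC.cve_2021_0001, RDF.type, SEC.Vulnerability))
g.add((SEC.cve_2021_0001, SEC.affects, SEC.libFoo))
g.add((SEC.projectBar, SEC.dependsOn, SEC.libFoo))

# Trace which projects are exposed through a vulnerable dependency.
q = """
PREFIX sec: <http://example.org/sec#>
SELECT ?project ?vuln WHERE {
    ?vuln a sec:Vulnerability ;
          sec:affects ?lib .
    ?project sec:dependsOn ?lib .
}
"""
for project, vuln in g.query(q):
    print(project, "is exposed to", vuln)
```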
Software attacks are commonly performed against embedded systems in order to access private data or to run restricted services. In this work, we demonstrate some vulnerabilities of commonly used processors that can be leveraged by attackers to compromise a system. The targeted devices are based on the open processor architectures OpenRISC and RISC-V. Several software exploits are discussed and demonstrated, and a hardware countermeasure against Return-Oriented Programming (ROP) attacks is proposed and validated on OpenRISC.
Several security requirements identification methods have been proposed by researchers in up-front requirements engineering (RE). However, in open source software (OSS) projects, developers use lightweight representations and refine requirements frequently by writing comments. They also tend to discuss security aspects in comments, providing code snippets, attachments, and external resource links. Since most security requirements identification methods in up-front RE are based on textual information retrieval techniques, they are not suitable for OSS projects or just-in-time RE. In this study, we propose a new model based on logistic regression to identify security requirements in OSS projects. We used five metrics to build security requirements identification models and tested the performance of these metrics by applying the models to three OSS projects. Our results show that four out of five metrics achieved high performance in intra-project testing.
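The five metrics themselves are defined in the paper, not here; as a minimal sketch of the modeling step, assuming scikit-learn and invented feature values (the example metrics named in comments are guesses for illustration only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative stand-ins for the paper's five metrics (e.g. keyword
# hits, code-snippet presence, link count, attachment count, comment
# length); the real metric definitions come from the study.
X = np.array([
    [3, 1, 2, 0, 120],   # comment with security keywords + snippet
    [0, 0, 0, 0, 15],    # short, non-security comment
    [5, 1, 1, 1, 300],
    [1, 0, 0, 0, 40],
])
y = np.array([1, 0, 1, 0])  # 1 = security requirement

model = LogisticRegression().fit(X, y)
print(model.predict(X), model.predict_proba(X)[:, 1])
```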
The implementation of RFID technology in computer systems gives access to quality information on location and object tracking in real time, thereby improving workflow and leading to safer, faster, and better business decisions. This paper discusses quantitative indicators of the quality of a computer system supported by RFID technology and applied to monitoring objects (pallets, packages, and people) marked with RFID tags. The results of the analysis of these quantitative quality indicators are presented in tables.
Compared to other remote attestation methods, the binary-based approach is the most direct and complete one, but privacy protection becomes an important problem. In this paper, we present an Extended Hash Algorithm (EHA) for privacy-preserving remote attestation. Based on the traditional Merkle Hash Tree, EHA alters the node-connection algorithm so that the same result is obtained in any measurement order. A security key is added when the node-connection calculation is performed, which ensures the security of the value calculated at each Merkle node. Our analysis shows that remote attestation using EHA offers better privacy protection and execution performance than other methods.
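EHA's actual node-connection rule is not given in the abstract; one way to obtain the two properties it claims (order independence plus a keyed computation) is to sort the child digests before combining them under an HMAC, as in this illustrative sketch (the key material and combining rule are assumptions, not the paper's algorithm).

```python
import hashlib
import hmac

KEY = b"shared-attestation-key"  # illustrative key material

def combine(left: bytes, right: bytes) -> bytes:
    """Keyed, order-independent parent hash: sorting the children makes
    combine(a, b) == combine(b, a), so the tree root does not depend on
    the order in which measurements arrive."""
    lo, hi = sorted((left, right))
    return hmac.new(KEY, lo + hi, hashlib.sha256).digest()

a = hashlib.sha256(b"measurement-1").digest()
b = hashlib.sha256(b"measurement-2").digest()
assert combine(a, b) == combine(b, a)
```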
Over the years cybercriminals have misused the Domain Name System (DNS) - a critical component of the Internet - to gain profit. Despite this persisting trend, little empirical information exists about the security of Top-Level Domains (TLDs) and the overall 'health' of the DNS ecosystem. In this paper, we present security metrics for this ecosystem and measure the operational values of such metrics using three representative phishing and malware datasets. We benchmark entire TLDs against the rest of the market. We explicitly distinguish these metrics from the idea of measuring security performance, because the measured values are driven by multiple factors, not just by the performance of the particular market player. We consider two types of security metrics: occurrence of abuse and persistence of abuse. In conjunction, they provide a good understanding of the overall health of a TLD. We demonstrate that attackers abuse a variety of free services with good reputation, affecting not only the reputation of those services but of entire TLDs. We find that, when normalized by size, old TLDs like .com host more bad content than new generic TLDs. We propose a statistical regression model to analyze how the different properties of TLD intermediaries relate to abuse counts. We find that, next to TLD size, abuse is positively associated with domain pricing (i.e., registries that provide free domain registrations witness more abuse). Last but not least, we observe a negative relation between the DNSSEC deployment rate and the count of phishing domains.
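The paper's metric definitions are more elaborate than the abstract conveys; a minimal sketch of size-normalized occurrence and persistence metrics, with entirely made-up numbers in place of the phishing/malware feeds and zone sizes the study uses:

```python
# Toy numbers; real values would come from abuse feeds and TLD zone
# sizes, as in the paper's datasets.
tlds = {
    "com":    {"registered": 150_000_000, "abuse_domains": 90_000,
               "uptimes_h": [72, 30, 120]},
    "newtld": {"registered": 2_000_000,   "abuse_domains": 600,
               "uptimes_h": [12, 40]},
}

for name, t in tlds.items():
    occurrence = t["abuse_domains"] / t["registered"] * 10_000  # per 10k
    persistence = sum(t["uptimes_h"]) / len(t["uptimes_h"])     # mean uptime
    print(f".{name}: {occurrence:.2f} abused per 10k domains, "
          f"{persistence:.1f}h mean persistence")
```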
Exploratory evaluation is an effective way to analyze and improve the security of an information system. In view of the exploratory evaluation requirements for security protection capability, an information system structure model for security protection capability is set up, and the requirements of agility, traceability, and interpretability are obtained by analyzing the relationships among the information system, protective equipment, and protection policy. To address the problem of describing such evaluations, the exploratory evaluation problem and process are formulated based on Granular Computing theory, and a general mathematical description is established. Analysis shows that the standardized description meets the exploratory evaluation requirements and can provide an analysis basis and description specification for the exploratory evaluation of information system security protection capability.
To ensure the security of electric power supervisory control and data acquisition (SCADA) systems, this paper proposes a dynamic-awareness security protection model based on security policy. Its design idea regards protective construction as a dynamic analysis process in which the security policy adapts to network dynamics. Based on the current situation of power SCADA systems, the related security technology, and survey results on system security threats, the paper analyzes the security requirements and puts forward construction ideas for security protection based on the policy-protection-detection-response (P2DR) policy model. The proposed dynamic-awareness security protection model is an effective and useful tool for protecting the security of power SCADA systems.
Generative policies enable devices to generate their own policies that are validated, consistent, and conflict-free. Such autonomy is required if security policy generation is to cope with the large number of smart devices per person that will soon become reality. In this paper, we discuss the research issues that must be addressed before devices involved in security enforcement can automatically generate their security policies, enabling policy-based autonomous security management. We discuss the challenges involved in automatic security policy generation and outline some approaches based on machine learning that may provide a solution.
Consistency checking of network security policy is an important issue in the network security field, but current studies lack overall security strategy modeling and whole-network checking. To check policy consistency in a distributed network system, a security policy model based on network topology is proposed, which checks conflicts among the security policies of all communication paths in the network. First, the model uniformly describes network devices, domains, and links, abstracts the network topology as an undirected graph, and formats ACL (Access Control List) rules into quintuples. Then, based on the undirected graph, the model searches all possible paths between all domains in the topology and checks quintuple consistency using a classification algorithm. Experiments in a campus network demonstrate that this model can effectively detect policy conflicts globally in a distributed network and ensure the consistency of network security policies.
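The paper's classification algorithm and rule semantics are richer than this; as a minimal sketch of the overall pipeline it describes (undirected topology, ACL quintuples, path enumeration, pairwise conflict check), with device names, rules, and the exact-match conflict test all being illustrative simplifications:

```python
from itertools import combinations

# ACL rules as quintuples: (src, dst, proto, port, action).
topology = {"A": {"FW1"}, "FW1": {"A", "FW2"},
            "FW2": {"FW1", "B"}, "B": {"FW2"}}
acls = {
    "FW1": [("10.0.0.0/8", "any", "tcp", 80, "permit")],
    "FW2": [("10.0.0.0/8", "any", "tcp", 80, "deny")],
}

def paths(graph, src, dst, seen=()):
    """Enumerate all simple paths between two domains."""
    if src == dst:
        yield (*seen, dst)
        return
    for nxt in graph[src] - set(seen):
        yield from paths(graph, nxt, dst, (*seen, src))

def conflicting(r1, r2):
    # Identical match fields but opposite actions => inconsistency.
    return r1[:4] == r2[:4] and r1[4] != r2[4]

for path in paths(topology, "A", "B"):
    rules = [r for dev in path for r in acls.get(dev, [])]
    for r1, r2 in combinations(rules, 2):
        if conflicting(r1, r2):
            print("conflict on path", path, ":", r1, "vs", r2)
```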
This paper studies the protection of wireless sensor networks (WSNs) against malicious nodes using game theory, by means of which malicious attacking nodes can be effectively modeled. We survey different game-theoretic strategies for WSNs. Because wireless sensor networks are built on an open shared medium, attacks are easy to mount, and jamming is among the most serious security threats to information preservation. The key purpose of this paper is to present a general synopsis of jamming techniques, the various types of jammers, and prevention techniques based on game theory. A network suffers from numerous kinds of external and internal attacks; jamming attacks can arise from the heavy communication the nodes perform inside the network, and as this heavy communication rises, power expenditure and network load increase as well. In this work, a game-theoretic model is defined for secure communication on the network.
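The paper's game model is not specified in the abstract; the toy sketch below illustrates the general flavor of sender-versus-jammer games, with a made-up two-channel zero-sum payoff matrix and hand-picked mixing probability (none of these numbers come from the paper).

```python
# Toy zero-sum game: sender and jammer each pick one of two channels;
# payoffs are the sender's throughput, and are illustrative only.
#                     (sender, jammer)
payoff = {("ch1", "ch1"): 0.1, ("ch1", "ch2"): 1.0,
          ("ch2", "ch1"): 1.0, ("ch2", "ch2"): 0.1}

# With a symmetric matrix like this, the equilibrium strategy is to
# randomize 50/50: the sender hops channels unpredictably.
p = 0.5  # probability the sender picks ch1
expected = {
    j: p * payoff[("ch1", j)] + (1 - p) * payoff[("ch2", j)]
    for j in ("ch1", "ch2")
}
print("sender's expected throughput vs each jammer action:", expected)
```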
In the development of smart cities across the world, VANETs play a vital role in finding optimized routes between source and destination. VANETs are infrastructure-less networks that let vehicles share safety information through vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) communication. Because communication between vehicles in VANETs is wireless, attackers can violate authenticity, confidentiality, and privacy properties, which in turn affects security; the technology is thus surrounded by security challenges these days. This paper presents an overview of the VANET architecture and a survey of VANETs with a major focus on security issues, followed by prevention measures for those issues and a comparative analysis. From the survey, we find that encryption and authentication play an important role in VANETs, and we outline some research directions for future work.
Information leakage is one of the most important security issues on the current Internet. In Named Data Networking (NDN), Interest names introduce novel vulnerabilities that can be exploited. By planting malware, an attacker can use Interest names to encode critical information (steganographically embedded) and leak it out of the network by generating anomalous Interest traffic. This name-based security threat does not exist in IP networks, and solving it is essential to securing the NDN architecture. This paper performs a risk analysis of information leakage in NDN. We first describe the vulnerabilities associated with Interest names and, as countermeasures, propose a name-based filter using search engine information and another filter using a one-class Support Vector Machine (SVM). We collected URLs from the data repository provided by Common Crawl and evaluated the performance of our per-packet filters. We show that our filters can drastically choke the throughput of information leakage, which makes anomalous Interest traffic easier to detect. It is therefore possible to mitigate information leakage in NDN, which is a strong incentive for future deployment of this architecture at Internet scale.
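The paper's feature engineering is not detailed in the abstract; a minimal sketch of the one-class SVM idea, assuming scikit-learn, character n-gram features, and made-up example names in place of the Common Crawl URLs:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM

# Train only on legitimate names; these samples are made up.
legit = ["example.com/index", "news.site.org/articles/today",
         "shop.example.net/items/42", "blog.example.com/posts/ndn"]
suspect = ["a.io/" + "x9" * 40]  # high-entropy, steganography-like name

vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 3))
X = vec.fit_transform(legit)

clf = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(X)
print(clf.predict(vec.transform(suspect)))  # -1 marks an anomalous name
```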
In this paper we present a case study of applying fitness dimensions to API design assessment. We argue that API assessment is company-specific and should take into consideration the various stakeholders in the API ecosystem. We identify new fitness dimensions and introduce the notion of design considerations for fitness dimensions, such as priorities, tradeoffs, and technical-versus-cognitive classification.
Safety-critical system engineering and traditional safety analyses have for decades been focused on problems caused by natural or accidental phenomena. Security analyses, on the other hand, focus on preventing intentional, malicious acts that reduce system availability, degrade user privacy, or enable unauthorized access. In the context of safety-critical systems, safety and security are intertwined, e.g., injecting malicious control commands may lead to system actuation that causes harm. Despite this intertwining, safety and security concerns have traditionally been designed and analyzed independently of one another, and examined in very different ways. In this work we examine a new hazard analysis technique—Systematic Analysis of Faults and Errors (SAFE)—and its deep integration of safety and security concerns. This is achieved by explicitly incorporating a semantic framework of error "effects" that unifies an adversary model long used in security contexts with a fault/error categorization that aligns with previous approaches to hazard analysis. This categorization enables analysts to separate the immediate, component-level effects of errors from their cause or precise deviation from specification. This paper details SAFE's integrated handling of safety and security through a) a methodology grounded in—and adaptable to—different approaches from the literature, b) explicit documentation of system assumptions which are implicit in other analyses, and c) increasing the tractability of analyzing modern, complex, component-based software-driven systems. We then discuss how SAFE's approach supports the long-term goals of increased compositionality and formalization of safety/security analysis.
The Internet of Things (IoT) is the latest Internet evolution, interconnecting billions of devices such as cameras, sensors, RFIDs, smart phones, wearable devices, OBD-II dongles, etc. Federations of such IoT devices (or things) provide the information needed to solve many important problems that have been too difficult to harness before. Despite these great benefits, privacy in IoT remains a great concern, in particular as the number of things increases. This presses the need for highly scalable and computationally efficient mechanisms to prevent unauthorised access to, and disclosure of, sensitive information generated by things. In this paper, we address this need by proposing a lightweight yet highly scalable data obfuscation technique. For this purpose, a digital watermarking technique is used to control the perturbation of sensitive data, enabling legitimate users to de-obfuscate the perturbed data. To enhance the scalability of our solution, we also introduce a contextualisation service that achieves real-time aggregation and filtering of IoT data for a large number of designated users. We then assess the effectiveness of the proposed technique in a health-care scenario involving data streamed from various wearable and stationary sensors capturing health data, such as heart rate and blood pressure. An analysis of the experimental results, which illustrate the unconstrained scalability of our technique, concludes the paper.
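The paper's watermark-driven perturbation is not specified in the abstract; the sketch below shows only the underlying idea of a keyed, reversible perturbation that authorised holders of a shared secret can remove (the key, scale, and noise derivation are all illustrative assumptions).

```python
import hashlib
import struct

def keyed_noise(key: bytes, index: int, scale: float = 5.0) -> float:
    """Deterministic pseudorandom perturbation derived from a shared
    key, so an authorised user can regenerate and subtract it."""
    digest = hashlib.sha256(key + index.to_bytes(8, "big")).digest()
    (val,) = struct.unpack(">Q", digest[:8])
    return (val / 2**64 - 0.5) * scale

key = b"session-key"          # illustrative shared secret
heart_rates = [62, 75, 88]    # sensitive readings

obfuscated = [v + keyed_noise(key, i) for i, v in enumerate(heart_rates)]
recovered  = [v - keyed_noise(key, i) for i, v in enumerate(obfuscated)]
assert all(abs(a - b) < 1e-9 for a, b in zip(heart_rates, recovered))
```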
Digital fingerprinting refers to a method that assigns each copy of an intellectual property (IP) a distinct fingerprint. It was introduced to protect legal and honest IP users: the unique fingerprint can be used to identify the IP or a chip that contains it. However, existing fingerprinting techniques are impractical due to the expensive cost of creating fingerprints and the lack of effective methods to verify them. In this paper, we study a practical scan-chain-based fingerprinting method in which the digital fingerprint is generated by selecting the Q-SD or Q'-SD connection during the design of scan chains. This method has two major advantages. First, fingerprints are created as a post-silicon procedure, so there is little fabrication overhead. Second, altering the Q-SD or Q'-SD connection style requires modifying the test vectors for each fingerprinted IP in order to maintain fault coverage; this enables us to verify the fingerprint by inspecting the test vectors, without opening up the chip to check the connection styles. We perform experiments on standard benchmarks to demonstrate that our approach has low design overhead, and we conduct a security analysis showing that such fingerprints are robust against various attacks.
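A hypothetical simplification of the idea, not the paper's procedure: model a fingerprint as one bit per scan cell (0 = Q-SD, 1 = Q'-SD), where a Q'-SD connection inverts the value the cell shifts out, so each fingerprinted copy yields a distinguishable test response.

```python
# Candidate fingerprints for two IP copies (illustrative values).
candidates = {"copy_A": [0, 1, 0, 1], "copy_B": [1, 0, 0, 1]}

def response(captured, fp):
    """Bits observed at the scan output, inverted at Q'-SD cells."""
    return [bit ^ inv for bit, inv in zip(captured, fp)]

captured = [1, 1, 0, 0]                 # latched by a test pattern
observed = response(captured, candidates["copy_A"])

# Verification inspects test responses only, without opening the chip.
match = [name for name, fp in candidates.items()
         if response(captured, fp) == observed]
print(match)  # ['copy_A']
```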
Software Defined Networking (SDN) presents a unique opportunity to manage and orchestrate cloud networks. Educational institutions, like many other industries, face a lot of security threats. We have established an SDN-enabled Demilitarized Zone (DMZ)—a Science DMZ—to serve as a testbed for securing the ASU Internet2 environment. The Science DMZ allows researchers to conduct in-depth analysis of security attacks and take the necessary countermeasures using an SDN-based command and control (C&C) center. Demo URL: https://www.youtube.com/watch?v=8yo2lTNV3r4
Lehigh University has set a goal to implement System Center Configuration Manager by the end of 2017. The project is being spearheaded by one of our Senior Computing Consultants, who has researched and been trained in the Microsoft virtualization stack. We will discuss our roadmaps, results from our proof-of-concept environments, and the discussions driving this project.
Cloud data centers are critical infrastructures for delivering cloud services. Although the security and performance of cloud data centers have been well studied in the past, their networking aspects have been overlooked. Current network infrastructures in cloud data centers limit the ability of cloud providers to offer guaranteed network resources to users. To meet the security and performance requirements defined in the service level agreement (SLA) between cloud user and provider, cloud providers need the ability to provision network resources dynamically and on the fly. The main challenge cloud providers face in utilizing network resources can be addressed by provisioning virtual networks that support information-centric services, separating the control plane from the cloud infrastructure. In this paper, we propose an SDN-based information-centric cloud framework to provision network resources in support of the elastic demands of cloud applications, depending on SLA requirements. The framework decouples the control plane and the data plane: a conceptually centralized control plane controls and manages a fully distributed data plane, and computes paths that ensure the security and performance of the network. We report an initial experiment on the average round-trip delay between consumers and producers.
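How the framework's controller computes paths is not given in the abstract; as a minimal sketch of centralized, controller-side path selection, assuming link weights that could encode delay or a security cost (the topology and weights below are illustrative):

```python
import heapq

# Hypothetical topology; weights might encode delay or security cost.
graph = {
    "consumer": {"sw1": 1.0},
    "sw1": {"consumer": 1.0, "sw2": 2.0, "sw3": 1.5},
    "sw2": {"sw1": 2.0, "producer": 1.0},
    "sw3": {"sw1": 1.5, "producer": 3.0},
    "producer": {"sw2": 1.0, "sw3": 3.0},
}

def shortest_path(graph, src, dst):
    """Plain Dijkstra, as a centralized controller might run it."""
    queue, seen = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

print(shortest_path(graph, "consumer", "producer"))
# (4.0, ['consumer', 'sw1', 'sw2', 'producer'])
```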