Biblio
The Internet of Things (IoT) is the latest Internet evolution that interconnects billions of devices, such as cameras, sensors, RFIDs, smart phones, wearable devices, OBD-II dongles, etc. Federations of such IoT devices (or things) provide the information needed to solve many important problems that were previously too difficult to address. Despite these great benefits, privacy in IoT remains a major concern, in particular as the number of things increases. This heightens the need for highly scalable and computationally efficient mechanisms that prevent unauthorised access to, and disclosure of, sensitive information generated by things. In this paper, we address this need by proposing a lightweight, yet highly scalable, data obfuscation technique. For this purpose, a digital watermarking technique is used to control the perturbation of sensitive data so that legitimate users can de-obfuscate the perturbed data. To enhance the scalability of our solution, we also introduce a contextualisation service that achieves real-time aggregation and filtering of IoT data for a large number of designated users. We then assess the effectiveness of the proposed technique in a health-care scenario involving data streamed from various wearable and stationary sensors capturing health data, such as heart rate and blood pressure. An analysis of the experimental results, which illustrate the unconstrained scalability of our technique, concludes the paper.
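As a rough illustration of watermark-controlled perturbation (a minimal sketch, not the paper's exact scheme), the code below derives a keyed pseudorandom sequence that is added to each sensor reading before release; a legitimate user holding the shared key regenerates the same sequence and subtracts it to recover the original values. All names, the key, and the noise scale are hypothetical.

```python
import hmac, hashlib, struct

def _noise(key: bytes, index: int, scale: float) -> float:
    """Derive a deterministic, key-dependent perturbation for sample `index`."""
    digest = hmac.new(key, struct.pack(">Q", index), hashlib.sha256).digest()
    # Map the first 8 bytes of the MAC to a value in [-scale, +scale].
    fraction = int.from_bytes(digest[:8], "big") / float(2 ** 64)
    return (2.0 * fraction - 1.0) * scale

def obfuscate(samples, key: bytes, scale: float = 5.0):
    """Perturb a stream of readings with a keyed 'watermark' sequence."""
    return [x + _noise(key, i, scale) for i, x in enumerate(samples)]

def deobfuscate(perturbed, key: bytes, scale: float = 5.0):
    """A legitimate user holding `key` regenerates the sequence and removes it."""
    return [x - _noise(key, i, scale) for i, x in enumerate(perturbed)]

if __name__ == "__main__":
    heart_rate = [72.0, 75.0, 80.0, 78.0]      # hypothetical readings
    key = b"shared-secret"
    published = obfuscate(heart_rate, key)
    recovered = deobfuscate(published, key)
    assert all(abs(a - b) < 1e-9 for a, b in zip(heart_rate, recovered))
```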
It is well-known that online services resort to various cookies to track users through users' online service identifiers (IDs) - in other words, when users access online services, various "fingerprints" are left behind in the cyberspace. As they roam around in the physical world while accessing online services via mobile devices, users also leave a series of "footprints" - i.e., hints about their physical locations - in the physical world. This poses a potent new threat to user privacy: one can potentially correlate the "fingerprints" left by users in the cyberspace with the "footprints" left in the physical world to infer and reveal leakage of user physical world privacy, such as frequent user locations or mobility trajectories in the physical world - we refer to this problem as user physical world privacy leakage via user cyberspace privacy leakage. In this paper we address the following fundamental question: what kind - and how much - of user physical world privacy might be leaked if we could get hold of such diverse network datasets, even without any physical location information? In order to conduct an in-depth investigation of this question, we utilize the network data collected via a DPI system at the routers within one of the largest Internet operators in Shanghai, China over a duration of one month. We decompose the fundamental question into three problems: i) linkage of various online user IDs belonging to the same person via mobility pattern mining; ii) physical location classification via aggregate user mobility patterns over time; and iii) tracking user physical mobility. By developing novel and effective methods for solving each of these problems, we demonstrate that user physical world privacy leakage via user cyberspace privacy leakage is not hypothetical, but indeed poses a real and potent threat to user privacy.
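To make the ID-linkage step concrete, the following sketch (a simplification, not the paper's method) reduces each online ID to a histogram over (location cell, hour-of-day) observations and proposes that two IDs belong to the same person when their histograms are highly similar. The service names, cell identifiers, and similarity threshold are all hypothetical.

```python
from collections import Counter
from math import sqrt

def mobility_signature(records):
    """records: iterable of (cell_id, hour) observations attributed to one online ID."""
    return Counter(records)

def cosine(sig_a, sig_b):
    dot = sum(sig_a[k] * sig_b.get(k, 0) for k in sig_a)
    na = sqrt(sum(v * v for v in sig_a.values()))
    nb = sqrt(sum(v * v for v in sig_b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_ids(signatures, threshold=0.8):
    """Link two online IDs when their mobility signatures are sufficiently similar."""
    ids = list(signatures)
    return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if cosine(signatures[a], signatures[b]) >= threshold]

sigs = {
    "service_A_id_123": mobility_signature([("cell_17", 9), ("cell_17", 9), ("cell_42", 20)]),
    "service_B_id_9":   mobility_signature([("cell_17", 9), ("cell_42", 20)]),
    "other_user":       mobility_signature([("cell_3", 14)]),
}
print(link_ids(sigs))   # -> [('service_A_id_123', 'service_B_id_9')]
```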
Decoy Routing, the use of routers (rather than end hosts) as proxies, is a new direction in anti-censorship research. Decoy Routers (DRs), placed in Autonomous Systems, proxy traffic from users; so the adversary, e.g. a censorious government, attempts to avoid them. It is quite difficult to place DRs so the adversary cannot route around them – for example, we need the cooperation of 850 ASes to contain China alone [1]. In this paper, we consider a different approach. We begin by noting that DRs need not intercept all the network paths from a country, just those leading to Overt Destinations, i.e. unfiltered websites hosted outside the country (usually popular ones, so that client traffic to the OD does not make the censor suspicious). Our first question is – How many ASes are required for installing DRs to intercept a large fraction of paths from e.g. China to the top-n websites (as per Alexa)? How does this number grow with n? To our surprise, the same few ($\approx$ 30) ASes intercept over 90% of paths to the top n sites worldwide, for n = 10, 20...200 and also to other destinations. Investigating further, we find that this result fits perfectly with the hierarchical model of the Internet [2]; our first contribution is to demonstrate with real paths that the number of ASes required for a world-wide DR framework is small ($\approx$ 30). Further, censor nations' attempts to filter traffic along the paths transiting these 30 ASes will not only block their own citizens, but also others residing in foreign ASes. Our second contribution in this paper is to consider the details of DR placement: not just in which ASes DRs should be placed to intercept traffic, but exactly where in each AS. We find that even with our small number of ASes, we still need a total of about 11,700 DRs. We conclude that, even though a DR system involves far fewer ASes than previously thought, it is still a major undertaking. For example, the routers currently required cost over 10.3 billion USD, so if Decoy Routing at line speed requires all-new hardware, the cost alone would make such a project infeasible for most actors (but not for major nation states).
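The question "how few ASes intercept most paths" can be illustrated with a greedy set-cover selection over AS-level paths, as in the sketch below. This is only a toy version of the measurement (the paper works with real traceroute/BGP paths); the path data and target fraction here are hypothetical.

```python
def greedy_dr_placement(paths, target_fraction=0.9):
    """paths: list of AS-level paths (lists of AS numbers) from clients to overt
    destinations. Greedily pick ASes until the chosen set intersects the desired
    fraction of paths."""
    uncovered = [set(p) for p in paths]
    chosen, covered = [], 0
    needed = target_fraction * len(paths)
    while covered < needed and uncovered:
        # Count how many still-uncovered paths each AS appears on.
        counts = {}
        for path in uncovered:
            for asn in path:
                counts[asn] = counts.get(asn, 0) + 1
        best = max(counts, key=counts.get)
        chosen.append(best)
        covered += sum(1 for p in uncovered if best in p)
        uncovered = [p for p in uncovered if best not in p]
    return chosen

# Toy AS-level paths; two chosen ASes suffice to intercept all three of them.
paths = [[4134, 3356, 15169], [4134, 1299, 16509], [4837, 3356, 15169]]
print(greedy_dr_placement(paths))
```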
An operating system kernel written in the Rust language would have extremely fine-grained isolation boundaries, have no memory leaks, and be safe from a wide range of security threats and memory bugs. Previous efforts towards this end concluded that writing a kernel requires changing Rust. This paper reaches a different conclusion: no changes to Rust are needed, and a kernel can be implemented with a very small amount of unsafe code. It describes how three sample kernel mechanisms - DMA, USB, and buffer caches - can be built in this way.
Artificial software diversity is an effective way to prevent software vulnerabilities and errors from being exploited in code-reuse attacks. This is achieved by lowering the individual probability of a successful attack to a level that makes the attack infeasible. Unfortunately, existing approaches are not applicable to safety-critical real-time systems, as they induce unacceptable performance overheads, violate safety and timing guarantees, or assume hardware resources which are typically not available in embedded systems. To overcome these problems, we propose a safe diversity approach that preserves the timing properties of real-time processes by controlling the impact of diversification on the worst-case execution time (WCET). Our main idea is to use block-level diversity with a large but fixed set of movable instruction sequences, and to use static WCET analysis to identify non-critical areas of code where the code can safely be split into additional movable instruction sequences.
Trustworthy and safe operation of power grid critical infrastructures relies on the secure execution of low-level substation controller devices such as programmable logic controllers (PLCs). Currently, very few security protection solutions are deployed on these devices to ensure provenance control: that the controller code executed on the device was developed by trusted parties and complies with safety/security policies defined by the code developer as well as the power grid operators. Resource-limited PLCs have become increasingly popular targets not only for legitimate system operators but also for malicious adversaries, as demonstrated most recently by the Stuxnet and BlackEnergy malware, which caused damage such as unauthorized violations of infrastructure safety and integrity. We present PLCtrust, a domain-specific solution that dynamically deploys virtual micro security-perimeters, so-called capsules, along with the corresponding device-level runtime enforcement of power system safety policies. PLCtrust uses data taint analysis to monitor and control data flow among the capsules based on data owner-defined policies. PLCtrust provides operators with a transparent and lightweight solution to address various safety-critical data protection requirements. PLCtrust also provides legitimate third-party controller code developers with a taint-aware programming interface to develop applications in compliance with the dynamic power system safety/security policies. Our experimental results in real-world settings show that PLCtrust is transparent to end-users while maintaining power grid safety with minimal performance overhead.
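The flavour of owner-defined flow control can be illustrated with a tiny taint-tracking sketch (purely illustrative; PLCtrust's capsules, policy language, and runtime are not shown here, and all names below are hypothetical): values derived from a tainted source keep their owner tag, and a policy check decides whether they may cross into another capsule.

```python
class Tainted:
    """Minimal taint wrapper: values derived from a tainted source stay tainted."""
    def __init__(self, value, owner):
        self.value, self.owner = value, owner
    def __add__(self, other):
        v = other.value if isinstance(other, Tainted) else other
        return Tainted(self.value + v, self.owner)

def write_output(dest_capsule, data, policy):
    """Enforce an owner-defined policy before data crosses a capsule boundary."""
    if isinstance(data, Tainted) and dest_capsule not in policy.get(data.owner, ()):
        raise PermissionError(f"flow from {data.owner} to {dest_capsule} denied")
    return data.value if isinstance(data, Tainted) else data

policy = {"breaker_state": {"protection_logic"}}       # owner -> allowed capsules
reading = Tainted(1, owner="breaker_state")
write_output("protection_logic", reading + 0, policy)  # allowed by the policy
# write_output("logging", reading, policy)             # would raise PermissionError
```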
The exploitation of Industrial Control Systems (ICSs) has been described as both easy and impossible; where is the truth? Post-Stuxnet work has included a plethora of ICS-focused cyber security research activities, with topics covering device maturity, network protocols, and overall cyber security culture. We often hear the notion of ICSs being highly vulnerable due to a lack of inbuilt security mechanisms, considered low-hanging fruit for a variety of low-skilled threat actors. While there is substantial evidence to support such a notion, when considering targeted attacks on ICSs, it is hard to believe that an attacker with limited resources, such as a script kiddie or hacktivist, using publicly accessible tools and exploits alone, would have adequate knowledge and resources to achieve targeted operational process manipulation while simultaneously evading detection. Through use of a testbed environment, this paper provides two practical examples based on a Man-In-The-Middle scenario, demonstrating the types of information an attacker would need to obtain, collate, and comprehend in order to begin targeted process manipulation and detection avoidance. This allows for a clearer view of the associated challenges, and illustrates why targeted ICS exploitation might not be possible for every malicious actor.
Developing Big Data Analytics workloads often involves trial-and-error debugging, due to the unclean nature of datasets or wrong assumptions made about data. When errors (e.g., program crashes, outlier results, etc.) arise, developers are often interested in identifying a subset of the input data that is able to reproduce the problem. BigSift is a new faulty data localization approach that combines insights from automated fault isolation in software engineering and data provenance in database systems to find a minimum set of failure-inducing inputs. BigSift redefines data provenance for the purpose of debugging using a test oracle function and implements several unique optimizations, specifically geared towards the iterative nature of automated debugging workloads. BigSift improves the accuracy of fault localizability by several orders of magnitude ($\sim 10^3$ to $10^7\times$) compared to Titian data provenance, and improves performance by up to 66× compared to Delta Debugging, an automated fault-isolation technique. For each faulty output, BigSift is able to localize fault-inducing data within 62% of the original job running time.
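For intuition about what "minimizing failure-inducing inputs under a test oracle" means, here is a simplified, complement-only variant of delta debugging (not BigSift's Spark-based implementation or its provenance optimizations): it repeatedly throws away chunks of the input as long as the oracle still reports a failure, and the oracle below is a hypothetical example.

```python
def ddmin(records, fails):
    """Shrink `records` to a smaller subset that still makes `fails` return True.
    `fails(subset)` is the test oracle (e.g. 'the job crashes or yields an outlier')."""
    n = 2
    while len(records) >= 2:
        chunk = max(1, len(records) // n)
        subsets = [records[i:i + chunk] for i in range(0, len(records), chunk)]
        reduced = False
        for i in range(len(subsets)):
            complement = [r for j, s in enumerate(subsets) if j != i for r in s]
            if fails(complement):
                # Keep the smaller failing input and restart with coarser chunks.
                records, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(records):
                break
            n = min(n * 2, len(records))   # refine the granularity and retry
    return records

# Example oracle: the job "fails" whenever a negative reading is present.
faulty = ddmin(list(range(-3, 50)), lambda rs: any(r < 0 for r in rs))
print(faulty)   # shrinks 53 records down to a single failure-inducing one
```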
Reliable operation of electrical power systems in the presence of multiple critical N - k contingencies is an important challenge for the system operators. Identifying all the possible N - k critical contingencies to design effective mitigation strategies is computationally infeasible due to the combinatorial explosion of the search space. This paper describes two heuristic algorithms based on the iterative pruning of the candidate contingency set to effectively and efficiently identify all the critical N - k contingencies resulting in system failure. These algorithms are applied to the standard IEEE-14 bus system, IEEE-39 bus system, and IEEE-57 bus system to identify multiple critical N - k contingencies. The algorithms are able to capture all the possible critical N - k contingencies (where 1 ≤ k ≤ 9) without missing any dangerous contingency.
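The combinatorial difficulty and the role of pruning can be seen in the sketch below, which enumerates outage sets level by level and stops growing any set that already causes failure. This is only an illustration of iterative pruning in general; the paper's two heuristics, its severity criteria, and the power-flow evaluation behind `causes_failure` are not reproduced here.

```python
from itertools import combinations  # noqa: F401  (kept for exhaustive baselines)

def find_critical_contingencies(components, causes_failure, max_k=3):
    """Enumerate N-k outage sets up to size max_k, pruning supersets of sets that
    already cause failure (they are reported once and not expanded further).
    `causes_failure(outage_set)` would wrap a power-flow / cascading-failure check."""
    critical = []
    frontier = [frozenset([c]) for c in components]
    for k in range(1, max_k + 1):
        survivors = []
        for outage in frontier:
            if causes_failure(outage):
                critical.append(outage)        # critical: no need to grow it further
            else:
                survivors.append(outage)       # keep as a seed for (k+1)-outages
        # Extend only the non-critical candidates by one more component.
        frontier = {s | {c} for s in survivors for c in components if c not in s}
    return critical

# Toy example: the system fails whenever both lines "L1" and "L2" are out.
lines = ["L1", "L2", "L3", "L4"]
print(find_critical_contingencies(lines, lambda out: {"L1", "L2"} <= out, max_k=2))
```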
A mobile ad-hoc network (MANET) consists of various wireless mobile nodes that can communicate with each other without any centralized administrator or fixed network infrastructure. Nodes transmit data to each other by establishing paths through intermediate nodes. However, due to node mobility, a malicious node can easily enter the network and harm it by dropping data packets; this type of attack is called a gray hole attack. A mechanism for detecting and preventing this type of attack is proposed in this paper. Simulations are carried out using a network simulator to report the difficulties of preventing and detecting multiple gray hole attacks in a MANET. Particle Swarm Optimization is used: owing to the ad-hoc nature of the network, it observes the changing values of each node, and if a value becomes infinite the node is considered attacked and other nodes are prevented from sending data to it. We present possible solutions to protect the network: first, finding more than one route to transmit packets to the destination; second, delivering packets with minimum time delay. The simulation shows higher throughput, lower time delay, and fewer packet drops.
Software Defined Networking (SDN) is a paradigm shift that changes the working principles of IP networks by separating the control logic from routers and switches, and logically centralizing it within a controller. In this architecture the control plane (controller) communicates with the data plane (switches) through a control channel using a standards-compliant protocol, that is, OpenFlow. While having a centralized controller creates an opportunity to monitor and program the entire network, as a side effect, it causes the control plane to become a single point of failure. Denial of service (DoS) attacks or even heavy control traffic conditions can easily become real threats to the proper functioning of the controller, which indirectly harms the entire network. In this paper, we propose a solution to reduce the control traffic generated primarily during table-miss events. We utilize the buffer_id feature of the OpenFlow protocol, which has been designed to identify individually buffered packets within a switch, reusing it to identify flows buffered as a series of packets during a table miss, which happens when there is no rule in the switch flow tables that matches the received packet. Thus, we allow the OpenFlow switch to send only the first packet of a flow to the controller on a table miss, while buffering the rest of the packets in the switch memory until the controller responds or a timeout occurs. The test results show that OpenFlow traffic is significantly reduced when the proposed method is used.
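The buffering behaviour can be pictured with a small toy model (not OpenFlow agent or controller code; the buffer_id bookkeeping is abstracted into a per-flow key, and the callbacks and timeout are hypothetical): only the first packet of a missing flow is sent upward, later packets are held until a rule arrives or the entry expires.

```python
import time

class TableMissBuffer:
    """Toy model of the proposed switch behaviour: one packet-in per flow,
    remaining packets of that flow held in switch memory."""
    def __init__(self, send_packet_in, timeout=2.0):
        self.send_packet_in = send_packet_in   # callback toward the controller
        self.timeout = timeout
        self.pending = {}                      # flow_key -> (first_seen, [packets])

    def on_table_miss(self, flow_key, packet):
        if flow_key not in self.pending:
            self.pending[flow_key] = (time.monotonic(), [])
            self.send_packet_in(flow_key, packet)      # only the first packet goes up
        else:
            self.pending[flow_key][1].append(packet)   # buffer the rest locally

    def on_flow_installed(self, flow_key, forward):
        _, queued = self.pending.pop(flow_key, (None, []))
        for pkt in queued:
            forward(pkt)                               # drain via the newly installed rule

    def expire(self):
        now = time.monotonic()
        for key in [k for k, (t, _) in self.pending.items() if now - t > self.timeout]:
            self.pending.pop(key)                      # timeout: drop buffered packets
```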
We present a novel approach to proving the absence of timing channels. The idea is to partition the program's execution traces in such a way that each partition component is checked for timing attack resilience by a time complexity analysis and that per-component resilience implies the resilience of the whole program. We construct a partition by splitting the program traces at secret-independent branches. This ensures that any pair of traces with the same public input has a component containing both traces. Crucially, the per-component checks can be normal safety properties expressed in terms of a single execution. Our approach is thus in contrast to prior approaches, such as self-composition, that aim to reason about multiple (k ≥ 2) executions at once. We formalize the above as an approach called quotient partitioning, generalized to any k-safety property, and prove it to be sound. A key feature of our approach is a demand-driven partitioning strategy that uses a regex-like notion called trails to identify sets of execution traces, particularly those influenced by tainted (or secret) data. We have applied our technique in a prototype implementation tool called Blazer, based on WALA, PPL, and the brics automaton library. We have proved timing-channel freedom of (or synthesized an attack specification for) 24 programs written in Java bytecode, including 6 classic examples from the literature and 6 examples extracted from the DARPA STAC challenge problems.
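To make the target property concrete, here is a small illustrative pair of checks (not taken from Blazer or its benchmarks): the first leaks the secret through its running time because it exits at the first mismatching byte, while the second runs the loop to completion, so traces that agree on the public input (the guess length) take essentially the same time.

```python
def leaky_check(secret: bytes, guess: bytes) -> bool:
    """Early exit: running time depends on how many leading bytes match the secret."""
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_check(secret: bytes, guess: bytes) -> bool:
    """Accumulate differences so the loop always runs to completion: any two traces
    with the same public input (the guess length) have the same timing behaviour."""
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g
    return diff == 0
```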
In vehicular networks, privacy, especially vehicle location privacy, is a major concern. Several pseudonym-based privacy protection mechanisms have been established and standardized in the past few years by IEEE and ETSI. However, vehicular networks are still vulnerable to the Sybil attack. In this paper, a Sybil attack detection method based on the k-Nearest Neighbours (kNN) classification algorithm is proposed. In this method, vehicles are classified based on the similarity in their driving patterns. Furthermore, the high runtime complexity of the kNN method is also optimized. The simulation results show that our detection method can reach a high detection rate while keeping the error rate low.
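A minimal sketch of kNN-based detection follows, assuming each node is summarized by a driving-pattern feature vector and that labelled training examples are available; the synthetic features below are hypothetical stand-ins, not the paper's feature set or data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical features: each row summarizes one node's driving pattern
# (e.g., a flattened speed/position similarity profile); 1 = Sybil, 0 = benign.
rng = np.random.default_rng(0)
benign = rng.normal(loc=0.0, scale=1.0, size=(100, 8))
sybil = rng.normal(loc=2.5, scale=1.0, size=(100, 8))   # fabricated identities move alike
X = np.vstack([benign, sybil])
y = np.array([0] * 100 + [1] * 100)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X, y)

suspect = rng.normal(loc=2.5, scale=1.0, size=(1, 8))
print("Sybil" if clf.predict(suspect)[0] == 1 else "benign")
```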
Vehicular networks have been drawing special attention in recent years, due to their importance in enhancing the driving experience and improving road safety in future smart cities. In the past few years, several security services, based on cryptography, PKI, and pseudonyms, have been standardized by IEEE and ETSI. However, vehicular networks are still vulnerable to various attacks, especially the Sybil attack. In this paper, a Support Vector Machine (SVM) based Sybil attack detection method is proposed. We present three SVM kernel function based classifiers to distinguish malicious nodes from benign ones by evaluating the variance in their Driving Pattern Matrices (DPMs). The effectiveness of our proposed solution is evaluated through extensive simulations based on the SUMO simulator and MATLAB. The results show that the proposed detection method can achieve a high detection rate with a low error rate, even under a dynamic traffic environment.
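The three-kernel comparison can be sketched as below, using synthetic stand-ins for flattened DPM features (the paper's simulations use SUMO/MATLAB, not this data); the kernels, feature dimensions, and class distributions here are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical flattened Driving Pattern Matrices; the Sybil class is given a
# slight mean shift and lower variance, since fabricated identities report
# nearly identical trajectories.
benign = rng.normal(loc=0.0, scale=1.5, size=(150, 16))
sybil = rng.normal(loc=1.0, scale=0.3, size=(150, 16))
X = np.vstack([benign, sybil])
y = np.array([0] * 150 + [1] * 150)

for kernel in ("linear", "poly", "rbf"):
    scores = cross_val_score(SVC(kernel=kernel, gamma="scale"), X, y, cv=5)
    print(f"{kernel:>6}: cross-validated accuracy {scores.mean():.2f}")
```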
Vehicular ad hoc networks (VANETs) are designed to provide traffic safety by exploiting inter-vehicular communications. Vehicles build awareness of the traffic in their surroundings using information broadcast by other vehicles, such as speed, location, and heading, to proactively avoid collisions. The effectiveness of these VANET traffic safety applications is particularly dependent on the accuracy of the location information advertised by each vehicle. Therefore, traffic safety can be compromised when Sybil attackers maliciously advertise false locations or when other inaccurate GPS readings are sent. The most effective way to detect a Sybil attack, or to correct the noise in GPS readings, is to localize vehicles based on the physical features of their transmission signals. Current localization techniques are either designed for networks whose nodes are immobile or suffer from inaccuracy in high-interference environments. In this paper, we present an RSSI-based localization technique that uses mobile nodes to localize another mobile node and adjusts itself based on the heterogeneous interference levels in the environment. We show via simulation that our localization mechanism is more accurate than other mechanisms and more resistant to environments with high interference and mobility.
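A baseline RSSI multilateration step is sketched below: RSSI values are converted to distances with a log-distance path-loss model and the position is recovered by nonlinear least squares. The interference-adaptive part of the paper's technique is not modelled, and the transmit power and path-loss exponent are assumed values.

```python
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi_dbm, tx_power_dbm=-30.0, path_loss_exp=2.7):
    """Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def localize(anchors, rssi_readings):
    """anchors: (k, 2) positions of neighbouring vehicles; rssi_readings: their k RSSI values."""
    dists = np.array([rssi_to_distance(r) for r in rssi_readings])
    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - dists
    guess = anchors.mean(axis=0)
    return least_squares(residuals, guess).x

# Noise-free toy scenario: four neighbours at known positions localize a node at (20, 30).
anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
true_pos = np.array([20.0, 30.0])
rssi = [-30.0 - 10 * 2.7 * np.log10(np.linalg.norm(a - true_pos)) for a in anchors]
print(localize(anchors, np.array(rssi)))   # approximately [20, 30]
```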
Embry-Riddle Aeronautical University (ERAU) is working with the Air Force Research Lab (AFRL) to develop a distributed multi-layer autonomous UAS planning and control technology for gathering intelligence in Anti-Access Area Denial (A2/AD) environments populated by intelligent adaptive adversaries. These resilient autonomous systems are able to navigate through hostile environments while performing Intelligence, Surveillance, and Reconnaissance (ISR) tasks, and minimizing the loss of assets. Our approach incorporates artificial life concepts, with a high-level architecture divided into three biologically inspired layers: cyber-physical, reactive, and deliberative. Each layer has a dynamic level of influence over the behavior of the agent. Algorithms within the layers act on a filtered view of reality, abstracted in the layer immediately below. Each layer takes input from the layer below, provides output to the layer above, and provides direction to the layer below. Fast-reactive control systems in lower layers ensure a stable environment supporting cognitive function on higher layers. The cyber-physical layer represents the central nervous system of the individual, consisting of elements of the vehicle that cannot be changed such as sensors, power plant, and physical configuration. On the reactive layer, the system uses an artificial life paradigm, where each agent interacts with the environment using a set of simple rules regarding wants and needs. Information is communicated explicitly via message passing and implicitly via observation and recognition of behavior. In the deliberative layer, individual agents look outward to the group, deliberating on efficient resource management and cooperation with other agents. Strategies at all layers are developed using machine learning techniques such as Genetic Algorithm (GA) or NN applied to system training that takes place prior to the mission.
In this paper, we propose a new regularization scheme for the well-known Support Vector Machine (SVM) classifier that operates on the training sample level. The proposed approach is motivated by the fact that Maximum Margin-based classification defines decision functions as a linear combination of the selected training data and, thus, the variations on training sample selection directly affect generalization performance. We show that the exploitation of the proposed regularization scheme is well motivated and intuitive. Experimental results show that the proposed regularization scheme outperforms standard SVM in human action recognition tasks as well as classical recognition problems.
Code smells may be introduced in software due to market rivalry, deadline pressure at work, improper functioning, or the limited skills or inexperience of software developers. Code smells indicate problems in design or code which make software hard to change and maintain. Detecting code smells can reduce developers' effort, resources, and the cost of the software. Many researchers have proposed techniques such as DETEX for detecting code smells, but these have limited precision and recall. To overcome these limitations, a new technique named SVMCSD is proposed for the detection of code smells, based on the support vector machine learning technique. Four code smells are targeted, namely God Class, Feature Envy, Data Class, and Long Method, and the proposed technique is validated on two open source systems, ArgoUML and Xerces. The accuracy of SVMCSD is found to be better than that of DETEX in terms of two metrics, precision and recall, when applied to a subset of a system. When considering the entire system, SVMCSD detects more occurrences of code smells than DETEX.
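The general shape of SVM-based smell detection can be sketched as follows, assuming each class is described by a vector of software metrics and labelled examples exist; the metric set, the tiny training data, and the God Class labels below are hypothetical and not SVMCSD's actual features or corpus.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-class metric vectors: [lines_of_code, num_methods, num_attributes, cohesion]
X = np.array([
    [120,  8,  4, 0.70], [340, 15,  9, 0.55], [ 90,  5,  3, 0.80],
    [2100, 64, 40, 0.10], [1750, 51, 33, 0.15], [2600, 72, 45, 0.08],
])
y = np.array([0, 0, 0, 1, 1, 1])   # 1 = God Class smell, 0 = clean

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
model.fit(X, y)

candidate = np.array([[1900, 58, 37, 0.12]])
print("God Class" if model.predict(candidate)[0] == 1 else "clean")
```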
Existing work on three-dimensional (3D) hardware security focuses on leveraging the unique 3D characteristics to address the supply chain attacks that exist in 2D design. However, 3D ICs introduce specific and unexplored challenges as well as new opportunities for managing hardware security. In this paper, we analyze new security threats unique to 3D ICs. The corresponding attack models are summarized for future research. Furthermore, existing representative countermeasures, including split manufacturing, camouflaging, transistor locking, techniques against thermal-signal-based side-channel attacks, and the network-on-chip based shielding plane (NoCSIP), are reviewed and categorized against the different hardware threats. Moreover, preliminary countermeasures are proposed to thwart TSV-based hardware Trojan insertion attacks.
Information security policies are not easy to create unless organizations explicitly recognize the various steps required in the development process of an information security policy, especially in institutions of higher education that make extensive use of IT. An improper development process, or security policy content copied from another organization, may also fail to do an effective job, for example at addressing non-compliance with applicable rules and regulations, even if the replicated policy is properly developed, referenced, cited in laws or regulations, and interpreted correctly. A generic framework is proposed to improve and establish the development process of security policies in institutions of higher education. Content analysis and cross-case analysis methods were used in this study in order to gain a thorough understanding of the information security policy development process in institutions of higher education.
This paper proposes a hybrid metric sorting (HMS) method for successive cancellation list decoders for polar codes; metric sorting plays a critical role in the decoding process. We review state-of-the-art metric sorting methods and combine their advantages to generate the proposed method. Due to the optimized architecture, the proposed HMS method effectively reduces the number of comparison stages with little increase in the number of comparisons. Evaluation results show that about 25 percent of the comparison stages can be removed by HMS compared with state-of-the-art methods. The proposed method thus reduces latency in hardware implementations.
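For context, the functional role of metric sorting in a successive cancellation list decoder is sketched below: at each bit, the L surviving paths are expanded into 2L candidates and only the L smallest path metrics are kept. This shows the selection problem the HMS architecture accelerates, not the HMS comparator network itself; the penalty values are hypothetical stand-ins for |LLR|-based metric updates.

```python
import numpy as np

def prune_paths(path_metrics, penalties, L):
    """One list-pruning step of an SC list decoder: each current path is expanded with
    the bit value that agrees with its LLR (metric unchanged) and the one that
    disagrees (metric increased by the penalty); the L smallest candidates survive."""
    candidates = np.concatenate([path_metrics, path_metrics + penalties])
    # Partial selection (argpartition) instead of a full sort reflects the point of
    # optimized metric sorting: the decoder only needs the L best, not a total order.
    keep = np.argpartition(candidates, L - 1)[:L]
    return candidates[keep], keep

L = 8
metrics = np.zeros(L)                                          # all paths start with metric 0
penalties = np.abs(np.random.default_rng(0).normal(size=L))    # stand-in for |LLR| values
metrics, survivors = prune_paths(metrics, penalties, L)
```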