Biblio
The past ten years have seen increasing calls to make security research more “scientific”. On the surface, most agree that this is desirable, given universal recognition of “science” as a positive force. However, we find that there is little clarity on what “scientific” means in the context of computer security research, or consensus on what a “Science of Security” should look like. We selectively review work in the history and philosophy of science and more recent work under the label “Science of Security”. We explore what has been done under the theme of relating science and security, put this in context with historical science, and offer observations and insights we hope may motivate further exploration and guidance. Among our findings are that practices on which the rest of science has reached consensus appear little used or recognized in security, and that a pattern of methodological errors continues unaddressed.
Though controversial, surveillance activities are increasingly performed for security reasons. However, such activities are extremely privacy-intrusive, which is seen as a necessary side-effect to ensure the success of such operations. In this paper, we propose an accountability-aware protocol designed for surveillance purposes. It relies on a strong incentive for a surveillance organisation to register its activity with a data protection authority. We first elicit a list of accountability requirements, then provide an architecture showing the interaction of the different parties involved, and finally propose an accountability-aware protocol, which is formally specified in the applied pi calculus. We use the ProVerif tool to automatically verify that the protocol respects confidentiality, integrity and authentication properties.
Mobile applications have grown from knowing basic personal information to knowing intimate details of consumers' lives. The explosion of knowledge that applications contain and share can be attributed to many factors. Mobile devices are equipped with advanced sensors, including GPS and cameras, while storing large amounts of personal information, including photos and contacts. With millions of applications available to install, personal data is at constant risk of being misused. While mobile operating systems provide basic security and privacy controls, they are insufficient, leaving the consumer unaware of how applications use the permissions they were granted. In this paper, we propose a solution that aims to make consumers aware of applications misusing data and of policies that can protect their data. From this investigation we present SPEProxy. SPEProxy utilizes a knowledge-based approach to give consumers the ability to understand how applications use permissions beyond their stated intent. Additionally, SPEProxy provides awareness of fine-grained policies that allow the user to protect their data. SPEProxy is device and mobile operating system agnostic: it requires neither a specific device or operating system nor modification to the operating system or applications. This allows consumers to use the solution without a high degree of technical expertise. We evaluated SPEProxy across 817 of the most popular applications in the iOS App Store and Google Play. In our evaluation, SPEProxy was highly effective for 86.55% of applications, with several well-known applications found to misuse granted permissions.
SEAndroid is a mandatory access control (MAC) framework that can confine faulty applications on Android. Nevertheless, the effectiveness of SEAndroid enforcement depends on the employed policy. The growing complexity of Android makes it difficult for policy engineers to have complete domain knowledge of every system functionality. As a result, policy engineers sometimes craft over-permissive and ineffective policy rules, which unfortunately increase the attack surface of the Android system and have allowed multiple real-world privilege escalation attacks. We propose SPOKE, an SEAndroid Policy Knowledge Engine, that systematically extracts domain knowledge from rich-semantic functional tests and further uses the knowledge to characterize the attack surface of SEAndroid policy rules. Our attack surface analysis proceeds in two steps: 1) it reveals policy rules that cannot be justified by the collected domain knowledge; 2) it identifies potentially over-permissive access patterns allowed by those unjustified rules as the attack surface. We evaluate SPOKE using 665 functional tests targeting 28 different categories of functionalities developed by the Samsung Android team. SPOKE successfully collected 12,491 access patterns for the 28 categories as domain knowledge, and used this knowledge to reveal 320 unjustified policy rules and 210 over-permissive access patterns defined by those rules, including one related to the notorious libstagefright vulnerability. These findings have been confirmed by policy engineers.
We provide an analysis of IEEE standard P1735, which describes methods for encrypting electronic-design intellectual property (IP), as well as the management of access rights for such IP. We find a surprising number of cryptographic mistakes in the standard. In the most egregious cases, these mistakes enable attack vectors that allow us to recover the entire underlying plaintext IP. Some of these attack vectors are well known, e.g. padding-oracle attacks. Others are new, and are made possible by the need to support the typical uses of the underlying IP; in particular, the need for commercial system-on-chip (SoC) tools to synthesize multiple pieces of IP into a fully specified chip design and to report syntax errors. We exploit these mistakes in a variety of ways, leveraging a commercial SoC tool as a black-box oracle. In addition to recovering entire plaintext IP, we show how to produce standard-compliant ciphertexts of IP that have been modified to include targeted hardware Trojans, for example IP that correctly implements the AES block cipher on all but one (arbitrary) plaintext, which instead induces the block cipher to return the secret key. We outline a number of other attacks that the standard allows, including attacks on the cryptographic mechanism for IP licensing. Unfortunately, we show that obvious "quick fixes" to the standard (and the tools that support it) do not stop all of our attacks. This suggests that the standard requires a significant overhaul, and that IP authors using P1735 encryption should consider themselves at risk.
Software-Defined Networking (SDN) allows for fast reactions to security threats by dynamically enforcing simple forwarding rules as countermeasures. However, in classic SDN all the intelligence resides at the controller, with the switches only capable of performing stateless forwarding as ruled by the controller. It follows that the controller, in addition to network management and control duties, must collect and process any piece of information required to take advanced (stateful) forwarding decisions. This threatens both to overload the controller and to congest the control channel. Stateful SDN, on the other hand, is a new concept developed both to improve reactivity and to offload the controller and the control channel by delegating local processing to the switches. In this paper, we adopt this stateful paradigm to protect end-hosts from Distributed Denial of Service (DDoS) attacks. We propose StateSec, a novel approach based on in-switch processing capabilities to detect and mitigate DDoS attacks. StateSec monitors packets matching configurable traffic features (e.g., source/destination IP and port) without resorting to the controller. By feeding an entropy-based algorithm with these monitoring features, StateSec detects and mitigates several threats such as (D)DoS and port scans with high accuracy. We implemented StateSec and compared it with a state-of-the-art approach to traffic monitoring in SDN. We show that StateSec is more efficient: it achieves very accurate detection levels while limiting the control-plane overhead.
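To illustrate the entropy-based detection idea (not the StateSec implementation itself), the sketch below computes the Shannon entropy of one monitored traffic feature over a window and flags windows that deviate from an assumed baseline; the feature choice, baseline value and threshold are hypothetical.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_anomalous(window_values, baseline_entropy, threshold=1.0):
    """Flag a window whose entropy deviates from the baseline by more than `threshold` bits.
    A sharp entropy drop in destination IPs, for instance, may indicate a DDoS target."""
    return abs(shannon_entropy(window_values) - baseline_entropy) > threshold

# Hypothetical usage: destination IPs observed in one monitoring window.
window = ["10.0.0.5"] * 950 + ["10.0.0.%d" % i for i in range(50)]
baseline = 6.0  # assumed entropy of normal traffic (bits)
print(is_anomalous(window, baseline))  # True: traffic concentrates on one destination
```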
In recent years, the nonparallel support vector machine (NPSVM) has been proposed as a nonparallel hyperplane classifier with superior performance to the standard SVM and to existing nonparallel classifiers such as the twin support vector machine (TWSVM). With solid theoretical underpinnings and great practical success, NPSVM has been used to deal with classification tasks at different scales. Tackling large-scale classification problems remains a challenging yet significant task. Although the large-scale linear NPSVM model has already been solved efficiently by the dual coordinate descent (DCD) algorithm or the alternating direction method of multipliers (ADMM), in this paper we present a new strategy, different from existing work, to solve the primal form of linear NPSVM. Our algorithm is designed in the framework of stochastic gradient descent (SGD), which is well suited to large-scale problems. Experiments are conducted on five large-scale data sets to confirm the effectiveness of our method.
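As a rough illustration of the SGD framework the authors adopt (not their exact NPSVM objective), the sketch below applies stochastic sub-gradient updates to a regularized hinge-loss objective; the loss and the Pegasos-style step size are simplifying assumptions.

```python
import numpy as np

def sgd_hinge(X, y, lam=0.01, epochs=10, seed=0):
    """Stochastic sub-gradient descent for a regularized hinge loss:
    min_w  lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i * w.x_i).
    A simplified stand-in for the NPSVM primal, updated one sample at a time."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # Pegasos-style decreasing step size
            margin = y[i] * X[i].dot(w)
            grad = lam * w - (y[i] * X[i] if margin < 1 else 0)
            w -= eta * grad
    return w

# Hypothetical usage on a tiny synthetic binary problem.
X = np.vstack([np.random.randn(50, 2) + 2, np.random.randn(50, 2) - 2])
y = np.array([1] * 50 + [-1] * 50)
w = sgd_hinge(X, y)
print((np.sign(X.dot(w)) == y).mean())  # training accuracy
```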
Cyber risk management largely reduces to a race for information between defenders of ICT systems and attackers. Defenders can gain advantage in this race by sharing cyber risk information with each other. Yet, they often exchange less information than is socially desirable, because sharing decisions are guided by selfish rather than altruistic reasons. A growing line of research studies these strategic aspects that drive defenders’ sharing decisions. The present survey systematizes these works in a novel framework. It provides a consolidated understanding of defenders’ strategies to privately or publicly share information and enables us to distill trends in the literature and identify future research directions. We reveal that many theoretical works assume cyber risk information sharing to be beneficial, while empirical validations are often missing.
Intellectual property is inextricably linked to the innovative development of mass innovation spaces. The integrated development of intellectual property and mass innovation spaces will fundamentally support the new economic model of “mass entrepreneurship and innovation”. As such, it is critical to explore intellectual property service standards for mass innovation spaces and to steer mass innovation spaces toward the creation of an intellectual property service system catering to “makers”. It is also crucial to explore intellectual property cluster management innovations for mass innovation spaces.
With the growing popularity of cloud computing, cloud storage has received more and more attention as an emerging network storage technology extended and developed from cloud computing concepts. The cloud computing environment depends on user services such as high-speed storage and retrieval provided by the cloud computing system. Meanwhile, data security is an urgent problem for cloud storage technology. In recent years, malicious attacks on cloud storage systems have become more and more frequent, and data leaks from cloud storage systems have also occurred repeatedly. Cloud storage security concerns the security of users' data. The purpose of this paper is to achieve data security in cloud storage and to formulate a corresponding cloud storage security policy. Combining the results of existing academic research, we analyze the security risks to user data in cloud storage and examine the relevant security technologies, based on the structural characteristics of cloud storage systems.
Situational awareness during sophisticated cyber attacks on the power grid is critical for the system operator to perform suitable attack response and recovery functions that ensure grid reliability. The overall theme of this paper is to identify existing practical issues and challenges that utilities face while monitoring substations, and to suggest potential approaches to enhance the situational awareness of grid operators. We provide a broad discussion of the various gaps that exist in the utility industry today in monitoring substations, and of how those gaps could be addressed by identifying the data sources and monitoring tools that improve situational awareness. The paper also briefly describes the advantages of contextualizing and correlating substation monitoring alerts using expert systems at the control center to obtain a holistic, systems-level view of potentially malicious cyber activity at the substations before it impacts grid operations.
Vehicular networks have been drawing special attention in recent years due to their importance in enhancing the driving experience and improving road safety in future smart cities. In the past few years, several security services based on cryptography, PKI and pseudonyms have been standardized by IEEE and ETSI. However, vehicular networks remain vulnerable to various attacks, especially the Sybil attack. In this paper, a Support Vector Machine (SVM) based Sybil attack detection method is proposed. We present classifiers based on three SVM kernel functions to distinguish malicious nodes from benign ones by evaluating the variance in their Driving Pattern Matrices (DPMs). The effectiveness of our proposed solution is evaluated through extensive simulations based on the SUMO simulator and MATLAB. The results show that the proposed detection method can achieve a high detection rate with a low error rate, even under a dynamic traffic environment.
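A minimal sketch of the classification step, under the assumption that each node is summarized by a feature vector derived from its Driving Pattern Matrix (the features and synthetic data below are hypothetical): train SVMs with the three kernel types and compare their cross-validated accuracy.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dpm_features(dpm):
    """Summarize a node's Driving Pattern Matrix (DPM) with simple statistics;
    the abstract relies on DPM variance, the extra statistics are assumptions."""
    return np.array([dpm.var(), dpm.mean(), dpm.max() - dpm.min()])

# Hypothetical data: Sybil nodes (label 1) forge positions, so their DPMs vary less.
rng = np.random.default_rng(0)
benign = [rng.normal(0, 5, (10, 10)) for _ in range(100)]
sybil = [rng.normal(0, 1, (10, 10)) for _ in range(100)]
X = np.array([dpm_features(m) for m in benign + sybil])
y = np.array([0] * 100 + [1] * 100)

for kernel in ("linear", "rbf", "poly"):   # three kernel functions, as in the paper
    score = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(kernel, round(score, 3))
```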
Data deduplication has attracted many cloud service providers (CSPs) as a way to reduce storage costs. Even though the general deduplication approach has been increasingly accepted, it comes with many security and privacy problems due to the outsourced data delivery models of cloud storage. To deal with these security and privacy issues, secure deduplication techniques have been proposed for cloud data, leading to a diverse range of solutions and trade-offs. Hence, in this article, we discuss ongoing research on secure deduplication for cloud data in consideration of the attack scenarios exploited most widely in cloud storage. On the basis of a classification of deduplication systems, we explore security risks and attack scenarios from both inside and outside adversaries. We then describe state-of-the-art secure deduplication techniques for each approach that deal with different security issues under specific or combined threat models, including both cryptographic and protocol solutions. We discuss and compare each scheme in terms of security and efficiency specific to different security goals. Finally, we identify and discuss unresolved issues and further research challenges for secure deduplication in cloud storage.
In cognitive radio networks with mobile terminals, it is not enough for spectrum sensing only to determine whether a primary user (PU) occupies the spectrum band. Sometimes we also want more prior information, such as the number of PUs, which can help estimate their carrier frequencies, directions of arrival, and locations. In this paper, a machine learning based method is proposed to estimate the number of primary users. In the proposed method, a support vector machine (SVM) is used to estimate the number of primary users, while a genetic algorithm (GA) optimizes the parameters of the SVM kernel. The first feature is the ratio of the element sum to the trace of the sample covariance matrix, and the second feature is the mean of the Gerschgorin radii. The simulation results show that our proposed SVM-GA algorithm has higher accuracy than SVM alone.
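The two SVM input features are concrete enough to sketch: the snippet below computes the ratio of the element sum to the trace of the sample covariance matrix and the mean of the Gerschgorin radii from a block of received samples. Taking magnitudes and the sample dimensions are assumptions, not the paper's exact definitions.

```python
import numpy as np

def covariance_features(samples):
    """Compute the two feature classes fed to the SVM:
    (1) ratio of the sum of all elements of the sample covariance matrix to its trace,
    (2) mean of the Gerschgorin radii (row-wise sums of off-diagonal magnitudes).
    `samples` is an (antennas x observations) array of received signals."""
    R = np.cov(samples)                              # sample covariance matrix
    ratio = np.abs(R).sum() / np.trace(np.abs(R))
    radii = np.abs(R).sum(axis=1) - np.abs(np.diag(R))
    return ratio, radii.mean()

# Hypothetical usage: 8 antennas, 1000 snapshots of (noise-only) observations.
rng = np.random.default_rng(1)
samples = rng.normal(size=(8, 1000))
print(covariance_features(samples))
```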
The Internet of Things (IoT) is an emerging paradigm in information technology (IT) that integrates advancements in sensing, computing and communication to offer enhanced services in everyday life. IoT systems are vulnerable to Sybil attacks, wherein an adversary fabricates fictitious identities or steals the identities of legitimate nodes. In this paper, we model Sybil attacks in IoT and evaluate their impact on performance. We also develop a defense mechanism based on behavioural profiling of nodes, and build an enhanced AODV (EAODV) protocol that uses this behavioural approach to obtain optimal routes. In EAODV, routes are selected based on trust value and hop count, and Sybil nodes are identified and discarded based on feedback from neighbouring nodes. Evaluation of our protocol in the ns-2 simulator demonstrates the effectiveness of our approach in identifying and detecting Sybil nodes in an IoT network.
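A minimal sketch of the route-selection and trust-feedback logic described above; the scoring formula, weighting and clamping are assumptions rather than the EAODV specification.

```python
def route_score(trust, hop_count, alpha=0.7):
    """Combine a route's trust value (0..1) and hop count into one score;
    the weighting `alpha` and the hop normalization are assumed, not from the paper."""
    return alpha * trust - (1 - alpha) * hop_count / 10.0

def select_route(routes):
    """Pick the candidate route with the highest combined score."""
    return max(routes, key=lambda r: route_score(r["trust"], r["hops"]))

def update_trust(trust, feedbacks, step=0.1):
    """Adjust a node's trust from neighbour feedback (+1 good, -1 bad), clamped to [0, 1];
    nodes whose trust falls low would be flagged as Sybil candidates and discarded."""
    return min(1.0, max(0.0, trust + step * sum(feedbacks)))

# Hypothetical candidate routes and neighbour feedback.
routes = [{"trust": 0.9, "hops": 6}, {"trust": 0.6, "hops": 3}]
print(select_route(routes))
print(update_trust(0.5, [-1, -1, +1]))
```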
Quantum Key Distribution (QKD) is a revolutionary technology which leverages the laws of quantum mechanics to distribute cryptographic keying material between two parties with theoretically unconditional security. Terrestrial QKD systems are limited to distances of less than 200 km in both optical fiber and line-of-sight free-space configurations due to severe losses during single photon propagation and the curvature of the Earth. Thus, the feasibility of fielding a low Earth orbit (LEO) QKD satellite to overcome this limitation is being explored. Moreover, in August 2016, the Chinese Academy of Sciences successfully launched the world's first QKD satellite. However, many of the practical engineering performance and security tradeoffs associated with space-based QKD are not well understood for global secure key distribution. This paper presents several system-level considerations for modeling and studying space-based QKD architectures and systems. More specifically, this paper explores the behaviors and requirements that researchers must examine to develop a model for studying the effectiveness of QKD between LEO satellites and ground stations.
The applications of 3D Virtual Environments are taking giant leaps with more sophisticated 3D user interfaces and immersive technologies. Interactive 3D and Virtual Reality platforms present a great opportunity for data analytics and can represent large amounts of data to support human decision making and insight. For any of the above to be effective, it is essential to understand the characteristics of these interfaces in displaying different types of content. Text is an essential and widespread type of content, and legibility acts as an important criterion in determining the style, size and quantity of the text to be displayed. This study evaluates the maximum amount of text per visual angle, that is, the maximum density of text that remains legible in a virtual environment displayed on different platforms. We used Extensible 3D (X3D) to provide portable (cross-platform) stimuli. The results presented here are based on a user study conducted in DeepSix (a tiled LCD display with 5750×2400 resolution) and the Hypercube (an immersive CAVE-style active stereo projection system with three walls and a floor at 2560×2560 pixels of active stereo per wall). We found that more legible text can be displayed on the immersive projection due to its larger Field of Regard; in the immersive case, stereo versus monoscopic rendering did not have a significant effect on legibility.
In our era, most communication between people takes the form of electronic messages, especially through smart mobile devices. As such, the written text exchanged suffers from poor punctuation, misspelled words, continuous chunks of several words without spaces, tables, internet addresses, etc., which make traditional text analytics methods difficult or impossible to apply without serious effort to clean the dataset. The method proposed in this paper can work on massive, noisy and scrambled texts with minimal preprocessing, by removing special characters and spaces to create a continuous string and detecting all repeated patterns very efficiently using the Longest Expected Repeated Pattern Reduced Suffix Array (LERP-RSA) data structure and a variant of the All Repeated Patterns Detection (ARPaD) algorithm. Meta-analyses of the results can further assist a digital forensics investigator in detecting important information in the chunk of text analyzed.
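The preprocessing step is simple enough to sketch: special characters and spaces are stripped to form one continuous string, and a naive scan then lists repeated substrings. The scan is only a stand-in for LERP-RSA/ARPaD, which perform this detection far more efficiently and exhaustively.

```python
import re
from collections import defaultdict

def preprocess(text):
    """Keep only alphanumerics and lowercase them, producing one continuous string,
    matching the minimal cleaning step described in the abstract."""
    return re.sub(r"[^a-z0-9]", "", text.lower())

def repeated_patterns(s, min_len=3, min_count=2):
    """Naive scan for substrings of length `min_len` that occur at least `min_count`
    times; a simple stand-in for the LERP-RSA/ARPaD machinery."""
    positions = defaultdict(list)
    for i in range(len(s) - min_len + 1):
        positions[s[i:i + min_len]].append(i)
    return {p: idx for p, idx in positions.items() if len(idx) >= min_count}

# Hypothetical noisy message text.
noisy = "meet me @ 9!! meet me @ the dock,meetme"
print(repeated_patterns(preprocess(noisy)))
```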
Modern vehicles are opening up, with wireless interfaces such as Bluetooth integrated in order to enable comfort and safety features. Furthermore, a plethora of aftermarket devices introduce additional connectivity which contributes to the driving experience. This connectivity opens the vehicle to potentially malicious attacks, which could have negative consequences with regard to safety. In this paper, we survey vehicles with Bluetooth connectivity from a threat intelligence perspective to gain insight into conditions during real-world driving. We do this in two ways: firstly, by examining the Bluetooth implementation in vehicles and gathering information from inside the cabin, and secondly, by war-nibbling (general monitoring and scanning for nearby devices). We find that as vehicle age decreases, the security (relatively speaking) of the Bluetooth implementation increases, but that there is still some technological lag with regard to Bluetooth implementation in vehicles. We also find that a large proportion of vehicles and aftermarket devices still use legacy pairing (and are therefore more insecure), and that these vehicles remain visible for sufficient time to mount an attack (assuming some premeditation and preparation). We demonstrate a real-world threat scenario as an example of the latter. Finally, we provide some recommendations on how the security risks we discover could be mitigated.
The concept of Extension Headers, newly introduced with IPv6, is elusive and enables new types of threats on the Internet. Simply dropping all traffic containing any Extension Header - a current practice among operators - seemingly is an effective solution, but at the cost of possibly dropping legitimate traffic as well. To determine whether threats indeed occur, and to evaluate the actual nature of the traffic, measurement solutions need to be adapted. By implementing the required parsing capabilities in flow exporters and performing measurements on two different production networks, we show that it is feasible to quantify the metrics directly related to these threats, and thus allow for monitoring and detection. Analysing the traffic hidden behind Extension Headers, we find mostly benign traffic that directly affects end-user QoE: simply dropping all traffic containing Extension Headers is thus a bad practice, with more consequences than operators might be aware of.
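The parsing capability in question amounts to walking the IPv6 next-header chain. A minimal sketch follows, using the standard extension-header numbers and length encodings (AH lengths are in 4-octet units, the others in 8-octet units) and stopping at ESP, whose contents are encrypted; the example packet bytes are hypothetical.

```python
# Minimal sketch of walking an IPv6 extension-header chain from the bytes that follow
# the fixed 40-byte IPv6 header. Header numbers follow IANA; per-header semantics are
# ignored, only the chain and the final upper-layer protocol are recovered.
EXT_HEADERS = {0: "Hop-by-Hop", 43: "Routing", 44: "Fragment",
               51: "AH", 60: "Destination Options", 135: "Mobility"}

def walk_extension_headers(first_next_header, payload):
    """Return the list of extension headers present and the final upper-layer protocol."""
    chain, nh, offset = [], first_next_header, 0
    while nh in EXT_HEADERS and offset + 2 <= len(payload):
        chain.append(EXT_HEADERS[nh])
        next_nh, hdr_len = payload[offset], payload[offset + 1]
        if nh == 51:                      # AH: length in 4-octet units, minus 2
            size = (hdr_len + 2) * 4
        else:                             # generic: 8-octet units, not counting first 8
            size = (hdr_len + 1) * 8
        nh, offset = next_nh, offset + size
    return chain, nh                      # ESP (50) is not walked: payload is encrypted

# Hypothetical packet: Hop-by-Hop (8 bytes), then Fragment (8 bytes), then TCP (6).
payload = bytes([44, 0] + [0] * 6 + [6, 0] + [0] * 6)
print(walk_extension_headers(0, payload))   # (['Hop-by-Hop', 'Fragment'], 6)
```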
A novel optical fiber sensing network is proposed to eliminate the effect of multiple fiber failures. Simulation results show that if the number of breakpoints in each subnet is less than four, the optical routing paths can be reset to avoid those breakpoints by changing the status of optical switches in the remote nodes.
We consider Delay Tolerant Mobile Social Networks (DTMSNs), made of wireless nodes with intermittent connections and clustered into social communities. The lack of infrastructure and the reliance on nodes' mobility make routing a challenge. Network Coding (NC) is a generalization of routing and has been shown to bring a number of advantages over it. We consider the problem of pollution attacks in these networks, which are a very important issue both for NC and for DTMSNs. Our first contribution is a protocol which controls the adversary's capacity by combining cryptographic hash dissemination and error correction to ensure message recovery at the receiver. Our second contribution is a model of the performance of such a protection scheme. To do so, we adapt an inter-session NC model based on a fluid approximation of the dissemination process, and we provide a numerical validation of the model. We are eventually able to provide a workflow to set the correct parameters and counteract the attacks. We conclude by highlighting how these contributions can help secure a real-world DTMSN application (e.g., a smartphone app).
Network-connected embedded systems are growing on a large scale as a critical part of the Internet of Things, and these systems are at risk from increasing malware. Anomaly-based detection methods can detect malware in embedded systems effectively and offer the advantage of detecting zero-day exploits relative to signature-based methods, but existing approaches incur significant performance overheads and are susceptible to mimicry attacks. In this article, we present a formal runtime security model that defines normal system behavior, including execution sequence and execution timing. The anomaly detection method utilizes on-chip hardware to non-intrusively monitor system execution through the trace port of the processor and detect malicious activity at runtime. We further analyze the properties of the timing distribution of control flow events, and select a subset of monitoring targets using three selection metrics to meet hardware constraints. The detection method is evaluated on a network-connected pacemaker benchmark prototyped on an FPGA and simulated in SystemC, with several mimicry attacks implemented at different levels. The resulting detection rate and false positive rate, under constraints on the number of monitored events supported in the on-chip hardware, demonstrate the good performance of our approach.
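As a toy illustration of timing-based anomaly checking for a single monitored control-flow event (not the paper's hardware model), the sketch below learns a normal timing range from training traces and flags runtime timings outside it; the threshold and the simple mean/standard-deviation model are assumptions.

```python
import statistics

class TimingMonitor:
    """Learn a normal timing range for one monitored control-flow event from training
    traces, then flag runtime timings outside mean +/- k standard deviations.
    The paper instead derives its model from hardware trace-port measurements."""
    def __init__(self, training_timings, k=3.0):
        self.mean = statistics.mean(training_timings)
        self.std = statistics.stdev(training_timings)
        self.k = k

    def is_anomalous(self, timing):
        return abs(timing - self.mean) > self.k * self.std

# Hypothetical cycle counts between two monitored basic blocks of a pacemaker task.
normal = [100, 103, 98, 101, 99, 102, 100, 97, 104, 100]
monitor = TimingMonitor(normal)
print(monitor.is_anomalous(101))   # False: consistent with training traces
print(monitor.is_anomalous(250))   # True: e.g. injected code inflates the path timing
```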
In the age of IoT, as more and more devices connect to the internet through wireless networks, a better security infrastructure is required to protect these devices from massive attacks. SSIDs and passwords have long been used to authenticate and secure Wi-Fi networks, but the SSID and password combination is vulnerable to security exploits like phishing and brute-forcing. In this paper, a completely automated Wi-Fi authentication system is proposed that generates Time-based One-Time Passwords (TOTP) to secure Wi-Fi networks. This approach aims to black-box the process of connecting to a Wi-Fi network for the user and the process of generating periodic secure passwords for the network, without human intervention.
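The TOTP derivation itself is standardized in RFC 6238; a minimal sketch follows (HMAC-SHA1 with dynamic truncation). How the resulting password is provisioned to the access point and clients is the proposed system's own design and is not shown here; the shared secret below is a placeholder.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, timestamp=None):
    """RFC 6238 time-based one-time password from a base32 shared secret:
    HMAC-SHA1 over the time-step counter, followed by dynamic truncation."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((timestamp if timestamp is not None else time.time()) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Hypothetical shared secret provisioned between the authentication server and the AP.
print(totp("JBSWY3DPEHPK3PXP"))
```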
Redundant capacity in filesystem timestamps has recently been proposed in the literature as an effective means of information hiding and data leakage. Here, we evaluate the steganographic capabilities of such channels and propose techniques to aid digital forensics investigations in identifying and detecting manipulated filesystem timestamps. Our findings indicate that different storage media and interfaces exhibit different timestamp creation patterns. Such differences can be utilized to characterize file source media and increase the analysis capabilities of the incident response process.
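A small sketch of the kind of timestamp inspection involved, assuming a POSIX filesystem exposing nanosecond timestamps: collect the access, modification and metadata-change times and note whether the sub-second part looks suspiciously round. The zero-subsecond heuristic is illustrative only; the paper characterizes creation patterns per storage medium and interface.

```python
import os
from datetime import datetime, timezone

def timestamp_profile(path):
    """Collect access/modification/metadata-change times with nanosecond precision
    and flag sub-second parts that are exactly zero, a pattern that naive timestamp
    forgery often leaves behind. Heuristic only, not the paper's characterization."""
    st = os.stat(path)
    profile = {}
    for name, ns in (("atime", st.st_atime_ns), ("mtime", st.st_mtime_ns),
                     ("ctime", st.st_ctime_ns)):
        sub_ns = ns % 1_000_000_000
        profile[name] = {
            "utc": datetime.fromtimestamp(ns / 1e9, tz=timezone.utc).isoformat(),
            "sub_second_ns": sub_ns,
            "zero_subsecond": sub_ns == 0,
        }
    return profile

# Hypothetical usage: profile this script's own timestamps.
print(timestamp_profile(__file__))
```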