Biblio
This paper concerns the role of human errors in early risk assessment in software project management. Researchers have recently begun to focus on human errors in early risk assessment of large software projects; statistics show them to be a major component of software problems, with over 80% of economic losses attributed to them. There has been comparatively little empirical research on the role of human errors in this context, particularly at the organizational level, largely because of reluctance to share information and statistics on security issues in online software applications. Grounded theory has been employed as the research methodology to investigate the root causes of human errors in online security risks. An open-ended question was asked of 103 information security experts around the globe, and the responses were used to develop a list of human error causes through open coding. The paper contributes to our understanding of the causes of human errors in information security contexts. It is also one of the first information security studies of its kind to use Strauss's and Glaser's grounded theory approaches together during the data collection phase to reach the required number of participant responses, and as such it makes a significant contribution to the field.
Remote attestation is a crucial security service particularly relevant to increasingly popular IoT (and other embedded) devices. It allows a trusted party (verifier) to learn the state of a remote, and potentially malware-infected, device (prover). Most existing approaches are static in nature and only check whether benign software is initially loaded on the prover. However, they are vulnerable to runtime attacks that hijack the application's control or data flow, e.g., via return-oriented programming or data-oriented exploits. As a concrete step towards more comprehensive runtime remote attestation, we present the design and implementation of Control-FLow ATtestation (C-FLAT) that enables remote attestation of an application's control-flow path, without requiring the source code. We describe a full prototype implementation of C-FLAT on Raspberry Pi using its ARM TrustZone hardware security extensions. We evaluate C-FLAT's performance using a real-world embedded (cyber-physical) application, and demonstrate its efficacy against control-flow hijacking attacks.
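A minimal sketch of path attestation (a simplification under our own naming, not the C-FLAT implementation): the prover folds each executed branch destination into a running hash, and the verifier compares the final value against digests precomputed for known-benign control-flow paths.

```python
import hashlib

def extend(auth, branch_dst):
    h = hashlib.blake2s()
    h.update(auth)                              # previous authenticator
    h.update(branch_dst.to_bytes(8, "little"))  # next branch target address
    return h.digest()

def attest(branch_destinations):
    auth = bytes(32)                            # fixed initial authenticator
    for dst in branch_destinations:
        auth = extend(auth, dst)
    return auth

benign_paths = {attest([0x1000, 0x1040, 0x1088])}   # verifier's database of good paths
report = attest([0x1000, 0x1040, 0x1088])           # prover's measurement of this run
print("path accepted" if report in benign_paths else "path rejected")
```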
Fixing a non-deadlock concurrency bug is a difficult job that sometimes introduces additional bugs and requires a long time. To overcome this difficulty and efficiently perform fixing jobs, engineers should have broad knowledge of various fix patterns, and the ability to select the most proper one among those patterns based on quantitative data gathered from real-world bug databases. In this paper, we provide a real-world characteristic study on the fixes of non-deadlock concurrency bugs to help engineers responsible for program maintenance. In particular, we examine various fix patterns and the factors that influence the selection of those patterns with respect to the preexistence of locks and failure types. Our results will provide useful information for engineers who write bug patches, and researchers who study efficient testing and fixing techniques.
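For illustration, one frequently reported fix pattern for atomicity violations is to wrap a check-then-act region in a lock; the sketch below is our own toy example, not drawn from the studied bug databases.

```python
import threading

balance = 100
balance_lock = threading.Lock()

def withdraw_buggy(amount):
    """Atomicity violation: another thread may run between the check and the update."""
    global balance
    if balance >= amount:
        balance -= amount

def withdraw_fixed(amount):
    """Fix pattern: protect the whole check-then-act region with a lock."""
    global balance
    with balance_lock:
        if balance >= amount:
            balance -= amount
```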
Security patterns are generic solutions that can be applied from the early stages of the software life cycle to overcome recurrent security weaknesses. Their generic nature and growing number make choosing among them difficult, even for experts in system design. To help with pattern choice, this paper proposes a semi-automatic classification methodology, and the resulting classification itself, which exposes relationships among software weaknesses, security principles and security patterns. It expresses which patterns remove a given weakness with respect to the security principles that have to be addressed to fix the weakness. The methodology is based on seven steps, which anatomize patterns and weaknesses into sets of more precise sub-properties that are associated through a hierarchical organization of security principles. These steps provide detailed justifications for the resulting classification and allow it to be upgraded. Without loss of generality, this classification has been established for Web applications and covers 185 software weaknesses, 26 security patterns and 66 security principles. Research supported by the industrial chair on Digital Confidence (http://confiance-numerique.clermont-universite.fr/index-en.html).
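As a rough illustration of how such a classification might be consumed by a designer, the sketch below (with made-up entries; the real classification covers 185 weaknesses, 66 principles and 26 patterns) maps a weakness to principles and principles to patterns.

```python
# Hypothetical excerpt of such a classification; entry names are illustrative only.
WEAKNESS_TO_PRINCIPLES = {
    "CWE-79 Cross-site Scripting": ["Validate inputs", "Encode outputs"],
    "CWE-89 SQL Injection": ["Validate inputs", "Parameterize queries"],
}
PRINCIPLE_TO_PATTERNS = {
    "Validate inputs": ["Input Validator", "Intercepting Validator"],
    "Encode outputs": ["Output Encoder"],
    "Parameterize queries": ["Secure Query Builder"],
}

def patterns_for(weakness):
    """Return the security patterns addressing the principles a weakness violates."""
    patterns = set()
    for principle in WEAKNESS_TO_PRINCIPLES.get(weakness, []):
        patterns.update(PRINCIPLE_TO_PATTERNS.get(principle, []))
    return sorted(patterns)

print(patterns_for("CWE-79 Cross-site Scripting"))
```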
Most insider attacks are carried out by people who have the knowledge and technical know-how to launch them. This topic has long been studied and many detection techniques have been proposed to deal with insider threats. This short paper summarizes and classifies insider threat detection techniques based on the strategies used for detection.
The function of query auto-completion in modern search engines is to help users formulate queries quickly and precisely. Conventional context-aware methods primarily rank candidate queries according to term- and query-level relationships to the context. However, most sessions are extremely short, and capturing search intent with such relationships becomes difficult when the context contains only a few queries. In this paper, we investigate the feasibility of discovering search intents within such short contexts for query auto-completion. The class distribution of the search session (i.e., issued queries and click behavior) is derived as the search intent. Several distribution-based features are proposed to estimate the proximity between candidates and search intents. Finally, we apply learning-to-rank to predict the user's intended query according to these features. Moreover, we design an ensemble model that combines the benefits of our proposed features and conventional term-based approaches. Extensive experiments have been conducted on the publicly available AOL search engine log. The experimental results demonstrate that our approach significantly outperforms six competitive baselines. Performance at different numbers of keystrokes is also evaluated, and an in-depth analysis is made to justify the usability of search intent classification for query auto-completion.
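A sketch of one plausible distribution-based feature (our formulation, not the paper's exact definition): the cosine proximity between a candidate query's class distribution and the intent distribution inferred from the short session context, which would then be fed to the learning-to-rank model.

```python
import math

def cosine(p, q):
    """Cosine similarity between two class distributions given as dicts."""
    dot = sum(p[c] * q.get(c, 0.0) for c in p)
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

session_intent = {"sports": 0.7, "news": 0.2, "shopping": 0.1}   # from issued queries and clicks
candidate_dist = {"sports": 0.6, "news": 0.3, "shopping": 0.1}   # candidate query's class distribution
feature = cosine(candidate_dist, session_intent)                 # one ranking feature
print(feature)
```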
Past generations of software developers were well on the way to building a software engineering mindset/gestalt, preferring tools and techniques that concentrated on safety, security, reliability, and code re-usability. Computing education reflected these priorities and was, to a great extent, organized around these themes, providing beginning software developers a basis for professional practice. In more recent times, economic and deadline pressures and the de-professionalism of practitioners have combined to drive a development agenda that retains little respect for quality considerations. As a result, we are now deep into a new and severe software crisis. Scarcely a day passes without news of either a debilitating data or website hack, or the failure of a mega-software project. Vendors, individual developers, and possibly educators can anticipate an equally destructive flood of malpractice litigation, for the argument that they systematically and recklessly ignored known best development practice of long standing is irrefutable. Yet we continue to instruct using methods and to employ development tools we know, or ought to know, are inherently insecure, unreliable, and unsafe, and that produce software of like ilk. The authors call for a renewed professional and educational focus on software quality, focusing on redesigned tools that enable and encourage known best practice, combined with reformed educational practices that emphasize writing human-readable, safe, secure, and reliable software. Practitioners can only deploy sound management techniques, appropriate tool choice, and best practice development methodologies such as thorough planning and specification, scope management, factorization, modularity, safety, appropriate team and testing strategies, if those ideas and techniques are embedded in the curriculum from the beginning. The authors have instantiated their ideas in the form of their highly disciplined new version of Niklaus Wirth's 1980s Modula-2 programming notation under the working moniker Modula-2 R10. They are now working on an implementation that will be released under a liberal open source license in the hope that it will assist in reforming the CS curriculum around a best practices core so as to empower would-be professionals with the intellectual and practical mindset to begin resolving the software crisis. They acknowledge there is no single software engineering silver bullet, but assert that professional techniques can be inculcated throughout a student's four-year university tenure, and if implemented in the workplace, these can greatly reduce the likelihood of multiplied IT failures at the hands of our graduates. The authors maintain that professional excellence is a necessary mindset, a habit of self-discipline that must be intentionally embedded in all aspects of one's education, and subsequently drive all aspects of one's practice, including, but by no means limited to, the choice and use of programming tools.
In the past decade, researchers have proposed various cloud storage integrity checking protocols to enable a cloud storage user to validate the integrity of the user's outsourced data. While the proposed solutions can in principle solve the cloud storage integrity checking problem, they are not sufficient for current cloud storage practices. In this position paper, we show the gaps between theoretical and practical cloud storage integrity checking solutions, through a categorization of existing solutions and an analysis of their underlying assumptions. To bridge the gap, we also call for practical cloud storage integrity checking solutions for three scenarios.
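A toy spot-checking exchange (ours, far simpler than the surveyed protocols and ignoring their storage- and bandwidth-efficiency goals) illustrates the basic integrity-checking interaction between a user and a cloud.

```python
import hmac, hashlib, os, random

key = os.urandom(32)
data_blocks = [os.urandom(4096) for _ in range(16)]                      # data to outsource
tags = [hmac.new(key, b, hashlib.sha256).digest() for b in data_blocks]  # small tags kept by the user

cloud_storage = list(data_blocks)   # what the cloud holds; the user deletes its local copy

def audit():
    """Challenge a random block and verify the cloud's response against the stored tag."""
    i = random.randrange(len(tags))
    returned_block = cloud_storage[i]            # cloud's response to the challenge
    recomputed = hmac.new(key, returned_block, hashlib.sha256).digest()
    return hmac.compare_digest(tags[i], recomputed)

print("block intact" if audit() else "integrity violation detected")
```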
Early Internet of Things (IoT) applications have been built around cloud-centric architectures in which information generated at the edge by the "things" is conveyed to and processed in a cloud infrastructure. These architectures centralise processing and decision-making in the data centre, assuming sufficient connectivity, bandwidth and latency. As applications of the Internet of Things extend to industrial and more demanding consumer applications, the assumptions underlying cloud-centric architectures start to be violated, since for several of these applications connectivity, bandwidth and latency to the data centre are a challenge. Fog and Mist computing have emerged as forms of "Cloud Computing" closer to the "Edge" and to the "Things" that should alleviate the connectivity, bandwidth and latency challenges faced by industrial and extremely demanding consumer Internet of Things applications. This keynote will (1) introduce Cloud, Fog and Mist Computing architectures for the Internet of Things, (2) motivate their need and explain their applicability with real-world use cases, and (3) assess their technological maturity and highlight the areas that require further academic and industrial research.
As digital objects become increasingly important in people's lives, people may need to understand the provenance, or lineage and history, of an important digital object, to understand how it was produced. This is particularly important for objects created from large, multi-source collections of personal data. As the metadata describing provenance, Provenance Data, is commonly represented as a labelled directed acyclic graph, the challenge is to create effective interfaces onto such graphs so that people can understand the provenance of key digital objects. This unsolved problem is especially challenging for the case of novice and intermittent users and complex provenance graphs. We tackle this by creating an interface based on a clustering approach. This was designed to enable users to view provenance graphs, and to simplify complex graphs by combining several nodes. Our core contribution is the design of a prototype interface that supports clustering and its analytic evaluation in terms of desirable properties of visualisation interfaces.
The increase in M2M use cases, the availability of narrowband spectrum with operators, and the need for very low-cost modems for M2M applications have led to discussions around what is called Cellular IoT (CIoT). In order to develop the CIoT network, discussions are focused on a new air interface that can leverage narrowband spectrum and enable low-cost modems that can be embedded into M2M/IoT devices. One key issue that arises in the development of a clean-slate CIoT network is coexistence with 4G networks. In this paper we explore architectures for CIoT and 4G network harmonization that also address the key requirement of using narrow channels for IoT on existing 4G networks, rather than only in a separate standalone CIoT system. We analyze the architectural implications for core network load in a tightly coupled CIoT-LTE architecture and propose an offload mechanism from LTE to CIoT cells.
Keys for symmetric cryptography are usually stored in RAM and therefore susceptible to various attacks, ranging from simple buffer overflows to leaks via cold boot, DMA or side channels. A common approach to mitigate such attacks is to move the keys to an external cryptographic token. For low-throughput applications like asymmetric signature generation, the performance of these tokens is sufficient. For symmetric, data-intensive use cases, like disk encryption on behalf of the host, the connecting interface to the token often is a serious bottleneck. In order to overcome this problem, we present CoKey, a novel concept for partially moving symmetric cryptography out of the host into a trusted detachable token. CoKey combines keys from both entities and securely encrypts initialization vectors on the token which are then used in the cryptographic operations on the host. This forces host and token to cooperate during the whole encryption and decryption process. Our concept strongly and efficiently binds encrypted data on the host to the specific token used for their encryption, while still allowing for fast operation. We implemented the concept using Linux hosts and the USB armory, a USB thumb drive sized ARM computer, as detachable crypto token. Our detailed performance evaluation shows that our prototype is easily fast enough even for data-intensive and performance-critical use cases like full disk encryption, thus effectively improving security for symmetric cryptography in a usable way.
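The sketch below captures the gist as we read it, heavily simplified and using the Python cryptography package (key sizes, the IV derivation and the CBC mode are our assumptions, not CoKey's exact construction): the token turns a sector number into an IV under its own key, and the host performs the bulk encryption with its key and that IV, so data can only be decrypted when both cooperate.

```python
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

token_key = os.urandom(16)   # never leaves the token
host_key = os.urandom(16)    # held by the host

def token_make_iv(sector_number):
    """Runs on the token: encrypt the sector number under the token key to derive the IV."""
    ecb = Cipher(algorithms.AES(token_key), modes.ECB(), backend=default_backend())
    enc = ecb.encryptor()
    return enc.update(sector_number.to_bytes(16, "little")) + enc.finalize()

def host_encrypt_sector(sector_number, plaintext):
    """Runs on the host: bulk CBC encryption using the token-derived IV."""
    iv = token_make_iv(sector_number)           # in CoKey this crosses the USB link
    cbc = Cipher(algorithms.AES(host_key), modes.CBC(iv), backend=default_backend())
    enc = cbc.encryptor()
    return enc.update(plaintext) + enc.finalize()

print(len(host_encrypt_sector(7, b"\x00" * 512)))   # encrypt a 512-byte sector
```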
With limited battery supply, power is a scarce commodity in wireless sensor networks. Thus, to prolong the lifetime of the network, it is imperative that the sensor resources are managed effectively. This task is particularly challenging in heterogeneous sensor networks for which decisions and compromises regarding sensing strategies are to be made under time and resource constraints. In such networks, a sensor has to reason about its current state to take actions that are deemed appropriate with respect to its mission, its energy reserve, and the survivability of the overall network. Sensor Management controls and coordinates the use of the sensory suites in a manner that maximizes the success rate of the system in achieving its missions. This article focuses on formulating and developing an autonomous energy-aware sensor management system that strives to achieve network objectives while maximizing its lifetime. A team-theoretic formulation based on the Belief-Desire-Intention (BDI) model and the Joint Intention theory is proposed as a mechanism for effective and energy-aware collaborative decision-making. The proposed system models the collective behavior of the sensor nodes using the Joint Intention theory to enhance sensors’ collaboration and success rate. Moreover, the BDI modeling of the sensor operation and reasoning allows a sensor node to adapt to the environment dynamics, situation-criticality level, and availability of its own resources. The simulation scenario selected in this work is the surveillance of the Waterloo International Airport. Various experiments are conducted to investigate the effect of varying the network size, number of threats, threat agility, environment dynamism, as well as tracking quality and energy consumption, on the performance of the proposed system. The experimental results demonstrate the merits of the proposed approach compared to the state-of-the-art centralized approach adapted from Atia et al. [2011] and the localized approach in Hilal and Basir [2015] in terms of energy consumption, adaptability, and network lifetime. The results show that the proposed approach has 12 × less energy consumption than that of the popular centralized approach.
In public clouds, an adversary can co-locate his or her virtual machines (VMs) with others on the same physical servers to launch attacks against integrity, confidentiality or availability. One important factor in decreasing the likelihood of this co-location attack is the VM placement strategy. However, a co-location-resistant strategy can compromise the cloud provider's resource optimization. The tradeoff between security and resource optimization is one of the most crucial challenges in cloud security. In this work we propose a placement strategy that decreases the co-location rate at the cost of VM startup time rather than resource optimization. We give a mathematical analysis to quantify the co-location resistance. The proposed strategy is evaluated against attacks abusing placement locality, where the attack and target VMs are launched simultaneously or within a short time window. Compared with the EC2 placement strategy, the most co-location-resistant strategy among existing public cloud providers, our strategy greatly decreases co-location attacks at the cost of a slight VM startup delay (relative to the actual VM startup delay at public cloud providers).
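A toy version of such a security/startup-time trade (our own illustration, not the paper's exact placement algorithm) refuses to place a new VM on any host that accepted another VM within a cooldown window, so an attacker who launches right after a target cannot exploit placement locality.

```python
import random, time

COOLDOWN_SECONDS = 30.0      # assumed parameter controlling the security/delay trade-off
last_placement = {}          # host id -> timestamp of the most recent placement

def place_vm(hosts, now=None):
    """Return a host for the new VM, or None to signal a deliberate startup delay."""
    now = time.time() if now is None else now
    eligible = [h for h in hosts
                if now - last_placement.get(h, float("-inf")) > COOLDOWN_SECONDS]
    if not eligible:
        return None          # caller retries later, paying startup time instead of co-locating
    host = random.choice(eligible)
    last_placement[host] = now
    return host

print(place_vm(["host-1", "host-2", "host-3"]))
```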
This article deals with color image steganalysis based on machine learning. The proposed approach enriches the features from the Color Rich Model by adding new features obtained by applying steerable Gaussian filters and then computing the co-occurrence of pixel pairs. Adding these new features to those obtained from Color Rich Models allows us to increase the detectability of hidden messages in color images. The Gaussian filters are angled in different directions to precisely compute the tangent of the gradient vector. Then, the gradient magnitude and the derivative of this tangent direction are estimated. This refined estimation method enables us to unearth the minor changes that occur in an image when a message is embedded. The efficiency of the proposed framework is demonstrated on three steganographic algorithms designed to hide messages in images: S-UNIWARD, WOW, and Synch-HILL. Each algorithm is tested using different payload sizes. The proposed approach is compared to three color image steganalysis methods based on computed features and Ensemble Classifier classification: the Spatial Color Rich Model, the CFA-aware Rich Model and the RGB Geometric Color Rich Model.
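The sketch below gives the flavour of the added residual (our paraphrase with NumPy/SciPy; filter parameters and the quantisation are assumptions): Gaussian derivative filters estimate the gradient of a channel, its direction and magnitude are derived, and co-occurrences of the quantised directions are accumulated as features.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_direction(channel, sigma=1.0):
    """Estimate gradient direction and magnitude with Gaussian derivative filters."""
    gx = gaussian_filter(channel, sigma, order=(0, 1))   # derivative along x
    gy = gaussian_filter(channel, sigma, order=(1, 0))   # derivative along y
    return np.arctan2(gy, gx), np.hypot(gx, gy)

channel = np.random.rand(64, 64)                         # stand-in for one colour channel
theta, magnitude = gradient_direction(channel)

# Toy co-occurrence of quantised directions between horizontal neighbours.
bins = np.clip(np.digitize(theta, np.linspace(-np.pi, np.pi, 9)) - 1, 0, 7)
cooc = np.zeros((8, 8))
np.add.at(cooc, (bins[:, :-1].ravel(), bins[:, 1:].ravel()), 1)
print(cooc.sum())                                        # 64 * 63 neighbour pairs
```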
In this paper, a dual-field elliptic curve cryptographic processor is proposed that supports arbitrary curves of up to 576 bits in both fields. In addition, based on an analysis of the characteristics of elliptic curve cryptographic algorithms, two heterogeneous function units are coupled with the processor for parallel finite-field operations. To reduce hardware complexity, a clustering technique is adopted in the processor. Finally, a fast Montgomery modular division algorithm and its implementation are proposed based on Kaliski's Montgomery modular inversion. Implemented in UMC 90-nm CMOS 1P9M technology, the proposed processor occupies 0.86 mm² and performs a scalar multiplication in 0.34 ms over 160-bit GF(p) and 0.22 ms over GF(2^160). Compared to other elliptic curve cryptographic processors, our design is advantageous in hardware efficiency while maintaining moderate speed.
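For reference, a plain-Python sketch of Kaliski's two-phase Montgomery modular inverse, the algorithm the division unit builds on (a software model only, not the proposed hardware datapath):

```python
def almost_montgomery_inverse(a, p):
    """Phase 1: return (r, k) with r = a^-1 * 2^k mod p."""
    u, v, r, s, k = p, a, 0, 1, 0
    while v > 0:
        if u % 2 == 0:
            u, s = u // 2, 2 * s
        elif v % 2 == 0:
            v, r = v // 2, 2 * r
        elif u > v:
            u, r, s = (u - v) // 2, r + s, 2 * s
        else:
            v, s, r = (v - u) // 2, s + r, 2 * r
        k += 1
    if r >= p:
        r -= p
    return p - r, k

def mod_inverse(a, p):
    """Phase 2: strip the 2^k factor by repeated halving modulo p."""
    r, k = almost_montgomery_inverse(a % p, p)
    for _ in range(k):
        r = r // 2 if r % 2 == 0 else (r + p) // 2
    return r

assert (mod_inverse(3, 7) * 3) % 7 == 1   # 3^-1 mod 7 == 5
```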
The metal-insulator transition (MIT) in strongly correlated oxides such as NbO2 has shown oscillatory behavior in recent experiments. In this work, an MIT-based two-terminal device is proposed as a compact oscillation neuron for parallel read operations from a resistive synaptic array. The weighted sum is represented by the frequency of the oscillation neuron. Compared to a complex CMOS integrate-and-fire neuron with tens of transistors, the oscillation neuron achieves a significant area reduction, thereby alleviating the column pitch-matching problem of the peripheral circuitry in resistive memories. First, the impact of MIT device characteristics on the weighted-sum accuracy is investigated when the oscillation neuron is connected to a single resistive synaptic device. Second, the array-level performance is explored when the oscillation neurons are connected to the resistive synaptic array. To address the interference of oscillation between columns in simple cross-point arrays, a 2-transistor-1-resistor (2T1R) array architecture is proposed at a negligible increase in array area. Finally, a circuit-level benchmark of the proposed oscillation neuron against the CMOS neuron is performed. At the single-neuron level, the oscillation neuron shows a >12.5X reduction in area. At the 128×128 array level, it shows reductions of ~4% in total area, >30% in latency, ~5X in energy and ~40X in leakage power, demonstrating its advantage for integration into resistive synaptic arrays for neuro-inspired computing.
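A back-of-the-envelope model of the read-out (our toy numbers and a first-order relaxation-oscillator approximation, not the simulated device): the column current is the dot product of the input pulses and synaptic conductances, and the neuron's oscillation frequency scales with that current.

```python
inputs = [1, 0, 1, 1]                       # binary input vector (read pulses applied to rows)
conductances = [2e-6, 5e-6, 1e-6, 4e-6]     # synaptic weights in siemens (assumed values)

V_READ, C_MEM, DELTA_V = 0.5, 1e-12, 0.3    # read voltage, node capacitance, voltage swing (assumed)

# Weighted sum appears as the total column current; the oscillator converts it to a frequency.
column_current = V_READ * sum(x * g for x, g in zip(inputs, conductances))
frequency = column_current / (C_MEM * DELTA_V)

print(f"weighted-sum current = {column_current:.2e} A, frequency ~ {frequency:.2e} Hz")
```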
Physical attacks, especially fault attacks, represent one of the major threats against embedded systems. In the state of the art, software countermeasures against fault attacks are applied either at the source-code level, where they are very likely to be removed at compilation time, or at the assembly level, where several transformations need to be performed on the assembly code, leading to significant overheads in both code size and execution time. This paper presents the use of compiler techniques to efficiently automate the application of software countermeasures against instruction-skip fault attacks. We propose a modified LLVM compiler that considers our security objectives throughout the compilation process. Experimental results illustrate the effectiveness of this approach, in terms of security and overhead compared to existing solutions, on AES implementations running on an ARM-based microcontroller.
Graph reordering is a powerful technique to increase the locality of the representations of graphs, which can be helpful in several applications. We study how the technique can be used to improve compression of graphs and inverted indexes. We extend the recent theoretical model of Chierichetti et al. (KDD 2009) for graph compression, and show how it can be employed for compression-friendly reordering of social networks and web graphs and for assigning document identifiers in inverted indexes. We design and implement a novel theoretically sound reordering algorithm that is based on recursive graph bisection. Our experiments show a significant improvement of the compression rate of graphs and indexes over existing heuristics. The new method is relatively simple and allows efficient parallel and distributed implementations, which is demonstrated on graphs with billions of vertices and hundreds of billions of edges.
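The skeleton below shows the recursive structure with a deliberately simplified split rule (sorting each part by the mean position of a vertex's neighbours); the actual algorithm instead refines each bisection to minimise an estimate of the compressed index size.

```python
def recursive_bisection(vertices, adj, pos):
    """Return the vertices in a new order; new IDs are positions in the returned list."""
    if len(vertices) <= 2:
        return list(vertices)
    key = lambda v: sum(pos[u] for u in adj[v]) / len(adj[v]) if adj[v] else pos[v]
    ordered = sorted(vertices, key=key)      # placeholder split heuristic
    half = len(ordered) // 2
    return (recursive_bisection(ordered[:half], adj, pos) +
            recursive_bisection(ordered[half:], adj, pos))

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
pos = {v: v for v in adj}                    # current vertex positions
order = recursive_bisection(list(adj), adj, pos)
new_id = {v: i for i, v in enumerate(order)} # reordered identifiers to feed the compressor
print(new_id)
```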
Cryptographic systems are vulnerable to random errors and injected faults. Soft errors can inadvertently occur in critical cryptographic modules, and attackers can inject faults into systems to retrieve embedded secrets. Different schemes have been developed to improve the security and reliability of cryptographic systems. As the new SHA-3 standard, the Keccak algorithm will be widely used in various cryptographic applications, and its implementation should be protected against random errors and injected faults. In this paper, we devise different parity checking methods to protect the operations of Keccak. Results show that our schemes can be easily implemented and can effectively protect the Keccak system against random errors and fault attacks.
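One concrete instance of the idea, sketched in Python (a simplified model of a single step, not the paper's full protection scheme): the column parities of the state after Keccak's theta step can be predicted from the input parities alone, so recomputing and comparing them flags injected single-bit faults.

```python
MASK = (1 << 64) - 1
rol = lambda x, n: ((x << n) | (x >> (64 - n))) & MASK

def theta(A):
    """Keccak theta step on a 5x5 state of 64-bit lanes."""
    C = [A[x][0] ^ A[x][1] ^ A[x][2] ^ A[x][3] ^ A[x][4] for x in range(5)]
    D = [C[(x - 1) % 5] ^ rol(C[(x + 1) % 5], 1) for x in range(5)]
    return [[A[x][y] ^ D[x] for y in range(5)] for x in range(5)]

def column_parities(A):
    return [A[x][0] ^ A[x][1] ^ A[x][2] ^ A[x][3] ^ A[x][4] for x in range(5)]

import os
A = [[int.from_bytes(os.urandom(8), "big") for _ in range(5)] for _ in range(5)]
C_in = column_parities(A)
# Each output column parity is C[x] ^ C[x-1] ^ rol(C[x+1], 1), predictable in advance.
predicted = [C_in[x] ^ C_in[(x - 1) % 5] ^ rol(C_in[(x + 1) % 5], 1) for x in range(5)]
B = theta(A)
# B[0][0] ^= 1 << 17   # uncomment to inject a single-bit fault and trip the check
assert column_parities(B) == predicted
```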
In this paper, we present a new kind of near-optimal double-layered syndrome-trellis codes (STCs) for spatial-domain steganography. The proposed STCs can hide longer messages, or improve security for messages of the same length, compared with previous double-layered STCs. In our scheme, a theoretical deduction allows us to divide the secret payload more precisely into two parts, which are embedded in the first and second layers of the cover, respectively, with binary STCs. When embedding the message, we favor realizing the double-layered embedding with ±1 modifications, but in order to further decrease the number of modifications and improve time efficiency, we allow a few pixels to be modified by ±2. Experimental results demonstrate that when these double-layered STCs are applied to adaptive steganographic algorithms, the embedding modifications become more concentrated and fewer in number, and consequently the security of the steganography is improved.
In this work, we constructively combine adaptive wormholes with channel-reciprocity based key establishment (CRKE), which has been proposed as a lightweight security solution for IoT devices and might be even more important for the 5G Tactile Internet and its embedded low-end devices. We present a new secret key generation protocol where two parties compute shared cryptographic keys under narrow-band multi-path fading models over a delayed digital channel. The proposed approach furthermore enables distance-bounding the key establishment process via the coherence time dependencies of the wireless channel. Our scheme is thoroughly evaluated both theoretically and practically. For the latter, we used a testbed based on the IEEE 802.15.4 standard and performed extensive experiments in a real-world manufacturing environment. Additionally, we demonstrate adaptive wormhole attacks (AWOAs) and their consequences on several physical-layer security schemes. Furthermore, we propose a countermeasure that minimizes the risk of AWOAs.
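A toy model of the key-generation core (our simulation with an ideal reciprocal channel plus Gaussian measurement noise and a single mean-threshold quantiser; a real CRKE protocol adds information reconciliation and privacy amplification):

```python
import random

def measure_channel(n, noise=0.05, seed=7):
    """Simulate reciprocal channel gains observed with independent measurement noise."""
    rng = random.Random(seed)
    true_gain = [rng.gauss(0, 1) for _ in range(n)]          # reciprocal fading gains
    alice = [g + rng.gauss(0, noise) for g in true_gain]     # Alice's noisy RSSI view
    bob = [g + rng.gauss(0, noise) for g in true_gain]       # Bob's noisy RSSI view
    return alice, bob

def quantise(samples):
    """Mean-threshold quantiser turning channel samples into key bits."""
    mean = sum(samples) / len(samples)
    return [1 if s > mean else 0 for s in samples]

alice_rssi, bob_rssi = measure_channel(128)
key_a, key_b = quantise(alice_rssi), quantise(bob_rssi)
agreement = sum(a == b for a, b in zip(key_a, key_b)) / len(key_a)
print(f"bit agreement before reconciliation: {agreement:.2%}")
```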
In most adaptive video streaming systems, adaptation decisions rely solely on the available network resources. Since the content of a video has a large influence on the perception of quality, we believe this is not sufficient. Thus, we have proposed a support service for content-aware video adaptation on mobile devices: the Video Adaptation Service (VAS). Based on the content of a streamed video, the adaptation process is improved by setting a target quality level for a session according to an objective video quality metric. In this work, we demonstrate VAS and its advantage of reduced data traffic by streaming only the lowest video representation that is necessary to reach a desired quality. By leveraging the content properties of a video stream, the system is able to keep a stable video quality and at the same time reduce the network load.
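The selection rule this implies can be sketched as follows (bitrates and quality scores are made-up; VAS derives the per-representation quality from an objective metric on the actual content):

```python
representations = [                  # (bitrate in kbit/s, predicted quality score for this content)
    (750, 0.78), (1500, 0.89), (3000, 0.93), (6000, 0.95),
]

def select(target_quality, available_bandwidth_kbps):
    """Pick the lowest-bitrate representation that reaches the target quality and fits the network."""
    feasible = [r for r in representations if r[0] <= available_bandwidth_kbps]
    good_enough = [r for r in feasible if r[1] >= target_quality]
    return min(good_enough) if good_enough else max(feasible)

print(select(target_quality=0.9, available_bandwidth_kbps=8000))   # -> (3000, 0.93), not 6000
```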