Biblio
Modern shared-memory multiprocessors permit reordering of memory operations for performance reasons. These reorderings are often a source of subtle bugs in programs written for such architectures. Traditional approaches to verifying weak-memory programs often rely on interleaving semantics, which is prone to state-space explosion and thus severely limits the scalability of the analysis. In recent times, there has been renewed interest in modelling dynamic executions of weak-memory programs using partial orders. However, such an approach typically requires ad-hoc mechanisms to correctly capture the data- and control-flow choices/conflicts present in real-world programs. In this work, we propose a novel conflict-aware, composable, truly concurrent semantics for programs written in C/C++ for modern weak-memory architectures. We exploit our symbolic semantics, based on general event structures, to build an efficient decision procedure that detects assertion violations in bounded multi-threaded programs. Using a large, representative set of benchmarks, we show that our conflict-aware semantics outperforms state-of-the-art partial-order-based approaches.
Massive Open Online Courses (MOOCs) provide a unique opportunity to reach students who would not normally be reached, by alleviating the need to be physically present in the classroom. However, teaching software security coursework outside of a classroom setting can be challenging. What are the challenges of converting security material from an on-campus course to the MOOC format? The goal of this research is to assist educators in constructing software security coursework by providing a comparison of classroom courses and MOOCs. In this work, we compare demographic information, student motivations, and student results from an on-campus software security course and a MOOC version of the same course. We found that the two populations of students differed, with the MOOC reaching a more diverse set of students than the on-campus course. We also found that students in the on-campus course had higher quiz scores, on average, than students in the MOOC. Finally, we document our experience running the courses and what we would do differently to assist future educators constructing similar MOOCs.
An increasingly important concern of software engineers is handling uncertainty at runtime. Over the last decade, researchers have applied architecture-based self-adaptation approaches to address this concern. However, providing the guarantees required by current software systems has proven challenging with these approaches. To tackle this challenge, we study the application of control theory to realize self-adaptation and develop novel control-based adaptation mechanisms that guarantee desired system properties. Results are validated on systems with strict requirements.
Software development and web applications have become fundamental in our lives. Millions of users access these applications to communicate, obtain information, and perform transactions. However, these users are exposed to many risks, commonly due to developers' lack of experience with security practices. Although there is much research on web security and protection against hacking, plenty of websites remain vulnerable. This article focuses on analyzing three main hacking techniques, XSS, CSRF, and SQL injection, over a representative group of Colombian websites. Our goal is to obtain information about how much (or how little) relevance Colombian companies and organizations give to security, and how the end user could be affected.
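As a minimal illustration of the kind of flaw the third technique exploits, the following Python sketch (not drawn from any of the audited sites; the table and inputs are invented) contrasts a SQL query built by string concatenation, which is injectable, with its parameterized equivalent:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    cur.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "x' OR '1'='1"  # classic injection payload

    # Vulnerable: attacker input is concatenated into the SQL text, so the
    # injected OR clause matches every row.
    rows = cur.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
    print(len(rows))  # 1: the row leaks despite the bogus name

    # Safe: a parameterized query treats the input strictly as data.
    rows = cur.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print(len(rows))  # 0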
As the number of mobile devices on the Internet keeps growing at a tremendous pace, context-aware services have gained a lot of traction thanks to the wide set of potential use cases they can be applied to. Environmental sensing applications, emergency services, and location-aware messaging are just a few examples of applications that are expected to grow in popularity over the next few years. The MobilityFirst future Internet architecture, a clean-slate Internet architecture design, provides the necessary abstractions for creating and managing context-aware services. Starting from these abstractions, we design a context services framework based on three fundamental mechanisms: an easy way to specify context in human-understandable terms, i.e. through names; an architecture-supported management mechanism that makes it convenient to deploy the service and efficient to manage it; and a native delivery system that reduces the load on network components and the overhead of deploying such applications. In this paper, we present an emergency alert system for vehicles assisting first responders, which exploits users' location awareness to deliver quick and reliable alert messages to interested vehicles. By deploying a demo of the system on a nationwide testbed, we aim to provide a better understanding of the dynamics involved in our framework.
The personal identification number (PIN) and password are remarkably popular authentication methods used in various devices, for example ATMs, mobile phones, and electronic door locks. Unfortunately, the traditional PIN-entry method is vulnerable to shoulder-surfing attacks. Moreover, the security analyses used to support previously proposed systems are based not on quantitative investigation alone, but on the results of experiments and testing performed on those systems. We propose a new theoretical and experimental technique for quantitative security analysis of PIN-entry methods. In this paper we first introduce a new security concept, known as the Grid Based Authentication System, together with rules for secure PIN entry derived by examining existing methods under the new framework. Drawing on the guidelines of existing systems, we then develop a new PIN-entry method that resists human shoulder-surfing attacks by significantly increasing the computational complexity required for an attacker to penetrate the system.
Continuous authentication by analysing the user's behaviour profile on computer input devices is challenging due to limited information, variability of data, and the sparse nature of the information. As a result, most previous research has addressed periodic authentication, where the analysis is based on a fixed number of actions or a fixed time period. Moreover, in most previous research the experimental data was obtained under very controlled conditions, where the task and environment were fixed. In this paper, we focus on true continuous authentication that reacts to every single action performed by the user. The experimental data was collected under completely uncontrolled conditions from 52 users using our data collection software. In our analysis, we consider both keystroke and mouse usage behaviour patterns to avoid a situation where an attacker evades detection by restricting himself to one input device because the continuous authentication system only checks the other. The results we have obtained are promising enough to warrant further investigation in this domain.
In this paper we present an approach to implementing security as a Virtualized Network Function (VNF) deployed within a Software-Defined Infrastructure (SDI). We present a scalable, flexible, and seamless design for a Deep Packet Inspection (DPI) system for network intrusion detection and prevention. We discuss how our design introduces significant reductions in both capital and operational expenses (CAPEX and OPEX). As a proof of concept, we describe an implementation of a modular security solution that uses the SAVI SDI testbed to first detect and then block an attack, or to redirect it to a honeypot for further analysis. We discuss our testing methodology and provide measurement results for test cases in which an application faces various security attacks.
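The paper does not spell out the DPI engine's internals; as a rough sketch of the signature-matching core such a system needs, assuming simple byte-string signatures (real deployments use compiled rule sets over reassembled flows), one might write:

    # Hypothetical byte-string signatures mapped to invented rule names.
    SIGNATURES = {
        b"' OR 1=1": "sqli-probe",
        b"/etc/passwd": "path-traversal",
    }

    def inspect(payload):
        """Classify a packet payload: block/redirect on a match, else allow."""
        for pattern, rule in SIGNATURES.items():
            if pattern in payload:
                # Per the design above, a match can trigger blocking of the
                # flow or its redirection to a honeypot for further analysis.
                return ("block-or-redirect", rule)
        return ("allow", None)

    print(inspect(b"GET /?q=' OR 1=1 HTTP/1.1"))  # ('block-or-redirect', 'sqli-probe')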
At the RELENG 2014 Q&A, the question was asked, “What is your greatest concern?” and the response was “someone subverting our deployment pipeline”. That is the motivation for this paper. We explore what it means to subvert a pipeline and provide several different scenarios of subversion. We then focus on the issue of securing a pipeline. As a result, we provide an engineering process that is based on having trusted components mediate access to sensitive portions of the pipeline from other components, which can remain untrusted. Applying our process to a pipeline we constructed involving Chef, Jenkins, Docker, Github, and AWS, we find that some aspects of our process result in easy to make changes to the pipeline, whereas others are more difficult. Consequently, we have developed a design that hardens the pipeline, although it does not yet completely secure it.
This paper describes multiple system security engineering techniques for assessing system security vulnerabilities and discusses the application of these techniques at different system maturity points. The proposed vulnerability assessment approach allows a systems engineer to identify and assess vulnerabilities early in the life cycle and to continually increase the fidelity of the vulnerability identification and assessment as the system matures.
QR codes have gained wide popularity in mobile marketing and advertising campaigns. However, hidden security threats to the underlying information systems might endanger QR codes' success, and this issue has not been adequately addressed. In this paper we propose to examine the life cycle of a redesigned QR code ecosystem to identify possible security risks. Building on this examination, we further propose changes to the standard to enhance security through a digital signature mechanism.
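The abstract does not fix a signature algorithm; one plausible instantiation, sketched below with Python's cryptography package, has the issuer sign the QR payload with ECDSA and the scanner verify it before acting on the content (the payload value and the '|' delimiter are assumptions for illustration):

    import base64
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Issuer side: sign the payload and embed the signature in the QR content.
    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()  # distributed to scanner apps

    payload = b"https://example.com/promo?id=123"
    signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))
    qr_content = payload + b"|" + base64.b64encode(signature)

    # Scanner side: verify before following the link.
    data, sig = qr_content.rsplit(b"|", 1)
    try:
        public_key.verify(base64.b64decode(sig), data, ec.ECDSA(hashes.SHA256()))
        print("authentic QR payload:", data.decode())
    except InvalidSignature:
        print("tampered QR code; refusing to proceed")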
Software structure analysis is crucial in software testing. Using complex network theory, we present a series of methods and build a two-layer network model for software analysis, including calculation of network metrics and feature extraction. By identifying critical functions and reused modules, we can reduce the software-testing workload by nearly 80% on average. In addition, the structure network exhibits some interesting features that can help in understanding the software more clearly.
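The abstract does not enumerate the metrics used; a minimal sketch of the general idea, ranking functions of a toy call graph by centrality with networkx (the graph below is invented for illustration):

    import networkx as nx

    # Toy call graph: nodes are functions, edges are caller -> callee relations.
    G = nx.DiGraph([
        ("main", "parse"), ("main", "run"), ("parse", "util"),
        ("run", "util"), ("run", "io"), ("io", "util"),
    ])

    # Functions on many call paths are candidate critical functions;
    # functions with high fan-in are candidate reused modules.
    betweenness = nx.betweenness_centrality(G)
    fan_in = dict(G.in_degree())

    critical = sorted(betweenness, key=betweenness.get, reverse=True)[:2]
    reused = sorted(fan_in, key=fan_in.get, reverse=True)[:2]
    print("critical:", critical, "reused:", reused)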
This paper presents an initial framework for managing emergent ethical concerns during software engineering in society projects. We argue that such emergent considerations can neither be framed as absolute rules about how to act in relation to fixed and measurable conditions, nor addressed by simply framing them as non-functional requirements to be satisficed. Instead, a continuous process is needed that accepts the 'messiness' of social life and social research, seeks to understand complexity (rather than seeking clarity), demands collective (not just individual) responsibility, and focuses on dialogue over solutions. The framework has been derived from a retrospective analysis of ethical considerations in four software engineering in society projects in three different domains.
This paper argues the need for considering mitigating circumstances in cybercrime. Mitigating circumstances are conditions which moderate the culpability of an offender for a committed offence. Our argument is based on several observations. Cyberspace introduces a new family of communication and interaction styles and designs which could facilitate, enable, deceive, and in some cases persuade a user to commit an offence. A user's lack of awareness could be a valid mitigation when using software features introduced without proper management of change and sufficient precautionary mechanisms, e.g. warning messages. The cyber behaviour of users is not necessarily a reflection of their real character and intentions. Their irrational and unconscious actions may result from their immersed and prolonged presence in a particular cyber context. Hence, consideration of cyberspace design, the "cyber-psychological" status of an offender, and their inter-relation could form a new family of mitigating circumstances inherent and unique to cybercrime. This paper elaborates on this initial argument from different perspectives, including software engineering, cyber psychology, digital forensics, social responsibility, and law.
As centers of knowledge, discovery, and intellectual exploration, US universities provide appealing cybersecurity targets. Cyberattack origin patterns and relationships are not evident until the data is visualized in maps and tested with statistical models. The cybersecurity threat detection software used by the University of North Florida's IT department records large numbers of attacks and attempted intrusions by the minute. This paper presents GIS mapping and spatial analysis of cybersecurity attacks on UNF. First, the locations of cyberattack origins were detected with geographic Internet Protocol (GEO-IP) software. Second, GIS was used to map the cyberattack origin locations. Third, we used advanced spatial statistical analysis functions (exploratory spatial data analysis and spatial point pattern analysis) in R to explore cyberattack patterns. The spatial perspective we promote is novel because few studies employ location analytics and spatial statistics in cyberattack detection and prevention research.
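The study's analysis was done in R; as a flavour of the point-pattern side in Python, the sketch below computes the Clark-Evans nearest-neighbour index over hypothetical projected attack-origin coordinates (values below 1 suggest clustering; edge corrections are omitted):

    import numpy as np
    from scipy.spatial import cKDTree

    # Hypothetical attack origins projected onto a 100 km x 100 km plane.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 100.0, size=(500, 2))
    area = 100.0 * 100.0

    tree = cKDTree(pts)
    d, _ = tree.query(pts, k=2)                # d[:, 1] = nearest-neighbour distance
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(len(pts) / area)  # mean NN distance under randomness
    print("Clark-Evans R =", observed / expected)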
The internet has had a major impact on how information is shared within supply chains, and in commerce in general. This has resulted in information systems such as e-supply chains (eSCs), among others, which integrate the internet and other information and communications technology (ICT) with traditional business processes for the swift transmission of information between trading partners. Many organisations have reaped the benefits of adopting the eSC model, but have also faced the challenges that come with it. One major challenge is information security. Digital forensic readiness (DFR) is a relatively new and exciting field which, if implemented strategically, can help prepare for and prevent incidents within an eSC environment. Given the current state of cybercrime, tool developers are challenged with developing cutting-edge DFR tools that can keep up with current technological advancements, such as eSCs, in the business world. The problem addressed in this paper is that there are no DFR tools designed to support eSCs specifically. Some general-purpose monitoring tools have forensic readiness functionality, but currently no tools are specifically designed to serve the eSC environment. This paper therefore discusses the limitations of current DFR tools for the eSC environment and proposes an architectural design for next-generation eSC DFR systems, along with the system requirements that such systems must satisfy. It is the view of the authors that the conclusions drawn from this paper can spearhead the development of cutting-edge next-generation DFR tools and bring attention to some of the shortcomings of current tools.
In recent years, criminals have become aware of how the digital evidence that leads them to courts and jail is collected and analyzed. Hence, they have started to develop anti-forensic techniques to evade, hamper, or nullify that evidence. Nowadays, these techniques are broadly used by criminals, leaving forensic analysis in a state of decay. To defend against these techniques, forensic analysts need first to identify them and then to mitigate their effects. In this paper, we review anti-forensic techniques and propose a new taxonomy that relates each of them to the phase of the forensic process it mainly affects. Furthermore, we introduce mitigation techniques for these anti-forensic techniques, considering both the chance of overcoming each anti-forensic technique and the difficulty of applying the mitigation.
Vulnerabilities usually represent the risk level of software, so forecasting vulnerabilities is of high value for evaluating the security level of software. Current research mainly focuses on predicting the number of vulnerabilities or the time of their occurrence; however, to the best of our knowledge, no other work focuses on predicting vulnerabilities' severity, which we think is an important aspect of vulnerabilities and software security. To fill this gap, we borrow the grey model GM(1,1) from grey system theory to forecast the severity of vulnerabilities. The experiment is carried out on real data collected from CVE and demonstrates the feasibility of our prediction method.
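GM(1,1) itself is standard grey-system machinery: accumulate the series, fit the whitened equation dx(1)/dt + a x(1) = b by least squares, and invert the accumulation to forecast. A compact sketch in Python, with a hypothetical severity series standing in for the paper's CVE data:

    import numpy as np

    def gm11_forecast(x0, steps=1):
        """Grey model GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series."""
        x0 = np.asarray(x0, dtype=float)
        n = len(x0)
        x1 = np.cumsum(x0)                           # 1-AGO accumulation
        z1 = 0.5 * (x1[:-1] + x1[1:])                # mean-value sequence
        B = np.column_stack([-z1, np.ones(n - 1)])   # design matrix
        Y = x0[1:]
        a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # least squares for (a, b)
        k = np.arange(n + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x0_hat = np.diff(x1_hat, prepend=x1_hat[0])  # inverse accumulation
        x0_hat[0] = x0[0]
        return x0_hat[n:]                            # forecasts beyond the data

    # Hypothetical average-severity series (e.g. yearly mean CVSS scores).
    severity = [6.1, 6.4, 6.3, 6.8, 7.0]
    print(gm11_forecast(severity, steps=2))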
Because cyber security risks exist in industrial control systems, this paper discusses risk assessment of an Industrial Automation Platform (IAP). A cyber security assessment model for the IAP is built based on relevant international standards. A fuzzy analytic hierarchy process and a fuzzy comprehensive evaluation method based on entropy theory are used to evaluate the risk of the communication links in the IAP software. As a result, the risk weights of the communication links that affect the platform, and the risk level of the platform itself, are given for further study of protective strategies. The assessment results show that these methods can evaluate the platform efficiently and practically.
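The abstract names the two techniques without detail; the sketch below shows the textbook entropy-weighting and fuzzy comprehensive evaluation steps on invented link-risk data (the indicator matrix and membership grades are illustrative only):

    import numpy as np

    # Hypothetical indicator matrix: rows = communication links, columns = risk
    # indicators, all entries strictly positive.
    X = np.array([[0.7, 0.4, 0.6],
                  [0.3, 0.8, 0.5],
                  [0.5, 0.6, 0.9]])

    # Entropy weight method: indicators with lower entropy discriminate more
    # between links and therefore receive larger weights.
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    w = (1 - E) / (1 - E).sum()

    # Fuzzy comprehensive evaluation: B = w . R, where row j of R holds the
    # membership of indicator j in the risk grades (low, medium, high).
    R = np.array([[0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5],
                  [0.3, 0.3, 0.4]])
    B = w @ R
    print("indicator weights:", w)
    print("risk level:", ["low", "medium", "high"][int(B.argmax())])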
Soft microprocessors are vital components of many embedded FPGA systems. As the application domain for FPGAs expands, the security of the software used by soft processors increases in importance. Although software confidentiality approaches (e.g. encryption) are effective, code obfuscation is known to be an effective enhancement that further deters code understanding by attackers. The availability of specialization in FPGAs provides a unique opportunity for code obfuscation on a per-application basis with minimal hardware overhead. In this paper we describe a new technique to obfuscate soft microprocessor code that is located outside the FPGA chip in an unprotected area. Our approach provides customizable, data-dependent control-flow modification to make it difficult for attackers to easily understand program behavior. Applying the approach to three benchmarks yields an increase in control-flow cyclomatic complexity of about 7x with a modest logic overhead for the soft processor.
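The paper's mechanism is hardware-assisted and FPGA-specific; as a language-neutral illustration of what data-dependent control-flow obfuscation does in general, the sketch below inserts an opaque predicate, a branch whose outcome is fixed but not locally obvious, to inflate the control-flow graph:

    def real_work(v):
        return v * 2

    def decoy_work(v):
        return v ^ 0xBEEF  # dead code: never runs, but enlarges the CFG

    def dispatch(x, v):
        # Opaque predicate: x*(x+1) is a product of consecutive integers and
        # hence always even, so the branch is statically constant; an attacker
        # must prove that invariant to rule out the decoy path.
        if (x * x + x) % 2 == 0:
            return real_work(v)
        return decoy_work(v)

    assert dispatch(7, 21) == 42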
The main problem in designing effective code obfuscation is guaranteeing security. State-of-the-art obfuscation techniques rely on an unproven notion of security and therefore are not regarded as provably secure. In this paper, we undertake a theoretical investigation of code obfuscation security based on Kolmogorov complexity and algorithmic mutual information. We introduce a new definition of code obfuscation that requires the algorithmic mutual information between a code and its obfuscated version to be minimal, allowing a controlled amount of information to be leaked to an adversary. We argue that our definition avoids the impossibility results of Barak et al. and is more advantageous than the indistinguishability definition of obfuscation in the sense that it is more intuitive, and algorithmic rather than probabilistic.
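Rendered in standard algorithmic-information notation (our reading of the abstract's requirement, with lambda as the permitted leakage):

    I_K\bigl(P : O(P)\bigr) \;=\; K(P) - K\bigl(P \mid O(P)\bigr) \;\le\; \lambda

    % K(P): Kolmogorov complexity of program P; K(P | O(P)): complexity of P
    % given its obfuscation O(P). Seeing O(P) may shorten the shortest
    % description of P by at most \lambda bits.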
NSTX used MDSplus extensively to record data, relay information, and control data acquisition hardware. For NSTX-U the same functionality is expected, as well as an expansion into securely maintaining parameters for machine protection. Specifically, we designed the Digital Coil Protection System (DCPS) to use MDSplus to manage our physical and electrical limit values and to relay information about the state of our acquisition system to the DCPS. Additionally, test and development systems need to use many of the same resources concurrently without interfering with other critical systems. A further complication is providing access to critical, protected data without risking changes to it by unauthorized users, or through unsupported or uncontrolled methods, whether malicious or unintentional. To achieve a level of confidence with an existing software system designed with minimal security controls, we designed and implemented a number of changes to how MDSplus is used. Trees need to be verified and checked for changes before use, and concurrent creation of trees from vastly different use cases with varying requirements needs to be supported. This paper discusses the impetus for these designs and the methods used to implement them.
Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, thus requiring a new protocol built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempt at formal verification, and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, in particular the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and a survey of its applications to networking.