Bibliography
In recent years, networking scenarios have been evolving hand in hand with new and varied applications that have heterogeneous Quality of Service (QoS) requirements, and these requirements must be delivered efficiently and effectively. Given its static layered structure and almost complete lack of built-in QoS support, the current TCP/IP-based Internet hinders such an evolution. In contrast, the clean-slate Recursive InterNetwork Architecture (RINA) proposes a new recursive and programmable networking model capable of evolving with the network requirements, solving in this way most, if not all, limitations of the TCP/IP protocol stack. Network providers can better deliver communication services across their networks by taking advantage of the RINA architecture and its support for QoS. This support provides complete information about the QoS needs of the supported traffic flows, which makes fulfilling those needs possible. In this work, we focus on the importance of path selection to better ensure QoS guarantees in long-haul RINA networks. We propose and evaluate a programmable strategy for path selection based on flow QoS parameters, such as the maximum allowed latency and packet losses, comparing its performance against simple shortest-path, fastest-path and connection-oriented solutions.
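To make the flavor of such a policy concrete, the following is a minimal sketch (ours, not the paper's algorithm) of path selection constrained by per-flow QoS bounds; the topology, latencies, and loss figures are hypothetical.

def all_simple_paths(graph, src, dst, path=None):
    # Depth-first enumeration of simple paths; adequate for small topologies.
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, {}):
        if nxt not in path:
            yield from all_simple_paths(graph, nxt, dst, path)

def select_path(graph, src, dst, max_latency_ms, max_loss):
    # Keep only paths honouring the flow's QoS bounds; prefer the fastest.
    best = None
    for path in all_simple_paths(graph, src, dst):
        hops = list(zip(path, path[1:]))
        latency = sum(graph[u][v][0] for u, v in hops)
        survive = 1.0
        for u, v in hops:
            survive *= 1.0 - graph[u][v][1]   # losses compose multiplicatively
        loss = 1.0 - survive
        if latency <= max_latency_ms and loss <= max_loss:
            if best is None or latency < best[0]:
                best = (latency, loss, path)
    return best   # None when no path can satisfy the flow

# Hypothetical long-haul topology: {node: {neighbour: (latency_ms, loss_prob)}}
net = {"A": {"B": (10, 0.001), "C": (40, 0.0001)},
       "B": {"D": (15, 0.002)},
       "C": {"D": (35, 0.0001)}}
print(select_path(net, "A", "D", max_latency_ms=30, max_loss=0.01))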
Structure editors allow programmers to edit the tree structure of a program directly. This can have cognitive benefits, particularly for novice and end-user programmers. It also simplifies matters for tool designers, because they do not need to contend with malformed program text. This paper introduces Hazelnut, a structure editor based on a small bidirectionally typed lambda calculus extended with holes and a cursor. Hazelnut goes one step beyond syntactic well-formedness: its edit actions operate over statically meaningful incomplete terms. Naïvely, this would force the programmer to construct terms in a rigid “outside-in” manner. To avoid this problem, the action semantics automatically places terms assigned a type that is inconsistent with the expected type inside a hole. This meaningfully defers the type consistency check until the term inside the hole is finished. Hazelnut is not intended as an end-user tool itself. Instead, it serves as a foundational account of typed structure editing. To that end, we describe how Hazelnut’s rich metatheory, which we have mechanized using the Agda proof assistant, serves as a guide when we extend the calculus to include binary sum types. We also discuss various interpretations of holes, and in so doing reveal connections with gradual typing and contextual modal type theory, the Curry-Howard interpretation of contextual modal logic. Finally, we discuss how Hazelnut’s semantics lends itself to implementation as an event-based functional reactive program. Our simple reference implementation is written using js_of_ocaml.
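As a toy illustration of that deferral (our sketch, loosely modeled on Hazelnut's move and far smaller than its bidirectional calculus), an edit action that fills a hole can wrap a type-inconsistent term in a non-empty hole instead of rejecting the edit:

NUM, BOOL = "num", "bool"

def synth(term):
    # Toy synthesis: number literals have type num, everything else bool.
    kind, _ = term
    return NUM if kind == "numlit" else BOOL

def fill_hole(expected, term):
    # A term whose type is inconsistent with what the context expects is
    # placed inside a non-empty hole rather than rejected, deferring the
    # consistency check until the term is finished.
    if synth(term) == expected:
        return term
    return ("hole", term)

print(fill_hole(NUM, ("numlit", 3)))        # ('numlit', 3): fits as-is
print(fill_hole(NUM, ("boollit", True)))    # ('hole', ('boollit', True))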
We typically only consider running programs that are completely written. Programmers end up inserting ad hoc dummy values into their incomplete programs to receive feedback about dynamic behavior. In this work we suggest an evaluation mechanism for incomplete programs, represented as terms with holes. Rather than immediately failing when a hole is encountered, evaluation propagates holes as far as possible. The result is a substantially tighter development loop.
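A minimal sketch of such hole propagation (ours; the paper's mechanism covers a full language), over a toy arithmetic language where "?" marks a hole:

HOLE = "?"

def evaluate(term):
    # Reduce what is determined; keep a residual term around any hole so an
    # incomplete program still produces partial feedback instead of failing.
    if term == HOLE or isinstance(term, int):
        return term
    op, left, right = term
    lv, rv = evaluate(left), evaluate(right)
    if isinstance(lv, int) and isinstance(rv, int):
        return lv + rv
    return (op, lv, rv)

print(evaluate(("+", ("+", 1, 2), HOLE)))   # ('+', 3, '?'), not an error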
Many applications require not only representing variability in software and data, but also computing with it. To do so efficiently requires variational data structures that make the variability explicit in the underlying data and the operations used to manipulate it. Variational data structures have been developed ad hoc for many applications, but there is little general understanding of how to design them or what tradeoffs exist among them. In this paper, we strive for a more systematic exploration and analysis of a variational data structure. We want to know how different design decisions affect the performance and scalability of a variational data structure, and what properties of the underlying data and operation sequences need to be considered. Specifically, we study several alternative designs of a variational stack, a data structure that supports efficiently representing and computing with multiple variants of a plain stack, and that is a common building block in many algorithms. The different variational stacks are presented as a small product line organized by three design decisions. We analyze how these design decisions affect the performance of a variational stack with different usage profiles. Finally, we evaluate how these design decisions affect the performance of the variational stack in a real-world scenario: in the interpreter VarexJ when executing real software containing variability.
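One of the simplest points in that design space can be sketched as follows (our toy, with presence conditions reduced to plain sets of configuration names; the paper studies richer formula-based designs):

class VStack:
    def __init__(self):
        self._items = []                      # list of (configs, value), top last

    def push(self, value, configs):
        self._items.append((frozenset(configs), value))

    def pop(self, config):
        # Pop the topmost element present in `config`; elements shared with
        # other variants stay for them, so one VStack stands in for many
        # plain stacks.
        for i in range(len(self._items) - 1, -1, -1):
            configs, value = self._items[i]
            if config in configs:
                remaining = configs - {config}
                if remaining:
                    self._items[i] = (remaining, value)
                else:
                    del self._items[i]
                return value
        raise IndexError("pop from empty stack in config %r" % config)

s = VStack()
s.push("shared", {"c1", "c2"})    # lives in both variants
s.push("only-c1", {"c1"})
print(s.pop("c2"))                # 'shared'; variant c1 still sees both items
print(s.pop("c1"))                # 'only-c1'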
The C preprocessor is used in many C projects to support variability and portability. However, researchers and practitioners criticize the C preprocessor for its negative effect on code understanding and maintainability and for its error proneness. More importantly, use of the preprocessor hinders the development of tool support that is standard in other languages, such as automated refactoring. Developers aggravate these problems when using the preprocessor in undisciplined ways (e.g., conditional blocks that do not align with the syntactic structure of the code). In this article, we propose a catalogue of refactorings and evaluate the number of application possibilities of the refactorings in practice, developers' opinions about the usefulness of the refactorings, and whether the refactorings preserve behavior. Overall, we found 5670 application possibilities for the refactorings in 63 real-world C projects. In addition, we performed an online survey among 246 developers, and we submitted 28 patches to convert undisciplined directives into disciplined ones. According to our results, 63% of developers prefer to use the refactored (i.e., disciplined) version of the code instead of the original code with undisciplined preprocessor usage. To verify that the refactorings are indeed behavior preserving, we applied them to more than 36 thousand programs generated automatically from a model of a subset of the C language, running the same test cases on the original and refactored programs. Furthermore, we applied the refactorings to three real-world projects: BusyBox, OpenSSL, and SQLite. In doing so, we detected and fixed a few behavioral changes, 62% of which were caused by unspecified behavior in the C language.
Software-defined networking is a rapidly expanding networking paradigm that aims to separate the control logic from the forwarding devices. Through centralized control, network operators are able to deploy and manage more efficient forwarding strategies. Traditionally, when the network undergoes a change through maintenance, failure, or cyber attack, the centralized controller processes these events and deploys new forwarding rules reactively. This work provides a strategy that maintains connectivity without requiring a controller, using only features available in OpenFlow protocol version 1.3 or greater. In this paper we illustrate why forwarding resiliency is desired in OpenFlow networks and provide an algorithm that computes the flow entries required to achieve maximal forwarding resiliency in the presence of both multiple link failures and controller failure on any arbitrary network.
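A minimal sketch of the precomputation idea (ours, on a toy topology; the paper's algorithm additionally guarantees loop-free failover, which this sketch does not attempt):

from collections import deque

def distances_to(graph, dst):
    # BFS from the destination yields each switch's hop count to dst.
    dist = {dst: 0}
    queue = deque([dst])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def failover_buckets(graph, dst):
    # Order each switch's next hops toward dst from best to worst, mimicking
    # the ordered action buckets of an OpenFlow 1.3 fast-failover group: the
    # switch falls through to the next live port with no controller round trip.
    dist = distances_to(graph, dst)
    return {u: sorted(graph[u], key=lambda v: dist.get(v, float("inf")))
            for u in graph if u != dst}

topo = {"s1": ["s2", "s3"], "s2": ["s1", "s4"],
        "s3": ["s1", "s4"], "s4": ["s2", "s3"]}
print(failover_buckets(topo, "s4"))   # e.g. {'s1': ['s2', 's3'], ...}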
Scientific advancement is fueled by solid fundamental research, followed by replication, meta-analysis, and theory building. To support such advancement, researchers and government agencies have been working towards a "science of security". As in other sciences, security science requires high-quality fundamental research addressing important problems and reporting approaches that capture the information necessary for replication, meta-analysis, and theory building. The goal of this paper is to aid security researchers in establishing a baseline of the state of scientific reporting in security through an analysis of indicators of scientific research as reported in top security conferences, specifically the 2015 ACM CCS and 2016 IEEE S&P proceedings. To conduct this analysis, we employed a series of rubrics to analyze the completeness of information reported in papers relative to the type of evaluation used (e.g. empirical study, proof, discussion). Our findings indicated some important information is often missing from papers, including explicit documentation of research objectives and the threats to validity. Our findings show a relatively small number of replications reported in the literature. We hope that this initial analysis will serve as a baseline against which we can measure the advancement of the science of security.
Programming language definitions assign formal meaning to complete programs. Programmers, however, spend a substantial amount of time interacting with incomplete programs -- programs with holes, type inconsistencies and binding inconsistencies -- using tools like program editors and live programming environments (which interleave editing and evaluation). Semanticists have done comparatively little to formally characterize (1) the static and dynamic semantics of incomplete programs; (2) the actions available to programmers as they edit and inspect incomplete programs; and (3) the behavior of editor services that suggest likely edit actions to the programmer based on semantic information extracted from the incomplete program being edited, and from programs that the system has encountered in the past. As such, each tool designer has largely been left to develop their own ad hoc heuristics.
This paper serves as a vision statement for a research program that seeks to develop these "missing" semantic foundations. Our hope is that these contributions, which will take the form of a series of simple formal calculi equipped with a tractable metatheory, will guide the design of a variety of current and future interactive programming tools, much as various lambda calculi have guided modern language designs. Our own research will apply these principles in the design of Hazel, an experimental live lab notebook programming environment designed for data science tasks. We plan to co-design the Hazel language with the editor so that we can explore concepts such as edit-time semantic conflict resolution mechanisms and mechanisms that allow library providers to install library-specific editor services.
The Security Behavior Observatory (SBO) is a longitudinal field study of computer security habits that provides a novel dataset for validating computer security metrics. This paper demonstrates a new strategy for validating phishing detection ability metrics by comparing performance on a phishing signal detection task with data logs found in the SBO. We report: (1) a test of the robustness of performance on the signal detection task by replicating Canfield, Fischhoff and Davis (2016), (2) an assessment of the task's construct validity, and (3) evaluation of its predictive validity using data logs. We find that members of the SBO sample had similar signal detection ability compared to members of the previous mTurk sample and that performance on the task correlated with the Security Behavior Intentions Scale (SeBIS). However, there was no evidence of predictive validity, as the signal detection task performance was unrelated to computer security outcomes in the SBO, including the presence of malicious URLs, malware, and malicious files. We discuss the implications of these findings and the challenges of comparing behavior on structured experimental tasks to behavior in complex real-world settings.
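For reference, the standard signal-detection summary behind such tasks, sensitivity d' and response bias c, can be computed as follows (the counts are illustrative; the cited studies define their own scoring):

from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    # Log-linear correction keeps rates off 0 and 1 so z() stays finite.
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d = z(hr) - z(far)          # ability to tell phishing from legitimate
    c = -(z(hr) + z(far)) / 2   # response bias (positive = conservative)
    return d, c

# Hypothetical participant: 18/20 phishing caught, 4/20 legitimate flagged.
print(dprime(hits=18, misses=2, false_alarms=4, correct_rejections=16))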
In this paper, we analyze the security of cyber-physical systems using the ADversary VIew Security Evaluation (ADVISE) meta modeling approach, taking into consideration the effects of physical attacks. To build our model of the system, we construct an ontology that describes the system components and the relationships among them. The ontology also defines attack steps that represent cyber and physical actions that affect the system entities. We apply the ADVISE meta modeling approach, which admits as input our defined ontology, to a railway system use case to obtain insights regarding the system’s security. The ADVISE Meta tool takes in a system model of a railway station and generates an attack execution graph that shows the actions that adversaries may take to reach their goal. We consider several adversary profiles, ranging from outsiders to insider staff members, and compare their attack paths in terms of targeted assets, time to achieve the goal, and probability of detection. The generated results show that even adversaries with access to noncritical assets can affect system service by intelligently crafting their attacks to trigger a physical sequence of effects. We also identify the physical devices and user actions that require more in-depth monitoring to reinforce the system’s security.
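The kind of comparison the tool automates can be sketched over a toy attack execution graph (ours; ADVISE Meta's models and metrics are far richer). Each step carries an attack time and a detection probability, and each adversary profile is restricted to the step types it can perform; the step names below are hypothetical:

import heapq

def best_attack_path(steps, start, goal, allowed):
    # Dijkstra on attack time, tracking the chance of remaining undetected.
    heap = [(0.0, 1.0, start, [start])]
    seen = set()
    while heap:
        time, undetected, node, path = heapq.heappop(heap)
        if node == goal:
            return time, 1.0 - undetected, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, (t, p_detect, tag) in steps.get(node, {}).items():
            if tag in allowed:
                heapq.heappush(heap, (time + t, undetected * (1 - p_detect),
                                      nxt, path + [nxt]))
    return None

# Hypothetical railway-station fragment mixing physical and cyber steps.
steps = {
    "outside":    {"staff_room": (1, 0.6, "physical"), "vpn": (8, 0.2, "cyber")},
    "staff_room": {"control_pc": (2, 0.3, "physical")},
    "vpn":        {"control_pc": (4, 0.1, "cyber")},
    "control_pc": {"disrupt_service": (1, 0.5, "cyber")},
}
print(best_attack_path(steps, "outside", "disrupt_service",
                       allowed={"physical", "cyber"}))   # insider-like profile
print(best_attack_path(steps, "outside", "disrupt_service",
                       allowed={"cyber"}))               # remote outsider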
Cyber researchers at Sandia National Laboratories are applying deceptive strategies to defend systems against hackers. These deception strategies are applied through a recently patented alternative reality named HADES (High-fidelity Adaptive Deception & Emulation System). Instead of obstructing or removing a hacker upon infiltration into a system, HADES leads them into a simulated reality that presents cloned virtual hard drives, data sets, and memory that have been inconspicuously altered. The goal is to introduce doubt to adversaries.
The growing complexity and frequency of cyberattacks call for advanced methods to enhance the detection and prevention of such attacks. Deception is a cyber defense technique that is drawing more attention from organizations. This technique could be used to detect, deceive, and lure attackers away from sensitive data upon infiltration into a system. It is important to look at the most common features of distributed deception platforms such as high-interaction deception, adaptive deception, and more.
Many traditional embedded systems can be called closed systems, in that they do not connect and communicate with systems or devices outside the entities in which they are embedded, and some parts of these systems are designed on the basis of proprietary protocols or standards. Open embedded systems connect and communicate with other systems or devices through the Internet or other networks, and are designed based on open protocols and standards. This paper discusses two types of security challenges facing open embedded systems: the security of the devices themselves that host embedded systems, and the security of information collected, processed, communicated, and consumed by embedded systems. We also discuss solution techniques to address these challenges.
A low-power-consumption three-position four-way direct drive control valve based on a hybrid excited linear actuator (HELA-DDCV) is presented to meet response-time and power-consumption requirements. A numerical model of the coupled system was established in Matlab/Simulink and validated by experiments, covering four points of view: electric circuit, electromagnetic field, mechanism, and fluid mechanics. A dual-closed-loop PI control strategy for both spool displacement and coil current is adopted, and the displacement response and power-consumption performance are analyzed. The results show that the prototype valve's spool displacement response time is less than 9.6 ms. Furthermore, the holding current is less than 30% of the peak current during operation, which effectively reduces power consumption and improves system stability. Notably, the holding current can be eliminated when the spool works at either end of its stroke; a single action then requires only 0.26 J of energy, independent of the working time.
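The dual-loop structure can be sketched as two nested PI controllers around a toy linear plant (gains, coil, and spool parameters below are purely illustrative; the paper's validated model couples circuit, field, mechanism, and fluid dynamics):

class PI:
    # Textbook PI controller with explicit integrator state.
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.acc = kp, ki, dt, 0.0
    def step(self, error):
        self.acc += error * self.dt
        return self.kp * error + self.ki * self.acc

dt = 1e-4                                  # 0.1 ms simulation step
outer = PI(kp=200.0, ki=100.0, dt=dt)      # displacement error -> current setpoint
inner = PI(kp=30.0, ki=200.0, dt=dt)       # current error -> coil voltage

x, v, i = 0.0, 0.0, 0.0                    # spool position (mm), velocity, current (A)
target = 1.0                               # commanded 1 mm spool displacement
for _ in range(2000):                      # simulate 200 ms
    i_ref = outer.step(target - x)         # outer loop: spool displacement
    volts = inner.step(i_ref - i)          # inner loop: coil current
    i += (volts - 2.0 * i) / 0.01 * dt     # toy RL coil: R = 2 ohm, L = 10 mH
    v += (5.0 * i - 50.0 * v) / 0.05 * dt  # toy mass-damper spool dynamics
    x += v * dt
print(f"spool at {x:.2f} mm after 200 ms")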
While we have long had principles describing how access control enforcement should be implemented, such as the reference monitor concept, imprecision in access control mechanisms and access control policies leads to risks that may enable exploitation. In practice, least privilege access control policies often allow information flows that may enable exploits. In addition, the implementation of access control mechanisms often tries to balance security with ease of use implicitly (e.g., with respect to determining where to place authorization hooks) and approaches to tighten access control, such as accounting for program context, are ad hoc. In this paper, we define four types of risks in access control enforcement and explore possible approaches and challenges in tracking those types of risks. In principle, we advocate runtime tracking to produce risk estimates for each of these types of risk. To better understand the potential of risk estimation for authorization, we propose risk estimate functions for each of the four types of risk, finding that benign program deployments accumulate risks in each of the four areas for ten Android programs examined. As a result, we find that tracking of relative risk may be useful for guiding changes to security choices, such as authorized unsafe operations or placement of authorization checks, when risk differs from that expected.
As the use of low-power, low-resource embedded devices continues to increase dramatically with the introduction of new Internet of Things (IoT) devices, security techniques compatible with these devices are necessary. This research advances cyber security for the IoT by exploring a moving target defense that limits the time attackers have to conduct reconnaissance on embedded systems, while accounting for the resource and performance constraints of IoT devices. We introduce the design and optimizations for a Micro-Moving Target IPv6 Defense, including a description of the modes of operation, the needed protocols, and the use of lightweight hash algorithms. We also detail testing and validation possibilities, including a Cooja simulation configuration, describe the direction for further enhancing and validating the security technique through large-scale simulations and hardware testing, and provide information on other future considerations.
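The core moving-target idea can be sketched as follows (ours; the prefix, key, and window length are hypothetical, and the paper evaluates lightweight hashes where this sketch just uses SHA-256): both endpoints derive the host's current IPv6 interface identifier from a shared secret and the time window, so the address hops on a schedule an outside scanner cannot follow.

import hashlib, ipaddress, time

PREFIX = ipaddress.IPv6Network("2001:db8:42:1::/64")   # hypothetical /64
WINDOW = 10   # seconds an address stays valid before the next hop

def current_address(shared_key, now=None):
    # Hash the shared secret with the current time window to get the 64-bit
    # interface identifier; any peer holding the key derives the same address.
    window = int((time.time() if now is None else now) // WINDOW)
    digest = hashlib.sha256(shared_key + window.to_bytes(8, "big")).digest()
    iid = int.from_bytes(digest[:8], "big")
    return str(PREFIX.network_address + iid)

key = b"pre-shared-device-secret"
print(current_address(key, now=1_700_000_000))   # same window -> same address
print(current_address(key, now=1_700_000_010))   # next window -> address hops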
The study of spin-wave (SW) excitation in magnetic devices is one of the most important topics in modern magnetism due to its applications in information carrying and signal processing. We experimentally realize a spin-wave generator, capable of frequency modulation, in a magnonic waveguide. The emission of spin waves was produced by the reversal or oscillation of nanoscale magnetic vortex cores in a NiFe disk array. The vortex cores in the disk array were excited by an out-of-plane radio-frequency (rf) magnetic field. The dynamic behavior of the NiFe magnetization was studied using a micro-focused Brillouin light scattering (BLS) spectroscopy setup.
With the advent of QR readers and mobile phones, the use of graphical codes such as QR codes and Data Matrix codes has become very popular. Despite their noise-like appearance, they have the advantages of high data capacity, damage resistance, and fast, robust decoding. The proposed system embeds an image chosen by the user to produce visually appealing QR codes with improved decoding robustness using the BCH algorithm. The QR information bits are encoded into the luminance values of the input image. The developed PiCode can inspire perceptivity in multimedia applications and can ensure data security in instances like online payments. The system is implemented in Matlab and on an ARM Cortex-A8.
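The embedding idea alone can be sketched like this (ours, greatly simplified: module bits darken or lighten luminance blocks so the picture stays recognizable; the actual system adds BCH error correction, and a real decoder thresholds block means without the cover image):

def embed(gray, bits, block=4, delta=40):
    # Write one bit per block: darken for 1, lighten for 0.
    out = [row[:] for row in gray]
    per_row = len(gray[0]) // block
    for k, bit in enumerate(bits):
        r0, c0 = (k // per_row) * block, (k % per_row) * block
        shift = -delta if bit else delta
        for r in range(r0, r0 + block):
            for c in range(c0, c0 + block):
                out[r][c] = max(0, min(255, out[r][c] + shift))
    return out

def extract(stego, gray, nbits, block=4):
    # Recover bits from the sign of each block's luminance change.
    per_row = len(gray[0]) // block
    bits = []
    for k in range(nbits):
        r0, c0 = (k // per_row) * block, (k % per_row) * block
        diff = sum(stego[r][c] - gray[r][c]
                   for r in range(r0, r0 + block)
                   for c in range(c0, c0 + block))
        bits.append(1 if diff < 0 else 0)
    return bits

img = [[128] * 16 for _ in range(16)]      # flat mid-gray test image
payload = [1, 0, 1, 1, 0, 1, 0, 0]
print(extract(embed(img, payload), img, len(payload)) == payload)   # True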
Today's increasingly populous cities require intelligent transportation systems that make efficient use of existing transportation infrastructure. However, inefficient traffic management is pervasive, costing US\$160 billion in the United States in 2015, including 6.9 billion hours of additional travel time and 3.1 billion gallons of wasted fuel. To mitigate these costs, the next generation of transportation systems will include connected vehicles, connected infrastructure, and increased automation. In addition, these advances must coexist with legacy technology into the foreseeable future. This complexity makes the goal of improved mobility and safety even more daunting.
The Dark Web is known as the part of the Internet operated by decentralized and anonymity-preserving protocols like Tor. To date, the research community has focused on understanding the size and characteristics of the Dark Web and the services and goods offered in its underground markets. However, little is known about the attack landscape in the Dark Web. For the traditional Web, it is now well understood how websites are exploited, as well as the important role played by Google Dorks and automated attack bots in forming a sort of "background attack noise" to which public websites are exposed. This paper tries to understand whether these basic concepts and components have a parallel in the Dark Web. In particular, by deploying a high-interaction honeypot in the Tor network for a period of seven months, we conducted a measurement study of the types of attacks and of the attacker behavior that affect this still relatively unknown corner of the Web.
While the clean-slate approach proposed by Software Defined Networking (SDN) promises radical changes in the stagnant state of network management, SDN innovation has not gone beyond the intra-domain level. For the inter-domain ecosystem to benefit from the advantages of SDN, Internet Exchange Points (IXPs) are the ideal place: a central interconnection hub through which a large share of the Internet can be affected. In this demo, we showcase the ENDEAVOUR platform: a new software-defined exchange approach readily deployable in commercial IXPs. We demonstrate our implementations of traffic engineering and Distributed Denial of Service mitigation, as well as how member networks can cash in on the advanced SDN features of ENDEAVOUR, typically not available in legacy networks.
Autonomous vehicles must communicate with each other effectively and securely to make robust decisions. However, today's Internet falls short in supporting efficient data delivery and strong data security, especially in a mobile ad-hoc environment. Named Data Networking (NDN), a new data-centric Internet architecture, provides a better foundation for secure data sharing among autonomous vehicles. We examine two potential threats, false data dissemination and vehicle tracking, in an NDN-based autonomous vehicular network. To detect false data, we propose a four-level hierarchical trust model and the associated naming scheme for vehicular data authentication. Moreover, we address vehicle tracking concerns using a pseudonym scheme to anonymize vehicle names and certificate issuing proxies to further protect vehicle identity. Finally, we implemented and evaluated our AutoNDN application on Raspberry Pi-based mini cars in a wireless environment.
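The hierarchical-trust idea can be sketched as a name-prefix check along a certificate chain (ours, with hypothetical names and toy levels standing in for the paper's four-level model; real NDN validation also verifies signatures): a data packet is accepted only if each signing key's name is a prefix of the name it signs, chaining up to a trust anchor.

TRUST_ANCHOR = ("us",)   # hypothetical root of the naming hierarchy

def is_prefix(short, long):
    return len(short) <= len(long) and long[:len(short)] == short

def validate(data_name, cert_chain):
    # cert_chain lists signer key names from the data's signer up to the root,
    # e.g. vehicle -> manufacturer -> regional authority -> anchor.
    signed = data_name
    for key_name in cert_chain:
        if not is_prefix(key_name, signed):
            return False
        signed = key_name
    return signed == TRUST_ANCHOR

data = ("us", "ca", "vendorX", "car42", "speed", "t=17")
chain = [("us", "ca", "vendorX", "car42"),   # vehicle key
         ("us", "ca", "vendorX"),            # manufacturer key
         ("us", "ca"),                       # regional key
         ("us",)]                            # trust anchor
print(validate(data, chain))                               # True
print(validate(data, [("us", "nv"), ("us",)]))             # False: wrong branch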