Biblio
We consider a continuous analogue of (Babai et al. 1996)'s and (Cai et al. 2000)'s problem of solving multiplicative matrix equations. Given k + 1 square matrices A1, ..., Ak, C, all of the same dimension, whose entries are real algebraic, we examine the problem of deciding whether there exist non-negative reals t1, ..., tk such that $e^{A_1 t_1} e^{A_2 t_2} \cdots e^{A_k t_k} = C$. We show that this problem is undecidable in general, but decidable under the assumption that the matrices A1, ..., Ak commute. Our results have applications to reachability problems for linear hybrid automata. Our decidability proof relies on a number of theorems from algebraic and transcendental number theory, most notably those of Baker, Kronecker, Lindemann, and Masser, as well as some useful geometric and linear-algebraic results, including the Minkowski-Weyl theorem and a new (to the best of our knowledge) result about the uniqueness of strictly upper triangular matrix logarithms of upper unitriangular matrices. On the other hand, our undecidability result is shown by reduction from Hilbert's Tenth Problem.
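For comparison, the discrete problem of Babai et al. and Cai et al. asks for non-negative integer exponents, while the continuous analogue replaces matrix powers with matrix exponentials; the side-by-side statement below is our reconstruction from the abstract, not a quotation from the paper:

```latex
% Discrete problem (Babai et al. 1996; Cai et al. 2000):
\exists\, n_1, \ldots, n_k \in \mathbb{N} :\quad
  A_1^{n_1} A_2^{n_2} \cdots A_k^{n_k} = C
% Continuous analogue (this paper):
\exists\, t_1, \ldots, t_k \in \mathbb{R}_{\ge 0} :\quad
  e^{A_1 t_1} e^{A_2 t_2} \cdots e^{A_k t_k} = C
```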
Global Positioning System (GPS) is used ubiquitously in a wide variety of applications ranging from navigation and tracking to modern smart grids and communication networks. However, it has been demonstrated that modern GPS receivers are vulnerable to signal spoofing attacks. For example, today it is possible to change the course of a ship or force a drone to land in a hostile area by simply spoofing GPS signals. Several countermeasures have been proposed in the past to detect GPS spoofing attacks. These countermeasures offer protection only against naive attackers. They are incapable of detecting strong attackers, such as those capable of seamlessly taking over a GPS receiver that is currently receiving legitimate satellite signals and spoofing it to an arbitrary location. Also, there is no hardware platform that can be used to compare and evaluate the effectiveness of existing countermeasures in real-world scenarios. In this work, we present SPREE, which is, to the best of our knowledge, the first GPS receiver capable of detecting all spoofing attacks described in the literature. Our novel spoofing detection technique, called auxiliary peak tracking, enables detection of even a strong attacker capable of executing a seamless takeover attack. We implement and evaluate our receiver against three different sets of GPS signal traces: (i) a public repository of spoofing traces, (ii) signals collected through our own wardriving effort, and (iii) signals produced by commercial GPS signal generators. Our evaluations show that SPREE limits even a strong attacker (one capable of a seamless takeover attack) to spoofing the receiver to a location no more than 1 km away from its true location. This is a significant improvement over modern GPS receivers, which can be spoofed to any arbitrary location. Finally, we release our implementation and datasets to the community for further research and development.
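The abstract names auxiliary peak tracking but does not spell it out; a minimal sketch of one plausible reading (thresholds, separation window, and function names are our own assumptions) flags acquisition results in which a second strong correlation peak appears alongside the dominant one, as would happen when a spoofed signal coexists with the legitimate satellite signal:

```python
import numpy as np


def auxiliary_peak_alert(correlation: np.ndarray, ratio_threshold: float = 0.5,
                         min_separation: int = 10) -> bool:
    """Flag traces whose second-strongest correlation peak is suspiciously strong.

    A legitimate signal normally yields one dominant peak; a spoofer transmitting
    alongside the real signal produces an additional ('auxiliary') peak."""
    primary = int(np.argmax(correlation))
    masked = correlation.copy()
    lo, hi = max(0, primary - min_separation), primary + min_separation
    masked[lo:hi] = 0  # suppress the main lobe around the primary peak
    return masked.max() >= ratio_threshold * correlation[primary]


# Toy example: a clean acquisition vs. one containing a second strong peak.
clean = np.zeros(100); clean[40] = 1.0; clean[41] = 0.3
spoofed = clean.copy(); spoofed[75] = 0.8
assert not auxiliary_peak_alert(clean)
assert auxiliary_peak_alert(spoofed)
```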
Eight hundred eighty-seven phishing emails from Arizona State University, Brown University, and Cornell University were assessed by two reviewers for Cialdini’s six principles of persuasion: authority, social proof, liking/similarity, commitment/consistency, scarcity, and reciprocation. A correlational analysis of email characteristics by year revealed that the persuasion principles of commitment/consistency and scarcity have increased over time, while the principles of reciprocation and social proof have decreased over time. Authority and liking/similarity showed mixed results, with certain characteristics increasing and others decreasing. Results from this study can inform user training on phishing emails and help make cybersecurity software more effective.
This paper presents a possible solution to a fundamental limitation facing all blockchain-based systems: scalability. We propose a temporal rolling blockchain that solves the problem of the blockchain's current unbounded growth by replacing it with a constant, fixed-size blockchain. We conduct a thorough analysis of related work and present a formal analysis of the new rolling blockchain, comparing the results to a traditional blockchain model to demonstrate that deleting data from the blockchain does not impact the security of the proposed blockchain model. We conclude by summarizing our work and presenting the future work to be conducted.
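As a rough illustration of the rolling idea (a toy sketch under our own assumptions, not the paper's construction; the class names and the fixed window size are invented), a hash-linked chain that retains only the most recent N blocks might look like this:

```python
import hashlib
import json
from collections import deque

WINDOW = 4  # assumed fixed capacity; the paper's actual bound is not given here


def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


class RollingChain:
    """Toy temporal rolling blockchain: appending past capacity evicts the oldest block."""

    def __init__(self, capacity: int = WINDOW):
        self.blocks = deque(maxlen=capacity)  # deque drops the oldest entry automatically
        self.tip = "0" * 64  # hash of the latest block; genesis placeholder

    def append(self, payload) -> None:
        block = {"prev": self.tip, "payload": payload}
        self.tip = block_hash(block)
        self.blocks.append(block)

    def verify(self) -> bool:
        """Check hash links within the retained window only."""
        for older, newer in zip(self.blocks, list(self.blocks)[1:]):
            if newer["prev"] != block_hash(older):
                return False
        return not self.blocks or self.tip == block_hash(self.blocks[-1])


chain = RollingChain()
for tx in ["a", "b", "c", "d", "e"]:  # the fifth append evicts block "a"
    chain.append(tx)
assert chain.verify() and len(chain.blocks) == WINDOW
```

The chain's size stays constant regardless of how many blocks are appended, which is the property the abstract claims can be achieved without sacrificing security.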
Interactive systems are developed according to requirements, which may take the form of documentation, prototypes, diagrams, and so on. The informal nature of system requirements can be a source of problems: a system may not implement the requirements as expected, so a way to validate whether an implementation follows the requirements is needed. We propose a novel approach to validating a system using formal models of the system. In this approach, a set of traces generated from the execution of the real interactive system is searched for over the state space of the formal model. The scalability of the approach is demonstrated by an application to an industrial system in the nuclear plant domain. The combination of trace analysis and formal methods provides feedback that can bring improvements to both the real interactive system and the formal model.
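A minimal sketch of the trace-validation idea (the transition-system encoding and all names are our own assumptions, not the authors' tooling): replay each recorded trace and check that it is a path through the formal model's state space.

```python
# Toy state space: state -> {action: next_state}. A real formal model would be
# compiled into this form from whatever modelling language it is written in.
MODEL = {
    "idle":    {"press_start": "running"},
    "running": {"press_stop": "idle", "alarm": "halted"},
    "halted":  {"reset": "idle"},
}


def trace_in_model(model: dict, initial: str, trace: list[str]) -> bool:
    """Replay a recorded action sequence over the model's state space."""
    state = initial
    for action in trace:
        if action not in model.get(state, {}):
            return False  # the implementation did something the model forbids
        state = model[state][action]
    return True


assert trace_in_model(MODEL, "idle", ["press_start", "press_stop"])
assert not trace_in_model(MODEL, "idle", ["press_stop"])  # divergence found
```

A failed replay localizes exactly where the implementation and the model disagree, which is the kind of feedback the abstract says benefits both artifacts.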
The Internet has shown itself to be a catalyst for economic growth and social equity, but its potency is thwarted by the fact that it remains off limits to the vast majority of human beings. Mobile phones, the fastest-growing technology in the world, now reaching around 80% of humanity, can enable universal Internet access if the coverage problems that have historically plagued previous cellular architectures (2G, 3G, and 4G) can be resolved. These conventional architectures have not been able to sustain universal service provisioning because they depend on having enough users per cell for their economic viability and are thus not well suited to rural areas (which are by definition sparsely populated). The new generation of mobile cellular technology (5G), currently in a formative phase and expected to be finalized around 2020, is aimed at orders-of-magnitude performance enhancements. 5G offers a clean slate to network designers and can be molded into an architecture amenable to universal Internet provisioning. Keeping in mind the great social benefits of democratizing Internet connectivity, we believe the time is ripe to emphasize universal Internet provisioning as an important goal on the 5G research agenda. In this paper, we investigate the opportunities and challenges in utilizing 5G for global access to the Internet for all (GAIA). We also identify the major technical issues involved in a 5G-based GAIA solution and set out a future research agenda by defining open research problems.
Modern vehicles rely on a variety of electronic systems and components. One of those components is the vehicle key. Today, a key typically implements at least three functions: mechanical locking with a key blade, the electronic immobilizer that authorizes starting the engine, and the remote keyless entry (RKE) system that allows the user to wirelessly (un)lock the doors and disable the alarm system. These main components of a vehicle key are shown in Figure 1. For the mechanical part of the vehicle key, it is well known that the key blade can be easily copied and that the locking cylinder can be bypassed by other means (using so-called "decoders" or simply a screwdriver). In contrast, the immobilizer and RKE rely on wireless protocols to cryptographically authenticate the vehicle key to the car. Immobilizers employ radio frequency identification (RFID) transponders to carry out a challenge-response protocol over a short-range bidirectional link at a frequency of 125 kHz. In the past, researchers have revealed severe flaws in the cryptography and protocols used by immobilizers, leading to the break of the major systems Megamos, Hitag2, and DST40 [7, 6, 1]. In contrast to the immobilizer, the RKE part uses unidirectional communication (the vehicle only receives, the key fob only transmits) over a longer-range wireless link with operating distances of tens to one hundred meters. These systems are based on rolling codes, which essentially transmit a counter (incremented on each button press) in a cryptographically authenticated manner. Until recently, the security of automotive RKE had been scrutinized to a lesser degree than that of immobilizers, even though vulnerabilities in similar systems have been known since 2008 with the attacks on KeeLoq [3]. Other results reported in the literature include an analytical attack on a single, outdated vehicle [2] and the so-called "RollJam" technique [5], which is based on a combination of replay and selective jamming. In 2016, it was shown that severe flaws exist in the RKE systems of major automotive manufacturers [4]. On the one hand, the VW Group (Volkswagen, Seat, Skoda, Audi) based the security of their RKE system on a few global cryptographic keys, potentially affecting hundreds of millions of vehicles worldwide. By extracting these global keys from the firmware of electronic control units (ECUs) once, an adversary is able to create a duplicate of the owner's RKE fob by eavesdropping on a single rolling code. On the other hand, the second case study in [4] exposes new cryptographic weaknesses in the Hitag2 cipher when used for RKE. Applying a correlation-based attack, an adversary can recover the 48-bit cryptographic key by eavesdropping on four to eight rolling codes and performing a one-minute computation on a standard laptop. Again, this attack affects millions of vehicles worldwide. Manufacturers that used Hitag2 in their RKE systems include Alfa Romeo, Peugeot, Lancia, Opel, Renault, and Ford, among others. In this keynote talk, we will present the results of [4] and put them into a broader context by revisiting the history of attacks on RKE systems and automotive electronics.
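To make the rolling-code idea concrete, here is a minimal sketch (our own illustration, not any manufacturer's scheme; the MAC construction and acceptance window are assumptions): the fob authenticates a monotonically increasing counter, and the vehicle rejects both forged and replayed frames. Note how, mirroring the VW Group finding, making `key` global would let anyone who extracts it once clone any fob from a single eavesdropped frame.

```python
import hashlib
import hmac

WINDOW = 16  # receiver accepts counters up to this far ahead (assumed value)


def rolling_code(key: bytes, counter: int) -> bytes:
    """Fob side: transmit the counter plus a truncated MAC over it."""
    msg = counter.to_bytes(4, "big")
    return msg + hmac.new(key, msg, hashlib.sha256).digest()[:8]


def accept(key: bytes, last_counter: int, frame: bytes) -> int | None:
    """Vehicle side: verify the MAC and require a fresh (strictly higher) counter.

    Returns the new counter on success, None on rejection; the freshness check
    is what defeats naive replay of a previously captured frame."""
    counter = int.from_bytes(frame[:4], "big")
    expected = hmac.new(key, frame[:4], hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(frame[4:], expected):
        return None
    if not last_counter < counter <= last_counter + WINDOW:
        return None
    return counter


key = b"per-fob secret (a global key here reproduces the VW Group flaw)"
frame = rolling_code(key, 42)
assert accept(key, 41, frame) == 42    # fresh button press unlocks
assert accept(key, 42, frame) is None  # replayed frame is rejected
```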
To help establish a more scientific basis for security science, which will enable the development of fundamental theories and move the field from being primarily reactive to primarily proactive, it is important for research results to be reported in a scientifically rigorous manner. Such reporting will allow for the standard pillars of science, namely replication, meta-analysis, and theory building. In this paper we aim to establish a baseline of the state of scientific work in security through an analysis of indicators of scientific research as reported in the papers from the 2015 IEEE Symposium on Security and Privacy. To conduct this analysis, we developed a series of rubrics to determine the completeness of the papers relative to the type of evaluation used (e.g., case study, experiment, proof). Our findings show that while papers are generally easy to read, they often do not explicitly document key information such as the research objectives, the process for choosing the cases to include in the studies, and the threats to validity. We hope that this initial analysis will serve as a baseline against which we can measure the advancement of the science of security.
Sociotechnical systems (STSs), where users interact with software components, support automated logging, i.e., recording what a user has performed in the system. However, most systems do not implement automated processes for inspecting the logs when misuse happens. Deciding what needs to be logged is crucial, as excessive amounts of logs might be overwhelming for human analysts to inspect. The goal of this research is to aid software practitioners in implementing automated forensic logging by providing a systematic method of using attackers' malicious intentions to decide what needs to be logged. We propose Lokma, a normative framework for constructing logging rules for forensic knowledge. We describe the general forensic process of Lokma and discuss related directions.
Firewall policies are notorious for misconfiguration errors that can defeat their intended purpose of protecting hosts in the network from malicious users. We believe this is because today's firewall policies are mostly monolithic. Inspired by ideas from modular programming and code refactoring, in this work we introduce three kinds of modules: primary, auxiliary, and template, which facilitate the refactoring of a firewall policy into smaller, reusable, comprehensible, and more manageable components. We present algorithms for generating each of the three modules for a given legacy firewall policy. We also develop ModFP, an automated tool for converting legacy firewall policies represented as access control lists into their modularized format. With the help of ModFP, when examining several real-world policies with sizes ranging from dozens to hundreds of rules, we were able to identify subtle errors.
Institutions use the information security (InfoSec) policy document as a set of rules and guidelines to govern the use of institutional information resources. However, a common problem is that these policies are often not followed or complied with. This study explores the extent to which the problem lies with the policy documents themselves. InfoSec policies are documented in natural language, which is prone to ambiguity and misinterpretation. Such policies may consequently be ambiguous, making it hard, if not impossible, for users to comply with them. A case study approach with content analysis was conducted. The research explores the extent of the problem using a case study of an educational institution in South Africa.
Intrusion detection has been an active field of research for more than 35 years. Numerous systems have been built based on the two fundamental detection principles, knowledge-based and behavior-based detection. Nevertheless, a look at day-to-day news about data breaches and successful attacks shows that detection effectiveness is still limited. Moreover, heavy-weight intrusion detection systems cannot be installed in every endangered environment. For example, Industrial Control Systems are typically utilized for decades, amortizing the huge investments of the companies that deploy them. Thus, some of these systems have been in operation for years but were designed without security in mind. Even worse, as systems nowadays often have connections to other networks and even the Internet, adequate protection is mandatory, but integrating intrusion detection can be extremely difficult, or even impossible, to date. We propose a new lightweight current-based IDS that uses a measurement base that is difficult to manipulate, together with verifiable ground truth. The focus of our system is to provide intrusion detection for ICS and SCADA at low cost and in a way that is easy to integrate. Dr. WATTson, a prototype implemented based on our concept, provides high detection rates and low false-alarm rates.
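As a toy illustration of current-based anomaly detection (our own simplification; Dr. WATTson's actual detection logic is not described in the abstract, and the threshold is an assumed tuning), one can flag deviations of a device's measured current draw from a known-good profile:

```python
from statistics import mean, stdev

# Known-good current samples (amperes) recorded during normal ICS operation.
baseline = [2.01, 2.03, 1.98, 2.00, 2.02, 1.99, 2.01, 2.00]
mu, sigma = mean(baseline), stdev(baseline)
THRESHOLD = 4  # alarm beyond 4 standard deviations (assumed tuning)


def is_anomalous(sample: float) -> bool:
    """Flag current draw that deviates too far from the learned profile."""
    return abs(sample - mu) > THRESHOLD * sigma


assert not is_anomalous(2.02)  # normal operation
assert is_anomalous(2.60)      # e.g. an actuator driven outside its usual duty cycle
```

The appeal of the measurement base is that an attacker who compromises the controller's software still has to physically draw plausible current to stay unnoticed.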
Unlike a random, run-of-the-mill website infection, in a strategic web attack the adversary carefully chooses to compromise a target frequently visited by an organization or a group of individuals, for the purpose of getting a step closer to the organization or collecting information from the group. This type of attack, called a "watering hole" attack, has been increasingly utilized by APT actors to get into the internal networks of big companies and government agencies or to monitor politically oriented groups. Despite its importance, little has been done so far to understand how the attack works, not to mention concrete steps to counter this threat. In this paper, we report our first step toward better understanding this emerging threat, through systematically discovering and analyzing new watering hole instances and attack campaigns. This was made possible by a carefully designed methodology, which repeatedly monitors a large number of potential watering hole targets to detect unusual changes that could be indicative of strategic compromises. Running this system on the HTTP traffic generated from visits to 61K websites over more than 5 years, we were able to discover and confirm 17 watering holes and 6 campaigns never reported before. Given that merely 29 watering holes have so far been reported in blogs and technical reports, our findings contribute to research on this attack vector by adding 59% more attack instances, along with information about how they work, to public knowledge. Analyzing the new watering holes allows us to gain a deeper understanding of these attacks, including repeated compromises of political websites, their long lifetimes, a unique evasion strategy (leveraging other compromised sites to serve attack payloads), and new exploit techniques (no malware delivery, web-only information gathering). Our study also brings to light interesting new observations, including a recent JSONP attack on an NGO website; our discovery of that attack has been widely reported and apparently forced the attack to stop.
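The detection strategy described in the abstract, repeatedly visiting candidate targets and flagging suspicious changes, might be approximated at toy scale by diffing the set of external script origins a page loads across visits; the regex-based extraction and the notion of "suspicious" used here are our own simplifications, not the paper's methodology:

```python
import re
from urllib.parse import urlparse
from urllib.request import urlopen

SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']([^"\']+)', re.IGNORECASE)


def script_origins(url: str) -> set[str]:
    """Fetch a page and return the set of hosts its <script src> tags point to."""
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    return {urlparse(src).netloc for src in SCRIPT_SRC.findall(html) if "//" in src}


def new_origins(baseline: set[str], url: str) -> set[str]:
    """Origins seen now but absent from the baseline snapshot: candidate compromises."""
    return script_origins(url) - baseline


# Usage (hypothetical site): snapshot once, then re-check on later visits.
# baseline = script_origins("https://example.org")
# alerts = new_origins(baseline, "https://example.org")
```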
We ask whether it is possible to anonymously communicate a large amount of data using only public (non-anonymous) communication together with a small anonymous channel. We believe this is a central question in the theory of anonymous communication, and to the best of our knowledge this is the first formal study in this direction. Towards this goal, we introduce the novel concept of anonymous steganography: think of a leaker Lea who wants to leak a large document to Joe the journalist. Using anonymous steganography, Lea can embed this document in innocent-looking communication on some popular website (such as cat videos on YouTube or funny memes on 9GAG). Then Lea provides Joe with a short decoding key dk which, when applied to the entire website, recovers the document while hiding the identity of Lea among the large number of users of the website. Our contributions include: (i) introducing and formally defining anonymous steganography, (ii) a construction showing that anonymous steganography is possible (which uses recent results in circuit obfuscation), and (iii) a lower bound on the number of bits needed to bootstrap anonymous communication.
Dealing with increasing amounts of data creates the need to handle redundant, inconsistent and/or complementary repositories, which may differ in their data models and/or in their schemas. Current data cleaning techniques developed to tackle data quality problems are only suitable for scenarios where all repositories share the same model and schema. Recently, an ontology-based methodology was proposed to overcome this limitation. In this paper, this methodology is briefly described and applied to a real scenario in the health domain with data quality problems.
One way of evaluating the reusability of a test collection is to determine whether removing the unique contributions of some system would alter the preference order between that system and others. Rank correlation measures such as Kendall's tau are often used for this purpose. Rank correlation measures are appropriate for ordinal measures, in which only preference order is important, but many evaluation measures produce system scores in which both the preference order and the magnitude of the score difference are important. Such measures are referred to as interval. Pearson's rho offers one way of computing correlation over results from an interval measure such that smaller errors in the gap size are preferred. When seeking to improve over existing systems, we care most about comparisons among the best systems. For that purpose we prefer head-weighted measures such as tau_AP, which is designed for ordinal data. No present head-weighted measure fully leverages the information present in interval effectiveness measures. This paper introduces such a measure, referred to as Pearson Rank.
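To ground the ordinal-versus-interval distinction, the sketch below (with made-up scores) contrasts Kendall's tau, which sees only preference order, with Pearson's rho, which is also sensitive to the size of score gaps; Pearson Rank itself is defined in the paper and is not reproduced here:

```python
from scipy.stats import kendalltau, pearsonr

# Hypothetical effectiveness scores for five systems under two conditions.
full = [0.60, 0.55, 0.40, 0.30, 0.10]     # scores on the full test collection
reduced = [0.58, 0.56, 0.25, 0.24, 0.12]  # scores with one system's judgments removed

tau, _ = kendalltau(full, reduced)
rho, _ = pearsonr(full, reduced)

# The ranking is unchanged, so tau is 1.0 even though the gaps changed
# substantially; rho < 1 reflects the distorted magnitudes that interval
# measures care about.
print(f"Kendall tau = {tau:.2f}, Pearson rho = {rho:.2f}")
```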
Contactless communications have become omnipresent in our daily lives, from simple access cards to electronic passports. Such systems are particularly vulnerable to relay attacks, in which an adversary relays the messages from a prover to a verifier. Distance-bounding protocols were introduced to counter such attacks. Lately, there has been a very active research trend on improving the security of these protocols, but also on ensuring strong privacy properties with respect to active adversaries and malicious verifiers. In particular, a difficult threat to address is terrorist fraud, in which a far-away prover cooperates with a nearby accomplice to fool a verifier. The usual defence against this attack is to make it impossible for the accomplice to succeed unless the prover provides him with enough information to recover his secret key and impersonate him later on. However, the mere existence of a long-term secret key is problematic with respect to privacy. In this paper, we propose a novel approach in which the prover leaks not his secret key but a reusable session key, along with a group signature on it. This allows the adversary to impersonate him even without knowing his signature key. Based on this approach, we give the first distance-bounding protocol, called SPADE, to integrate anonymity, revocability and provable resistance to standard threat models.
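For background, a generic distance-bounding check (not SPADE itself, which adds the anonymity machinery described above) measures the round-trip time of a challenge-response exchange and rejects provers whose replies arrive too late to be nearby; the numbers below are illustrative only:

```python
C = 3e8  # speed of light, m/s


def within_bound(rtt_seconds: float, processing_delay: float,
                 max_distance_m: float) -> bool:
    """Accept only if the signal's round trip fits inside the claimed distance bound."""
    time_of_flight = rtt_seconds - processing_delay
    return time_of_flight >= 0 and (time_of_flight * C) / 2 <= max_distance_m


# A reply after ~10 ns of flight corresponds to ~1.5 m; a relayed reply,
# which must traverse the adversary's longer path, necessarily arrives later.
assert within_bound(rtt_seconds=12e-9, processing_delay=2e-9, max_distance_m=2.0)
assert not within_bound(rtt_seconds=700e-9, processing_delay=2e-9, max_distance_m=2.0)
```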