Biblio
In this work we investigate existing and new metrics for evaluating transient stability of power systems to quantify the impact of distributed control schemes. Specifically, an energy storage system (ESS)-based control scheme that builds on feedback linearization theory is implemented in the power system to enhance its transient stability. We study the value of incorporating such ESS-based distributed control on specific transient stability metrics that include critical clearing time, critical control activation time, system stability time, rotor angle stability index, rotor speed stability index, rate of change of frequency, and control power. The stability metrics are evaluated using the IEEE 68-bus test power system. Numerical results demonstrate the value of the distributed control scheme in enhancing the transient stability metrics of power systems.
In Sybil attacks, a physical adversary takes on multiple fabricated or stolen identities to maliciously manipulate the network. These attacks are very harmful to Internet of Things (IoT) applications. In this paper we implement and evaluate the performance of the RPL (Routing Protocol for Low-Power and Lossy Networks) routing protocol under mobile Sybil attacks, namely SybM, with respect to control overhead, packet delivery and energy consumption. In SybM attacks, Sybil nodes take advantage of their mobility and of RPL's weaknesses in handling identity and mobility to flood the network with fake control messages from different locations. To counter this type of attack we propose a trust-based intrusion detection system built on RPL.
Due to the increasing design complexity and cost of VLSI chips, many design houses outsource manufacturing and import designs to reduce cost. This reduces the authenticity and security of the manufactured product: since product development involves outside sources, circuit designers cannot guarantee that their hardware has not been altered. Attackers may include additional hardware in order to gain privileges over the original circuit or to damage the product. These added circuits are called ``Hardware Trojans''. In this paper, we investigate the modules needed to detect hardware Trojans. We also introduce the programmable logic fabric needed to implement hardware assertion checkers. Our target is to utilize the provided programmable fabric in a System on Chip (SoC) and optimize the hardware assertions to cover the detection of most hardware Trojans in each core of the target SoC.
"Our Laws are not generally known; they are kept secret by the small group of nobles who rule us. We are convinced that these ancient laws are scrupulously administered; nevertheless it is an extremely painful thing to be ruled by laws that one does not know."–Franz Kafka, Parables and Paradoxes. Post 9/11, journalists, scholars and activists have pointed out that secret laws - a body of law whose details, and sometimes whose mere existence, are classified as top secret - were on the rise in all three branches of the US government due to growing national security concerns. Amid heated current debates on governmental wishes for exceptional access to encrypted digital data, one of the key issues is: which mechanisms can be put in place to ensure that government agencies follow agreed-upon rules in a manner which does not compromise national security objectives? This promises to be especially challenging when the rules according to which access to encrypted data is granted may themselves be secret. In this work we show how the use of cryptographic protocols, and in particular the idea of zero-knowledge proofs, can ensure accountability and transparency of the government in this extraordinary, seemingly deadlocked, setting. We propose an efficient record-keeping infrastructure with versatile publicly verifiable audits that preserve (information-theoretic) privacy of record contents as well as of the rules by which the records are attested to abide. Our protocol is based on existing blockchain and cryptographic tools including commitments and zero-knowledge SNARKs, and satisfies the properties of indelibility (i.e., no back-dating), perfect data privacy, public auditability of secret data with secret laws, accountable deletion, and succinctness. We also propose a variant scheme where entities can be required to pay fees based on record contents (e.g., for violating regulations) while still preserving privacy.
Our scheme can be directly instantiated on the Ethereum blockchain (and a simplified version with weaker guarantees can be instantiated with Bitcoin).
Blacklisting IP addresses is an important part of enterprise security today. Malware infections and Advanced Persistent Threats can be detected when blacklisted IP addresses are contacted. Blacklisting can also thwart phishing attacks by blocking suspicious websites. A modern firewall may execute an unknown binary file in a sandbox and block it if it attempts to contact a blacklisted IP address. However, today's IP blacklists are based on observed malicious activities, collected from multiple sources around the world. Attackers can evade such reactive blacklist defenses by using IP addresses that have not recently been engaged in malicious activities. In this paper, we report an approach that predicts IP addresses likely to be used in malicious activities in the near future. Our evaluation shows that this approach can detect 88% of zero-day malware instances missed by the top five antivirus products. It can also block 68% of phishing websites before they are reported by PhishTank.
Most popular Web applications rely on persistent databases based on languages like SQL for declarative specification of data models and the operations that read and modify them. As applications scale up in user base, they often face challenges responding quickly enough to the high volume of requests. A common aid is caching of database results in the application's memory space, taking advantage of program-specific knowledge of which caching schemes are sound and useful, embodied in handwritten modifications that make the program less maintainable. These modifications also require nontrivial reasoning about the read-write dependencies across operations. In this paper, we present a compiler optimization that automatically adds sound SQL caching to Web applications coded in the Ur/Web domain-specific functional language, with no modifications required to source code. We use a custom cache implementation that supports concurrent operations without compromising the transactional semantics of the database abstraction. Through experiments with microbenchmarks and production Ur/Web applications, we show that our optimization in many cases enables an easy doubling or more of an application's throughput, requiring nothing more than passing an extra command-line flag to the compiler.
Establishing secret and reliable wireless communication is a challenging task of paramount importance. In this paper, we investigate the physical layer security of the legitimate transmission link of a user who assists an Intrusion Detection System (IDS) in detecting eavesdropping and jamming attacks, in the presence of an adversary capable of conducting either attack. The user faces the challenge of whether to transmit, thus becoming vulnerable to an eavesdropping or jamming attack, or to keep silent, in which case his/her transmission is delayed. The adversary, in turn, must decide whether to conduct an eavesdropping or a jamming attack without being detected. We model the interactions between the user and the adversary as a two-state stochastic game. Explicit solutions characterize some properties while highlighting interesting strategies embraced by the user and the adversary. Results show that our proposed system outperforms current systems in terms of communication secrecy.
High-accuracy localization is a prerequisite for many wireless applications. To obtain accurate location information, it is often required to share users' positional knowledge, and this brings the risk of leaking location information to adversaries during the localization process. This paper develops a theory and algorithms for protecting location secrecy. In particular, we first introduce a location secrecy metric (LSM) for a general measurement model of an eavesdropper. Compared to previous work, the measurement model accounts for parameters such as channel conditions and time offsets in addition to the positions of users. We determine the expression of the LSM for typical scenarios and show how the LSM depends on the capability of an eavesdropper and the quality of the eavesdropper's measurements. Based on the insights gained from the analysis, we consider a case study in a wireless localization network and develop an algorithm that diminishes the eavesdropper's capabilities by exploiting channel reciprocity. Numerical results show that the proposed algorithm can effectively increase the LSM and protect location secrecy.
In this paper, we present an algorithm for estimating the state of the power grid following a cyber-physical attack. We assume that an adversary attacks an area by: (i) disconnecting some lines within that area (failed lines), and (ii) obstructing the information from within the area to reach the control center. Given the phase angles of the buses outside the attacked area under the AC power flow model (before and after the attack), the algorithm estimates the phase angles of the buses and detects the failed lines inside the attacked area. The novelty of our approach is the transformation of the line failures detection problem, which is combinatorial in nature, to a convex optimization problem. As a result, our algorithm can detect any number of line failures in a running time that is independent of the number of failures and depends solely on the size of the network. To the best of our knowledge, this is the first convex relaxation for the problem of line failures detection using phase angle measurements under the AC power flow model. We evaluate the performance of our algorithm on the IEEE 118- and 300-bus systems, and show that it estimates the phase angles of the buses with less than 1% error, and can detect the line failures with 80% accuracy for single, double, and triple line failures.
Cooperative spectrum sensing is often necessary in cognitive radio systems to localize a transmitter by fusing the measurements from multiple sensing radios. However, revealing spectrum sensing information also generally leaks information about the location of the radio that made those measurements. We propose a protocol for performing cooperative spectrum sensing while preserving the privacy of the sensing radios. In this protocol, radios fuse sensing information through a distributed particle filter based on a tree structure. All sensing information is encrypted using public-key cryptography, and one of the radios serves as an anonymizer, whose role is to break the connection between the sensing radios and the public keys they use. We consider a semi-honest (honest-but-curious) adversary model in which there is at most a single adversary that is internal to the sensing network and complies with the specified protocol but wishes to determine information about the other participants. Under this scenario, an adversary may learn the sensing information of some of the radios, but it does not have any way to tie that information to a particular radio's identity. We test the performance of our proposed distributed, tree-based particle filter using physical measurements of FM broadcast stations.
Although GNSS receiver baseband signal processing achieves more precise estimation with Kalman filtering, traditional KF-based tracking loops estimate code phase and carrier frequency simultaneously with a single filter. In this case, errors in the code phase estimate can affect the carrier frequency tracking loop, which is more vulnerable than the code tracking loop. This paper presents a tracking architecture based on dual filters that perform code locking and carrier tracking separately, so that carrier tracking is not subjected to code tracking errors. The control system is derived from the mathematical expression of the Kalman system. Based on this model, the transfer function and equivalent noise bandwidth are derived in detail, yielding the relationship between equivalent noise bandwidth and Kalman gain. Owing to this relationship, the equivalent noise bandwidth of a well-designed tracking loop adjusts automatically as the environment changes. Finally, simulation and performance analysis of this novel architecture are presented. The simulation results show that the dual Kalman filters restrain phase noise more effectively than the loop filter of the classical GNSS tracking channel, making the whole system better suited to harsh environments.
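The gain/bandwidth relationship invoked above can be illustrated with a first-order scalar loop. The sketch below is not the paper's derivation but the standard textbook relation for a loop with steady-state gain K and sample period T, namely B_n = K / (2T(2 - K)); it checks numerically that unit-variance white noise driven through the loop yields output variance 2·B_n·T (all names are our own illustration).

```python
import random

def smooth(samples, K):
    """First-order tracking update x += K*(z - x), i.e. a loop running at
    its steady-state Kalman gain K."""
    x, out = 0.0, []
    for z in samples:
        x += K * (z - x)
        out.append(x)
    return out

def noise_bandwidth(K, T=1.0):
    """Equivalent noise bandwidth of the first-order loop: B_n = K / (2T(2-K))."""
    return K / (2.0 * T * (2.0 - K))

random.seed(0)
white = [random.gauss(0.0, 1.0) for _ in range(200000)]
y = smooth(white, K=0.1)[1000:]          # discard the initial transient
var_out = sum(v * v for v in y) / len(y)  # should approach 2*B_n*T
```

With K = 0.1 the predicted output variance is K/(2-K) ≈ 0.0526, and the measured `var_out` matches it closely; a larger Kalman gain widens the bandwidth, which is the mechanism by which the loop adapts to harsher dynamics.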
Information shared on Twitter is ever increasing, and recipients are overwhelmed by the number of tweets they receive, many of which are of no interest. Filters that estimate the interest of each incoming post can alleviate this problem, for example by allowing users to sort incoming posts by predicted interest (e.g., "top stories" vs. "most recent" in Facebook). Global and personal filters have been used to detect interesting posts in social networks. Global filters are trained on large collections of posts and reactions to posts (e.g., retweets), aiming to predict how interesting a post is for a broad audience. In contrast, personal filters are trained on posts received by a particular user and the reactions of that user. Personal filters can provide recommendations tailored to a particular user's interests, which may not coincide with the interests of the majority of users that global filters are trained to predict. On the other hand, global filters are typically trained on much larger datasets compared to personal filters. Hence, global filters may work better in practice, especially with new users, for whom personal filters may have very few training instances (the "cold start" problem). Following Uysal and Croft, we devised a hybrid approach that combines the strengths of both global and personal filters. As in global filters, we train a single system on a large, multi-user collection of tweets. Each tweet, however, is represented as a feature vector with a number of user-specific features.
Among the many research efforts devoted to automated art investigations, the problem of quantifying literary style remains open. Meanwhile, linguists and computer scientists have tried to sort texts according to their types or authors. We use the recently-introduced p-leader multifractal formalism to analyze a corpus of novels written for adults and young adults, with the goal of assessing whether a difference in style can be found. Our results agree with the interpretation that novels written for young adults largely follow conventions of the genre, whereas novels written for adults are less homogeneous.
This work concerns distributed consensus algorithms and their application to a network intrusion detection system (NIDS) [21]. We consider the problem of defending the system against multiple data falsification attacks (Byzantine attacks), a vulnerability of distributed peer-to-peer consensus algorithms that has not been widely addressed in practical settings. We consider both naive (independent) and colluding attackers. We test three defense strategy implementations: two outlier detection methods and one reputation-based method. We restrict our attention to outlier and reputation-based methods because they are computationally lightweight. We leave out control theoretic methods, which are likely the most effective, but whose computational cost increases rapidly with the number of attackers. We compare the three implementations in terms of computational cost, detection performance, convergence behavior and possible impact on the intrusion detection accuracy of the NIDS. Tests are performed on simulations of distributed denial of service attacks using the NSL-KDD data set.
Vehicular Ad-Hoc Networks (VANETs) are formed by several vehicles communicating with each other to create a network capable of communication and data exchange. One of the most promising methods for security and trust in vehicular networks is the use of Public Key Infrastructure (PKI). However, current implementations of PKI as a security solution for determining the validity and authenticity of vehicles in a VANET are inefficient due to large delay and computational overhead. In this paper, we investigate the potential of PKI when predictively and preemptively passing certificates along to roadside units (RSUs) in an effort to lower delay and computational overhead in a dynamic environment. We aim to accomplish this by utilizing fog computing, and propose a new protocol that passes certificates along the projected path.
Data provenance describes how data came to be in its present form. It includes data sources and the transformations that have been applied to them. Data provenance has many uses, from forensics and security to aiding the reproducibility of scientific experiments. We present CamFlow, a whole-system provenance capture mechanism that integrates easily into a PaaS offering. While there have been several prior whole-system provenance systems that captured a comprehensive, systemic and ubiquitous record of a system's behavior, none have been widely adopted. They either A) impose too much overhead, B) are designed for long-outdated kernel releases and are hard to port to current systems, C) generate too much data, or D) are designed for a single system. CamFlow addresses these shortcomings by: 1) leveraging the latest kernel design advances to achieve efficiency; 2) using a self-contained, easily maintainable implementation relying on a Linux Security Module, NetFilter, and other existing kernel facilities; 3) providing a mechanism to tailor the captured provenance data to the needs of the application; and 4) making it easy to integrate provenance across distributed systems. The provenance we capture is streamed and consumed by tenant-built auditor applications. We illustrate the usability of our implementation by describing three such applications: demonstrating compliance with data regulations; performing fault/intrusion detection; and implementing data loss prevention. We also show how CamFlow can be leveraged to capture meaningful provenance without modifying existing applications.
A filesystem capable of curtailing data theft and ensuring file integrity protection through deception is introduced and evaluated. The deceptive filesystem transparently creates multiple levels of stacking to protect the base filesystem and monitor file accesses, hide and redact sensitive files with baits, and inject decoys onto fake system views purveyed to untrusted subjects, all while maintaining a pristine state to legitimate processes. Our prototype implementation leverages a kernel hot-patch to seamlessly integrate the new filesystem module into live and existing environments. We demonstrate the utility of our approach with a use case on the nefarious Erebus ransomware. We also show that the filesystem adds no I/O overhead for legitimate users.
In this paper, a practical quantum public-key encryption model is proposed by studying recent quantum public-key encryption schemes. The proposed model makes explicit stipulations on the generation, distribution, authentication, and usage of the secret keys, thus forming a black-box operation. Meanwhile, the model encapsulates the process of encryption and decryption for the users, forming a black-box client side. In our model, each module is independent and can be replaced arbitrarily without affecting the model as a whole. Therefore, this model provides useful guidance for the design and development of quantum public-key encryption schemes.
Free text keystroke dynamics is a behavioral biometric with strong potential to offer unobtrusive and continuous user authentication. Unfortunately, due to limited data availability, free text keystroke dynamics has not been tested adequately. Based on a novel large dataset of free text keystrokes from our ongoing data collection using behavior in natural settings, we present the first study to evaluate keystroke dynamics while respecting the temporal order of the data. Specifically, we evaluate the performance of different ways of forming a test sample using sessions, as well as a form of continuous authentication based on a sliding window over the keystroke time series. Instead of accumulating a new test sample of keystrokes, we update the previous sample with keystrokes that occur in the immediate past sliding window of n minutes. We evaluate sliding windows of 1 to 5, 10, and 30 minutes. Our best performer, using a sliding window of 1 minute, achieves an FAR of 1% and an FRR of 11.5%. Lastly, we evaluate the sensitivity of the keystroke dynamics algorithm to short quick insider attacks that last only several minutes, by artificially injecting different portions of impostor keystrokes into the genuine test samples. For example, the evaluated algorithm is found to be able to detect insider attacks that last 2.5 minutes or longer, with a probability of 98.4%.
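The sliding-window update described above, where the previous test sample is modified in place rather than rebuilt, can be sketched as follows. All names are illustrative: `score_fn` stands in for an unspecified distance between the window's keystrokes and the enrolled user profile, and authentication succeeds while the score stays at or below `threshold`.

```python
from collections import deque

class SlidingWindowAuth:
    """Continuous authentication over the immediate past `window_s` seconds
    of keystrokes (illustrative sketch, not the paper's implementation)."""

    def __init__(self, window_s, score_fn, threshold):
        self.window_s = window_s
        self.score_fn = score_fn
        self.threshold = threshold
        self.buf = deque()  # (timestamp_seconds, key_event) pairs

    def observe(self, t, event):
        """Add one keystroke, then re-authenticate on the updated window."""
        self.buf.append((t, event))
        # Update the previous sample in place: evict keystrokes older than
        # the window instead of accumulating a brand-new test sample.
        while self.buf and t - self.buf[0][0] > self.window_s:
            self.buf.popleft()
        return self.score_fn([e for _, e in self.buf]) <= self.threshold

# Toy usage: a 60 s window with a trivial score that always accepts.
auth = SlidingWindowAuth(60.0, lambda events: 0.0, threshold=1.0)
auth.observe(0.0, "a")
auth.observe(30.0, "b")
auth.observe(120.0, "c")  # the first two keystrokes fall out of the window
```

The deque keeps each update O(1) amortized, which matters when a decision must be produced after every keystroke.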
Despite the advent of numerous Internet-of-Things (IoT) applications, recent research demonstrates potential side-channel vulnerabilities exploiting sensors which are used for event and environment monitoring. In this paper, we propose a new side-channel attack, where a network of distributed non-acoustic sensors can be exploited by an attacker to launch an eavesdropping attack by reconstructing intelligible speech signals. Specifically, we present PitchIn to demonstrate the feasibility of speech reconstruction from non-acoustic sensor data collected offline across networked devices. Unlike speech reconstruction, which requires a high sampling frequency (e.g., > 5 kHz), typical applications using non-acoustic sensors do not rely on richly sampled data, presenting a challenge to the speech reconstruction attack. Hence, PitchIn leverages a distributed form of Time-Interleaved Analog-to-Digital Conversion (TI-ADC) to approximate a high sampling frequency while maintaining a low per-node sampling frequency. We demonstrate how distributed TI-ADC can be used to achieve intelligibility by processing an interleaved signal composed of different sensors across networked devices. We implement PitchIn and evaluate the intelligibility of the reconstructed speech signal via user studies. PitchIn achieves word recognition accuracy as high as 79%. Though additional work is required to improve accuracy, our results suggest that eavesdropping using a fusion of non-acoustic sensors is a real and practical threat.
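The core TI-ADC idea, N nodes each sampling at a low rate with staggered phase offsets and their streams interleaved into one stream at N times the per-node rate, can be sketched as below. Function and parameter names are our own illustration, not PitchIn's implementation, and the per-node offsets are assumed to be ideal.

```python
import math

def node_samples(signal, fs_node, n_nodes, node_idx, n_frames):
    """Samples taken by one node at rate fs_node, phase-shifted by
    node_idx / (n_nodes * fs_node) seconds relative to node 0."""
    dt = 1.0 / (n_nodes * fs_node)  # effective high-rate sample period
    return [signal((n * n_nodes + node_idx) * dt) for n in range(n_frames)]

def interleave(streams):
    """Round-robin merge of the per-node streams into one high-rate stream."""
    out = []
    for frame in zip(*streams):
        out.extend(frame)
    return out

# A 440 Hz tone: 4 nodes at 1 kHz each jointly approximate a single 4 kHz ADC.
tone = lambda t: math.sin(2 * math.pi * 440.0 * t)
streams = [node_samples(tone, 1000.0, 4, k, 100) for k in range(4)]
merged = interleave(streams)
direct = [tone(n / 4000.0) for n in range(400)]  # true 4 kHz sampling
```

With ideal offsets the interleaved stream matches direct high-rate sampling exactly; in practice, clock skew between nodes perturbs the offsets, which is one source of the reconstruction error the paper must contend with.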
Voice assistants like Siri enable us to control IoT devices conveniently with voice commands; however, they also provide new attack opportunities for adversaries. Previous work attacks voice assistants with obfuscated voice commands by leveraging the gap between speech recognition systems and human voice perception. The limitation is that these obfuscated commands are audible and thus conspicuous to device owners. In this poster, we propose a novel mechanism to directly attack the microphone used for sensing voice data with inaudible voice commands. We show that the adversary can exploit the microphone's non-linearity and play well-designed inaudible ultrasounds to cause the microphone to record normal voice commands, and thus control the victim device inconspicuously. We demonstrate via end-to-end real-world experiments that our inaudible voice commands can attack an Android phone and an Amazon Echo device with high success rates at a range of 2-3 meters.
This paper proposes a novel privacy-preserving smart metering system for aggregating distributed smart meter data. It addresses two important challenges: (i) individual users wish to publish sensitive smart metering data for specific purposes, and (ii) an untrusted aggregator aims to make queries on the aggregate data. We handle these challenges using two main techniques. First, we propose the Fourier Perturbation Algorithm (FPA) and Wavelet Perturbation Algorithm (WPA), which utilize Fourier/Wavelet transformation and distributed differential privacy (DDP) to provide privacy for the released statistics with provable sensitivity and error bounds. Second, we leverage an exponential ElGamal encryption mechanism to enable secure communications between the users and the untrusted aggregator. Standard differential privacy techniques perform poorly on time-series data, since they add Θ(n) noise to answer n queries, rendering the answers practically useless if n is large. Our proposed distributed differential privacy mechanism relies on Gaussian principles to generate distributed noise, which guarantees differential privacy for each user with O(1) error, and provides computational simplicity and scalability. Compared with the Gaussian Perturbation Algorithm (GPA), which adds distributed Gaussian noise to the original data, the experimental results demonstrate the superiority of the proposed FPA and WPA, which add noise to the transformed coefficients.
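A minimal sketch of the Fourier-perturbation idea, our own simplification rather than the paper's calibrated mechanism: transform the series, retain k DFT coefficients, perturb them with Gaussian noise of an assumed scale `sigma`, and reconstruct. Calibrating `sigma` to the DDP sensitivity bound, and the distributed noise generation, are omitted here.

```python
import cmath
import math
import random

def fpa(series, k, sigma):
    """Fourier Perturbation Algorithm sketch: keep the first k DFT
    coefficients, add complex Gaussian noise of scale sigma to each
    (sigma would be calibrated to the sensitivity bound in the full
    mechanism), and rebuild the series from the noisy coefficients."""
    n = len(series)
    # Naive O(n*k) DFT of the first k frequencies, for clarity.
    coeffs = [sum(series[t] * cmath.exp(-2j * math.pi * f * t / n)
                  for t in range(n)) for f in range(k)]
    noisy = [c + complex(random.gauss(0, sigma), random.gauss(0, sigma))
             for c in coeffs]
    # Inverse transform using only the k retained (noisy) coefficients.
    return [sum(noisy[f] * cmath.exp(2j * math.pi * f * t / n)
                for f in range(k)).real / n for t in range(n)]
```

With sigma = 0 and k = n the reconstruction is exact; shrinking k introduces reconstruction error but means fewer coefficients need noise, which is the error/privacy trade-off exploited by perturbing transformed coefficients instead of the raw readings.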