Biblio

Found 12046 results

Filters: Keyword is Resiliency
2017-03-27
Eberly, Wayne.  2016.  Selecting Algorithms for Black Box Matrices: Checking For Matrix Properties That Can Simplify Computations. Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation. :207–214.

Processes to automate the selection of appropriate algorithms for various matrix computations are described. In particular, processes to check for, and certify, various matrix properties of black-box matrices are presented. These include sparsity patterns and structural properties that allow "superfast" algorithms to be used in place of black-box algorithms. Matrix properties that hold generically, and allow the use of matrix preconditioning to be reduced or eliminated, can also be checked for and certified, notably including the small-field case, where this presently has the greatest impact on the efficiency of the computation.
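As a rough illustration of the kind of black-box property check the abstract describes, the sketch below tests one candidate property (symmetry) using only matrix-vector products and random probes. This is a generic randomized check under our own assumptions, not the paper's certification procedure.

```python
import numpy as np

def blackbox_symmetry_check(apply_A, n, trials=20, rng=None):
    """Probabilistic check that a black-box matrix A (accessible only through
    matrix-vector products) is symmetric: for symmetric A, x^T(Ay) == y^T(Ax)
    for all x, y, so a mismatch on random probes certifies asymmetry."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(trials):
        x = rng.integers(0, 2, size=n).astype(float)
        y = rng.integers(0, 2, size=n).astype(float)
        if not np.isclose(x @ apply_A(y), y @ apply_A(x)):
            return False          # witness of asymmetry found
    return True                   # no witness: symmetric with high probability

# Example: a dense matrix wrapped as a black box.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(blackbox_symmetry_check(lambda v: A @ v, n=2))   # True
```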

2017-03-20
Goldfeld, Ziv, Cuff, Paul, Permuter, Haim H..  2016.  Semantic-Security Capacity for the Physical Layer via Information Theory. :17–27.

Physical layer security can ensure secure communication over noisy channels in the presence of an eavesdropper with unlimited computational power. We adopt an information theoretic variant of semantic-security (SS) (a cryptographic gold standard), as our secrecy metric and study the open problem of the type II wiretap channel (WTC II) with a noisy main channel, whose secrecy-capacity is unknown even under looser metrics than SS. Herein the secrecy-capacity is derived and shown to be equal to its SS capacity. In this setting, the legitimate users communicate via a discrete-memoryless (DM) channel in the presence of an eavesdropper that has perfect access to a subset of its choosing of the transmitted symbols, constrained to a fixed fraction of the block length. The secrecy criterion is achieved simultaneously for all possible eavesdropper subset choices. On top of that, SS requires negligible mutual information between the message and the eavesdropper's observations even when maximized over all message distributions. A key tool for the achievability proof is a novel and stronger version of Wyner's soft covering lemma. Specifically, the lemma shows that a random codebook achieves the soft-covering phenomenon with high probability. The probability of failure is doubly-exponentially small in the block length. Since the combined number of messages and subsets grows only exponentially with the block length, SS for the WTC II is established by using the union bound and invoking the stronger soft-covering lemma. The direct proof shows that rates up to the weak-secrecy capacity of the classic WTC with a DM erasure channel (EC) to the eavesdropper are achievable. The converse follows by establishing the capacity of this DM wiretap EC as an upper bound for the WTC II. From a broader perspective, the stronger soft-covering lemma constitutes a tool for showing the existence of codebooks that satisfy exponentially many constraints, a beneficial ability for many other applications in information theoretic security.
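The semantic-security requirement sketched in the abstract can be written compactly; the rendering below is schematic and uses our own notation, not necessarily the authors'.

```latex
\max_{P_M}\;\max_{\substack{\mathcal{S}\subseteq[n] \\ |\mathcal{S}|\le \alpha n}}
  I\!\left(M ; Z_{\mathcal{S}}\right) \;\xrightarrow[n\to\infty]{}\; 0
```

Here M is the message, Z_S the eavesdropper's observations of its chosen subset S of transmitted symbols, and alpha the fixed fraction of the block length the eavesdropper may tap; the outer maximization over message distributions P_M and the inner one over subsets capture the two "simultaneously for all" conditions stated in the abstract.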
 


2017-05-30
Cuzzocrea, Alfredo, Pirrò, Giuseppe.  2016.  A Semantic-web-technology-based Framework for Supporting Knowledge-driven Digital Forensics. Proceedings of the 8th International Conference on Management of Digital EcoSystems. :58–66.

The usage of Information and Communication Technologies (ICTs) pervades everyday life. If it is true that ICT has contributed to improving the quality of our life, it is also true that new forms of (cyber)crime have emerged in this setting. The diversity and amount of information forensic investigators need to cope with, when tackling a cyber-crime case, call for tools and techniques where knowledge is the main actor. Current approaches leave to the investigator the chore of integrating the diverse sources of evidence relevant for a case, thus hindering the automatic generation of reusable knowledge. This paper describes an architecture that lifts the classical phases of a digital forensic investigation to a knowledge-driven setting. We discuss how the usage of languages and technologies originating from the Semantic Web proposal can complement digital forensics tools so that knowledge becomes a first-class citizen. Our architecture enables complex forensic investigations to be performed in an integrated way and, as a by-product, builds a knowledge base that can be consulted to gain insights from previous cases. Our proposal has been inspired by real-world scenarios emerging in the context of an Italian research project about cyber security.

Roychoudhury, Abhik.  2016.  SemFix and Beyond: Semantic Techniques for Program Repair. Proceedings of the International Workshop on Formal Methods for Analysis of Business Systems. :2–2.

Automated program repair is of great promise for future programming environments. It is also of obvious importance for patching vulnerabilities in software, or for building self-healing systems for critical infra-structure. Traditional program repair techniques tend to lift the fix from elsewhere in the program via syntax based approaches. In this talk, I will mention how the search problems in program repair can be solved by semantic analysis techniques. Here semantic analysis methods are not only used to guide the search, but also for extracting formal specifications from tests. I will conclude with positioning of the syntax based and semantic based methods for vulnerability patching, future generation programming, and self-healing systems.

2017-11-13
Chen, Ming, Zadok, Erez, Vasudevan, Arun Olappamanna, Wang, Kelong.  2016.  SeMiNAS: A Secure Middleware for Wide-Area Network-Attached Storage. Proceedings of the 9th ACM International on Systems and Storage Conference. :2:1–2:13.

Utility computing is being gradually realized as exemplified by cloud computing. Outsourcing computing and storage to global-scale cloud providers benefits from high accessibility, flexibility, scalability, and cost-effectiveness. However, users are uneasy outsourcing the storage of sensitive data due to security concerns. We address this problem by presenting SeMiNAS, an efficient middleware system that allows files to be securely outsourced to providers and shared among geo-distributed offices. SeMiNAS achieves end-to-end data integrity and confidentiality with a highly efficient authenticated-encryption scheme. SeMiNAS leverages advanced NFSv4 features, including compound procedures and data-integrity extensions, to minimize extra network round trips caused by security metadata. SeMiNAS also caches remote files locally to reduce accesses to providers over WANs. We designed, implemented, and evaluated SeMiNAS, which demonstrates a small performance penalty of less than 26% and an occasional performance boost of up to 19% for Filebench workloads.
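The end-to-end integrity and confidentiality described above rest on authenticated encryption of outsourced data. The sketch below is a minimal, generic illustration of that step using AES-GCM from the Python cryptography package; the key handling, nonce policy, and metadata layout are our assumptions, not SeMiNAS's actual scheme.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_block(key, block_data, block_metadata):
    """Encrypt-and-authenticate one file block; the metadata (e.g. file id,
    block offset) is bound as associated data so tampering is detected."""
    nonce = os.urandom(12)                       # unique per block write
    ct = AESGCM(key).encrypt(nonce, block_data, block_metadata)
    return nonce, ct

def open_block(key, nonce, ct, block_metadata):
    """Raises InvalidTag if either the ciphertext or the metadata was altered."""
    return AESGCM(key).decrypt(nonce, ct, block_metadata)

key = AESGCM.generate_key(bit_length=256)
nonce, ct = seal_block(key, b"file contents", b"fileid=42;offset=0")
assert open_block(key, nonce, ct, b"fileid=42;offset=0") == b"file contents"
```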

2017-05-17
Nguyen, Lam M., Stolyar, Alexander L..  2016.  A Service System with Randomly Behaving On-demand Agents. Proceedings of the 2016 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Science. :365–366.

We consider a service system where agents (or, servers) are invited on-demand. Customers arrive as a Poisson process and join a customer queue. Customer service times are i.i.d. exponential. Agents' behavior is random in two respects. First, they can be invited into the system exogenously, and join the agent queue after a random time. Second, with some probability they rejoin the agent queue after a service completion, and otherwise leave the system. The objective is to design a real-time adaptive agent invitation scheme that keeps both customer and agent queues/waiting-times small. We study an adaptive scheme, which controls the number of pending agent invitations, based on queue-state feedback. We study the system process fluid limits, in the asymptotic regime where the customer arrival rate goes to infinity. We use the machinery of switched linear systems and common quadratic Lyapunov functions to derive sufficient conditions for the local stability of fluid limits at the desired equilibrium point (with zero queues). We conjecture that, for our model, local stability is in fact sufficient for global stability of fluid limits; the validity of this conjecture is supported by numerical and simulation experiments. When the local stability conditions do hold, simulations show good overall performance of the scheme.

2017-03-27
Argyros, George, Stais, Ioannis, Jana, Suman, Keromytis, Angelos D., Kiayias, Aggelos.  2016.  SFADiff: Automated Evasion Attacks and Fingerprinting Using Black-box Differential Automata Learning. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1690–1701.

Finding differences between programs with similar functionality is an important security problem as such differences can be used for fingerprinting or creating evasion attacks against security software like Web Application Firewalls (WAFs) which are designed to detect malicious inputs to web applications. In this paper, we present SFADIFF, a black-box differential testing framework based on Symbolic Finite Automata (SFA) learning. SFADIFF can automatically find differences between a set of programs with comparable functionality. Unlike existing differential testing techniques, instead of searching for each difference individually, SFADIFF infers SFA models of the target programs using black-box queries and systematically enumerates the differences between the inferred SFA models. All differences between the inferred models are checked against the corresponding programs. Any difference between the models, that does not result in a difference between the corresponding programs, is used as a counterexample for further refinement of the inferred models. SFADIFF's model-based approach, unlike existing differential testing tools, also supports fully automated root cause analysis in a domain-independent manner. We evaluate SFADIFF in three different settings for finding discrepancies between: (i) three TCP implementations, (ii) four WAFs, and (iii) HTML/JavaScript parsing implementations in WAFs and web browsers. Our results demonstrate that SFADIFF is able to identify and enumerate the differences systematically and efficiently in all these settings. We show that SFADIFF is able to find differences not only between different WAFs but also between different versions of the same WAF. SFADIFF is also able to discover three previously-unknown differences between the HTML/JavaScript parsers of two popular WAFs (PHPIDS 0.7 and Expose 2.4.0) and the corresponding parsers of Google Chrome, Firefox, Safari, and Internet Explorer. We confirm that all these differences can be used to evade the WAFs and launch successful cross-site scripting attacks.
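Once two models have been inferred, enumerating their behavioral differences amounts to exploring the product automaton. The sketch below is a toy version of that step for plain DFAs (SFADIFF works on symbolic finite automata and adds counterexample-driven refinement); the DFA encoding and the example machines are our own.

```python
from collections import deque

def find_difference(dfa_a, dfa_b, alphabet):
    """Breadth-first search over the product of two inferred models, returning
    the shortest string accepted by one and rejected by the other (or None).
    Each DFA is a triple (start_state, transition_dict, accepting_states)."""
    start = (dfa_a[0], dfa_b[0])
    seen, queue = {start}, deque([(start, "")])
    while queue:
        (qa, qb), word = queue.popleft()
        if (qa in dfa_a[2]) != (qb in dfa_b[2]):
            return word                       # a behavioral difference
        for sym in alphabet:
            nxt = (dfa_a[1][(qa, sym)], dfa_b[1][(qb, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + sym))
    return None

# Toy models over {'a','b'}: A accepts strings ending in 'a',
# B accepts strings containing 'a'.
A = (0, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}, {1})
B = (0, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 1}, {1})
print(find_difference(A, B, "ab"))            # -> "ab" (B accepts, A rejects)
```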

2017-03-20
Pouliot, David, Wright, Charles V..  2016.  The Shadow Nemesis: Inference Attacks on Efficiently Deployable, Efficiently Searchable Encryption. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1341–1352.

Encrypting Internet communications has been the subject of renewed focus in recent years. In order to add end-to-end encryption to legacy applications without losing the convenience of full-text search, ShadowCrypt and Mimesis Aegis use a new cryptographic technique called "efficiently deployable efficiently searchable encryption" (EDESE) that allows a standard full-text search system to perform searches on encrypted data. Compared to other recent techniques for searching on encrypted data, EDESE schemes leak a great deal of statistical information about the encrypted messages and the keywords they contain. Until now, the practical impact of this leakage has been difficult to quantify. In this paper, we show that the adversary's task of matching plaintext keywords to the opaque cryptographic identifiers used in EDESE can be reduced to the well-known combinatorial optimization problem of weighted graph matching (WGM). Using real email and chat data, we show how off-the-shelf WGM solvers can be used to accurately and efficiently recover hundreds of the most common plaintext keywords from a set of EDESE-encrypted messages. We show how to recover the tags from Bloom filters, so that the WGM solver can be applied to sets of encrypted messages whose search tags are encoded in a Bloom filter. We also show that the attack can be mitigated by carefully configuring Bloom filter parameters.
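As a heavily simplified illustration of the matching idea, the sketch below pairs opaque EDESE tokens with candidate plaintext keywords by frequency alone, using the Hungarian algorithm; the paper's attack solves the harder weighted graph matching problem over keyword co-occurrence, so treat this only as a sketch of the general approach, with made-up counts.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_by_frequency(plain_freq, token_freq):
    """plain_freq: occurrence counts of candidate plaintext keywords in an
    auxiliary corpus; token_freq: occurrence counts of opaque EDESE tokens.
    Returns a guessed keyword index for each token via minimum-cost matching."""
    cost = np.abs(plain_freq[:, None] - token_freq[None, :])   # |f_i - f_j|
    row, col = linear_sum_assignment(cost)
    return dict(zip(col, row))            # token index -> guessed keyword index

plain = np.array([120, 64, 30, 7])        # keyword counts from an auxiliary corpus
tokens = np.array([118, 8, 66, 29])       # token counts observed in ciphertexts
print(match_by_frequency(plain, tokens))  # {0: 0, 2: 1, 3: 2, 1: 3}
```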

2017-06-27
Atwater, Erinn, Hengartner, Urs.  2016.  Shatter: Using Threshold Cryptography to Protect Single Users with Multiple Devices. Proceedings of the 9th ACM Conference on Security & Privacy in Wireless and Mobile Networks. :91–102.

The average computer user is no longer restricted to one device. They may have several devices and expect their applications to work on all of them. A challenge arises when these applications need the cryptographic private key of the devices' owner. Here the device owner typically has to manage keys manually with a "keychain" app, which leads to private keys being transferred insecurely between devices – or even to other people. Even with intuitive synchronization mechanisms, theft and malware still pose a major risk to keys. Phones and watches are frequently removed or set down, and a single compromised device leads to the loss of the owner's private key, a catastrophic failure that can be quite difficult to recover from. We introduce Shatter, an open-source framework that runs on desktops, Android, and Android Wear, and performs key distribution on a user's behalf. Shatter uses threshold cryptography to turn the security weakness of having multiple devices into a strength. Apps that delegate cryptographic operations to Shatter have their keys compromised only when a threshold number of devices are compromised by the same attacker. We demonstrate how our framework operates with two popular Android apps (protecting identity keys for a messaging app, and encryption keys for a note-taking app) in a backwards-compatible manner: only Shatter users need to move to a Shatter-aware version of the app. Shatter has minimal impact on app performance, with signatures and decryption being calculated in 0.5s and security proofs in 14s.
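The underlying idea, splitting key material so that no single device holds anything useful on its own, can be illustrated with a toy k-of-n Shamir secret sharing over a prime field. This is only to convey the threshold concept: Shatter performs threshold signatures and decryption so that the private key is never actually reconstructed on any one device, and the field size and randomness below are toy choices.

```python
import random

P = 2**127 - 1                        # a Mersenne prime; toy field modulus

def split_secret(secret, n=3, k=2):
    """Shamir split: any k of the n shares reconstruct; k-1 reveal nothing.
    (Toy randomness; real code would use the secrets module.)"""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = random.randrange(P)
shares = split_secret(key)                 # e.g. one share per device
assert reconstruct(shares[:2]) == key      # any two devices suffice
```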

2017-06-05
Mirsky, Yisroel, Shabtai, Asaf, Rokach, Lior, Shapira, Bracha, Elovici, Yuval.  2016.  SherLock vs Moriarty: A Smartphone Dataset for Cybersecurity Research. Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security. :1–12.

In this paper we describe and share with the research community, a significant smartphone dataset obtained from an ongoing long-term data collection experiment. The dataset currently contains 10 billion data records from 30 users collected over a period of 1.6 years and an additional 20 users for 6 months (totaling 50 active users currently participating in the experiment). The experiment involves two smartphone agents: SherLock and Moriarty. SherLock collects a wide variety of software and sensor data at a high sample rate. Moriarty perpetrates various attacks on the user and logs its activities, thus providing labels for the SherLock dataset. The primary purpose of the dataset is to help security professionals and academic researchers in developing innovative methods of implicitly detecting malicious behavior in smartphones. Specifically, from data obtainable without superuser (root) privileges. To demonstrate possible uses of the dataset, we perform a basic malware analysis and evaluate a method of continuous user authentication.

2017-05-30
Continella, Andrea, Guagnelli, Alessandro, Zingaro, Giovanni, De Pasquale, Giulio, Barenghi, Alessandro, Zanero, Stefano, Maggi, Federico.  2016.  ShieldFS: A Self-healing, Ransomware-aware Filesystem. Proceedings of the 32Nd Annual Conference on Computer Security Applications. :336–347.

Preventive and reactive security measures can only partially mitigate the damage caused by modern ransomware attacks. Indeed, the remarkable amount of illicit profit and the cyber-criminals' increasing interest in ransomware schemes suggest that a fair number of users are actually paying the ransoms. Unfortunately, pure-detection approaches (e.g., based on analysis sandboxes or pipelines) are not sufficient nowadays, because often we do not have the luxury of being able to isolate a sample to analyze, and when this happens it is already too late for several users! We believe that a forward-looking solution is to equip modern operating systems with practical self-healing capabilities against this serious threat. Towards such a vision, we propose ShieldFS, an add-on driver that makes the Windows native filesystem immune to ransomware attacks. For each running process, ShieldFS dynamically toggles a protection layer that acts as a copy-on-write mechanism, according to the outcome of its detection component. Internally, ShieldFS monitors the low-level filesystem activity to update a set of adaptive models that profile the system activity over time. Whenever one or more processes violate these models, their operations are deemed malicious and the side effects on the filesystem are transparently rolled back. We designed ShieldFS after an analysis of billions of low-level, I/O filesystem requests generated by thousands of benign applications, which we collected from clean machines in use by real users for about one month. This is the first measurement on the filesystem activity of a large set of benign applications in real working conditions. We evaluated ShieldFS in real-world working conditions on real, personal machines, against samples from state of the art ransomware families. ShieldFS was able to detect the malicious activity at runtime and transparently recover all the original files. Although the models can be tuned to fit various filesystem usage profiles, our results show that our initial tuning yields high accuracy even on unseen samples and variants.

2017-05-18
Sealfon, Adam.  2016.  Shortest Paths and Distances with Differential Privacy. Proceedings of the 35th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems. :29–41.

We introduce a model for differentially private analysis of weighted graphs in which the graph topology (V, E) is assumed to be public and the private information consists only of the edge weights w : E → R+. This can express hiding congestion patterns in a known system of roads. Differential privacy requires that the output of an algorithm provides little advantage, measured by privacy parameters ε and δ, for distinguishing between neighboring inputs, which are thought of as inputs that differ on the contribution of one individual. In our model, two weight functions w, w' are considered to be neighboring if they have ℓ1 distance at most one. We study the problems of privately releasing a short path between a pair of vertices and of privately releasing approximate distances between all pairs of vertices. We are concerned with the approximation error, the difference between the length of the released path or released distance and the length of the shortest path or actual distance. For the problem of privately releasing a short path between a pair of vertices, we prove a lower bound of Ω(|V|) on the additive approximation error for fixed privacy parameters ε, δ. We provide a differentially private algorithm that matches this error bound up to a logarithmic factor and releases paths between all pairs of vertices, not just a single pair. The approximation error achieved by our algorithm can be bounded by the number of edges on the shortest path, so we achieve better accuracy than the worst-case bound for pairs of vertices that are connected by a low-weight path consisting of o(|V|) vertices. For the problem of privately releasing all-pairs distances, we show that for trees we can release all-pairs distances with approximation error O(log^2.5 |V|) for fixed privacy parameters. For arbitrary bounded-weight graphs with edge weights in [0, M] we can release all distances with approximation error Õ(√(|V| M)).
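Restating the model and results of the abstract in cleaner notation (our rendering of what is stated above): for fixed (ε, δ), two weight functions are neighbors when

```latex
w \sim w' \;\iff\; \|w - w'\|_1 \;=\; \sum_{e \in E} |w(e) - w'(e)| \;\le\; 1 ,
```

additive error Ω(|V|) is unavoidable for privately releasing a short path (and is matched up to a logarithmic factor), all-pairs distances on trees are releasable with error O(log^2.5 |V|), and arbitrary graphs with edge weights in [0, M] admit error Õ(√(|V| M)).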

2017-08-02
Dürmuth, Markus, Oswald, David, Pastewka, Niklas.  2016.  Side-Channel Attacks on Fingerprint Matching Algorithms. Proceedings of the 6th International Workshop on Trustworthy Embedded Devices. :3–13.

Biometric authentication schemes are frequently used to establish the identity of a user. Often, a trusted hardware device is used to decide if a provided biometric feature is sufficiently close to the features stored by the legitimate user during enrollment. In this paper, we address the question whether the stored features can be extracted with side-channel attacks. We consider several models for types of leakage that are relevant specifically for fingerprint verification, and show results for attacks against the Bozorth3 and a custom matching algorithm. This work shows an interesting path for future research on the susceptibility of biometric algorithms towards side-channel attacks.

2017-11-20
Li, H., He, Y., Sun, L., Cheng, X., Yu, J..  2016.  Side-channel information leakage of encrypted video stream in video surveillance systems. IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications. :1–9.

Video surveillance has been widely adopted to ensure home security in recent years. Most video encoding standards such as H.264 and MPEG-4 compress the temporal redundancy in a video stream using difference coding, which only encodes the residual image between a frame and its reference frame. Difference coding can efficiently compress a video stream, but it causes side-channel information leakage even though the video stream is encrypted, as reported in this paper. Particularly, we observe that the traffic patterns of an encrypted video stream are different when a user conducts different basic activities of daily living, which must be kept private from third parties as obliged by HIPAA regulations. We also observe that by exploiting this side-channel information leakage, attackers can readily infer a user's basic activities of daily living based on only the traffic size data of an encrypted video stream. We validate such an attack using two off-the-shelf cameras, and the results indicate that the user's basic activities of daily living can be recognized with a high accuracy.
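A generic way to see why traffic volume alone can leak activities is to treat windows of per-second byte counts as features for a standard classifier. The sketch below is our own illustration, not the authors' method; the feature choices, the RandomForest model, and the names labelled_captures/new_capture are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def windowed_features(byte_counts, win=30):
    """Summarize per-second encrypted-stream byte counts over fixed windows.
    Only traffic volume is used; the video content itself is never touched."""
    feats = []
    for i in range(0, len(byte_counts) - win + 1, win):
        w = np.asarray(byte_counts[i:i + win], dtype=float)
        feats.append([w.mean(), w.std(), w.max(), np.percentile(w, 90)])
    return np.asarray(feats)

# Hypothetical usage: traffic captured while labelled activities were performed.
# X = np.vstack([windowed_features(c) for c in labelled_captures])
# y = ...                                    # one activity label per window
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# clf.predict(windowed_features(new_capture))   # inferred daily activities
```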

2017-09-15
Hamda, Kento, Ishigaki, Genya, Sakai, Yoichi, Shinomiya, Norihiko.  2016.  Significance Analysis for Edges in a Graph by Means of Leveling Variables on Nodes. Proceedings of the 7th International Conference on Computing Communication and Networking Technologies. :27:1–27:5.

This paper proposes a novel measure for edge significance considering quantity propagation in a graph. Our method utilizes a pseudo-propagation process obtained by solving a load-balancing problem on nodes. Edge significance is defined as the difference in propagation between the graph with an edge and the graph without it. The simulation compares our proposed method with traditional betweenness centrality in order to show how our measure, which considers the propagation process in a graph, differs from that type of centrality.
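The definition above, edge significance as the change in a propagated quantity when the edge is removed, can be sketched as follows. The diffusion used here is a generic stand-in chosen by us; the paper derives its pseudo-propagation from a node load-balancing problem.

```python
import networkx as nx
import numpy as np

def propagation_profile(G, steps=10):
    """Rough propagation proxy: place a unit quantity on every node and
    repeatedly push it to neighbours via a row-normalized adjacency matrix."""
    nodes = list(G.nodes)
    A = nx.to_numpy_array(G, nodelist=nodes)
    deg = A.sum(axis=1, keepdims=True)
    P = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)
    x = np.ones(len(nodes))
    for _ in range(steps):
        x = P.T @ x
    return x

def edge_significance(G, edge):
    """Difference of the propagation profile with and without the edge."""
    H = G.copy()
    H.remove_edge(*edge)
    return float(np.abs(propagation_profile(G) - propagation_profile(H)).sum())

G = nx.path_graph(5)                       # 0-1-2-3-4
print(edge_significance(G, (1, 2)))        # significance of the middle edge
```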

2017-05-18
Awad, Amro, Manadhata, Pratyusa, Haber, Stuart, Solihin, Yan, Horne, William.  2016.  Silent Shredder: Zero-Cost Shredding for Secure Non-Volatile Main Memory Controllers. Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems. :263–276.

As non-volatile memory (NVM) technologies are expected to replace DRAM in the near future, new challenges have emerged. For example, NVMs have slow and power-consuming writes, and limited write endurance. In addition, NVMs have a data remanence vulnerability, i.e., they retain data for a long time after being powered off. NVM encryption alleviates the vulnerability, but exacerbates the limited endurance by increasing the number of writes to memory. We observe that, in current systems, a large percentage of main memory writes result from data shredding in operating systems, a process of zeroing out physical pages before mapping them to new processes, in order to protect previous processes' data. In this paper, we propose Silent Shredder, which repurposes initialization vectors used in standard counter mode encryption to completely eliminate the data shredding writes. Silent Shredder also speeds up reading shredded cache lines, and hence reduces power consumption and improves overall performance. To evaluate our design, we run three PowerGraph applications and 26 multi-programmed workloads from the SPEC 2006 suite, on a gem5-based full system simulator. Silent Shredder eliminates an average of 48.6% of the writes in the initialization and graph construction phases. It speeds up main memory reads by 3.3 times, and improves the number of instructions per cycle (IPC) by 6.4% on average. Finally, we discuss several use cases, including virtual machines' data isolation and user-level large data initialization, where Silent Shredder can be used effectively at no extra cost.
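The central trick, repurposing the counter-mode initialization vector so that a page can be "erased" without writing zeros to it, can be modelled in a few lines. The sketch below is a toy software model under our own simplifications (a per-page IV version, zeros served for shredded pages); the real design operates inside the memory controller at cache-line granularity.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class CounterModePage:
    """Toy page in counter-mode-encrypted NVM: the controller keeps a small
    per-page IV version; 'shredding' only bumps that version, so no zeroing
    writes reach the non-volatile cells and stale ciphertext becomes unreadable."""

    def __init__(self, key, size=64):
        self.key, self.iv_version, self.stored = key, 0, b"\x00" * size
        self.written_version = None

    def _ctr(self):
        return Cipher(algorithms.AES(self.key),
                      modes.CTR(self.iv_version.to_bytes(16, "big")))

    def write(self, plaintext):
        self.stored = self._ctr().encryptor().update(plaintext)
        self.written_version = self.iv_version

    def read(self):
        if self.written_version != self.iv_version:      # page was shredded
            return b"\x00" * len(self.stored)             # serve zeros, no NVM read
        return self._ctr().decryptor().update(self.stored)

    def shred(self):
        self.iv_version += 1                              # the only cost of shredding

page = CounterModePage(os.urandom(16))
page.write(b"previous process data".ljust(64, b"\x00"))
page.shred()
assert page.read() == b"\x00" * 64
```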

Mauriello, Matthew Louis, Shneiderman, Ben, Du, Fan, Malik, Sana, Plaisant, Catherine.  2016.  Simplifying Overviews of Temporal Event Sequences. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :2217–2224.

Beginning the analysis of new data is often difficult as modern datasets can be overwhelmingly large. With visual analytics in particular, displays of large datasets quickly become crowded and unclear. Through observing the practices of analysts working with the event sequence visualization tool EventFlow, we identified three techniques to reduce initial visual complexity by reducing the number of event categories resulting in a simplified overview. For novice users, we suggest an initial pair of event categories to display. For advanced users, we provide six ranking metrics and display all pairs in a ranked list. Finally, we present the Event Category Matrix (ECM), which simultaneously displays overviews of every event category pair. In this work, we report on the development of these techniques through two formative usability studies and the improvements made as a result. The goal of our work is to investigate strategies that help users overcome the challenges associated with initial visual complexity and to motivate the use of simplified overviews in temporal event sequence analysis.

Chachmon, Nadav, Richins, Daniel, Cohn, Robert, Christensson, Magnus, Cui, Wenzhi, Reddi, Vijay Janapa.  2016.  Simulation and Analysis Engine for Scale-Out Workloads. Proceedings of the 2016 International Conference on Supercomputing. :22:1–22:13.

We introduce a system-level Simulation and Analysis Engine (SAE) framework based on dynamic binary instrumentation for fine-grained and customizable instruction-level introspection of everything that executes on the processor. SAE can instrument the BIOS, kernel, drivers, and user processes. It can also instrument multiple systems simultaneously using a single instrumentation interface, which is essential for studying scale-out applications. SAE is an x86 instruction set simulator designed specifically to enable rapid prototyping, evaluation, and validation of architectural extensions and program analysis tools using its flexible APIs. It is fast enough to execute full platform workloads (a modern operating system can boot in a few minutes), thus enabling research, evaluation, and validation of complex functionalities related to multicore configurations, virtualization, security, and more. To reach high speeds, SAE couples tightly with a virtual platform and employs both a just-in-time (JIT) compiler that helps simulate simple instructions efficiently and a fast interpreter for simulating new or complex instructions. We describe SAE's architecture and instrumentation engine design and show the framework's usefulness for single- and multi-system architectural and program analysis studies.

2017-03-29
Ghosh, Uttam, Dong, Xinshu, Tan, Rui, Kalbarczyk, Zbigniew, Yau, David K.Y., Iyer, Ravishankar K..  2016.  A Simulation Study on Smart Grid Resilience Under Software-Defined Networking Controller Failures. Proceedings of the 2Nd ACM International Workshop on Cyber-Physical System Security. :52–58.

Riding on the success of SDN for enterprise and data center networks, recently researchers have shown much interest in applying SDN for critical infrastructures. A key concern, however, is the vulnerability of the SDN controller as a single point of failure. In this paper, we develop a cyber-physical simulation platform that interconnects Mininet (an SDN emulator), hardware SDN switches, and PowerWorld (a high-fidelity, industry-strength power grid simulator). We report initial experiments on how a number of representative controller faults may impact the delay of smart grid communications. We further evaluate how this delay may affect the performance of the underlying physical system, namely automatic generation control (AGC) as a fundamental closed-loop control that regulates the grid frequency to a critical nominal value. Our results show that when the fault-induced delay reaches seconds (e.g., more than four seconds in some of our experiments), degradation of the AGC becomes evident. Particularly, the AGC is most vulnerable when it is in a transient following, say, step changes in loading, because the significant state fluctuations will exacerbate the effects of using a stale system state in the control.

2017-09-27
Li, Guannan, Liu, Jun, Wang, Xue, Xu, Hongli, Cui, Jun-Hong.  2016.  A Simulator for Swarm AUVs Acoustic Communication Networking. Proceedings of the 11th ACM International Conference on Underwater Networks & Systems. :42:1–42:2.

This paper presents a simulator for swarm operations designed to verify algorithms for a swarm of autonomous underwater robots (AUVs), specifically for constructing an underwater communication network with AUVs carrying acoustic communication devices. This simulator consists of three nodes: a virtual vehicle node (VV), a virtual environment node (VE), and a visual showing node (VS). The modular design treats AUV models as a combination of virtual equipment. An expert acoustic communication simulator is embedded in this simulator, to simulate scenarios with dynamic acoustic communication nodes. Several simulations we have performed demonstrate that this simulator is easy to use and can be further improved.

2017-08-02
Bacs, Andrei, Giuffrida, Cristiano, Grill, Bernhard, Bos, Herbert.  2016.  Slick: An Intrusion Detection System for Virtualized Storage Devices. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :2033–2040.

Cloud computing is rapidly reshaping the server administration landscape. The widespread use of virtualization and the increasingly high server consolidation ratios, in particular, have introduced unprecedented security challenges for users, increasing the exposure to intrusions and opening up new opportunities for attacks. Deploying security mechanisms in the hypervisor to detect and stop intrusion attempts is a promising strategy to address this problem. Existing hypervisor-based solutions, however, are typically limited to very specific classes of attacks and introduce exceedingly high performance overhead for production use. In this paper, we present Slick (Storage-Level Intrusion ChecKer), an intrusion detection system (IDS) for virtualized storage devices. Slick detects intrusion attempts by efficiently and transparently monitoring write accesses to critical regions on storage devices. The low-overhead monitoring component operates entirely inside the hypervisor, with no introspection or modifications required in the guest VMs. Using Slick, users can deploy generic IDS rules to detect a broad range of real-world intrusions in a flexible and practical way. Experimental results confirm that Slick is effective at enhancing the security of virtualized servers, while imposing less than 5% overhead in production.

2017-04-03
Lee, Seungsoo, Yoon, Changhoon, Shin, Seungwon.  2016.  The Smaller, the Shrewder: A Simple Malicious Application Can Kill an Entire SDN Environment. Proceedings of the 2016 ACM International Workshop on Security in Software Defined Networks & Network Function Virtualization. :23–28.

Security vulnerability assessment is an important process that must be conducted against any system before the deployment, and emerging technologies are no exceptions. Software-Defined Networking (SDN) has aggressively evolved in the past few years and is now almost at the early adoption stage. At this stage, the attack surface of SDN should be thoroughly investigated and assessed in order to mitigate possible security breaches against SDN. Inspired by this necessity, we reveal three attack scenarios that leverage SDN applications to attack SDNs, and test the attack scenarios against three of the most popular SDN controllers available today. In addition, we discuss the possible defense mechanisms against such application-originated attacks.

2017-09-19
Lee, Seung-seob, Shi, Hang, Tan, Kun, Liu, Yunxin, Lee, SuKyoung, Cui, Yong.  2016.  Smart and Secure: Preserving Privacy in Untrusted Home Routers. Proceedings of the 7th ACM SIGOPS Asia-Pacific Workshop on Systems. :11:1–11:8.

Recently, wireless home routers have increasingly become smart. While these smart routers provide rich functionalities to users, they also raise security concerns. Since a smart home router may process and store personal data for users, once compromised, this sensitive information will be exposed. Unfortunately, current operating systems on home routers are far from secure. As a consequence, users are facing a difficult tradeoff between functionality and privacy risks. This paper attacks this dilemma with a novel SEAL architecture for home routers. SEAL leverages the ARM TrustZone technology to run a conventional router OS (i.e., Linux) in a non-secure/normal world. All sensitive user data are shielded from the normal world using encryption. Modules (called applets) that process the sensitive data are located in a secure world and confined in secure sandboxes provided by a tiny secure OS. We report the system design of SEAL and our preliminary implementation and evaluation results.

Dhand, Pooja, Mittal, Sumit.  2016.  Smart Handoff Framework for Next Generation Heterogeneous Networks in Smart Cities. Proceedings of the International Conference on Advances in Information Communication Technology & Computing. :75:1–75:7.

Over the last few decades, accessibility scenarios have undergone a drastic change. Today the way people access information and resources is quite different from the age before the Internet evolved. The evolution of the Internet has made remarkable, epoch-making changes and has become the backbone of the smart city. The vision of the smart city revolves around seamless connectivity. Constant connectivity can provide uninterrupted services to users such as e-governance, e-banking, e-marketing, e-shopping, e-payment and communication through social media. Providing such uninterrupted services to citizens is our prime concern. So this paper focuses on a smart handoff framework for next generation heterogeneous networks in smart cities to provide all-time connectivity to anyone, anyhow and anywhere. To achieve this, three strategies have been proposed for the handoff initialization phase: mobile-controlled, user-controlled and network-controlled handoff initialization. Each strategy considers a different set of parameters. Results show that combining additional parameters with RSSI, an adaptive threshold, and hysteresis solves the ping-pong and corner-effect problems in the smart city.
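A minimal sketch of the classic RSSI-plus-hysteresis decision rule that such handoff-initialization strategies build on (parameter names and values are ours; the paper's strategies add further context parameters and adaptive thresholds on top of this):

```python
def should_handoff(rssi_serving, rssi_target, threshold, hysteresis):
    """Hand off only when the serving cell's signal has dropped below the
    (possibly adaptive) threshold AND the candidate beats it by a hysteresis
    margin; the margin is what suppresses ping-pong handoffs at cell edges."""
    return rssi_serving < threshold and rssi_target > rssi_serving + hysteresis

# Example: serving at -88 dBm, candidate at -80 dBm, threshold -85 dBm, 5 dB margin.
print(should_handoff(-88, -80, threshold=-85, hysteresis=5))   # True
```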