Bibliography
Logic encryption, a method that locks a circuit against unauthorized use unless the correct key is provided, is the most important technique in hardware IP protection. However, with the discovery of the SAT attack, all traditional logic encryption algorithms were broken. Algorithms designed after the SAT attack are all vulnerable to structural analysis unless provable obfuscation is applied to the locked circuit, yet no provable logic obfuscation is available, despite vague appeals to logic resynthesis. In this paper, we formulate and discuss a trilemma in logic encryption among locking robustness, structural security, and encryption efficiency, showing that pre-SAT approaches achieve only structural security and encryption efficiency, while post-SAT approaches achieve only locking robustness and encryption efficiency. Within locking itself, there is a further dilemma between query complexity and error number. We first develop a theory of, and a solution to, this locking dilemma between query complexity and error number. We then provide a provable obfuscation solution to the dilemma between structural security and locking robustness. Finally, we present and discuss results towards resolving the trilemma in logic encryption.
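To make the locking idea concrete, the following minimal sketch (with a hypothetical two-bit key and XOR key gates) shows how a locked circuit reproduces the original function only under the correct key; it illustrates key-based locking in general, not the constructions studied in the paper.

```python
# Minimal sketch of XOR-based logic locking (illustrative only; not the
# construction proposed in the paper). A 2-bit key is XORed into internal
# wires, so the circuit computes the original function only for the
# correct key.

def original_circuit(a, b, c):
    # Example combinational function: out = (a AND b) OR c
    return (a & b) | c

def locked_circuit(a, b, c, k0, k1):
    # Hypothetical locking: XOR key gates on an internal wire and the output.
    w = (a & b) ^ k0          # key gate on an internal wire
    return (w | c) ^ k1       # key gate on the output

CORRECT_KEY = (0, 0)          # with k0 = k1 = 0 the XOR gates are transparent

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                assert locked_circuit(a, b, c, *CORRECT_KEY) == original_circuit(a, b, c)
    print("locked circuit matches the original under the correct key")
```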
Industrial networks are the cornerstone of modern industrial control systems, and performing security checks of industrial communication processes helps detect unknown risks and vulnerabilities. Fuzz testing is a widely used, highly automatable method for performing such checks. However, carrying out security checks on industrial networks is challenging because of the increasing variety and complexity of industrial communication protocols. Existing approaches usually spend a long time modeling the protocol in order to generate test cases, which is labor-intensive and time-consuming, and becomes even worse when the target protocol is stateful. To address this problem, we employ a deep learning model to learn the structures of protocol frames and handle the temporal features of stateful protocols. We propose a fuzzing framework named SeqFuzzer that automatically learns protocol frame structures from communication traffic and generates fake but plausible messages as test cases. To demonstrate the usability of our approach, we applied SeqFuzzer to widely used Ethernet for Control Automation Technology (EtherCAT) devices and successfully detected several security vulnerabilities.
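The generation step can be pictured with a toy stand-in: the sketch below learns byte-to-byte transitions from captured frames (a simple Markov model standing in for the paper's deep learning model) and emits plausible but occasionally mutated frames as test cases. The frame bytes, mutation rate, and function names are illustrative assumptions, not SeqFuzzer's implementation.

```python
# Toy sketch of traffic-driven test-case generation (a byte-level Markov model
# stands in for SeqFuzzer's deep learning model; frame data and parameters are
# illustrative assumptions, not the paper's implementation).
import random
from collections import defaultdict

def learn_transitions(frames):
    """Count byte-to-byte transitions over captured protocol frames."""
    table = defaultdict(list)
    for frame in frames:
        for cur, nxt in zip(frame, frame[1:]):
            table[cur].append(nxt)
    return table

def generate_frame(table, start, length, mutate_prob=0.05):
    """Generate a plausible frame, injecting occasional random bytes as fuzz."""
    out = bytearray([start])
    for _ in range(length - 1):
        choices = table.get(out[-1])
        nxt = random.choice(choices) if choices else random.randrange(256)
        if random.random() < mutate_prob:      # fuzzing mutation
            nxt = random.randrange(256)
        out.append(nxt)
    return bytes(out)

if __name__ == "__main__":
    captured = [bytes.fromhex("88a40c10"), bytes.fromhex("88a40c12")]  # dummy frames
    model = learn_transitions(captured)
    for _ in range(3):
        print(generate_frame(model, captured[0][0], 4).hex())
```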
Denial of Service (DoS) attacks remain a serious security concern, as no service is, in principle, protected against them. Although a Dolev-Yao intruder with unlimited resources can trivially render any service unavailable, DoS attacks do not have to be carried out by such extremely powerful intruders. It is useful in practice, and more challenging for formal protocol verification, to determine whether a service is vulnerable even to resource-bounded intruders that cannot generate or intercept arbitrarily large volumes of traffic. This paper proposes a novel, more refined intruder model in which the intruder can consume at most a specified amount of resources in any given time window. Additionally, we propose protocol theories that may contain timeouts and that specify service resource usage during protocol execution. In contrast to existing resource-conscious protocol verification models, our model allows a finer and more subtle analysis of DoS problems. We illustrate the power of our approach by representing several classes of DoS attacks, such as Slow, Asymmetric, and Amplification DoS attacks, which exhaust different types of target resources, such as the number of workers, processing power, memory, and network bandwidth. We show that the proposed DoS problem is undecidable in general and PSPACE-complete for the class of resource-bounded, balanced systems. Finally, we implemented our formal verification model in the rewriting logic tool Maude and analyzed a number of DoS attacks in an automated fashion using Rewriting Modulo SMT.
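As a rough intuition for the resource-bounded setting, the toy simulation below shows how an intruder limited to one new connection per time unit can still exhaust a fixed worker pool when slow requests are held open until a timeout; the parameters and the Python model are illustrative assumptions, not the paper's formal Maude theory.

```python
# Toy simulation of a slow DoS by a resource-bounded intruder: the attacker
# opens only one connection per tick, but each held-open connection occupies
# a server worker until a timeout. Parameters are illustrative and not taken
# from the paper's formal model.

WORKERS = 10          # server worker pool
TIMEOUT = 30          # ticks before the server reclaims an idle connection
ATTACK_RATE = 1       # connections the bounded intruder opens per tick

def simulate(ticks):
    held = []                                          # expiry times of attacker-held workers
    for t in range(ticks):
        held = [e for e in held if e > t]              # reclaim timed-out workers
        for _ in range(ATTACK_RATE):
            if len(held) < WORKERS:
                held.append(t + TIMEOUT)               # slow request holds a worker
        if len(held) == WORKERS:
            print(f"tick {t}: all {WORKERS} workers busy, legitimate clients denied")
            return
    print("service survived the attack window")

if __name__ == "__main__":
    simulate(100)
```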
Load balancing and IP anycast are traffic routing techniques used to speed up the delivery of Domain Name System (DNS) responses. During a DDoS attack or an overload condition, these mechanisms are especially valuable, as their failover alternatives provide intrinsic DDoS mitigation. In this paper, we present a methodology for predicting the next DNS response with a view to redirecting traffic to less busy servers, in order to mitigate the size of the attack. Our experiments were conducted using data from the November 2015 attack on the Root DNS servers, with Logistic Regression, k-Nearest Neighbors, Support Vector Machines, and Random Forest as our primary classifiers. The models were able to successfully predict up to 83% of responses for Root Letters that operated on a small number of sites and consequently suffered the most during the attacks. By contrast, for DNS requests directed at more widely distributed Root servers, the models demonstrated lower accuracy. Our analysis showed a correlation between the True Positive Rate metric and the number of sites, as well as a clear need for intelligent traffic management in load balancing practices.
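The classification setup described above can be sketched with scikit-learn; the snippet below uses synthetic placeholder features and labels (the real ones come from the Root DNS traces, which are not reproduced here) and is only meant to show how two of the named classifiers would be trained and compared.

```python
# Minimal sketch of the classification setup described in the abstract, with
# synthetic placeholder features and labels standing in for the Root DNS data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((1000, 3))                        # placeholder per-query features
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)  # placeholder label: redirected or not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for model in (LogisticRegression(max_iter=1000), RandomForestClassifier(n_estimators=100)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, accuracy_score(y_te, model.predict(X_te)))
```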
Since cyber-physical systems are inherently vulnerable to information leaks, software architects need to reason about security policies to define desired and undesired information flow through a system. The microservice architectural style requires architects to refine a macro-level security policy into micro-level policies for individual microservices. However, when policies are refined in an ill-formed way, information leaks can emerge once the microservices are composed. Related approaches to preventing such leaks do not take into account characteristics of cyber-physical systems such as real-time behavior or message-passing communication. In this paper, we enable the refinement and verification of information-flow security policies for cyber-physical microservice architectures. We provide architects with a set of well-formedness rules for refining a macro-level policy in a way that enforces its security restrictions. Based on the resulting micro-level policies, we present a verification technique to check whether the real-time message passing of microservices is secure. In combination, our contributions prevent information leaks from emerging upon composition. We evaluate the accuracy of our approach using an extension of the CoCoME case study.
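One way to picture the composition problem is a transitive-closure check: if the union of micro-level flows reaches a pair of components the macro-level policy never allowed, a leak emerges on composition. The sketch below is a deliberately simplified illustration of that idea, not the paper's well-formedness rules or its real-time verification technique; all component names are hypothetical.

```python
# Toy check that composed micro-level flow policies cannot realize a flow the
# macro-level policy forbids: compare the transitive closure of all allowed
# micro-level flows against the macro-level policy's closure.

def closure(edges):
    """Transitive closure of a set of directed (src, dst) flow edges."""
    reach = set(edges)
    changed = True
    while changed:
        changed = False
        for a, b in list(reach):
            for c, d in list(reach):
                if b == c and (a, d) not in reach:
                    reach.add((a, d))
                    changed = True
    return reach

macro_allowed = {("sensor", "controller"), ("controller", "logger")}
micro_flows = {("sensor", "controller"), ("controller", "logger"), ("logger", "cloud")}

leaks = {(a, b) for a, b in closure(micro_flows)
         if a != b and (a, b) not in closure(macro_allowed)}
print("leaking flows introduced by composition:", leaks)
```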
This article considers the combination of secret sharing schemes with the requirements of a discretionary security policy. The secret sharing schemes of Shamir and Blakley are investigated, and conditions on scheme parameters that give rise to forbidden information channels are derived. Ways to conceal these forbidden channels are suggested: three modifications of Shamir's scheme and two modifications of Blakley's scheme. To form the parts of a secret, a transition from polynomials to exponential functions is carried out. The problem of masking the presence of the forbidden information channels is solved, and several approaches offering complete or partial concealment are suggested.
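For reference, the standard form of Shamir's scheme that these modifications build on can be sketched as follows; the prime, threshold, and secret are illustrative choices, and the article's modifications (including the exponential-function variants) are not shown.

```python
# Minimal sketch of Shamir's (t, n) secret sharing over a prime field
# (parameters are illustrative; production code would use a cryptographic RNG).
import random

P = 2**127 - 1          # prime modulus (a Mersenne prime), illustrative choice

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

if __name__ == "__main__":
    shares = make_shares(secret=123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789
    print("any 3 of 5 shares recover the secret")
```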
The software development life cycle (SDLC) starts with business and functional specifications signed off with a client. These specifications also capture policy, procedure, contractual, regulatory, legislative, and standards compliance requirements for the client's industry. The SDLC must adhere to service level agreements (SLAs) while remaining compliant in its development activities, processes, tools, frameworks, and reuse of open-source software components. In today's world, global software development happens across geographically distributed (autonomous) teams consuming extraordinary amounts of open-source components drawn from a variety of disparate sources. Although this helps organizations deal with technical and economic challenges, it also increases unintended risks: for example, use of software under a non-compliant license might lead to copyright issues and litigation, and use of a library with known vulnerabilities poses security risks. Mitigating such risks and taking remedial measures is challenging due to the lack of visibility and transparency into activities across these distributed teams, which mostly operate in silos. We believe a unified model that non-invasively monitors and analyzes the activities of distributed teams will go a long way toward building software that adheres to the required compliances. In this paper, we propose CAG, a decentralized Compliance Adherence and Governance framework built on blockchain technologies. Our framework (i) enables the capturing of required data points based on compliance specifications, (ii) analyzes the events for non-conformant behavior through smart contracts, (iii) provides real-time alerts, and (iv) records and maintains an immutable audit trail of various activities.
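Two ingredients of such a framework, checking events against compliance rules and appending them to a tamper-evident audit trail, can be sketched as follows; the license policy, event fields, and hash-chained log are illustrative assumptions and not the CAG framework's actual smart contracts or blockchain.

```python
# Toy sketch: flag non-conformant development events and append every event to
# a hash-chained (tamper-evident) audit log. Rules and fields are illustrative.
import hashlib
import json

APPROVED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}   # example policy

def check_event(event):
    """Return an alert string for a non-conformant event, else None."""
    if event["type"] == "dependency_added" and event["license"] not in APPROVED_LICENSES:
        return f"ALERT: non-compliant license {event['license']} in {event['component']}"
    return None

def append_block(chain, event):
    """Append an event to a hash-chained audit log."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(event, sort_keys=True)
    block = {"prev": prev, "event": event,
             "hash": hashlib.sha256((prev + body).encode()).hexdigest()}
    chain.append(block)
    return block

if __name__ == "__main__":
    chain = []
    event = {"type": "dependency_added", "component": "libfoo", "license": "GPL-3.0"}
    alert = check_event(event)
    append_block(chain, event)
    print(alert or "event conformant", "| audit blocks:", len(chain))
```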
Increasingly, organizations are collecting ever larger amounts of data to build complex data analytics, machine learning, and AI models. The data needed for building such models may be unstructured (e.g., text, images, and video), and hence may be stored in different data management systems ranging from relational databases to newer NoSQL databases tailored for storing unstructured data. Furthermore, data scientists are increasingly using programming languages such as Python and R to process data with many existing libraries, and in some cases the developed code is executed automatically by the NoSQL system on the stored data. These developments indicate the need for a data security and privacy solution that can uniformly protect data stored in many different data management systems and enforce security policies even when sensitive data is processed by a complex program submitted by a data scientist. In this paper, we introduce our vision for building such a solution for protecting big data. Specifically, our proposed system allows organizations to 1) enforce policies that control access to sensitive data, 2) keep necessary audit logs automatically for data governance and regulatory compliance, 3) sanitize and redact sensitive data on the fly based on data sensitivity and AI model needs, 4) detect potentially unauthorized or anomalous access to sensitive data, and 5) automatically create attribute-based access control policies based on data sensitivity and data type.
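As a rough illustration of items 1) and 3) above, the sketch below makes an attribute-based access decision and redacts sensitive fields on the fly; the roles, sensitivity labels, and policy are hypothetical and not part of the proposed system.

```python
# Toy sketch of attribute-based access control plus on-the-fly redaction.
# Roles, sensitivity labels, and the policy are illustrative assumptions.

POLICY = {  # attribute-based rule: role -> sensitivity levels it may read
    "data_scientist": {"public", "internal"},
    "auditor": {"public", "internal", "confidential"},
}

FIELD_SENSITIVITY = {"name": "confidential", "zip": "internal", "age": "public"}

def redact(record, role):
    """Return a copy of the record with fields the role may not see masked."""
    allowed = POLICY.get(role, set())
    return {k: (v if FIELD_SENSITIVITY.get(k, "confidential") in allowed else "***")
            for k, v in record.items()}

if __name__ == "__main__":
    record = {"name": "Alice", "zip": "75080", "age": 34}
    print(redact(record, "data_scientist"))   # name masked
    print(redact(record, "auditor"))          # full record
```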
Many consumers now rely on different forms of voice assistants, both stand-alone devices and those built into smartphones. Currently, these systems react to specific wake words, such as "Alexa," "Siri," or "Ok Google." However, with advancements in natural language processing, the next generation of voice assistants could instead always listen to the acoustic environment and proactively provide services and recommendations based on conversations, without being explicitly invoked. We refer to such devices as "always listening voice assistants" and explore expectations around their potential use. In this paper, we report on a 178-participant survey investigating the potential services people anticipate from such a device and how they feel about sharing their data for these purposes. Our findings reveal that participants anticipate a wide range of services pertaining to a conversation; however, most of these services are very similar to those that existing voice assistants already provide through explicit commands. Participants are more likely to consent to sharing a conversation when they do not find it sensitive, when they are comfortable with the service and find it beneficial, and when they already own a stand-alone voice assistant. Based on our findings, we discuss the privacy challenges in designing an always-listening voice assistant.
The ubiquitous internetworking of devices in all areas of life is boosted by trends such as the Internet of Things. Promising technologies for such future environments come from Wireless Sensor Networks, which provide connectivity among distributed, tiny, simple sensor nodes, and between sensor nodes and base stations, in order to monitor physical or environmental conditions such as vibration, temperature, or motion. Security will play an increasingly important role in the coming decades, as attack strategies become more and more sophisticated. Contemporary cryptographic mechanisms face a serious threat from quantum computers in the near future, and together with Intrusion Detection Systems they are hardly applicable on sensor nodes due to strict resource constraints. Thus, this work proposes a future-proof, lightweight, and resource-aware security concept for sensor networks, with a filtering mechanism permeating the processing stages. A special focus of the concept's evaluation lies on the novel Magic Number filter, which mitigates a particular kind of Denial-of-Service attack and is evaluated on CC1350 LaunchPad ARM Cortex-M3 microcontroller boards.
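The role of such an early filter can be illustrated with a small sketch: packets whose header lacks the expected magic bytes are dropped before any costly processing, bounding the work a flooding attacker can force. The magic value and packet layout below are illustrative assumptions, not the concept's actual implementation on the CC1350 boards.

```python
# Toy sketch of an early "magic number" filter stage: packets that do not start
# with the expected magic bytes are dropped before any expensive processing
# (e.g., MAC verification or decryption). Values are illustrative assumptions.

MAGIC = b"\xCA\xFE"     # expected 2-byte marker at the start of every packet

def magic_filter(packet: bytes) -> bool:
    """Cheap first-stage check; True means the packet passes to later stages."""
    return packet[:2] == MAGIC

def process(packet: bytes) -> str:
    if not magic_filter(packet):
        return "dropped early"              # attacker packet rejected cheaply
    return "forwarded to expensive stages"  # e.g., MAC check, decryption

if __name__ == "__main__":
    print(process(b"\xCA\xFE\x01\x02"))   # legitimate packet
    print(process(b"\x00\x00\x01\x02"))   # spoofed flood packet
```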