Biblio
It is notoriously challenging to design an efficient and secure signature scheme based on error-correcting codes. One approach to building such schemes is to derive them from an identification protocol via the Fiat-Shamir transform. All such code-based protocols must be run over several rounds, since each round allows a cheating probability of either 2/3 or 1/2. The resulting signature size is proportional to the number of rounds, which makes the 1/2 cheating-probability variant more attractive. We present a signature scheme based on double circulant codes in the rank metric, derived from an identification protocol with a cheating probability of 2/3. We reduce this probability to almost 1/2 to obtain the smallest signature among code-based signature schemes built on the Fiat-Shamir paradigm: around 22 KBytes at the 128-bit security level. Furthermore, among all code-based signature schemes, our proposal has the lowest combined signature-plus-public-key size, as well as the smallest secret and public key sizes. We provide a security proof in the Random Oracle Model, implementation performance figures, and a comparison with the parameters of similar signature schemes.
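To see why the per-round cheating probability drives signature size, note that a protocol with cheating probability p must be repeated N times so that p^N <= 2^(-λ) for security level λ. The following back-of-the-envelope sketch (ours, not from the paper; the function name is hypothetical) computes the repetition counts for the two standard cases:

```python
import math

def rounds_needed(cheat_prob: float, sec_level: int = 128) -> int:
    """Smallest N such that cheat_prob**N <= 2**(-sec_level)."""
    return math.ceil(sec_level / -math.log2(cheat_prob))

print(rounds_needed(2 / 3))  # 219 rounds at cheating probability 2/3
print(rounds_needed(1 / 2))  # 128 rounds at cheating probability 1/2
```

Since the signature grows linearly in N, lowering the cheating probability from 2/3 toward 1/2 removes roughly 40% of the rounds, which is the source of the size gains the abstract claims.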
Mixed-criticality scheduling theory (MCSh) was developed to allow for more resource-efficient implementation of systems comprising components whose correctness must be validated at different levels of assurance. As originally defined, MCSh deals exclusively with pre-runtime verification of such systems; hence many mixed-criticality scheduling algorithms exhibit rather poor survivability characteristics at run-time. (For example, MCSh allows less-important (“LO-criticality”) workloads to be discarded entirely in the event that run-time behavior does not comply with the assumptions under which the correctness of the LO-criticality workload was verified.) Here we seek to extend MCSh to incorporate survivability considerations by proposing quantitative metrics for the robustness and resilience of mixed-criticality scheduling algorithms. Such metrics allow us to make quantitative assertions regarding the survivability characteristics of these algorithms and to compare different algorithms from the perspective of their survivability. We propose that MCSh seek to develop scheduling algorithms with superior survivability characteristics, in contrast to current algorithms, which, having been developed within a survivability-agnostic framework, focus exclusively on pre-runtime verification and ignore survivability issues entirely.
We address the problem of distributed state estimation of a linear dynamical process in an attack-prone environment. A network of sensors, some of which can be compromised by adversaries, aims to estimate the state of the process. In this context, we investigate the impact of making a small subset of the nodes immune to attacks, or “trusted”. Given a set of trusted nodes, we identify separate necessary and sufficient conditions for resilient distributed state estimation. We use these conditions to illustrate how even a small trusted set can achieve a desired degree of robustness (where the robustness metric is specific to the problem under consideration) that could otherwise only be achieved via additional measurement- and communication-link augmentation. We then establish that, unfortunately, the problem of selecting trusted nodes is NP-hard. Finally, we develop an attack-resilient, provably correct distributed state estimation algorithm that appropriately leverages the presence of the trusted nodes.
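The abstract does not spell out the update rule, but a common ingredient of attack-resilient distributed estimators is trimmed consensus: each node discards the most extreme values received from untrusted neighbors before averaging, while values from trusted neighbors are always kept. The scalar Python sketch below is a hypothetical illustration of that idea under the assumption of at most f compromised neighbors per node; it is not the paper's algorithm.

```python
def resilient_update(own: float, neighbor_vals: dict, trusted: set, f: int) -> float:
    """One trimmed-consensus step: drop the f largest and f smallest
    untrusted neighbor values, then average the rest with our own value."""
    untrusted = sorted(v for n, v in neighbor_vals.items() if n not in trusted)
    kept = untrusted[f:len(untrusted) - f]              # survives the trim
    kept += [v for n, v in neighbor_vals.items() if n in trusted]
    vals = kept + [own]
    return sum(vals) / len(vals)

# Node 0 averages with neighbors 1-4; node 3 is trusted, node 4 lies wildly.
print(resilient_update(1.0, {1: 1.1, 2: 0.9, 3: 1.0, 4: 100.0}, trusted={3}, f=1))
```

The trusted node's value bypasses the trim, which is one way a small trusted set can substitute for the extra connectivity that trimming alone would require.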
To prevent unauthorized access by adversaries, a strong authentication scheme is a vital security requirement in client-server inter-networking systems. Such schemes must verify the legitimacy of users in real-time environments and establish a dynamic session key for subsequent communication. Recently, T. H. Chen and J. C. Huang proposed a two-factor authentication framework, claiming that the scheme is secure against most existing attacks. However, we show that Chen and Huang's scheme has several critical weaknesses in real-time environments. The scheme is vulnerable to man-in-the-middle and information-leakage attacks. Furthermore, it does not provide two essential security services: user anonymity and session key establishment. In this paper, we present an enhanced user-participating authentication scheme that overcomes all the weaknesses of Chen and Huang's scheme and provides most of the essential security features.
The finite-state machine (FSM) is widely used as the control unit in most digital designs. Many intellectual-property protection and obfuscation techniques leverage the exponential number of possible states and state transitions of a large FSM to secure a physical design, on the premise that it is challenging to retrieve the FSM design from its downstream design or physical implementation without knowledge of the design. In this paper, we postulate that this assumption may not be sustainable in the face of big-data analytics. We demonstrate this by applying a data-mining technique to a sufficiently large amount of data collected from a full scan design in order to identify its FSM state registers. An impact metric is introduced to discriminate FSM state registers from other registers. A decision tree is constructed from the scan data to perform regression analysis of the dependency of other registers on a chosen register, from which that register's impact is deduced. Registers with greater impact are more likely to be FSM state registers. The proposed scheme is applied to several complex designs from OpenCores. The experimental results show the feasibility of our scheme, which correctly identifies most FSM state registers with a high hit rate for a large majority of the designs.
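To make the impact idea concrete, here is a minimal sketch (ours, with an assumed definition of the metric, not the paper's exact construction): for a chosen register r, fit a shallow regression tree predicting each other register's next-cycle value from r's current value, and take the mean fit quality as r's impact. Registers that strongly influence many others, as FSM state registers do, should score higher.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def impact(scan_data: np.ndarray, r: int) -> float:
    """Hypothetical impact metric: mean R^2 of shallow trees predicting
    each other register at cycle t+1 from register r at cycle t.
    scan_data has shape (cycles, registers), entries in {0, 1}."""
    x = scan_data[:-1, [r]]                     # register r, cycles 0..T-1
    scores = []
    for s in range(scan_data.shape[1]):
        if s == r:
            continue
        y = scan_data[1:, s]                    # register s, cycles 1..T
        tree = DecisionTreeRegressor(max_depth=3).fit(x, y)
        scores.append(tree.score(x, y))
    return float(np.mean(scores))

# Rank registers of a toy scan dump; high-impact ones are FSM candidates.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(200, 8))
print(sorted(range(8), key=lambda r: impact(data, r), reverse=True))
```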
In this work, we use a subjective approach to compute cyber-resilience metrics for industrial control systems (ICS). We utilize the extended form of the R4 resilience framework and span the metrics over the physical, technical, and organizational domains of resilience. We develop a qualitative cyber-resilience assessment tool using this framework and a subjective questionnaire method. We ensure that the questionnaires are realistic, balanced, and pertinent to ICS by involving subject-matter experts in the process and following security guidelines and standard practices. We provide a detailed mathematical explanation of the resilience computation procedure. We discuss several uses of the qualitative tool by generating simulation results. We provide a system architecture of the simulation engine and a validation of the tool. We believe the qualitative simulation tool can give useful insights for overall ICS resilience assessment and security analysis.
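A typical aggregation for this kind of questionnaire-based assessment, sketched below under our own assumptions (the paper's exact formula and weights are not given in the abstract), normalizes Likert-scale answers per domain and then combines the physical, technical, and organizational domain scores as a weighted average:

```python
def resilience_score(responses: dict, domain_weights: dict) -> float:
    """Hypothetical aggregation: 1-5 Likert answers per domain are
    normalized to [0, 1], then combined across domains by weight."""
    domain_scores = {
        d: sum(answers) / (5 * len(answers))    # normalize 1-5 scale
        for d, answers in responses.items()
    }
    total_w = sum(domain_weights.values())
    return sum(domain_scores[d] * domain_weights[d] for d in domain_scores) / total_w

responses = {"physical": [4, 3, 5], "technical": [2, 4, 4], "organizational": [3, 3]}
weights = {"physical": 0.3, "technical": 0.5, "organizational": 0.2}
print(resilience_score(responses, weights))     # overall score in [0, 1]
```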
Authorship attribution is the problem of studying an anonymous text and identifying its author within a set of candidate authors. In this paper, we propose a method based on an N-gram model for the authorship attribution problem. Several measures are used to assign an anonymous text to an author. The different variants of the proposed method are implemented and validated on PAN benchmarks. The numerical results are encouraging and demonstrate the benefit of the proposed idea.
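As a concrete illustration of the general approach (not the paper's exact measures), the sketch below builds character n-gram frequency profiles and attributes a text to the candidate whose profile is closest under a relative-difference measure, in the spirit of the common-n-grams dissimilarity:

```python
from collections import Counter

def ngram_profile(text: str, n: int = 3) -> Counter:
    """Normalized character n-gram frequencies of a text."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return Counter({g: c / total for g, c in grams.items()})

def dissimilarity(p: Counter, q: Counter) -> float:
    """Relative-difference measure over the union of observed n-grams."""
    return sum(((p[g] - q[g]) / ((p[g] + q[g]) / 2)) ** 2 for g in p.keys() | q.keys())

def attribute(anonymous: str, candidates: dict) -> str:
    """Assign the anonymous text to the closest candidate author."""
    target = ngram_profile(anonymous)
    return min(candidates, key=lambda a: dissimilarity(target, ngram_profile(candidates[a])))

print(attribute("the quick brown fox", {"A": "the quiet brown fog", "B": "lorem ipsum dolor"}))
```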
The number of Internet of Things (IoT) devices in the marketplace, and their lack of security, is staggering. The interconnectedness of IoT devices has increased the attack surface available to hackers. "White worm" technology has the potential to combat infiltrating malware. Before white-worm technology becomes viable, however, its capabilities must be constrained to specific devices and limited to non-harmful actions. This paper addresses the current problem, international research, and the conflicting interests of individuals, businesses, and governments regarding white-worm technology. We propose a new perspective on utilizing white-worm technology to protect vulnerable IoT devices while overcoming its challenges.