Biblio
To meet growing railway-transportation demand, a new train control system, the communication-based train control (CBTC) system, aims to maximize the capacity of train lines by reducing the headway between trains. However, its reliance on wireless communications exposes the CBTC system to new security threats. Due to the cyber-physical nature of the CBTC system, a jamming attack can damage the physical part of the train system by disrupting its communications. To address this issue, we develop a secure framework to mitigate the impact of jamming attacks based on a security criterion. At the cyber layer, we apply a multi-channel model to enhance the reliability of the communications and develop a zero-sum stochastic game to capture the interactions between the transmitter and the jammer. We present analytical results and apply dynamic programming to find the equilibrium of the stochastic game. Finally, experimental results are provided to evaluate the performance of the proposed secure mechanism.
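Since the abstract does not spell out the solution procedure beyond "dynamic programming", the following is a minimal, generic sketch of Shapley-style value iteration for a discounted zero-sum stochastic game between a transmitter (maximizer) and a jammer (minimizer). All payoffs, transition probabilities, and the discount factor are illustrative placeholders, not values from the paper.

```python
# Generic value iteration for a zero-sum stochastic game (illustrative only).
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(M):
    """Value of the zero-sum matrix game M, row player maximizing."""
    m, n = M.shape
    # Variables: mixed strategy x (m entries) and game value v; maximize v.
    c = np.zeros(m + 1); c[-1] = -1.0
    # For every jammer action j: v - sum_i x_i M[i, j] <= 0
    A_ub = np.hstack([-M.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun

def value_iteration(R, P, gamma=0.9, iters=200):
    """R[s]: payoff matrix at state s; P[s][a][b]: next-state distribution."""
    S = len(R)
    V = np.zeros(S)
    for _ in range(iters):
        V_new = np.empty(S)
        for s in range(S):
            # Stage game: immediate payoff plus discounted continuation value.
            Q = R[s] + gamma * np.einsum('abk,k->ab', np.asarray(P[s]), V)
            V_new[s] = solve_matrix_game(Q)
        V = V_new
    return V
```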
With a large number of sensors and control units in networked systems, distributed support vector machines (DSVMs) play a fundamental role in scalable and efficient multi-sensor classification and prediction tasks. However, DSVMs are vulnerable to adversaries who can modify and generate data to deceive the system into misclassification and misprediction. This work aims to design defense strategies for the DSVM learner against a potential adversary. We use a game-theoretic framework to capture the conflicting interests of the DSVM learner and the attacker. The Nash equilibrium of the game allows us to predict the outcome of learning algorithms in adversarial environments and to enhance the resilience of the machine learning system through dynamic distributed algorithms. We develop a secure and resilient DSVM algorithm with a rejection method and show its resilience against an adversary through numerical experiments.
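The abstract does not define its rejection method, so the sketch below only illustrates the general idea of a rejection option for an SVM: samples whose decision score falls inside a low-confidence band are rejected rather than classified. This is a generic, single-machine illustration, not the paper's distributed game-theoretic algorithm; the threshold and the use of scikit-learn are assumptions.

```python
# Illustrative SVM with a rejection band around the decision boundary.
import numpy as np
from sklearn.svm import SVC

def fit_svm_with_rejection(X_train, y_train, threshold=0.3):
    clf = SVC(kernel="linear").fit(X_train, y_train)

    def predict(X):
        scores = clf.decision_function(X)
        labels = clf.predict(X)
        # Label 0 here stands for "rejected": low-confidence samples are the
        # ones an adversary can flip most cheaply, so they are not trusted.
        return np.where(np.abs(scores) < threshold, 0, labels)

    return predict
```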
A CAPTCHA is a type of challenge-response test used to ensure that a response is generated by a human and not by an automated program. CAPTCHAs are getting harder because the latest pattern recognition and machine learning algorithms are capable of solving the simpler ones. However, some hardening techniques make CAPTCHAs too difficult for humans to recognize. This paper addresses the problem with a next-generation, human-friendly mini-game CAPTCHA and quantifies the usability of CAPTCHAs.
In this paper, we initiate the study of garbled protocols, a generalization of Yao's garbled circuits construction to distributed protocols. More specifically, in a garbled protocol construction, each party can independently generate a garbled protocol component along with pairs of input labels. Additionally, it generates an encoding of its input. The evaluation procedure takes as input the set of all garbled protocol components and the labels corresponding to the input encodings of all parties, and outputs the entire transcript of the distributed protocol. We provide constructions for garbling arbitrary protocols based on standard computational assumptions on bilinear maps (in the common random string model). Next, using garbled protocols we obtain a general compiler that compresses any arbitrary-round multiparty secure computation protocol into a two-round UC-secure protocol. Previously, two-round multiparty secure computation protocols were only known assuming witness encryption or learning with errors. Benefiting from our generic approach, we also obtain protocols (i) for the setting of random access machines (RAM programs) that keep communication and computational costs proportional to running times, and (ii) that make only black-box use of the underlying group, eliminating the need for any expensive non-black-box group operations. Our results are obtained by a simple but powerful extension of the non-interactive zero-knowledge proof system of Groth, Ostrovsky and Sahai [Journal of the ACM, 2012].
We present Minesweeper, a tool to verify that a network satisfies a wide range of intended properties such as reachability or isolation among nodes, waypointing, black holes, bounded path length, load-balancing, functional equivalence of two routers, and fault-tolerance. Minesweeper translates network configuration files into a logical formula that captures the stable states to which the network forwarding will converge as a result of interactions between routing protocols such as OSPF, BGP and static routes. It then combines the formula with constraints that describe the intended property. If the combined formula is satisfiable, there exists a stable state of the network in which the property does not hold. Otherwise, no stable state (if any) violates the property. We used Minesweeper to check four properties of 152 real networks from a large cloud provider. We found 120 violations, some of which are potentially serious security vulnerabilities. We also evaluated Minesweeper on synthetic benchmarks, and found that it can verify rich properties for networks with hundreds of routers in under five minutes. This performance is due to a suite of model-slicing and hoisting optimizations that we developed, which reduce runtime by over 460x for large networks.
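The checking step described above reduces to a satisfiability query. The toy sketch below, assuming the Z3 SMT solver's Python API, shows the shape of that query: the network's stable states are encoded as a formula N, the intended property as P, and a violation exists iff N AND NOT P is satisfiable. The variables and the two-router encoding are illustrative, not Minesweeper's actual encoding.

```python
# Toy satisfiability check in the style of Minesweeper's final step.
from z3 import Bools, And, Implies, Not, Solver, sat

r1_up, r2_up, reachable = Bools('r1_up r2_up reachable')

# Toy "stable state" encoding: traffic is forwarded iff both routers are up.
network = And(Implies(And(r1_up, r2_up), reachable),
              Implies(Not(And(r1_up, r2_up)), Not(reachable)))
prop = reachable  # intended property: the destination is reachable

s = Solver()
s.add(network, Not(prop))
if s.check() == sat:
    print("property can be violated, counterexample:", s.model())
else:
    print("no stable state violates the property")
```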
With the globalization of integrated circuit design and manufacturing, Hardware Trojans have posed serious threats to the security of commercial chips. In this paper, we propose a two-level, temperature-difference-based thermal-map analysis framework for Hardware Trojan detection. In the proposed method, thermal maps of an operating chip are captured over a period of time and differenced with the thermal maps of a golden model. Each pixel's differential temperature in the resulting differential thermal maps is then extracted and compared with that of the other pixels. To mitigate Gaussian white noise and to separate the signature of a Hardware Trojan from that of the normal circuits, a Kalman filter is applied. In our experiments, FPGAs configured with equivalent circuits are used to emulate real chips and validate the proposed approach. The experimental results show that the framework can detect Hardware Trojans whose power proportion is on the order of 10^-3.
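As a minimal sketch of the filtering step, the code below applies a scalar Kalman filter independently to each pixel's differential-temperature trace (chip under test minus golden model) to suppress Gaussian white noise before the pixel-to-pixel comparison. The random-walk state model and the noise variances are illustrative assumptions, not the paper's parameters.

```python
# Per-pixel Kalman smoothing of differential thermal maps (illustrative).
import numpy as np

def kalman_smooth(z, q=1e-4, r=1e-2):
    """Scalar Kalman filter over one pixel's differential temperature trace z."""
    x, p = 0.0, 1.0           # state estimate and its variance
    out = np.empty(len(z), dtype=float)
    for k, zk in enumerate(z):
        p = p + q             # predict (random-walk temperature model)
        K = p / (p + r)       # Kalman gain
        x = x + K * (zk - x)  # update with the measured differential temperature
        p = (1 - K) * p
        out[k] = x
    return out

# diff_maps: array of shape (T, H, W) holding T differential thermal maps.
# filtered = np.apply_along_axis(kalman_smooth, 0, diff_maps)
```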
The deceleration of transistor feature-size scaling has motivated the growing adoption of specialized accelerators implemented as GPUs, FPGAs, and ASICs, and more recently of new types of computing such as neuromorphic, bio-inspired, ultra-low-energy, reversible, stochastic, optical, quantum, combinations of these, and others unforeseen. There is a tension between specialization and generalization, with the current state trending toward master-slave models in which accelerators (slaves) are instructed by a general-purpose system (master) running an Operating System (OS). Traditionally, an OS is a layer between hardware and applications whose primary function is to manage hardware resources and provide a common abstraction to applications. Does this function, however, apply to new types of computing paradigms? This paper revisits OS functionality for memristor-based accelerators. We explore one accelerator implementation, the Dot Product Engine (DPE), for a select set of applications in machine learning, imaging, and scientific computing and a small set of use cases. We explore typical OS functionality, such as reconfiguration, partitioning, security, virtualization, and programming, as well as new types of functionality, such as the precision and trustworthiness of reconfiguration. We claim that making an accelerator such as the DPE more general will result in broader adoption and better utilization.
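To make the precision concern concrete, the toy sketch below models the core DPE operation: a weight matrix is mapped onto a crossbar of conductances with a limited number of programmable levels, and a matrix-vector product is then computed in a single analog step. The bit width, sizes, and quantization scheme are illustrative assumptions, not the DPE's actual programming interface.

```python
# Toy model of an analog dot-product engine with limited write precision.
import numpy as np

def quantize(W, bits=6):
    """Map weights onto a limited set of conductance levels."""
    levels = 2 ** bits - 1
    w_min, w_max = W.min(), W.max()
    scaled = (W - w_min) / (w_max - w_min)
    return np.round(scaled * levels) / levels * (w_max - w_min) + w_min

W = np.random.randn(64, 64)   # logical weight matrix
G = quantize(W, bits=6)       # what actually gets programmed into the crossbar
x = np.random.randn(64)

exact = W @ x
analog = G @ x                # the crossbar computes this in one analog step
print("relative error from limited precision:",
      np.linalg.norm(analog - exact) / np.linalg.norm(exact))
```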
This study proposes an efficient formulation for solving the stochastic security-constrained generation capacity expansion planning (GCEP) problem, using an improved method to directly compute the generalized generation distribution factors (GGDF) and the line outage distribution factors (LODF) in order to model the pre- and post-contingency constraints based solely on the power transfer distribution factors (PTDF). The classical DC-based formulation is reformulated to include the security criteria, solving both pre- and post-contingency constraints simultaneously. The methodology also accounts for load uncertainty through a two-stage multi-period model, and a clustering technique is used to reduce the number of load scenarios in the stochastic problem. The main advantage of this methodology is the ability to quickly compute the LODF, especially for multiple-line outages (N-m). This can speed up contingency analyses and significantly improve security-constrained analyses applied to GCEP problems. It is worth mentioning that this approach is carried out without sacrificing optimality.
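For orientation, a commonly used closed form relating the two factor matrices in the single-line-outage case is shown below; the notation is the standard one rather than the paper's, and the paper's contribution concerns the efficient multiple-outage (N-m) generalization.

\[
\mathrm{LODF}_{\ell,k} \;=\; \frac{\mathrm{PTDF}_{\ell,k}}{1 - \mathrm{PTDF}_{k,k}},
\]

where \(\mathrm{PTDF}_{\ell,k}\) denotes the change in flow on line \(\ell\) per unit of power transferred between the terminal buses of line \(k\), and \(\mathrm{LODF}_{\ell,k}\) gives the fraction of line \(k\)'s pre-outage flow that shifts onto line \(\ell\) when line \(k\) is outaged.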
Keys generated by symmetric or asymmetric algorithms can still be compromised by attackers. Cryptographic algorithms need extra effort to enhance the security of keys transferred between parties, and their use increases time consumption and communication overhead. Encryption is essential for protecting information from theft, but it provides confidentiality, not integrity. A covert channel allows two parties to exchange information indirectly; its main drawbacks are detectability and the security of the pre-agreed knowledge. In this paper, I combine encryption, authentication, and a covert channel to achieve an undetectable covert channel. This channel guarantees the integrity and confidentiality of the covert data and allows data to be sent dynamically. I propose and implement an undetectable covert channel using the AES (Advanced Encryption Standard) algorithm and HMAC (Hashed Message Authentication Code), with an integrity and confidentiality agreement process between the sender and the receiver. Instead of sending the fake key directly through the channel, encryption and an HMAC function are used to hide it. Finally, techniques for improving the undetectability of the channel are proposed.
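The sketch below shows the kind of encrypt-then-MAC protection layered under such a channel: the covert payload (here, the "fake key") is encrypted with AES-CTR and authenticated with HMAC-SHA256 before being embedded in the carrier. Key management, nonce handling, and the actual embedding into a covert channel are simplified assumptions, not the paper's construction.

```python
# Encrypt-then-MAC protection of a covert payload (illustrative only).
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

enc_key = os.urandom(32)   # AES-256 key, assumed pre-agreed between the parties
mac_key = os.urandom(32)   # separate HMAC key

def protect(payload: bytes) -> bytes:
    nonce = os.urandom(16)
    encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ct = encryptor.update(payload) + encryptor.finalize()
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag            # blob to be hidden in the covert channel

def recover(blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")   # tampered or not ours
    decryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
    return decryptor.update(ct) + decryptor.finalize()
```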
Companies want to offer chat bots to their customers and employees which can answer questions, enable self-service, and showcase their products and services. Implementing and maintaining chat bots by hand costs time and money. Companies typically have web APIs for their services, which are often documented with an API specification. This paper presents a compiler that takes a web API specification written in Swagger and automatically generates a chat bot that helps the user make API calls. The generated bot is self-documenting, using descriptions from the API specification to answer help requests. Unfortunately, Swagger specifications are not always good enough to generate high-quality chat bots. This paper addresses this problem via a novel in-dialogue curation approach: the power user can improve the generated chat bot by interacting with it. The result is then saved back as an API specification. This paper reports on the design and implementation of the chat bot compiler, the in-dialogue curation, and working case studies.
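The toy sketch below illustrates the core of such a compiler: given a hypothetical, minimal Swagger/OpenAPI-style dictionary, it generates a bot that lists operations and answers "help <operation>" using the descriptions from the spec. Real Swagger parsing, dialogue management, in-dialogue curation, and actual API invocation are omitted.

```python
# Minimal "spec to self-documenting bot" sketch (illustrative spec and names).
spec = {
    "paths": {
        "/orders": {"get": {"operationId": "listOrders",
                            "description": "List all orders for the account."}},
        "/orders/{id}": {"get": {"operationId": "getOrder",
                                 "description": "Fetch a single order by id."}},
    }
}

def build_bot(spec):
    ops = {m["operationId"]: m["description"]
           for methods in spec["paths"].values() for m in methods.values()}

    def bot(utterance: str) -> str:
        if utterance.strip() == "help":
            return "I can call: " + ", ".join(sorted(ops))
        if utterance.startswith("help "):
            name = utterance.split(maxsplit=1)[1]
            return ops.get(name, f"I don't know an operation called {name}.")
        return "Say 'help' to see what I can do."

    return bot

bot = build_bot(spec)
print(bot("help"))
print(bot("help getOrder"))
```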
This work describes how automated data generation integrates into a big data pipeline. A lack of veracity in big data can produce models that are inaccurate or biased by trends in the training data, which can lead to issues that are difficult to overcome as a pipeline matures. This work describes the use of a Generative Adversarial Network to generate sketch data, such as might be used in a human verification task. The generated sketches were verified as recognizable using a crowd-sourcing methodology and were correctly recognized 43.8% of the time, in contrast to human-drawn sketches, which were recognized 87.7% of the time. The method is scalable, can be used to generate realistic data in many domains, and can bootstrap the dataset used to train a model prior to deployment.
Information security policies are not easy to create unless organizations explicitly recognize the various steps required in the development process of an information security policy, especially in institutions of higher education, which use enormous amounts of IT. An improper development process, or security policy content copied from another organization, may also fail to do the job effectively, for example in addressing non-compliance with applicable rules and regulations, even if the replicated policy is properly developed, referenced, grounded in laws or regulations, and interpreted correctly. A generic framework is proposed to improve and establish the development process of security policies in institutions of higher education. Content analysis and cross-case analysis methods were used in this study to gain a thorough understanding of the information security policy development process in institutions of higher education.
Mobile Ad Hoc Network (MANET) technology provides intercommunication between nodes where no infrastructure is available for communication. MANETs attract considerable research attention because they are cost-effective and easy to implement. The main challenge in a MANET is its vulnerability: nodes are highly vulnerable to attacks, as are the data they hold and the data flowing through them. One of the main reasons for these vulnerabilities is the communication policy, which makes nodes interdependent for interaction and data flow. Attackers exploit this mutual trust by injecting a malicious node or replicating a legitimate node in the MANET; the blackhole attack is one such attack. In this study, the behavior of the blackhole attack is discussed and a lightweight countermeasure based on built-in functions is proposed.
Notable recent security incidents have generated intense interest in adversaries that attempt to subvert, perhaps covertly, cryptographic algorithms. In this paper we develop (IND-CPA) semantically secure encryption in this challenging setting. This fundamental encryption primitive has been previously studied in the "kleptographic setting," though existing results must relax the model by introducing trusted components or otherwise constraining the subversion power of the adversary: designing a public-key system that is kleptographically semantically secure (with minimal trust) has remained elusive to date. In this work, we finally achieve such systems, even when all relevant cryptographic algorithms are subject to adversarial (kleptographic) subversion. To this end we exploit novel inter-component randomized cryptographic checking techniques (with an offline checking component), combined with common and simple software-engineering modular programming techniques (applied at the system's black-box specification level). Moreover, our methodology yields a strong generic technique for preserving the semantic security of any cryptosystem when it is incorporated into the strong kleptographic adversary setting.
A wireless sensor network (WSN) is composed of sensor nodes and a base station. In WSNs, constructing an efficient key-sharing scheme to ensure secure communication is important. In this paper, we propose a new key-sharing scheme for groups, which shares a group key in a single broadcast independently of the number of nodes. The scheme is based on geometric characteristics and provides information-theoretic security against analysis of the transmitted data. We compared our scheme with conventional schemes in terms of communication traffic, computational complexity, flexibility, and security, and the results show that our scheme is suitable for an Internet-of-Things (IoT) network.
Users today enjoy access to a wealth of services that rely on user-contributed data, such as recommendation services, prediction services, and services that help classify and interpret data. The quality of such services inescapably relies on trustworthy contributions from users. However, validating the trustworthiness of contributions may rely on privacy-sensitive contextual data about the user, such as a user's location or usage habits, creating a conflict between privacy and trust: users benefit from a higher-quality service that identifies and removes illegitimate user contributions, but, at the same time, they may be reluctant to let the service access their private information to achieve this high quality. We argue that this conflict can be resolved with a pragmatic Glimmer of Trust, which allows services to validate user contributions in a trustworthy way without forfeiting user privacy. We describe how trustworthy hardware such as Intel's SGX can be used on the client side (in contrast to much recent work exploring SGX in cloud services) to realize the Glimmer architecture, and demonstrate how this realization is able to resolve the tension between privacy and trust in a variety of cases.
Graphics processing units (GPUs) have been applied successfully in many scientific computing realms due to their superior floating-point performance and memory bandwidth, and they have great potential in power system applications. N-1 static security analysis (SSA) appears to be a candidate application in which massive numbers of alternating current power flow (ACPF) problems need to be solved. However, when applying existing GPU-accelerated algorithms to the N-1 SSA problem, the degree of parallelism is limited because prior work has been devoted to accelerating the solution of a single ACPF. This paper therefore proposes a GPU-accelerated solution that creates an additional layer of parallelism among batch ACPFs and consequently achieves a much higher level of overall parallelism. First, this paper establishes two basic principles for determining well-designed GPU algorithms, through which the limitation of GPU-accelerated sequential-ACPF solutions is demonstrated. Next, being the first of its kind, this paper proposes a novel GPU-accelerated batch-QR solver, which packages a massive number of QR tasks into a single larger-scale problem and thereby achieves a higher level of parallelism and better coalesced memory accesses. To further improve the efficiency of solving SSA, a GPU-accelerated batch Jacobian-matrix generation and contingency screening module is developed and carefully optimized. Lastly, the complete process of the proposed GPU-accelerated batch-ACPF solution for SSA is presented. Case studies on an 8503-bus system show dramatic computation time reduction compared with all reported existing GPU-accelerated methods. In comparison to a UMFPACK-library-based single-CPU counterpart using an Intel Xeon E5-2620, the proposed GPU-accelerated SSA framework using an NVIDIA K20C achieves up to 57.6 times speedup. It even achieves four times speedup compared to one of the fastest multi-core CPU parallel computing solutions using the KLU library. The proposed batch-solving method is practically very promising and lays a critical foundation for many other power system applications that need to deal with massive subtasks, such as Monte-Carlo simulation and probabilistic power flow.
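The batching idea can be illustrated with a library-level batched factorization, shown below using PyTorch as a stand-in for the paper's custom GPU batch-QR kernel: many small Jacobian systems from different contingencies are stacked and factorized in one batched call, instead of accelerating a single factorization at a time. The sizes and random matrices are illustrative.

```python
# Conceptual batched-QR solve over many contingency Jacobians (illustrative).
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
n_cont, n = 512, 200                                   # illustrative sizes
J = torch.randn(n_cont, n, n, device=device)           # batch of Jacobians
b = torch.randn(n_cont, n, 1, device=device)           # mismatch vectors

Q, R = torch.linalg.qr(J)                              # one batched QR call
# Solve J dx = b for every contingency at once: R dx = Q^T b
dx = torch.linalg.solve_triangular(R, Q.transpose(-2, -1) @ b, upper=True)
```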
Object-oriented languages like Java and C# allow the null value for all references. This supports many flexible patterns, but has led to many errors, security vulnerabilities, and system crashes. Static type systems can prevent null-pointer exceptions at compile time, but require annotations, in particular for the libraries used. Conservative defaults choose the most restrictive typing, preventing many errors but requiring a large annotation effort. Liberal defaults choose the most flexible typing, requiring fewer annotations but giving weaker guarantees. Trusted annotations can be provided, but are not checked and require a large manual effort. None of these approaches provides a strong guarantee that the checked part of the program is isolated from the unchecked part: even with conservative defaults, null-pointer exceptions can occur in the checked part. This paper presents Granullar, a gradual type system for null-safety. Developers start out verifying null-safety for the most important components of their applications. At the boundary to unchecked components, runtime checks are inserted by Granullar to guard the verified system from being polluted by unexpected null values. This ensures that null-pointer exceptions can only occur within the unchecked code or at the boundary to checked code; the checked code is free of null-pointer exceptions. We present Granullar for Java, define the checked-unchecked boundary, and describe how runtime checks are generated. We evaluate our approach on real-world software annotated for null-safety, demonstrating the runtime checks and acceptable compile-time and run-time performance impacts. Granullar enables combining a checked core with untrusted libraries in a safe manner, improving the practicality of such a system.
As societies become more dependent on power grids, security issues and blackout threats receive more emphasis. This paper proposes a new graph model for online visualization and assessment of power grid security. The proposed model integrates topology and power flow information to estimate and visualize interdependencies between lines in the form of a line dependency graph (LDG) and an immediate threats graph (ITG). These models enable the system operator to predict the impact of a line outage and to identify the most vulnerable and critical links in the power system. The Line Vulnerability Index (LVI) and the Line Criticality Index (LCI) are introduced as two indices extracted from the LDG to aid the operator in decision making and contingency selection. This package can be useful for enhancing situational awareness in power grid operation by visualizing and estimating system threats. The proposed approach is tested on security analysis of the IEEE 30-bus and IEEE 118-bus systems and the results are discussed.
In this paper, we present a framework for the graph-based representation of the relations between sensors and alert types in a security alert sharing platform. Nodes in the graph represent either sensors or alert types, while edges represent various relations between them, such as a common type of reported alerts or duplicated alerts. The graph is automatically updated, stored in a graph database, and visualized. The resulting graph will be used by network administrators and security analysts as a visual guide and situational awareness tool in the complex environment of security alert sharing.
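A minimal sketch of such a graph is shown below: sensors and alert types become nodes, and edges record relations such as "sensor reports this alert type" with a count attribute. The node and edge names are illustrative, and an in-memory networkx graph stands in for the platform's graph database.

```python
# Illustrative sensor/alert-type relation graph for an alert-sharing platform.
import networkx as nx

G = nx.Graph()
G.add_node("sensor:honeypot-01", kind="sensor")
G.add_node("sensor:ids-fw-03", kind="sensor")
G.add_node("alert:ssh-bruteforce", kind="alert_type")

# Relations between sensors and alert types, and between sensors themselves.
G.add_edge("sensor:honeypot-01", "alert:ssh-bruteforce", relation="reports", count=124)
G.add_edge("sensor:ids-fw-03", "alert:ssh-bruteforce", relation="reports", count=87)
G.add_edge("sensor:honeypot-01", "sensor:ids-fw-03", relation="duplicated_alerts", count=19)

# Example analyst query: which sensors report a given alert type?
reporters = [n for n in G.neighbors("alert:ssh-bruteforce")
             if G.nodes[n]["kind"] == "sensor"]
print(reporters)
```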
In an Internet of Things (IoT) network, each node (device) both provides and requires services. With the growth of the IoT, the number of nodes providing the same service has also increased, creating the problem of selecting one reliable service from among many providers. In this paper, we propose a scalable graph-based collaborative filtering recommendation algorithm, improved using trust, to solve the service selection problem; unlike a central recommender, which fails to scale, it can scale to match the growth of the IoT. Using this recommender, a node can predict its ratings for the nodes that provide the required service and then select the best-rated service provider.
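The sketch below illustrates the general idea of trust-weighted, neighbour-based rating prediction for service selection: a node predicts its rating of a provider from the ratings of similar nodes, with each neighbour's contribution scaled by how much it is trusted. The similarity measure, trust values, and data are assumptions for illustration, not the paper's exact algorithm.

```python
# Trust-weighted collaborative filtering for service selection (illustrative).
import numpy as np

ratings = {            # node -> {provider: rating in [1, 5]}
    "n1": {"p1": 5, "p2": 2},
    "n2": {"p1": 4, "p2": 1, "p3": 5},
    "n3": {"p2": 2, "p3": 4},
}
trust = {"n2": 0.9, "n3": 0.4}     # how much node n1 trusts its neighbours

def similarity(a, b):
    """Cosine similarity over the providers both nodes have rated."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    va = np.array([ratings[a][p] for p in common], dtype=float)
    vb = np.array([ratings[b][p] for p in common], dtype=float)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def predict(node, provider):
    """Trust- and similarity-weighted average of the neighbours' ratings."""
    num = den = 0.0
    for other, t in trust.items():
        if provider in ratings[other]:
            w = t * similarity(node, other)
            num += w * ratings[other][provider]
            den += w
    return num / den if den else None

print(predict("n1", "p3"))   # n1's predicted rating for provider p3
```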