Bibliography
Public-key cryptography plays an important role in secure communication over insecure channels. Elliptic curve cryptography, a variant of public-key cryptography, has been used extensively for such purposes over the last decades. In this paper, we present a software tool for the parallel generation of cryptographic keys based on elliptic curves. The binary method for point multiplication and C++ threads were used in the parallel implementation, while the secp256k1 elliptic curve was used for testing. The obtained results show a speedup of 30% over the sequential solution with 8 threads. The results are briefly discussed in the paper.
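As a concrete illustration of the binary method the tool parallelizes, the sketch below runs double-and-add over a stand-in group (the integers mod a toy prime P, substituting for secp256k1 point arithmetic) so the example stays self-contained and runnable; the modulus and the Elem type are assumptions, not the authors' code.

```cpp
#include <cstdint>
#include <iostream>

// Sketch of the binary (double-and-add) method for scalar multiplication.
// To keep the example runnable, the group here is the additive group of
// integers mod P, standing in for the secp256k1 point group: "add" is
// modular addition and "dbl" is modular doubling. With a real curve,
// Elem would be a point and add/dbl the elliptic-curve group law.
using Elem = std::uint64_t;
constexpr Elem P = 1000003;                       // toy modulus (assumption)

Elem add(Elem a, Elem b) { return (a + b) % P; }  // stand-in for point add
Elem dbl(Elem a)         { return (2 * a) % P; }  // stand-in for point double

// Scan the scalar's bits from most to least significant: double each
// step, add the base when the bit is set. O(bit-length) group operations.
Elem scalarMultiply(std::uint64_t scalar, Elem base) {
    Elem result = 0;                              // group identity
    for (int i = 63; i >= 0; --i) {
        result = dbl(result);
        if ((scalar >> i) & 1) result = add(result, base);
    }
    return result;
}

int main() {
    // 20 * 7 mod P computed via double-and-add; prints 140.
    std::cout << scalarMultiply(20, 7) << "\n";
}
```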
The use of robots is growing rapidly in our society. Communication links and applications connect robots to their clients or users, normally through some kind of network connection. Such a networked system is open to attack and vulnerable to security threats, so ensuring security and privacy is critical for robotic platforms. The paper also discusses several cyber-physical security threats that are specific to robotic platforms. Peer-to-peer applications used on robotic platforms are targets of threats against integrity, availability, and confidentiality. A Remote Administration Tool (RAT) was introduced for specific security attacks. An impact-oriented process was performed to analyze the assessment outcomes of the attacks. Attack tests and experiments were performed both in a simulation environment based on the Gazebo Turtlebot simulator and physically on the robot. A software tool was used for simulating, debugging, and experimenting on the ROS platform. Integrity attacks modified commands and manipulated the robot's behavior. Availability attacks caused Denial-of-Service (DoS), leaving the robot unable to listen to Turtlebot commands. Integrity and availability attacks also resulted in the exposure of sensitive information on the robot.
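To make the integrity-attack scenario concrete, here is a minimal roscpp sketch of the attack class described: a rogue node that injects spoofed velocity commands into the robot's command stream. The topic name /cmd_vel and the publishing rate are assumptions (Turtlebot variants use different command topics), and the paper's RAT is not public, so this is only an illustration.

```cpp
// Hypothetical illustration of an integrity attack on a ROS 1 robot:
// a rogue node that injects spoofed velocity commands.
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "spoofed_teleop");   // masquerades as a teleop node
    ros::NodeHandle nh;
    ros::Publisher pub =
        nh.advertise<geometry_msgs::Twist>("/cmd_vel", 10);  // assumed topic
    ros::Rate rate(10);                        // 10 Hz command stream
    while (ros::ok()) {
        geometry_msgs::Twist cmd;
        cmd.linear.x = 0.5;                    // forced forward motion
        cmd.angular.z = 1.0;                   // forced rotation
        pub.publish(cmd);                      // injected alongside legitimate commands
        rate.sleep();
    }
    return 0;
}
```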
Modern multicore System-on-Chips (SoCs) are regularly designed with third-party Intellectual Properties (IPs) and software tools to manage complexity and development cost. This approach naturally introduces major security concerns, especially for SoCs used in critical applications and cyberinfrastructure. Despite approaches like split manufacturing, security testing, and hardware metering, this remains an open and challenging problem. In this work, we propose a dynamic intrusion detection approach to address the security challenge. The proposed runtime system (SoCINT) systematically gathers information about untrusted IPs and strictly enforces the access policies. SoCINT surpasses state-of-the-art monitoring systems by supporting hardware tracing, for more robust analysis, together with providing smart counterintelligence strategies. SoCINT is implemented in an open-source processor running on a commercial FPGA platform. The evaluation results validate our claims by demonstrating resilience against attacks exploiting erroneous or malicious IPs.
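A minimal sketch of the kind of access-policy check a runtime monitor like SoCINT enforces: each untrusted IP is restricted to a whitelist of address regions, and any access outside them is denied. The policy format, IP identifiers, and address ranges here are illustrative assumptions, not the paper's implementation.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <vector>

// Hypothetical access policy: per-IP whitelist of address regions the IP
// may read or write; everything else is denied by default.
struct Region { std::uint64_t base, size; };

std::map<int, std::vector<Region>> policy = {
    {0, {{0x40000000, 0x1000}}},                      // IP 0: one 4 KiB window
    {1, {{0x50000000, 0x2000}, {0x60000000, 0x100}}}, // IP 1: two windows
};

// Returns true iff the access lies entirely inside a permitted region.
bool accessAllowed(int ipId, std::uint64_t addr, std::uint64_t len) {
    auto it = policy.find(ipId);
    if (it == policy.end()) return false;             // unknown IPs are untrusted
    for (const Region& r : it->second)
        if (addr >= r.base && addr + len <= r.base + r.size)
            return true;
    return false;
}

int main() {
    std::cout << accessAllowed(0, 0x40000800, 4) << "\n";  // 1: inside window
    std::cout << accessAllowed(0, 0x50000000, 4) << "\n";  // 0: blocked
}
```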
Much recent work focuses on finding bugs and security vulnerabilities in smart contracts written in existing languages. Although this approach may be helpful, it does not address flaws in the underlying programming language, which can facilitate writing buggy code in the first place. We advocate a re-thinking of the blockchain software engineering tool set, starting with the programming language in which smart contracts are written. In this paper, we propose and justify requirements for a new generation of blockchain software development tools. New tools should (1) consider users' needs as a primary concern; (2) seek to facilitate safe development by detecting relevant classes of serious bugs at compile time; and (3) be as blockchain-agnostic as possible, given the wide variety of blockchain platforms available, while leveraging the properties that are common among blockchain environments to improve safety and developer effectiveness.
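Requirement (2) can be made concrete with a small illustration, here in C++ as a stand-in for a contract language: encoding token amounts as a distinct type lets the compiler reject a whole class of unit-confusion bugs before deployment. The TokenAmount type and transfer function are hypothetical.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical stand-in for a contract-language feature: a distinct type
// for token amounts. Raw integers no longer convert silently, so an
// accidental unit mix-up is rejected at compile time.
struct TokenAmount {
    std::uint64_t units;
    explicit TokenAmount(std::uint64_t u) : units(u) {}
};

TokenAmount operator+(TokenAmount a, TokenAmount b) {
    return TokenAmount{a.units + b.units};
}

// Hypothetical transfer entry point; it only accepts typed amounts.
void transfer(TokenAmount amount) {
    std::cout << "transferring " << amount.units << " units\n";
}

int main() {
    TokenAmount balance{100};
    // transfer(42);                     // compile-time error: raw integer
    transfer(balance + TokenAmount{42}); // explicit construction compiles
}
```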
In the development process of critical systems, one of the main challenges is to provide early system validation and verification against vulnerabilities, in order to reduce the cost caused by late error detection. In this paper we propose an approach that, first, allows system security specifications to be described formally, thanks to our proposed extended attack tree; second, introduces static and dynamic system modeling using a SysML connectivity profile to model error propagation; and finally, uses a model checker to validate the system specifications.
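A minimal sketch of the attack-tree structure such an approach builds on: AND/OR gates over sub-attacks, with feasibility evaluated bottom-up. The node layout and evaluation below are generic illustrations, not the paper's extended attack tree profile.

```cpp
#include <memory>
#include <string>
#include <vector>

// Generic attack tree node: a leaf is an atomic attack step; internal
// nodes combine children with AND (all required) or OR (any suffices).
struct AttackNode {
    enum class Gate { LEAF, AND, OR } gate;
    std::string label;                        // e.g., "spoof sensor input"
    bool feasible = false;                    // set on leaves by analysis
    std::vector<std::unique_ptr<AttackNode>> children;
};

// Bottom-up evaluation: the root attack succeeds if its gate condition
// holds over the children.
bool attackFeasible(const AttackNode& n) {
    switch (n.gate) {
        case AttackNode::Gate::LEAF: return n.feasible;
        case AttackNode::Gate::AND:
            for (auto& c : n.children) if (!attackFeasible(*c)) return false;
            return true;
        case AttackNode::Gate::OR:
            for (auto& c : n.children) if (attackFeasible(*c)) return true;
            return false;
    }
    return false;
}

int main() {
    AttackNode root{AttackNode::Gate::OR, "compromise system"};
    auto leaf = std::make_unique<AttackNode>();
    leaf->gate = AttackNode::Gate::LEAF;
    leaf->label = "exploit open port";
    leaf->feasible = true;
    root.children.push_back(std::move(leaf));
    return attackFeasible(root) ? 0 : 1;      // exits 0: attack feasible
}
```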
This paper presents our results from identifying and documenting false positives generated by static code analysis tools. By false positives, we mean cases where a static code analysis tool generates a warning message, but the warning message is not really an error. The goal of our study is to understand the different kinds of false positives generated so that we can (1) automatically determine whether an error message is truly a true positive, and (2) reduce the number of false positives developers and testers must triage. We used two open-source tools and one commercial tool in our study. The results of our study have led to 14 core false positive patterns, some of which we have confirmed with static code analysis tool developers.
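One hypothetical shape such a false positive can take (not necessarily one of the paper's 14 patterns): a tool that does not correlate the two branches on the same flag may warn that p can be null at the dereference, even though the guard makes that path infeasible.

```cpp
#include <cstdio>

// Illustrative false-positive pattern: p is null only when haveMsg is
// false, and the dereference is guarded by haveMsg. A path-insensitive
// checker that tracks "p may be null" without correlating the two
// conditions may still report a possible null dereference here.
void log(const char* msg, bool haveMsg) {
    const char* p = haveMsg ? msg : nullptr;
    if (haveMsg) {
        std::printf("%s\n", p);   // p is non-null whenever haveMsg is true
    }
}

int main() {
    log("hello", true);
    log(nullptr, false);
}
```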
Most security software tools try to detect malicious components by cryptographic hashes, signatures, or their behavior. The former is a widely adopted approach based on the Integrity Measurement Architecture (IMA), enabling appraisal and attestation of system components. The latter, however, may take a very long time before the misbehavior of a component leads to successful detection. Another approach is Dynamic Runtime Attestation (DRA), based on the comparison of binary code loaded in memory with well-known references. Since DRA is a complex approach, involving multiple related components and often complex attestation strategies, a flexible and extensible architecture is needed. In a cooperation project, such an architecture was designed and a Proof of Concept (PoC) successfully developed and evaluated. To achieve the needed flexibility and extensibility, the implementation relies on central components providing attestation strategies (guidelines). These guidelines define and implement the necessary steps for all relevant attestation operations, i.e., measurement, reference generation, and verification.
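The measurement, reference generation, and verification steps can be sketched as follows; FNV-1a stands in for a cryptographic hash and the byte vector for a mapped code segment, both assumptions made only to keep the example self-contained.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Measurement: hash the code bytes currently loaded in memory.
// (FNV-1a is a placeholder; a real DRA would use a cryptographic hash.)
std::uint64_t fnv1a(const std::uint8_t* data, std::size_t len) {
    std::uint64_t h = 14695981039346656037ULL;    // FNV offset basis
    for (std::size_t i = 0; i < len; ++i) {
        h ^= data[i];
        h *= 1099511628211ULL;                    // FNV prime
    }
    return h;
}

// Verification: the measurement must equal the reference generated from
// the well-known binary; any in-memory modification changes the hash.
bool attest(const std::vector<std::uint8_t>& loadedCode,
            std::uint64_t reference) {
    return fnv1a(loadedCode.data(), loadedCode.size()) == reference;
}

int main() {
    std::vector<std::uint8_t> code = {0x55, 0x48, 0x89, 0xe5}; // sample bytes
    std::uint64_t ref = fnv1a(code.data(), code.size()); // reference generation
    code[0] = 0x90;                                      // simulate tampering
    std::cout << (attest(code, ref) ? "ok" : "tampered") << "\n";
}
```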
In a world where traditional notions of privacy are increasingly challenged by the myriad companies that collect and analyze our data, it is important that decision-making entities are held accountable for unfair treatment arising from irresponsible data usage. Unfortunately, a lack of appropriate methodologies and tools means that even identifying unfair or discriminatory effects can be a challenge in practice. We introduce the unwarranted associations (UA) framework, a principled methodology for the discovery of unfair, discriminatory, or offensive user treatment in data-driven applications. The UA framework unifies and rationalizes a number of prior attempts at formalizing algorithmic fairness. It uniquely combines multiple investigative primitives and fairness metrics with broad applicability, granular exploration of unfair treatment in user subgroups, and incorporation of natural notions of utility that may account for observed disparities. We instantiate the UA framework in FairTest, the first comprehensive tool that helps developers check data-driven applications for unfair user treatment. It enables scalable and statistically rigorous investigation of associations between application outcomes (such as prices or premiums) and sensitive user attributes (such as race or gender). Furthermore, FairTest provides debugging capabilities that let programmers rule out potential confounders for observed unfair effects. We report on the use of FairTest to investigate and, in some cases, address disparate impact, offensive labeling, and uneven rates of algorithmic error in four data-driven applications. As examples, our results reveal subtle biases against older populations in the distribution of error in a predictive health application, and offensive racial labeling in an image tagger.
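The core of such an association check can be illustrated with a 2x2 contingency table and a chi-square statistic, as in the sketch below; the counts are invented for illustration, and FairTest itself provides far richer metrics, subgroup exploration, and statistical corrections.

```cpp
#include <array>
#include <iostream>

// Build a 2x2 contingency table of (sensitive group, outcome) counts and
// compute the chi-square statistic against the independence hypothesis.
double chiSquare2x2(const std::array<std::array<double, 2>, 2>& o) {
    double rowSum[2] = {o[0][0] + o[0][1], o[1][0] + o[1][1]};
    double colSum[2] = {o[0][0] + o[1][0], o[0][1] + o[1][1]};
    double total = rowSum[0] + rowSum[1];
    double chi = 0.0;
    for (int r = 0; r < 2; ++r)
        for (int c = 0; c < 2; ++c) {
            double expected = rowSum[r] * colSum[c] / total;
            chi += (o[r][c] - expected) * (o[r][c] - expected) / expected;
        }
    return chi;
}

int main() {
    // Rows: group A / group B; columns: favorable / unfavorable outcome.
    std::array<std::array<double, 2>, 2> counts = {{{90, 10}, {60, 40}}};
    std::cout << "chi-square = " << chiSquare2x2(counts) << "\n";  // 24
    // A large statistic flags an association worth a developer's review.
}
```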
Two emerging architectural paradigms, Software Defined Networking (SDN) and Network Function Virtualization (NFV), enable the deployment and management of Service Function Chains (SFCs). An SFC is an ordered sequence of abstract Service Functions (SFs), e.g., firewalls, VPN gateways, and traffic monitors, that packets have to traverse on the route from source to destination. While this appealing solution offers significant advantages in terms of flexibility, it also introduces new challenges, such as the correct configuration and ordering of SFs in the chain to satisfy overall security requirements. This paper presents a formal model conceived to enable the verification of correct policy enforcement in SFCs. Software tools based on the model can then be designed to cope with unwanted network behaviors (e.g., security flaws) deriving from incorrect interactions of SFs in the same SFC.
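One simple property such a formal model must capture is the ordering of SFs in a chain. The toy check below verifies a single ordering constraint; the rule (firewall before VPN gateway) and the chain contents are invented examples, not the paper's model.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// An SFC as an ordered list of service function names; the check asks
// whether one SF appears strictly before another in the chain.
bool precedes(const std::vector<std::string>& chain,
              const std::string& first, const std::string& second) {
    auto a = std::find(chain.begin(), chain.end(), first);
    auto b = std::find(chain.begin(), chain.end(), second);
    return a != chain.end() && b != chain.end() && a < b;
}

int main() {
    std::vector<std::string> sfc = {"traffic-monitor", "firewall", "vpn-gateway"};
    if (precedes(sfc, "firewall", "vpn-gateway"))
        std::cout << "chain satisfies the ordering policy\n";
    else
        std::cout << "policy violation: misordered service functions\n";
}
```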
This paper presents an overview of the research project “High-Performance Hybrid Simulation/Measurement-Based Tools for Proactive Operator Decision-Support”, performed under the auspices of the U.S. Department of Energy grant DE-OE0000628. The objective of this project is to develop software tools to provide enhanced real-time situational awareness to support the decision making and system control actions of transmission operators. The integrated tool will combine high-performance dynamic simulation with synchrophasor measurement data to assess, in real time, system dynamic performance and operation security risk. The project includes: (i) the development of high-performance dynamic simulation software; (ii) the development of new, computationally effective measurement-based tools to estimate operating margins of a power system in real time using measurement data from synchrophasors and SCADA; (iii) the development of a hybrid framework integrating measurement-based and simulation-based approaches; and (iv) the use of cutting-edge visualization technology to display various system quantities and to visually process the results of the hybrid measurement-based/simulation-based security-assessment tool. Parallelization and high-performance computing are utilized to enable ultrafast transient stability analysis that can be used in a real-time environment to quickly perform “what-if” simulations involving system dynamics phenomena. EPRI's Extended Transient Midterm Simulation Program (ETMSP) is modified and enhanced for this work. The contingency analysis is scaled for large-scale contingency analysis using MPI-based parallelization. Simulations of thousands of contingencies on a high-performance computing machine were performed, and the results show that parallelization over contingencies with MPI provides good scalability and computational gains. Different ways to reduce the I/O bottleneck have also been explored. Thread parallelization of the sparse linear solver is also explored through the use of the SuperLU_MT library. Based on performance profiling results for the implicit method, the majority of CPU time is spent on the integration steps. Hence, in order to further improve ETMSP performance, a variable time step control scheme for the original trapezoidal integration method has been developed and implemented. The Adams-Bashforth-Moulton predictor-corrector method was also introduced and designed for ETMSP. Test results show superior performance with this method.
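The Adams-Bashforth-Moulton scheme mentioned above pairs an explicit predictor with the trapezoidal rule as corrector; a minimal second-order version on the scalar test equation y' = -y is sketched below. The fixed step size and test dynamics are simplifications; ETMSP applies a variable-step version to the full system of machine and network equations.

```cpp
#include <cmath>
#include <iostream>

// Second-order Adams-Bashforth-Moulton predictor-corrector step:
// AB2 predicts the next state from two known slopes, then the
// trapezoidal (AM2) rule corrects it using the predicted slope.
double f(double /*t*/, double y) { return -y; }  // example dynamics y' = -y

int main() {
    double h = 0.1;                // fixed step for the sketch; a variable
                                   // step would adapt h to an error estimate
    double t = 0.0, yPrev = 1.0;   // initial condition y(0) = 1
    double fPrev = f(t, yPrev);
    // Bootstrap the multistep method with one explicit Euler step.
    double y = yPrev + h * fPrev;
    double fCur = f(t + h, y);
    t += h;

    for (int i = 0; i < 20; ++i) {
        // Predictor (Adams-Bashforth 2): extrapolate from two known slopes.
        double yPred = y + h * (1.5 * fCur - 0.5 * fPrev);
        // Corrector (Adams-Moulton 2, i.e., trapezoidal): average the
        // current slope with the slope at the predicted point.
        double yNext = y + 0.5 * h * (fCur + f(t + h, yPred));
        fPrev = fCur;
        y = yNext;
        fCur = f(t + h, y);
        t += h;
    }
    std::cout << "y(" << t << ") ~ " << y
              << " vs exact " << std::exp(-t) << "\n";
}
```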