Biblio
An adaptable agent-based IDS (AAIDS) inspired by the danger theory of artificial immune systems is proposed. The learning mechanism of AAIDS is designed by emulating how dendritic cells (DCs) in the immune system detect and classify danger signals. The AG, DC, and TC agents coordinate with one another and respond to system calls directly rather than analyzing network packets. Simulations show that AAIDS can identify several critical system behaviors in scenarios where packet analysis is impractical.
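As an illustration of the danger-theory mechanism described above, the following toy Python sketch shows how a dendritic-cell-style agent could fuse "danger" and "safe" signals derived from system-call activity into a maturation verdict; the weights, threshold, and signal extraction are hypothetical and are not the actual AAIDS design.

    # Toy danger-theory signal fusion (illustrative, not the AAIDS design).
    WEIGHTS = {"danger": 1.0, "safe": -1.5}   # safe signals suppress maturation

    class DendriticCell:
        def __init__(self, threshold=5.0):
            self.k = 0.0                      # accumulated context value
            self.threshold = threshold

        def sample(self, danger, safe):
            """Accumulate weighted signals from one burst of system calls."""
            self.k += WEIGHTS["danger"] * danger + WEIGHTS["safe"] * safe

        def mature(self):
            """Verdict presented to the TC agent once sampling is done."""
            return "anomalous" if self.k > self.threshold else "normal"

    dc = DendriticCell()
    for danger, safe in [(2.0, 0.1), (3.0, 0.2), (2.5, 0.0)]:  # per syscall burst
        dc.sample(danger, safe)
    print(dc.mature())  # 'anomalous'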
This paper proposes a multi-modular AC-DC converter system using wireless communication for a rapid charger of electric vehicles (EVs). The multi-modular topology, which consists of multiple modules, offers good expandability in terms of voltage and power. In the proposed system, the input current and output voltage are controlled by a decentralized controller on each module, which communicates wirelessly with the main controller. Thus, high-speed communication between the main controller and the modules is not required, which results in a reduced number of signal lines. The fundamental effectiveness of the proposed system is verified with a 3-kW prototype. In the experimental results, the input current imbalance rate is reduced from 49.4% to 0.1%, while the total harmonic distortion is kept below 3%.
The integration of subset sum into the verifiable secret sharing scheme provides an added security measure for multiparty computation, such as immediate identification and removal of an impostor, discouragement of man-in-the-middle and lattice-based attacks, and a lighter burden on the dealer in monitoring the integrity of shareholders. This study focuses on the security assessment of a brute-force attack on the subset sum-based verifiable secret sharing scheme. A simulation using a generator of all possible fixed-length partitions (with k=3 as the smallest possible) summing to the sum of the original subset generated by the dealer shows that it would already take 11,408 years to brute-force all possible values even for a small 32-bit value, and 3.8455 years for a 128-bit value, showing that brute-force attacks on the subset sum-based VSSS can be discounted despite the simplicity of the implementation. The attacker's zero knowledge of the threshold further compounds the infeasibility of a brute-force attack.
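To make the attack's search space concrete, here is a minimal Python sketch of a brute-force enumerator over fixed-length partitions (k=3) summing to a target, in the spirit of the simulation described above; the generator and the example target are illustrative, not the authors' simulator.

    # Enumerate all non-decreasing k-tuples of positive integers summing to
    # 'target' -- the candidate space a brute-force attacker must search.
    def partitions(target, k, min_part=1):
        if k == 1:
            if target >= min_part:
                yield (target,)
            return
        for first in range(min_part, target // k + 1):
            for rest in partitions(target - first, k - 1, first):
                yield (first,) + rest

    # Even for a tiny target the candidate count grows roughly quadratically
    # for k = 3; for 32- or 128-bit sums exhaustive search becomes infeasible.
    count = sum(1 for _ in partitions(1000, 3))
    print(count)  # number of candidate 3-part partitions of 1000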
The Internet of Things (IoT) offers new opportunities for business, technology, and science, but it also raises new challenges in terms of security and privacy, mainly because of the inherent characteristics of this environment: IoT devices come from a variety of manufacturers and operators, and these devices suffer from constrained resources in terms of computation, communication, and storage. In this paper, we address the problem of trust establishment for IoT and propose a security solution that consists of a secure bootstrap mechanism for device identification as well as a message attestation mechanism for aggregate response validation. To achieve both security requirements, we approach the problem in a confined environment, named SubNets of Things (SNoT), on which various devices depend. In this context, devices are uniquely and securely identified thanks to their environment and their role within it. Additionally, the underlying message authentication technique features signature aggregation and hence generates one compact response on behalf of all devices in the subnet.
In Software Defined Infrastructure (SDI), virtualization techniques are used to decouple applications and higher-level services from their underlying physical compute, storage, and network resources. The approach offers a set of powerful new capabilities (isolation, encapsulation, portability, interposition), including the formation of a software-based, infrastructure-wide control plane for orchestrated management. In this position paper, we identify opportunities for revisiting ongoing cybersecurity challenges using SDI as a powerful new toolset. Benefits of this approach can be broadly utilized in public, private, and hybrid clouds, data centers, enterprise computing, IoT deployments, and more. The discussion motivates the research challenge underlying VMware's partnership with the National Science Foundation to fund novel and foundational research in this area. Known as the NSF/VMware Partnership on Software Defined Infrastructure as a Foundation for Clean-Slate Computing Security (SDI-CSCS), the jointly funded university research program is set to begin in the fall of 2017.
Cyber defense can no longer be limited to intrusion detection methods. These systems require malicious activity to enter an internal network before an attack can be detected. Having advance, predictive knowledge of future attacks allows a potential victim to heighten security and possibly prevent any malicious traffic from breaching the network. This paper investigates the use of Auto-Regressive Integrated Moving Average (ARIMA) models and Bayesian Networks (BN) to predict the occurrence and intensity of future cyber attacks against two target entities. In addition to incident count forecasting, categorical and binary occurrence metrics are proposed to better represent volume forecasts to a victim. Different measurement periods are used in time series construction to better model the temporal patterns unique to each attack type and target configuration, yielding over 86% improvement over baseline forecasts. Using ground truth aggregated over different measurement periods as signals, a BN is trained and tested for each attack type, and the results provide further evidence to support the findings from the ARIMA models. This work highlights the complexity of cyber attack occurrences; each subset has unique characteristics and is influenced by a number of potential external factors.
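As a sketch of the incident-count forecasting step, the following Python example fits an ARIMA model to a toy weekly attack-count series with statsmodels; the order (2, 1, 1) and the synthetic data are assumptions for illustration, not the configurations evaluated in the paper.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    # Two years of toy weekly incident counts; a real series would come from
    # ground-truth attack data aggregated per measurement period.
    weekly_counts = rng.poisson(lam=20, size=104).astype(float)

    model = ARIMA(weekly_counts, order=(2, 1, 1))  # AR=2, differencing=1, MA=1
    fitted = model.fit()
    forecast = fitted.forecast(steps=4)            # next four measurement periods
    print(forecast)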
Control flow integrity (CFI) has received significant attention in the community as a way to combat control hijacking attacks in the presence of memory corruption vulnerabilities. The challenges in creating a practical CFI mechanism have resulted in the development of a new type of CFI based on runtime type checking (RTC). RTC-based CFI has been implemented in a number of recent practical efforts such as GRSecurity Reuse Attack Protector (RAP) and LLVM-CFI. While a number of previous efforts have studied the strengths and limitations of other types of CFI techniques, little has been done to evaluate RTC-based CFI. In this work, we study the effectiveness of RTC from the security and practicality aspects. From the security perspective, we observe that type collisions are abundant in sufficiently large code bases, but exploiting them to build a functional attack is not straightforward. We then show how an attacker can successfully bypass RTC techniques using a variant of ROP attacks that respects type checking (called TROP), and we build two proof-of-concept exploits, one against the Nginx web server and the other against the Exim mail server. We also discuss practical challenges of implementing RTC. Our findings suggest that while RTC is more practical for applying CFI to large code bases, its policy is not strong enough when facing a motivated attacker.
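The following toy Python model illustrates the idea behind RTC-based CFI and why type collisions matter: the call site accepts any target whose type-signature hash matches, so a type-colliding "gadget" passes the check, which is exactly the property a TROP-style attack exploits. The hashing scheme and function names are illustrative, not those of RAP or LLVM-CFI.

    import hashlib
    import inspect

    def type_hash(fn):
        """Hash a function's parameter/return annotations as its 'type'."""
        sig = inspect.signature(fn)
        desc = ",".join(str(p.annotation) for p in sig.parameters.values())
        desc += "->" + str(sig.return_annotation)
        return hashlib.sha256(desc.encode()).hexdigest()

    def checked_call(fn, expected_hash, *args):
        """Indirect call guarded by an RTC-style type check."""
        if type_hash(fn) != expected_hash:
            raise RuntimeError("CFI violation: type mismatch")
        return fn(*args)

    def log_message(msg: str) -> int: return 0   # intended target
    def spawn_shell(cmd: str) -> int: return 0   # type-colliding 'gadget'

    expected = type_hash(log_message)
    checked_call(log_message, expected, "hello")   # passes, as intended
    checked_call(spawn_shell, expected, "/bin/sh") # also passes: a collision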
Science gateways open up the possibility of reproducible science as they integrate reusable techniques, data and workflow management systems, security mechanisms, and high performance computing (HPC). We introduce BioinfoPortal, a science gateway that integrates a suite of different bioinformatics applications using HPC and data management resources provided by the Brazilian National HPC System (SINAPAD). BioinfoPortal follows the Software as a Service (SaaS) model, and the web server is freely available for academic use. The goal of this paper is to describe the science gateway and its usage, addressing the challenges of designing a multiuser computational platform for parallel/distributed executions of large-scale bioinformatics applications using the Brazilian HPC resources. We also present a study of the performance and scalability of some bioinformatics applications executed in the HPC environments, and perform machine learning analyses to predict the HPC allocation/usage features under which the bioinformatics applications perform best via BioinfoPortal.
The impending realization of scalable quantum computers will have a significant impact on today's security infrastructure. With the advent of powerful quantum computers, public key cryptographic schemes will become vulnerable to Shor's quantum algorithm, undermining the security of current communication systems. Post-quantum (or quantum-resistant) cryptography is an active research area endeavoring to develop novel, quantum-resistant public key cryptography. Amongst the various classes of quantum-resistant cryptography schemes, lattice-based cryptography is emerging as one of the most viable options. Its efficient implementation in software and on commodity hardware has already been shown to compete with and even exceed the performance of current classical public-key schemes. This work discusses the next step in terms of their practical deployment, i.e., addressing the physical security of lattice-based cryptographic implementations. We survey the state of the art in terms of side channel attacks (SCA), both invasive and passive attacks, and proposed countermeasures. Although the exposed weaknesses have led to countermeasures for these schemes, the cost, practicality, and effectiveness of these countermeasures on multiple implementation platforms remain under-studied.
The process of release of a single domain wall from the closure domain structure at the microwire ends and the process of nucleation of a reversed domain in regions far from the microwire ends were studied using a technique that consists of determining the critical parameters of a rectangular magnetic field pulse (magnitude Hpc and length τc) needed for free domain wall production. Since these processes can be influenced by the magnitude of the magnetic field before or after the application of the field pulse (Hi, τ), we propose a modified experiment in which a so-called three-level pulse is used. The three-level pulse starts from the first field level, continues with the second, measuring rectangular pulse (Hi, τ), and ends at the third field level. Based on the results obtained in experiments using three-level field pulses, it has been shown that reversed domains are not present in the remanent state in regions far from the microwire ends. Some modification of the theoretical model of a single domain wall trapped in a potential well will be needed for an adequate description of the depinning processes.
Memory-safety violations are the primary cause of security and reliability issues in software systems written in unsafe languages. Given the limited adoption of decades of research in software-based memory safety approaches, Intel released Memory Protection Extensions (MPX), a hardware-assisted technique to achieve memory safety, as an alternative. In this work, we perform an exhaustive study of the Intel MPX architecture along three dimensions: (a) performance overheads, (b) security guarantees, and (c) usability issues. We present the first detailed root cause analysis of problems in the Intel MPX architecture through a cross-layer dissection of the entire system stack, involving the hardware, operating system, compilers, and applications. To put our findings into perspective, we also present an in-depth comparison of Intel MPX with three prominent types of software-based memory safety approaches. Lastly, based on our investigation, we propose directions for potential changes to the Intel MPX architecture to aid the design space exploration of future hardware extensions for memory safety. A complete version of this work appears in the 2018 proceedings of the ACM on Measurement and Analysis of Computing Systems.
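As a rough mental model of the checks MPX accelerates, the following Python sketch attaches [lower, upper] bounds metadata to a pointer and validates every access against it, mimicking the bndcl/bndcu lower/upper checks and the #BR (bound range exceeded) exception; this is an illustration, not the MPX programming interface.

    # Toy model of MPX-style pointer bounds checking (illustrative only).
    class BoundedPtr:
        def __init__(self, buf, lower, upper):
            self.buf, self.lower, self.upper = buf, lower, upper

        def load(self, index):
            # Analogous to the bndcl (lower) and bndcu (upper) checks that
            # MPX performs in hardware before a memory access.
            if index < self.lower or index > self.upper:
                raise MemoryError("MPX-style bound range exceeded (#BR)")
            return self.buf[index]

    heap = bytearray(64)
    p = BoundedPtr(heap, lower=16, upper=31)  # a 16-byte object at offset 16
    p.load(20)  # in bounds: OK
    p.load(40)  # out of bounds: raises, like MPX's #BR exception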
Fog computing provides computing, storage, and communication resources at the edge of the network, near the physical world. Consequently, end devices near the physical world can enjoy useful properties such as short delays, responsiveness, optimized communications, and privacy. However, these end devices have low stability and are prone to failures. There is consequently a need for failure management protocols for IoT applications in the Fog. The design of such solutions is complex due to the specificities of the environment: (i) a dynamic infrastructure where entities join and leave without synchronization, (ii) high heterogeneity in terms of functions, communication models, network, processing, and storage capabilities, and (iii) cyber-physical interactions which introduce non-deterministic events that depend on the physical world's space and time. This paper presents a fault tolerance approach taking into account these three characteristics of the Fog-IoT environment. Fault tolerance is achieved by saving the state of the application in an uncoordinated way. When a failure is detected, notifications are propagated to limit the impact of failures and dynamically reconfigure the application. Data stored during the state saving process are used for recovery, taking into account consistency with respect to the physical world. The approach was validated through practical experiments on a smart home platform.
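A minimal sketch of the uncoordinated state-saving idea, assuming a simple per-entity JSON checkpoint: each entity checkpoints independently with no synchronization, and recovery reloads the latest snapshot. The file layout and timestamping are hypothetical, not the paper's protocol.

    import json, time, pathlib

    CKPT_DIR = pathlib.Path("/tmp/fog-checkpoints")
    CKPT_DIR.mkdir(exist_ok=True)

    def save_state(entity_id, state):
        """Checkpoint taken locally, with no cross-entity coordination; the
        timestamp supports consistency checks against physical-world time."""
        snapshot = {"t": time.time(), "state": state}
        (CKPT_DIR / f"{entity_id}.json").write_text(json.dumps(snapshot))

    def recover(entity_id):
        """On a failure notification, reload the last saved state (if any)."""
        path = CKPT_DIR / f"{entity_id}.json"
        return json.loads(path.read_text()) if path.exists() else None

    save_state("thermostat-1", {"setpoint": 21.5, "mode": "heat"})
    print(recover("thermostat-1"))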
Zero-knowledge SNARKs (zk-SNARKs) are non-interactive proof systems with short and efficiently verifiable proofs. They elegantly resolve the tension between individual privacy and public trust by providing an efficient way of demonstrating knowledge of secret information without actually revealing it. To this day, zk-SNARKs have been used for delegating computation, electronic cryptocurrencies, and anonymous credentials. However, all current SNARK implementations rely on pre-quantum assumptions and, for this reason, are not expected to withstand cryptanalytic efforts over the next few decades. In this work, we introduce the first designated-verifier zk-SNARK based on lattice assumptions, which are believed to be post-quantum secure. We provide a generalization, in the spirit of Gennaro et al. (Eurocrypt'13), of the SNARK of Danezis et al. (Asiacrypt'14) that is based on Square Span Programs (SSPs) and relies on weaker computational assumptions. We focus on designated-verifier proofs and propose a protocol in which a proof consists of just 5 LWE encodings. We provide a concrete choice of parameters as well as extensive benchmarks on a C implementation, showing that our construction is practically instantiable.
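For intuition about the building block, here is a toy Regev-style LWE encoding of a single bit in Python; the parameters are tiny and insecure, and this is not the paper's encoding scheme, only an illustration of what an "LWE encoding" looks like.

    import numpy as np

    rng = np.random.default_rng(1)
    n, q = 16, 3329                      # toy dimension and modulus
    s = rng.integers(0, q, size=n)       # secret key

    def encode(bit):
        """Encode a bit as (a, <a,s> + e + bit*q/2) with small noise e."""
        a = rng.integers(0, q, size=n)
        e = rng.integers(-2, 3)
        b = (a @ s + e + bit * (q // 2)) % q
        return a, b

    def decode(a, b):
        """Strip <a,s>; values near q/2 decode to 1, near 0 decode to 0."""
        v = (b - a @ s) % q
        return int(q // 4 < v < 3 * q // 4)

    a, b = encode(1)
    print(decode(a, b))  # 1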
Anomaly detection on security logs is receiving more and more attention. Authentication events are an important component of security logs, and being able to produce trustworthy and accurate predictions minimizes the effort of cyber experts to stop false attacks. Observed events are classified into Normal, for legitimate user behavior, and Malicious, for malevolent actions. These classes are consistently and severely imbalanced, which makes the classification problem harder; in the commonly used Los Alamos dataset, the malicious class comprises only 0.00033% of the total. This work proposes a novel method to extract advanced composite features, and a supervised learning technique for classifying authentication logs reliably; the models are Random Forest, LogitBoost, and Logistic Regression, combined by Majority Voting, which leverages the predictions of the previous models to give the final prediction for each authentication event. We measure the performance of our experiments using the False Negative Rate and False Positive Rate. Overall, we achieve a False Negative Rate of 0 (i.e., no attack was missed) and an average False Positive Rate of 0.0019.
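A minimal scikit-learn sketch of the ensemble described above, assuming feature vectors have already been extracted from the authentication events; GradientBoostingClassifier stands in for LogitBoost, which scikit-learn does not provide natively.

    from sklearn.ensemble import (RandomForestClassifier,
                                  GradientBoostingClassifier, VotingClassifier)
    from sklearn.linear_model import LogisticRegression

    voter = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100,
                                          class_weight="balanced")),
            ("boost", GradientBoostingClassifier()),  # LogitBoost stand-in
            ("lr", LogisticRegression(max_iter=1000,
                                      class_weight="balanced")),
        ],
        voting="hard",  # majority vote over the three models' predictions
    )
    # voter.fit(X_train, y_train); y_pred = voter.predict(X_test)
    # With 0.00033% malicious events, class weighting (or resampling) is
    # essential; evaluate with false negative and false positive rates.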
Named Data Networking (NDN) is the most mature proposal of the Information Centric Networking paradigm, a clean-slate approach for the Future Internet. Although NDN was designed to natively tackle security issues inherent to IP networks, security attacks newly introduced in its transitional phase threaten NDN's practical deployment. Therefore, a security monitoring plane for NDN is indispensable before any potential deployment of this novel architecture in an operating context by any provider. We propose an approach for monitoring and anomaly detection in NDN nodes leveraging Bayesian Network techniques. A list of monitored metrics is introduced as a quantitative measure to characterize the behavior of an NDN node. By leveraging hypothesis testing theory, a micro detector is developed to detect whenever a metric deviates significantly from its normal behavior. A Bayesian network structure that correlates alarms from the micro detectors is designed based on expert knowledge of the NDN specification and the NFD implementation. The relevance and performance of our security monitoring approach are demonstrated by considering the Content Poisoning Attack (CPA), one of the most critical attacks in NDN, through extensive experimental data collected from a real NDN deployment.
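A minimal sketch of a micro detector in the hypothesis-testing style described above: learn a metric's normal mean and deviation, then raise an alarm when an observation exceeds a threshold chosen for a target false alarm rate. The metric, parameters, and threshold choice are illustrative.

    import numpy as np
    from scipy.stats import norm

    # Baseline observations of one monitored metric under normal behavior
    # (e.g., a table-size metric of an NDN node; toy data here).
    baseline = np.random.default_rng(2).normal(100.0, 5.0, size=500)
    mu, sigma = baseline.mean(), baseline.std()

    alpha = 0.001                        # target false alarm probability
    threshold = norm.ppf(1 - alpha / 2)  # two-sided test threshold

    def micro_detector(observation):
        z = abs(observation - mu) / sigma
        return z > threshold             # True -> alarm fed to the Bayesian network

    print(micro_detector(101.0), micro_detector(160.0))  # False, True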
Cybersecurity assurance plays an important role in managing trust in smart grid communication systems. In this paper, cybersecurity assurance controls for smart grid communication networks and devices are delineated from the more technical functional controls to provide insights on recent innovative risk-based approaches to cybersecurity assurance in smart grid systems. The cybersecurity assurance control baselining presented in this paper is based on the requirements and guidelines of the new family of IEC 62443 standards on network and systems security of industrial automation and control systems. The paper illustrates how key cybersecurity control baselining and tailoring concepts of the U.S. NIST SP 800-53 can be adopted in smart grid security architecture. The paper outlines the application of IEC 62443 standards-based security zoning and the assignment of security levels to the zones in smart grid system architectures. To manage trust in the smart grid system architecture, cybersecurity assurance baselining concepts are applied per security impact level. The selection and justification of the security assurance controls presented in the paper follow the approach common in the Security Technical Implementation Guides (STIGs) of the U.S. Defense Information Systems Agency. As shown in the paper, enhanced granularity for managing trust at both the overall system and subsystem levels of smart grid systems can be achieved by implementing the instructions of CNSSI 1253 of the U.S. Committee on National Security Systems on security categorization and control selection for national security systems.
The design of optimal energy management strategies that trade off consumers' privacy and expected energy cost by using energy storage is studied. The Kullback-Leibler divergence rate is used to assess the privacy risk of unauthorized testing of consumers' behavior. We further show how this design problem can be formulated as a belief-state Markov decision process problem, so that standard tools of the Markov decision process framework can be utilized and the optimal solution can be obtained by Bellman dynamic programming. Finally, we illustrate the privacy enhancement and cost savings with numerical examples.
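The Bellman backward-induction step for a finite, discretized belief-state MDP can be sketched as follows; the stage costs are a stand-in for the paper's weighted combination of energy cost and KL-divergence privacy leakage, and the dimensions are arbitrary.

    import numpy as np

    n_states, n_actions, horizon = 10, 3, 24
    rng = np.random.default_rng(3)
    cost = rng.random((n_states, n_actions))   # stage cost c(b, a), toy values
    P = rng.random((n_actions, n_states, n_states))
    P /= P.sum(axis=2, keepdims=True)          # transition kernels P(b' | b, a)

    V = np.zeros(n_states)                     # terminal value
    for t in reversed(range(horizon)):         # backward induction
        Q = cost + np.einsum("aij,j->ia", P, V)  # c(b,a) + E[V(b') | b, a]
        V = Q.min(axis=1)                      # Bellman optimality update
    policy = Q.argmin(axis=1)                  # greedy action per belief state at t=0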
In Smart Grids (SGs), the data aggregation process is essential for limiting packet size, the amount of data transmitted, and data storage requirements. This paper presents a novel Domingo-Ferrer additive-privacy-based Secure Data Aggregation (SDA) scheme for Fog Computing based SGs (FCSG). The proposed protocol achieves end-to-end confidentiality while ensuring low communication and storage overhead. Data aggregation is performed at the fog layer to reduce the amount of data to be processed and stored at cloud servers. As a result, the proposed protocol achieves better response time and lower computational overhead than existing solutions. Moreover, due to the hierarchical architecture of the FCSG and the additive homomorphic encryption, consumer privacy is protected from third parties. A theoretical analysis evaluates the effects of packet size and the number of packets on transmission overhead and the amount of data stored in the cloud server. In parallel with the theoretical analysis, our performance evaluation results show a significant improvement in data transmission and storage efficiency. Moreover, a security analysis proves that the proposed scheme successfully ensures the privacy of the collected data.
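To illustrate the aggregation step, the following Python sketch uses Paillier encryption (via the 'phe' package) as a stand-in for the Domingo-Ferrer privacy homomorphism used in the paper: meters encrypt readings, the fog node adds ciphertexts without decrypting, and only the authorized endpoint recovers the sum.

    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    # Smart meters encrypt their readings individually...
    readings = [3, 7, 5, 9]  # kWh per consumer (toy values)
    ciphertexts = [public_key.encrypt(r) for r in readings]

    # ...the fog node aggregates ciphertexts without decrypting, so individual
    # consumption stays hidden from the fog node and from third parties...
    aggregate = ciphertexts[0]
    for c in ciphertexts[1:]:
        aggregate = aggregate + c  # additive homomorphic operation

    # ...and only the authorized endpoint (e.g., the cloud/utility) decrypts.
    print(private_key.decrypt(aggregate))  # 24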