Bibliography
The understanding of measured jitter is improved in three ways. First, it is shown that the measured jitter is governed not only by written-in jitter and the reader resolution along the cross-track direction, but also by remanence noise in the vicinity of transitions and the down-track reader resolution. Second, a novel data-analysis scheme is introduced that allows an unambiguous separation of these two contributions. Third, based on data analyses that build on the first two findings, together with micro-magnetic simulations, we identify and explain the root causes of the variation of jitter with write current (WC) (write field), WC overshoot amplitude (write-field rise time), and linear disk velocity measured for heat-assisted magnetic recording.
Traditional network routing protocols are largely static and single-path, which gives attackers a significant advantage. Attacks on the network fall into two categories: active attacks and passive attacks. Existing solutions are based on replication or detection, which can deal with active attacks but are helpless against passive attacks. In this paper, we adopt the theory of network coding to fragment data in Software-Defined Networks and propose a network coding-based resilient multipath routing scheme. First, we present a new metric, the expected eavesdropping ratio, to measure resilience in the presence of passive attacks. Then, we formulate the network coding-based resilient multipath routing problem as an integer-programming optimization problem using the expected eavesdropping ratio. Since the problem is NP-hard, we design a Simulated Annealing-based algorithm to solve it efficiently. The simulation results demonstrate that the proposed algorithm improves the defense performance against passive attacks by about 20% compared with baseline algorithms.
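The abstract does not give the authors' formulation, so the following minimal Python sketch only illustrates the generic simulated annealing loop one could use to search for a multipath set minimizing an expected-eavesdropping-ratio style cost; the cost function, neighbor move, and cooling parameters are illustrative assumptions, not the paper's algorithm.

import math
import random

def simulated_annealing(initial_paths, cost, neighbor,
                        t_start=1.0, t_end=1e-3, alpha=0.95, iters=100):
    """Generic simulated annealing loop; `cost` would encode the expected
    eavesdropping ratio of a candidate multipath set, and `neighbor` would
    swap one path for an alternative route (both supplied by the caller)."""
    current = initial_paths
    best = current
    t = t_start
    while t > t_end:
        for _ in range(iters):
            candidate = neighbor(current)
            delta = cost(candidate) - cost(current)
            # Always accept improvements; accept worse moves with Boltzmann probability.
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        t *= alpha  # geometric cooling schedule
    return best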
The borderless, dynamic, high-dimensional, and virtual nature of cyberspace has created unprecedented difficulties for defenders. To counter the uncertain challenges of this versatile environment, we propose a security framework built on a cloud computing platform that uses containerization to create a security capability pool, generating and distributing security payloads according to system needs. Composed of four subsystems (the security decision center, the image and container library, the decision rule base, and the security event database), the framework distills structured knowledge from aggregated security events and then delivers security payloads to the managed network or terminal nodes as directed by the decision center. By introducing such a unified and standardized top-level security framework, decomposable, combinable, and configurable in a service-oriented manner, security resource allocation and usage can be restructured flexibly and effectively to achieve higher efficiency.
This paper deals with the effects of current sensor bandwidth and time delays in a system controlled by a Phase-Shift Self-Oscillating Current Controller (PSSOCC). The robustness of this current controller was established in earlier work, which showed good performance across a large range of applications including AC/DC and DC/AC converters, power factor correction, active filters, isolation amplifiers, and motor control. Since switching frequencies can exceed 30 kHz, time delays and bandwidth limitations can no longer be neglected as they were in that earlier work. Several models are therefore proposed in this paper to analyze the system behaviour. These models yield analytical expressions relating the maximum oscillation frequency to the time delay and/or the additional filter parameters. Analysis of the current spectra confirms the accuracy of the analytical expressions for each model presented in this work. An experimental study shows that every element of the electronic board that has a low-pass effect or delays command signals needs to be included in the model in order to obtain a perfect match between calculations, simulations, and practical results.
The Internet of Things (IoT) is stepping out of its infancy into full maturity and requires massive data processing and storage. Unfortunately, because of resource constraints, short-range communication, and self-organization, IoT systems usually resort to cloud or fog nodes for outsourced computation and storage, which has brought about a series of novel and challenging security and privacy threats. One of the critical challenges of operating numerous IoT devices is therefore the capacity to manage them and their data, and in particular to decide from which devices or Edge clouds to accept join or interaction requests. This paper presents a design concept for an IoT data management platform, along with an implementation of data management and lineage traceability based on blockchain and smart contracts. The design addresses two major challenges: how to implement effective data management and enable sound interoperability for trusted groups of linked Things, and how to settle conflicts between untrusted IoT devices and their requests while preserving security and privacy. Experimental results show that the system scales well, with the loss of computing and communication performance remaining within an acceptable range, and that it effectively defends against unauthorized access while enabling data provenance and transparency. This verifies the feasibility and efficiency of the design concept, which provides private, fine-grained, and integrity-protected data management for IoT devices through the blockchain-based data management platform.
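The paper's blockchain and smart-contract implementation is not shown in the abstract. Purely as a loose illustration of the lineage-traceability idea, tamper-evident provenance can be approximated by hash-chaining access records, as in the Python sketch below; the field names, structure, and the absence of consensus are all simplifying assumptions, not the authors' platform.

import hashlib
import json
import time

class LineageLog:
    """Append-only, hash-chained log approximating on-chain data lineage."""
    def __init__(self):
        self.entries = []

    def append(self, device_id, action, payload_digest):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"device": device_id, "action": action,
                  "payload": payload_digest, "ts": time.time(),
                  "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)

    def verify(self):
        # Recompute each entry hash and check the chain linkage.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True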
Current authentication systems based on passwords and PIN codes are not sufficient to guard against attacks by malicious users. For this reason, several studies in recent years have proposed identifying users based on their typing dynamics. In this paper, we propose a deep neural network architecture that discriminates between different users using a set of keystroke features. The idea behind the proposed method is to identify users silently and continuously as they type on a monitored system. To perform such user identification effectively, we propose a feature model able to capture the typing style specific to each user. The proposed approach is evaluated on a large dataset derived by integrating two real-world datasets from existing studies. The merged dataset contains a total of 1530 different users, each contributing a set of typing samples. Several deep neural networks, with an increasing number of hidden layers and two different sets of features, are tested to find the best configuration. The best classifier achieves a precision of 0.997, a recall of 0.99, and an accuracy of 99%, using an MLP deep neural network with 9 hidden layers. Finally, the performance of the deep learning approach is compared with that of a traditional decision-tree machine learning algorithm, attesting to the effectiveness of deep learning-based classifiers in the domain of keystroke analysis.
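As a rough sketch of the kind of 9-hidden-layer MLP reported, and not the authors' model, the following scikit-learn snippet shows how such a classifier could be trained; the layer width, train/test split, and the placeholder feature matrix X (e.g., hold times and digraph latencies) with user labels y are assumptions.

from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def train_keystroke_mlp(X, y):
    """X: keystroke feature vectors; y: user identifiers (loading omitted)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
    clf = MLPClassifier(hidden_layer_sizes=(256,) * 9,  # 9 hidden layers, width assumed
                        activation="relu", max_iter=200)
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))
    return clf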
Modern computer peripherals are diverse in their capabilities and functionality, ranging from keyboards and printers to smartphones and external GPUs. In recent years, peripherals increasingly connect over a small number of standardized communication protocols, including USB, Bluetooth, and NFC. The host operating system is responsible for managing these devices; however, malicious peripherals can request additional functionality from the OS resulting in system compromise, or can craft data packets to exploit vulnerabilities within OS software stacks. Defenses against malicious peripherals to date only partially cover the peripheral attack surface and are limited to specific protocols (e.g., USB). In this paper, we propose Linux (e)BPF Modules (LBM), a general security framework that provides a unified API for enforcing protection against malicious peripherals within the Linux kernel. LBM leverages the eBPF packet filtering mechanism for performance and extensibility and we provide a high-level language to facilitate the development of powerful filtering functionality. We demonstrate how LBM can provide host protection against malicious USB, Bluetooth, and NFC devices; we also instantiate and unify existing defenses under the LBM framework. Our evaluation shows that the overhead introduced by LBM is within 1 μs per packet in most cases, application and system overhead is negligible, and LBM outperforms other state-of-the-art solutions. To our knowledge, LBM is the first security framework designed to provide comprehensive protection against malicious peripherals within the Linux kernel.
Conversational agents complement traditional teaching-learning instruments by proposing new designs for knowledge creation and learning analysis across organizational environments. Means of building a common educational background in both industry and academia are of interest for ensuring educational effectiveness and consistency. Such a context requires transferable practices and becomes the basis for Agile adoption in Higher Education, at both the curriculum and operational levels. The current work proposes a model for delivering Agile Scrum training through an assistive web-based conversational service, where analytics are collected to provide an overview of learners' knowledge paths. Besides its applicability in the Software Engineering (SE) industry, the model is intended to support the academic SE curriculum. A user-acceptance test was carried out with 200 undergraduate students, and interaction patterns were characterized for two conversational strategies.
Security attacks against the Internet of Things (IoT) are on the rise and can lead to drastic consequences. Data confidentiality is typically based on a strong symmetric-key algorithm to guard against confidentiality attacks. However, there is a need for efficient lightweight cipher schemes for a number of IoT applications. Recently, a set of lightweight cryptographic algorithms based on the dynamic key approach has been presented; they require a small number of rounds to minimize computation and resource overhead without degrading the security level. This paper follows the same logic and provides a new flexible lightweight cipher, with or without a chaining operation mode, with a simple round function and a dynamic key for each input message. Consequently, the proposed cipher scheme can be used for real-time applications and/or devices with limited resources, such as Multimedia Internet of Things (MIoT) systems. The importance of the proposed solution is that it produces dynamic cryptographic primitives and mixes selected blocks in a dynamic pseudo-random manner. Accordingly, different plaintext messages are encrypted differently, and the avalanche effect is preserved. Finally, security and performance analyses are presented to validate the efficiency and robustness of the proposed cipher variants.
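The abstract does not specify the round function, so the Python toy below only illustrates the general dynamic-key idea: per-message primitives (a substitution table and a block permutation) derived from a secret and a nonce. It is a didactic sketch with invented details, not the proposed cipher, and it is not secure.

import hashlib
import random

def derive_dynamic_key(secret: bytes, nonce: bytes):
    """Derive per-message primitives, a byte S-box and a keyed PRNG for
    block permutation, both seeded from SHA-256(secret || nonce)."""
    seed = hashlib.sha256(secret + nonce).digest()
    rng = random.Random(seed)          # deterministic keyed PRNG (toy only)
    sbox = list(range(256))
    rng.shuffle(sbox)
    return sbox, rng

def toy_encrypt(message: bytes, secret: bytes, nonce: bytes, block=16):
    sbox, rng = derive_dynamic_key(secret, nonce)
    padded = message + bytes(-len(message) % block)       # zero-pad to block size
    blocks = [bytearray(padded[i:i + block]) for i in range(0, len(padded), block)]
    rng.shuffle(blocks)                # mix selected blocks pseudo-randomly
    out = bytearray()
    for b in blocks:
        out.extend(sbox[x] for x in b) # simple round: byte substitution
    return bytes(out)

Because the permutation and S-box are derived deterministically from the secret and nonce, a legitimate receiver could invert both steps; the point of the sketch is only that every message gets different primitives.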
Internet of Things (IoT) systems are vulnerable to many security threats that may have drastic impacts. Existing cryptographic solutions cater neither for the limitations of resource-constrained IoT devices nor for the real-time requirements of some IoT applications. Therefore, it is essential to design new efficient cipher schemes with low overhead in terms of delay and resource requirements. In this paper, we propose a lightweight stream cipher scheme that relies, on the one hand, on a dynamic key-dependent approach to achieve a high security level and, on the other hand, on a few simple operations to minimize overhead. In our approach, the cryptographic primitives change in a dynamic, lightweight manner for each input block. A security and performance study, together with experiments, validates that the proposed cipher achieves a high level of efficiency and robustness, making it suitable for resource-constrained IoT devices.
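As before, the concrete construction is not given in the abstract; the sketch below only illustrates the stream-cipher flavor of the idea, a keystream whose per-block state evolves with a counter so that the primitive changes for each input block. It is a toy under assumed parameters (SHA-256 state, 32-byte blocks), not the proposed scheme, and not secure.

import hashlib

def keystream_blocks(key: bytes, nonce: bytes, n_blocks: int, block=32):
    """Yield one keystream block per input block by hashing an evolving state."""
    state = hashlib.sha256(key + nonce).digest()
    for i in range(n_blocks):
        # The per-block primitive changes with the counter, i.e. dynamically.
        state = hashlib.sha256(state + i.to_bytes(4, "big")).digest()
        yield state[:block]

def xor_encrypt(plaintext: bytes, key: bytes, nonce: bytes, block=32):
    chunks = [plaintext[i:i + block] for i in range(0, len(plaintext), block)]
    out = bytearray()
    for chunk, ks in zip(chunks, keystream_blocks(key, nonce, len(chunks), block)):
        out.extend(c ^ k for c, k in zip(chunk, ks))
    return bytes(out)

Decryption is the same XOR operation applied with the same key and nonce.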
The large amounts of synchrophasor data obtained by Phasor Measurement Units (PMUs) provide dynamic visibility into power systems. Extracting reliable information from the data can enhance power system situational awareness. However, data quality often suffers from data losses, bad data, and cyber data attacks, and data privacy is an increasing concern as well. In this paper, we discuss our recently proposed framework of data recovery, error correction, data privacy enhancement, and event identification methods that exploits the intrinsic low-dimensional structures in the high-dimensional spatial-temporal blocks of PMU data. Our data-driven approaches are computationally efficient with provable analytical guarantees. The data recovery method can recover the ground-truth data even if simultaneous and consecutive data losses and errors occur across all PMU channels for some time. We can identify PMU channels that are under false data injection attacks by locating abnormal dynamics in the data. Using the data recovery method, the operator can accurately extract the information by collectively processing the privacy-preserving data from many PMUs, whereas a cyber intruder with access to partial measurements cannot recover the data correctly even using the same approach. A real-time event identification method is also proposed, based on the new idea of characterizing an event by the low-dimensional subspace spanned by the dominant singular vectors of the data matrix.
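The specific algorithms and guarantees are in the cited work; the numpy sketch below only conveys the low-rank intuition behind them, filling missing PMU entries by iteratively projecting the spatial-temporal data block onto a rank-r approximation. The rank, the iteration count, and the simple alternating scheme are illustrative choices, not the authors' method.

import numpy as np

def lowrank_complete(M, mask, rank=3, iters=100):
    """Fill missing entries (mask == False) of a PMU data block M by
    alternating a rank-`rank` SVD projection with re-imposing the
    observed entries."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                   # keep only the dominant subspace
        X_low = (U * s) @ Vt
        X = np.where(mask, M, X_low)     # observed entries stay fixed
    return X

The same kind of decomposition also exposes the dominant singular vectors that, per the abstract, are used to characterize events.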
Phishing is a major problem of the internet era, in which the security of data on the web is of increasing importance. Phishing is one of the most harmful ways to covertly obtain credential information such as usernames, passwords, or account numbers from users. Users are often unaware of this type of attack and may later become part of further phishing attacks themselves. The consequences can include financial losses, exposure of personal information, and damage to a brand's reputation and trust, so detecting phishing sites is necessary. In this paper we design a framework for phishing detection based on the URL.
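The framework's details are not given in the abstract; as an illustration only, the Python snippet below shows the kind of lexical URL features commonly used for URL-based phishing detection, which would then feed a classifier. The specific feature set is an assumption, not the paper's design.

from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Lexical features commonly used for URL-based phishing detection."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "host_length": len(host),
        "num_dots": host.count("."),
        "num_hyphens": host.count("-"),
        "has_at_symbol": "@" in url,
        "has_ip_host": host.replace(".", "").isdigit(),
        "uses_https": parsed.scheme == "https",
        "num_query_params": len(parsed.query.split("&")) if parsed.query else 0,
    }

# Example: url_features("http://secure-login.example.com/verify?acct=1")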
Cross-Site Request Forgery (CSRF) is one of the oldest and simplest attacks on the Web, yet it is still effective on many websites and it can lead to severe consequences, such as economic losses and account takeovers. Unfortunately, tools and techniques proposed so far to identify CSRF vulnerabilities either need manual reviewing by human experts or assume the availability of the source code of the web application. In this paper we present Mitch, the first machine learning solution for the black-box detection of CSRF vulnerabilities. At the core of Mitch there is an automated detector of sensitive HTTP requests, i.e., requests which require protection against CSRF for security reasons. We trained the detector using supervised learning techniques on a dataset of 5,828 HTTP requests collected on popular websites, which we make available to other security researchers. Our solution outperforms existing detection heuristics proposed in the literature, allowing us to identify 35 new CSRF vulnerabilities on 20 major websites and 3 previously undetected CSRF vulnerabilities on production software already analyzed using a state-of-the-art tool.
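Mitch's actual features and model are described in the paper itself; the sketch below only illustrates the general pattern of training a supervised detector of sensitive HTTP requests from simple request-derived features. The feature function, the request dictionary format, and the choice of a random forest are assumptions for illustration.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline

def request_features(req):
    """req: dict with 'method', 'url', and 'params' keys (format assumed)."""
    return {
        "method": req["method"],
        "num_params": len(req["params"]),
        "has_token_like_param": any("token" in p or "csrf" in p
                                    for p in req["params"]),
        "path_depth": req["url"].count("/"),
    }

def train_sensitive_request_detector(requests, labels):
    """labels: 1 if the request is security-sensitive, else 0."""
    pipeline = make_pipeline(DictVectorizer(sparse=False),
                             RandomForestClassifier(n_estimators=100))
    pipeline.fit([request_features(r) for r in requests], labels)
    return pipeline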
Virtual reality (VR) has recently become a promising technology in both industry and academia due to its potential applications in immersive experiences, including websites, games, tourism, and museums. VR provides compelling 3-Dimensional (3D) experiences but requires a very large amount of content, such as images, textures, depth, and focal-length information. However, in order to bring VR to a variety of devices, especially mobile devices, ultra-high transmission rates and extremely low latency are required, which is a major challenge. To address this problem, this paper proposes a novel combined model that transforms the computing capability of a VR device into an equivalent caching amount while maintaining low latency and fast transmission. In addition, classic caching models are applied to the combined computing and caching capabilities, which extends easily to multi-user models.
This article considers the Solovay-Strassen probabilistic primality test and its possible modifications. The test determines in the shortest possible time whether a number is prime or not. The C# programming language was used to implement the algorithm in practice.
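For reference, a compact Python version of the Solovay-Strassen test (Jacobi symbol plus Euler's criterion) is shown below; the article's C# implementation and its modifications are not reproduced here.

import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    """Return False if n is composite, True if n is probably prime."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        j = jacobi(a, n) % n
        # Euler's criterion: a^((n-1)/2) must equal the Jacobi symbol mod n.
        if j == 0 or pow(a, (n - 1) // 2, n) != j:
            return False
    return True

Each round that passes halves the probability of wrongly declaring a composite number prime, so the error probability after k rounds is at most 2^(-k).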
We are witnessing a real digital revolution in which companies prefer cloud computing because of its ability to offer a simple way to deploy the services they need. However, this digital transformation has generated various security challenges, such as privacy vulnerabilities to cyber-attacks. In this work we present a new architecture for a hybrid Intrusion Detection System (IDS) for virtual private clouds. The architecture combines network-based and host-based intrusion detection to overcome the limitations of each, covering the case in which an intruder bypasses the network-based IDS and gains access to a host, with the aim of enhancing security in private cloud environments. We propose a non-traditional mechanism for the detection engine: machine learning (ML) algorithms can be used to build both parts of the IDS, detecting malicious traffic in the network-based part as an additional layer of network security, and detecting anomalies in the host-based part to provide more privacy and confidentiality in the virtual machine. Training an Artificial Neural Network (ANN) is outside the scope of this work; we only propose a new scheme for an ANN-based IDS. In future work we will present the details of the ANN architecture and parameters, as well as the results of real experiments.
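The authors defer the ANN design to future work; purely to illustrate the idea of fusing a network-level and a host-level anomaly detector, the Python sketch below uses isolation forests instead of an ANN, with a simple OR fusion rule. The model choice, features, and fusion rule are assumptions, not the paper's scheme.

from sklearn.ensemble import IsolationForest

class HybridIDS:
    """Raises an alert if either the network-level or the host-level
    anomaly detector flags the observation (simple OR fusion)."""
    def __init__(self):
        self.net_model = IsolationForest(contamination=0.01)
        self.host_model = IsolationForest(contamination=0.01)

    def fit(self, net_features, host_features):
        # Both models are trained on (mostly) benign baseline data.
        self.net_model.fit(net_features)
        self.host_model.fit(host_features)

    def is_alert(self, net_sample, host_sample):
        net_flag = self.net_model.predict([net_sample])[0] == -1
        host_flag = self.host_model.predict([host_sample])[0] == -1
        return net_flag or host_flag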
Nowadays, video streaming over HTTP, using adaptive video techniques, is one of the most dominant Internet applications. Network-assisted approaches have been proposed and are being standardized in order to provide high QoE for the end-users of such applications. SAND is a recent MPEG standard in which DASH Aware Network Elements (DANEs) are introduced for this purpose. As web caches are one of the main components of the SAND architecture, their location and connectivity play an important role in the user's QoE. The nature of SAND and DANE provides a good foundation for software-controlled virtualized DASH environments, and in this paper we propose a cache location algorithm and a cache migration algorithm for virtualized SAND deployments. The optimal locations for the virtualized DANEs are determined by an SDN controller, which migrates them based on gathered statistics. The performance of the resulting system shows that, when SDN and NFV technologies are leveraged, software-controlled virtualized approaches can provide an increase in QoE.
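The paper's location and migration algorithms are not reproduced here; the Python sketch below only illustrates the flavor of the placement decision an SDN controller could make, choosing k cache locations that minimize total client-to-nearest-cache delay by brute-force search over candidate placements (feasible only for small instances). The topology representation and objective are assumptions.

import itertools

def place_caches(delay, clients, candidate_nodes, k):
    """delay[c][n]: delay from client c to node n, e.g. from controller
    statistics; pick k cache locations minimizing the sum over clients
    of the delay to their nearest cache."""
    best_cost, best_set = float("inf"), None
    for combo in itertools.combinations(candidate_nodes, k):
        cost = sum(min(delay[c][n] for n in combo) for c in clients)
        if cost < best_cost:
            best_cost, best_set = cost, combo
    return best_set, best_cost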
Today's rapid progress in the physical implementation of quantum computers demands scalable synthesis methods to map practical logic designs to quantum architectures. Many quantum algorithms apply classical functions to superpositions of states. Motivated by these trends, in this paper we show the design of quantum circuits that perform modular exponentiation using two different approaches. In the design phase, we first generate a quantum circuit from a Verilog implementation of the exponentiation function using synthesis tools and then apply two different Quantum Error Correction (QECC) techniques. Finally, the circuit is further optimized using the Linear Nearest Neighbor (LNN) property. We demonstrate the effectiveness of our approach by generating a set of networks for the reversible modular exponentiation function for a set of input values. We conclude by summarizing the results with a cost analysis of the developed approaches. Experimental results show that, depending on the choice of QECC method, the performance figures can vary by up to 11% in T-count, 10% in number of qubits, and 8% in number of gates.
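The Verilog design and the synthesized quantum circuits are the paper's contribution and are not shown here; for reference, the classical function being realized is standard square-and-multiply modular exponentiation, sketched below in Python. The reversible/quantum realization of course differs from this classical loop.

def mod_exp(base, exponent, modulus):
    """Square-and-multiply modular exponentiation."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                 # multiply in when the current bit is 1
            result = (result * base) % modulus
        base = (base * base) % modulus   # square for the next bit
        exponent >>= 1
    return result

# mod_exp(7, 560, 561) == pow(7, 560, 561)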
Unlike faults in classical systems, faults in Cyber-Physical Systems are often caused by the system's interaction with its physical environment and social context, rendering these faults harder to diagnose. To complicate matters further, knowledge about the behavior and failure modes of a system is often collected in different models. We show how three of those models, namely attack trees, fault trees, and timed failure propagation graphs, can be converted into Halpern-Pearl causal models, combined into a single holistic causal model, and analyzed with actual causality reasoning to detect and explain unwanted events. Halpern-Pearl models have several advantages over their source models, particularly that they allow for modeling preemption, consider the non-occurrence of events, and can incorporate additional domain knowledge. Furthermore, such holistic models allow for analysis across model boundaries, enabling detection and explanation of events that span more than a single model. Our contribution delineates a semi-automatic process to (1) convert different models into Halpern-Pearl causal models, (2) combine these models into a single holistic model, and (3) reason about system failures. We illustrate our approach with the help of an Unmanned Aerial Vehicle case study.
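The full Halpern-Pearl definitions (which also handle preemption) are richer than what a short sketch can show; the Python toy below only conveys the flavor of counterfactual reasoning over boolean structural equations that mix, say, a fault-tree hardware fault with an attack-tree spoofing event. The model, variables, and the simplified but-for criterion are invented for illustration and are not the paper's UAV case study.

def evaluate(equations, context):
    """Evaluate boolean structural equations (listed in dependency order)
    under the given assignment of exogenous variables in `context`."""
    values = dict(context)
    for var, fn in equations.items():
        values[var] = fn(values)
    return values

# Toy holistic model: a hardware fault or a GPS spoofing attack propagates to a crash.
equations = {
    "sensor_failure": lambda v: v["hardware_fault"],
    "bad_altitude":   lambda v: v["sensor_failure"] or v["gps_spoofing"],
    "crash":          lambda v: v["bad_altitude"],
}
actual = {"hardware_fault": False, "gps_spoofing": True}
assert evaluate(equations, actual)["crash"]

# Simplified but-for check: with gps_spoofing counterfactually switched off,
# the crash does not occur, so it counts as a cause under this weak criterion.
assert not evaluate(equations, dict(actual, gps_spoofing=False))["crash"]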
Modern large-scale technical systems often undergo iterative changes to their behaviour while requiring validated quality, which is not easy to achieve completely with traditional testing. Regression verification is a powerful tool for the formal correctness analysis of software-driven systems. By proving that a new revision of the software behaves similarly to the original version, some of the trust that the old software and system earned during validation processes or operation histories can be carried over to the new revision. This trust inheritance through formal analysis relies on a number of implicit assumptions that are not self-evident and easy to miss, which may lead to a false sense of safety induced by a misunderstood regression verification process. This paper aims to point out hidden, implicit assumptions of regression verification in the context of cyber-physical systems by making them explicit through practical examples. An explicit trust inheritance analysis clarifies for engineers the extent of the trust that regression verification provides and consequently helps them use this formal technique for system validation.
In this paper we propose a solution to support iOS developers in creating better applications: static analysis is used to investigate source code and detect secure coding issues while simultaneously pointing out good practices and/or secure APIs they should use.
As digital microfluidic biochips (DMFBs) make the transition to the marketplace for commercial exploitation, security and intellectual property (IP) protection are emerging as important design considerations. Recent studies have shown that DMFBs are vulnerable to reverse engineering aimed at stealing biomolecular protocols (IP theft). The IP piracy of proprietary protocols may lead to significant losses for pharmaceutical and biotech companies. The micro-electrode-dot-array (MEDA) is a next-generation DMFB platform that supports real-time sensing of droplets and has the added advantage of important security protections. However, real-time sensing offers opportunities to an attacker to steal the biochemical IP. We show that the daisy-chaining of microelectrodes and the use of one-time-programmability in MEDA biochips provide effective bitstream scrambling of biochemical protocols. To examine the strength of this solution, we develop a SAT attack that can unscramble the bitstreams through repeated observations of bioassays executed on the MEDA platform. Based on insights gained from the SAT attack, we propose an advanced defense against IP theft. Simulation results using real-life biomolecular protocols confirm that while the SAT attack is effective for simple instances, our advanced defense can thwart it for realistic MEDA biochips and real-life protocols.
Software has become an essential component of modern life, but when software vulnerabilities threaten the security of users, new ways of analyzing software security must be explored. Using the National Institute of Standards and Technology's Juliet Java Suite, which contains thousands of examples of defective Java methods for a variety of vulnerabilities, a prototype tool was developed that implements an array of Long Short-Term Memory (LSTM) Recurrent Neural Networks to detect vulnerabilities within source code. The tool employs various data preparation methods to be independent of coding style and to automate the process of extracting methods, labeling data, and partitioning the dataset. The result is a prototype command-line utility that generates an n-dimensional vulnerability prediction vector. The experimental evaluation using 44,495 test cases indicates that the tool can achieve an accuracy higher than 90% for 24 out of 29 different types of CWE vulnerabilities.
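The prototype's exact architecture, tokenization, and data preparation are described in the paper; the PyTorch sketch below only shows the general shape of an LSTM classifier over token sequences of a Java method. The vocabulary size, embedding and hidden dimensions, and the multi-label output head are assumptions for illustration.

import torch
import torch.nn as nn

class VulnLSTM(nn.Module):
    """LSTM over tokenized method bodies; outputs one score per CWE class."""
    def __init__(self, vocab_size, num_classes, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.lstm(embedded)         # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])                 # logits: (batch, num_classes)

# Training would use nn.BCEWithLogitsLoss to produce a multi-label,
# n-dimensional vulnerability prediction vector.
model = VulnLSTM(vocab_size=10000, num_classes=29)
logits = model(torch.randint(1, 10000, (4, 200)))   # e.g., 4 methods, 200 tokens each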
Conversational systems are computer programs that interact with users using natural language. Considering the complexity and interaction of the different components involved in building intelligent conversational systems that can perform diverse tasks, a promising approach to facilitate their development is to use multiagent systems (MAS). This paper reviews the main concepts and history of conversational systems and introduces an architecture based on MAS. This architecture was designed to support the development of conversational systems in the domain chosen by the developer while also providing reusable, built-in dialogue control. We present a practical application in the healthcare domain and observed that the architecture can help developers create conversational systems in different domains while providing reusable and centralized dialogue control. We also present lessons learned that can help steer future research on engineering domain-specific conversational systems.