Biblio

Found 364 results

Filters: Keyword is reliability
2018-01-10
Wang, P., Safavi-Naini, R..  2017.  Interactive message transmission over adversarial wiretap channel II. IEEE INFOCOM 2017 - IEEE Conference on Computer Communications. :1–9.

In Wyner's wiretap II model of communication, Alice and Bob are connected by a channel that can be eavesdropped by an adversary with unlimited computation who can select a fraction of the communication to view, and the goal is to provide perfect information-theoretic security. Information-theoretic security is increasingly important because of the threat of quantum computers that can effectively break the algorithms and protocols used in today's public key infrastructure. We consider interactive protocols for the wiretap II channel with an active adversary who can eavesdrop and add adversarial noise to the eavesdropped part of the codeword. These channels capture the wireless setting where malicious eavesdroppers within reception distance of the transmitter can eavesdrop on the communication and introduce a jamming signal into the channel. We derive a new upper bound R ≤ 1 - ρ for the rate of interactive protocols over the two-way wiretap II channel with active adversaries, and construct a perfectly secure protocol family with achievable rate 1 - 2ρ + ρ². This is strictly higher than the rate of the best one-round protocol, which is 1 - 2ρ, hence showing that interaction improves the rate. We also prove that even with interaction, reliable communication is possible only if ρ < 1/2. An interesting aspect of this work is that our bounds also hold in a network setting where two nodes are connected by n paths, a ρ fraction of which are corrupted by the adversary. We discuss our results, relate them to other work, and propose directions for future work.
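
To make the rate comparison visible at a glance, the bounds quoted in the abstract can be laid out as follows (our restatement; the factored form of the achievable rate is the only addition):

```latex
% Two-way wiretap II channel, active adversary controlling a fraction \rho:
R \le 1 - \rho                                            % new upper bound (this paper)
R_{\mathrm{interactive}} = 1 - 2\rho + \rho^{2} = (1-\rho)^{2}  % achievable with interaction
R_{\mathrm{one\text{-}round}} = 1 - 2\rho                 % best one-round protocol
% Since (1-\rho)^{2} - (1-2\rho) = \rho^{2} > 0 for all \rho > 0,
% interaction strictly improves the rate; reliability further requires \rho < 1/2.
```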

2018-02-21
Zheng, H., Zhang, X..  2017.  Optimizing Task Assignment with Minimum Cost on Heterogeneous Embedded Multicore Systems Considering Time Constraint. 2017 IEEE 3rd International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC), and IEEE International Conference on Intelligent Data and Security (IDS). :225–230.

Time and cost are the most critical performance metrics for computer systems, from PCs and mainframes to embedded systems, and especially for battery-powered embedded systems such as smartphones. Most previous work focuses on saving energy in a deterministic way, taking the average or worst-case scenario into account. However, such deterministic approaches are usually inappropriate for modeling energy consumption because of uncertainties in conditional instructions on processors and time-varying external environments. By studying the relationship between energy consumption, execution time, and task completion probability on heterogeneous multi-core architectures, this paper proposes an optimal energy-efficiency and system-performance model and the OTHAP (Optimizing Task Heterogeneous Assignment with Probability) algorithm to address the Processor and Voltage Assignment with Probability (PVAP) problem for data-dependent aperiodic tasks in real-time embedded systems, ensuring that all tasks can be completed under the time constraint with a guaranteed probability. We adopt a task DAG (Directed Acyclic Graph) to model the PVAP problem. We first use a processor scheduling algorithm to map the task DAG onto a set of voltage-variable processors, and then use our dynamic programming algorithm to assign a proper voltage to each task. The experimental results demonstrate that our approach outperforms state-of-the-art algorithms in this field (maximum improvement of 24.6%).

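For illustration, here is a minimal sketch of the probabilistic constraint being optimized: it exhaustively searches voltage assignments for a two-task chain so that the completion-probability requirement is met at minimum energy. The paper's OTHAP algorithm handles general task DAGs with dynamic programming; the chain structure and all numbers below are our illustrative assumptions.

```python
# Toy probability-aware voltage assignment (simplified stand-in for OTHAP).
import itertools

# Each task offers voltage levels mapped to (energy, {execution time: probability}).
TASKS = [
    {"low":  (2.0, {2: 0.7, 3: 0.3}),
     "high": (5.0, {1: 0.9, 2: 0.1})},
    {"low":  (3.0, {3: 0.6, 4: 0.4}),
     "high": (7.0, {2: 0.8, 3: 0.2})},
]
DEADLINE, MIN_PROB = 5, 0.8

def completion_prob(choice):
    """P(sum of task times <= DEADLINE) for one voltage assignment."""
    pmf = {0: 1.0}                        # distribution of accumulated time
    for task, level in zip(TASKS, choice):
        _, times = task[level]
        nxt = {}
        for t0, p0 in pmf.items():        # convolve with this task's time pmf
            for t, p in times.items():
                nxt[t0 + t] = nxt.get(t0 + t, 0.0) + p0 * p
        pmf = nxt
    return sum(p for t, p in pmf.items() if t <= DEADLINE)

best = None
for choice in itertools.product(*[task.keys() for task in TASKS]):
    energy = sum(task[level][0] for task, level in zip(TASKS, choice))
    if completion_prob(choice) >= MIN_PROB and (best is None or energy < best[0]):
        best = (energy, choice)

print("cheapest assignment meeting the probability bound:", best)
```
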
2018-03-05
Mohlala, M., Ikuesan, A. R., Venter, H. S..  2017.  User Attribution Based on Keystroke Dynamics in Digital Forensic Readiness Process. 2017 IEEE Conference on Application, Information and Network Security (AINS). :124–129.

As technology develops, security risk increases with it. This affects most organizations, irrespective of size, as they depend on increasingly pervasive technology to perform their daily tasks. This dependency on technology has introduced diverse security vulnerabilities into organizations, which require reliable preparedness for the probable forensic investigation of unauthorized incidents. Keystroke dynamics is one of the cost-effective methods for collecting potential digital evidence. This paper presents a keystroke pattern analysis technique suitable for collecting complementary potential digital evidence for forensic readiness. The proposed technique relies on extracting a reliable behavioral signature from user activity. Experimental validation demonstrates its effectiveness using a multi-scheme classifier. The overall goal is to have forensically sound and admissible keystroke evidence that can be presented during a forensic investigation to minimize the cost and time of the investigation.
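
As a rough illustration of the kind of timing features such keystroke-dynamics work builds on, the sketch below computes dwell times (key held down) and flight times (gap between consecutive keys) and summarizes them into a per-session signature. The event format, values, and feature set are our assumptions, not the authors' exact scheme.

```python
# Toy keystroke-timing feature extraction for a behavioural signature.
from statistics import mean, stdev

# (key, press_time_ms, release_time_ms) for one typing session (illustrative)
events = [("h", 0, 95), ("e", 140, 230), ("l", 290, 370),
          ("l", 430, 515), ("o", 580, 660)]

dwell = [r - p for _, p, r in events]                      # hold durations
flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

# Per-user timing statistics form a simple signature that could later be
# fed to a classifier (the paper uses a multi-scheme classifier).
signature = {
    "dwell_mean": mean(dwell), "dwell_std": stdev(dwell),
    "flight_mean": mean(flight), "flight_std": stdev(flight),
}
print(signature)
```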

2018-01-23
Moghaddam, F. F., Wieder, P., Yahyapour, R..  2017.  A policy-based identity management schema for managing accesses in clouds. 2017 8th International Conference on the Network of the Future (NOF). :91–98.

Security challenges are the most important obstacles for the advancement of IT-based on-demand services and cloud computing as an emerging technology. Inconsistency among identity management models based on defined policies and the various security levels of different cloud servers is one of the most challenging issues in clouds. In this paper, a policy-based user authentication model is presented to provide reliable and scalable identity management and to map cloud users' access requests to the defined policies of cloud servers. The proposed schema provides several components to define access policies by cloud servers, to apply policies based on a structural and reliable ontology, to manage user identities, and to semantically map access requests by cloud users to the defined policies. Finally, the reliability and efficiency of this policy-based authentication schema are evaluated through performance, security, and competitive analyses. Overall, the results show that this model meets the defined demands of the research to enhance the reliability and efficiency of identity management in cloud computing environments.
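
A minimal sketch of the request-to-policy mapping step described above, assuming a toy attribute-based policy format (the attribute names, servers, and matching rule are illustrative, not the paper's ontology):

```python
# Toy mapping of a user's access request onto server-defined policies.
policies = {
    "storage-server": {"roles": {"admin", "auditor"}, "min_level": 2},
    "billing-server": {"roles": {"admin"}, "min_level": 3},
}

def authorize(request):
    """Grant access only if the target server's policy admits the request."""
    policy = policies.get(request["server"])
    return bool(policy) and (request["role"] in policy["roles"]
                             and request["level"] >= policy["min_level"])

print(authorize({"server": "storage-server", "role": "auditor", "level": 2}))  # True
print(authorize({"server": "billing-server", "role": "auditor", "level": 3}))  # False
```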

2018-05-30
Saleh, M., Ratazzi, E. P., Xu, S..  2017.  A Control Flow Graph-Based Signature for Packer Identification. MILCOM 2017 - 2017 IEEE Military Communications Conference (MILCOM). :683–688.

The large number of malicious files produced daily outpaces the current capacity of malware analysis and detection. For example, Intel Security Labs reported that during the second quarter of 2016 their systems found more than 40M new malware samples [1]. The damage of malware attacks is also increasingly devastating, as witnessed by the recent Cryptowall malware, which has reportedly generated more than $325M in ransom payments to its perpetrators [2]. In terms of defense, it has been widely accepted that the traditional approach based on byte-string signatures is increasingly ineffective, especially for new malware samples and sophisticated variants of existing ones. New techniques are therefore needed for effective defense against malware. Motivated by this problem, the paper investigates a new defense technique against malware. The technique presented in this paper is used for automatic identification of the packers that are used to obfuscate malware programs. Signatures of malware packers and obfuscators are extracted from the control flow graphs (CFGs) of malware samples. Unlike conventional byte signatures, which can be evaded by simply modifying one or more bytes in malware samples, these signatures are more difficult to evade. For example, CFG-based signatures are shown to be resilient against instruction modification and shuffling, as a single signature is sufficient for detecting mildly different versions of the same malware. Last but not least, the process for extracting CFG-based signatures is also made automatic.
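
A toy illustration of why a CFG-derived signature resists byte-level edits: hashing a structural profile of the graph, rather than raw bytes, is unchanged by renaming or shuffling instructions inside basic blocks. This is our simplification; the paper's actual signature construction is more elaborate.

```python
# Hash a relabeling-invariant structural profile of a control flow graph.
import hashlib

# basic block id -> successor block ids (a tiny example CFG)
cfg = {0: [1, 2], 1: [3], 2: [3], 3: []}

def cfg_signature(graph):
    indeg = {n: 0 for n in graph}
    for succs in graph.values():
        for s in succs:
            indeg[s] += 1
    # The sorted multiset of (in-degree, out-degree) pairs is invariant to
    # block relabeling and to byte-level edits inside basic blocks.
    profile = sorted((indeg[n], len(graph[n])) for n in graph)
    return hashlib.sha256(str(profile).encode()).hexdigest()[:16]

print(cfg_signature(cfg))
```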

2018-03-19
Salem, A., Liao, X., Shen, Y., Lu, X..  2017.  Provoking the Adversary by Dual Detection Techniques: A Game Theoretical Framework. 2017 International Conference on Networking and Network Applications (NaNA). :326–329.

Establishing secret and reliable wireless communication is a challenging task of paramount importance. In this paper, we investigate the physical-layer security of the legitimate transmission link of a user who assists an Intrusion Detection System (IDS) in detecting eavesdropping and jamming attacks, in the presence of an adversary capable of conducting either attack. The user faces the choice of whether to transmit, thus becoming vulnerable to an eavesdropping or jamming attack, or to keep silent, in which case his/her transmission is delayed. The adversary likewise faces the choice of whether to conduct an eavesdropping or a jamming attack without getting detected. We model the interactions between the user and the adversary as a two-state stochastic game. Explicit solutions characterize some properties while highlighting interesting strategies embraced by the user and the adversary. Results show that our proposed system outperforms current systems in terms of communication secrecy.

2018-02-02
You, J., Shangguan, J., Sun, Y., Wang, Y..  2017.  Improved trustworthiness judgment in open networks. 2017 International Smart Cities Conference (ISC2). :1–2.

The collaborative recommendation mechanism helps the subject in an open network efficiently find enough referrers who directly interacted with the object and obtain their trust data. An uncertainty analysis of the collected trust data selects the reliable trust data of trustworthy referrers, and the statistical trust value at a given reliability is then calculated for any object. The subject can then judge the object's trustworthiness and decide about interaction based on a given threshold. The feasibility of this method is verified by three experiments designed to validate the model's ability to withstand malicious services and exaggeration and slander attacks. The interactive success rate is significantly improved by the new model, and malicious entities are distinguished more effectively than with the comparative model.
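
A minimal sketch of the threshold decision described above, assuming toy report and uncertainty values: unreliable reports are filtered out before the statistical trust value is computed and compared with the interaction threshold.

```python
# Filter referrer reports by uncertainty, then average the reliable ones.
reports = [  # (referrer, trust value in [0,1], uncertainty in [0,1])
    ("r1", 0.9, 0.05), ("r2", 0.8, 0.10),
    ("r3", 0.2, 0.70),  # high-uncertainty report, filtered out
]
THRESHOLD, MAX_UNCERTAINTY = 0.6, 0.2   # illustrative parameters

reliable = [t for _, t, u in reports if u <= MAX_UNCERTAINTY]
trust = sum(reliable) / len(reliable) if reliable else 0.0
print("trust = %.2f -> interact: %s" % (trust, trust >= THRESHOLD))
```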

2017-12-12
Sylla, A. N., Louvel, M., Rutten, E., Delaval, G..  2017.  Design Framework for Reliable Multiple Autonomic Loops in Smart Environments. 2017 International Conference on Cloud and Autonomic Computing (ICCAC). :131–142.

Today's control systems, such as smart environments, have the ability to adapt to their environment in order to achieve a set of objectives (e.g., comfort, security, and energy savings). This is done by changing their behaviour upon the occurrence of specific events. Building such a system requires designing and implementing autonomic loops that collect events and measurements, make decisions, and execute the corresponding actions. The design and implementation of such loops are made difficult by several factors: the complexity of systems with multiple objectives, the risk of conflicting decisions between multiple loops, the inconsistencies that can result from communication errors and hardware failures, and the heterogeneity of the devices. In this paper, we propose a design framework for reliable and self-adaptive systems, where multiple autonomic loops can be composed into complex managers, and we consider its application to smart environments. On top of the proposed framework we build a generic autonomic loop that combines an automata-based controller that makes correct and coherent decisions, a transactional execution mechanism that avoids inconsistencies, and an abstraction layer that hides the heterogeneity of the devices. We propose patterns for composing such loops in parallel, coordinated, and hierarchical fashion, leveraging automata-based modular constructs that provide guarantees on the correct behaviour of the controlled system. We implement our framework with the transactional middleware LINC, the reactive language Heptagon/BZR, and the abstraction framework PUTUTU. A case study in the field of building automation is presented to illustrate the proposed framework.

2017-11-03
Bozkaya, Elif, Chowdhury, Kaushik, Canberk, Berk.  2016.  SINR and Reliability Based Hidden Terminal Estimation for Next Generation Vehicular Networks. Proceedings of the 12th ACM Symposium on QoS and Security for Wireless and Mobile Networks. :69–76.

Safety applications are currently being developed for next-generation vehicular networks to provide road safety. Broadcasting of safety messages creates the need for reliability assessment, since there is no request-to-send (RTS)/clear-to-send (CTS) handshaking and there are no acknowledgment packets in broadcast vehicular communications. The reliability of broadcast messages therefore suffers from the hidden terminal problem, interference, and high mobility. To overcome these challenges, Signal-to-Interference-and-Noise Ratio (SINR) estimation is a key solution: it lets us foresee transmission collisions caused by hidden terminals and prevent those transmissions, and we can then build a model to improve the reliability of broadcast messages. Towards this aim, in this paper we propose a SINR-based hidden terminal estimation model. First, we introduce a method to identify hidden terminals and their transmissions, and then estimate the accurate SINR level at the receiver in the near future. Second, we formulate the number of successfully received packets with a heuristic algorithm and calculate throughput and the hidden-terminal radius. Our approach enables significant improvement in the reliability of broadcast vehicular communications.

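The estimation builds on the standard SINR expression: received signal power over noise plus the summed power of interferers, here the hidden terminals whose transmissions the model tries to foresee. A minimal numeric sketch (all powers and the decoding threshold are illustrative):

```python
# SINR = signal power / (noise + sum of interference), values in milliwatts.
noise_mw = 1e-6
signal_mw = 2e-3
hidden_terminal_mw = [4e-4, 1e-4]   # interference from hidden terminals

sinr = signal_mw / (noise_mw + sum(hidden_terminal_mw))
print("SINR = %.1f (linear)" % sinr)
# A packet is predicted to survive only if SINR clears the decoding
# threshold of the modulation in use (the 10.0 threshold is an assumption).
print("decodable:", sinr >= 10.0)
```
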
2017-12-28
Luo, S., Wang, Y., Huang, W., Yu, H..  2016.  Backup and Disaster Recovery System for HDFS. 2016 International Conference on Information Science and Security (ICISS). :1–4.

HDFS has been widely used for storing massive-scale data, which is vulnerable to site disaster. File system backup is an important strategy for data retention. In this paper, we present an efficient, easy-to-use backup and disaster recovery system for HDFS. The system includes a client based on HDFS with an additional remote-backup feature, and a remote server with an HDFS cluster to keep the backup data. It supports full backups and regular incremental backups to the server with very low cost and high throughput. In our experiments, the average speed of backup and recovery reaches up to 95 MB/s, approaching the theoretical maximum speed of gigabit Ethernet.
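
A minimal sketch of the incremental-backup selection step: only files modified since the last completed backup are shipped to the remote cluster. The catalogue format and paths are our illustrative assumptions; the real system works against HDFS metadata rather than an in-memory list.

```python
# Select the incremental set: files changed since the previous backup run.
last_backup_ts = 1_700_000_000          # epoch seconds of the previous run

catalogue = [                            # (path, modification time) - illustrative
    ("/data/events/part-0001", 1_699_999_000),
    ("/data/events/part-0002", 1_700_050_000),
]

to_ship = [path for path, mtime in catalogue if mtime > last_backup_ts]
print("incremental set:", to_ship)      # only part-0002 is re-sent
```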

2017-08-22
Schroeder, Bianca.  2016.  Case Studies from the Real World: The Importance of Measurement and Analysis in Building Better Systems. Proceedings of the 7th ACM/SPEC on International Conference on Performance Engineering. :1–1.

At the core of the "Big Data" revolution lie frameworks and systems that allow for the massively parallel processing of large amounts of data. Ironically, while they have been designed for processing large amounts of data, these systems are at the same time major producers of data: to support the administration and management of these huge-scale systems, they are configured to generate detailed log and monitoring data, periodically capturing the system state across all nodes, components, and jobs in the system. While such logging information is used routinely by sysadmins for ad hoc troubleshooting and problem diagnosis, we point out that there is tremendous value in analyzing such data from a research point of view. In this talk, we will go over several case studies that demonstrate how measuring and analyzing measurement data from production systems can provide new insights into how systems work and fail, and how these new insights can help in designing better systems.

2017-05-22
Sutcliffe, Richard J., Kowarsch, Benjamin.  2016.  Closing the Barn Door: Re-Prioritizing Safety, Security, and Reliability. Proceedings of the 21st Western Canadian Conference on Computing Education. :1:1–1:15.

Past generations of software developers were well on the way to building a software engineering mindset/gestalt, preferring tools and techniques that concentrated on safety, security, reliability, and code re-usability. Computing education reflected these priorities and was, to a great extent, organized around these themes, providing beginning software developers a basis for professional practice. In more recent times, economic and deadline pressures and the de-professionalization of practitioners have combined to drive a development agenda that retains little respect for quality considerations. As a result, we are now deep into a new and severe software crisis. Scarcely a day passes without news of either a debilitating data or website hack, or the failure of a mega-software project. Vendors, individual developers, and possibly educators can anticipate an equally destructive flood of malpractice litigation, for the argument that they systematically and recklessly ignored known best development practice of long standing is irrefutable. Yet we continue to instruct using methods, and to employ development tools, that we know, or ought to know, are inherently insecure, unreliable, and unsafe, and that produce software of like ilk. The authors call for a renewed professional and educational focus on software quality, focusing on redesigned tools that enable and encourage known best practice, combined with reformed educational practices that emphasize writing human-readable, safe, secure, and reliable software. Practitioners can only deploy sound management techniques, appropriate tool choice, and best-practice development methodologies such as thorough planning and specification, scope management, factorization, modularity, safety, and appropriate team and testing strategies, if those ideas and techniques are embedded in the curriculum from the beginning. The authors have instantiated their ideas in the form of their highly disciplined new version of Niklaus Wirth's 1980s Modula-2 programming notation under the working moniker Modula-2 R10. They are now working on an implementation that will be released under a liberal open source license in the hope that it will assist in reforming the CS curriculum around a best-practices core so as to empower would-be professionals with the intellectual and practical mindset to begin resolving the software crisis. They acknowledge there is no single software engineering silver bullet, but assert that professional techniques can be inculcated throughout a student's four-year university tenure, and if implemented in the workplace, these can greatly reduce the likelihood of multiplied IT failures at the hands of our graduates. The authors maintain that professional excellence is a necessary mindset, a habit of self-discipline that must be intentionally embedded in all aspects of one's education, and subsequently drive all aspects of one's practice, including, but by no means limited to, the choice and use of programming tools.

2017-09-05
Naureen, Ayesha, Zhang, Ning.  2016.  A Comparative Study of Data Aggregation Approaches for Wireless Sensor Networks. Proceedings of the 12th ACM Symposium on QoS and Security for Wireless and Mobile Networks. :125–128.

In Wireless Sensor Networks (WSNs), data aggregation has been used to reduce bandwidth and energy costs during the data collection process. However, while data aggregation brings the benefit of improved bandwidth usage and energy efficiency, it also introduces opportunities for security attacks, thus reducing data delivery reliability: there is a trade-off between bandwidth and energy efficiency on the one hand and data delivery reliability on the other. In this paper, we present a comparative study of the reliability and efficiency characteristics of different data aggregation approaches using both simulation studies and testbed evaluations. We also analyse the factors that contribute to network congestion and affect data delivery reliability. Finally, we investigate an optimal trade-off between the reliability and efficiency properties of the different approaches by using an intermediate approach, called the Multi-Aggregator based Multi-Cast (MAMC) data aggregation approach. Our evaluation results for MAMC show that it is possible to achieve reliability and efficiency at the same time.
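
A toy contrast of the bandwidth saving that motivates in-network aggregation, counting forwarded messages with and without per-cluster aggregation (topology and readings are illustrative):

```python
# Compare message counts: raw forwarding vs. one aggregate per cluster.
clusters = {"agg1": [21.2, 20.8, 21.5], "agg2": [19.9, 20.1]}

no_aggregation_msgs = sum(len(r) for r in clusters.values())   # 5 messages
aggregated_msgs = len(clusters)                                # 2 messages

for agg, readings in clusters.items():
    print(agg, "forwards mean =", round(sum(readings) / len(readings), 2))
print("messages to sink: %d -> %d" % (no_aggregation_msgs, aggregated_msgs))
```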

2017-05-17
Luo, Pei, Li, Cheng, Fei, Yunsi.  2016.  Concurrent Error Detection for Reliable SHA-3 Design. Proceedings of the 26th Edition on Great Lakes Symposium on VLSI. :39–44.

Cryptographic systems are vulnerable to random errors and injected faults. Soft errors can inadvertently happen in critical cryptographic modules, and attackers can inject faults into systems to retrieve the embedded secret. Different schemes have been developed to improve the security and reliability of cryptographic systems. As the new SHA-3 standard, the Keccak algorithm will be widely used in various cryptographic applications, and its implementation should be protected against random errors and injected faults. In this paper, we devise different parity checking methods to protect the operations of Keccak. Results show that our schemes can be easily implemented and can effectively protect the Keccak system against random errors and fault attacks.
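
A minimal sketch of concurrent error detection by parity prediction, shown on Keccak's iota step (XOR of a round constant into one lane), where the output parity is predictable from the input parity: a mismatch flags a random error or injected fault. This is our simplified illustration, not the paper's full scheme, and the lane value below is illustrative.

```python
# Parity prediction across Keccak's iota step (lane XOR round constant).
def parity64(x):
    return bin(x & (2**64 - 1)).count("1") & 1

def iota(lane, round_const):
    return lane ^ round_const

lane, rc = 0x0123456789ABCDEF, 0x800000000000808A   # a Keccak-f round constant
predicted = parity64(lane) ^ parity64(rc)           # predicted output parity

out = iota(lane, rc)
out ^= 1 << 17                                      # inject a single-bit fault
print("fault detected:", parity64(out) != predicted)  # True
```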

2017-11-13
Patti, E., Syrri, A. L. A., Jahn, M., Mancarella, P., Acquaviva, A., Macii, E..  2016.  Distributed Software Infrastructure for General Purpose Services in Smart Grid. IEEE Transactions on Smart Grid. 7:1156–1163.

In this paper, the design of an event-driven middleware for general-purpose services in the smart grid (SG) is presented. The main purpose is to provide a peer-to-peer distributed software infrastructure that allows multiple new, authorized actors to access SG information in order to provide new services. To achieve this, the proposed middleware has been designed to be: 1) event-based; 2) reliable; 3) secure against malicious information and communication technology attacks; and 4) hardware-independent, enabling interoperability between heterogeneous technologies. To demonstrate practical deployment, a numerical case study applied to the whole U.K. distribution network is presented, and the capabilities of the proposed infrastructure are discussed.

2017-03-29
Nicol, David M., Kumar, Rakesh.  2016.  Efficient Monte Carlo Evaluation of SDN Resiliency. Proceedings of the 2016 Annual ACM Conference on SIGSIM Principles of Advanced Discrete Simulation. :143–152.

Software defined networking (SDN) is an emerging technology for controlling flows through networks. Used in the context of industrial control systems, an objective is to design configurations that have built-in protection against hardware failures, in the sense that the configuration has "baked-in" back-up routes. The objective is to leave the configuration static as long as possible, minimizing the need to have the controller push in new routing and filtering rules. We have designed and implemented a tool that enables us to determine the complete connectivity map from an analysis of all switch configurations in the network. We can use this tool to explore the impact of a link failure, in particular to determine whether the failure induces loss of the ability to deliver a flow even after the built-in back-up routes are used. A measure of the original configuration's resilience to link failure is the mean number of link failures required to induce the first such loss of service. The computational cost of each link failure and subsequent analysis is large, so there is much to be gained by reducing the overall cost of obtaining a statistically valid estimate of resiliency. This paper shows that when analysis of a network state can identify all as-yet-unfailed links any one of whose failure would induce loss of a flow, then we can use the technique of importance sampling to estimate the mean number of links required to fail before some flow is lost, and analyze the potential for reducing the variance of the sample statistic. We provide both theoretical and empirical evidence for significant variance reduction.
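
A minimal sketch of the importance-sampling idea: when analysis can identify the critical links whose individual failure would break some flow, each failure draw can be biased toward hitting them and reweighted, reducing the variance of the estimated mean failures-to-first-loss. The model below uses a fixed critical set as a stand-in; the paper's tool recomputes the critical set after every failure.

```python
# Sequential importance sampling of "failures until first loss of service".
import random

random.seed(1)
N_LINKS, CRITICAL = 20, set(range(3))     # toy network: 3 of 20 links critical

def walk(q_hit=0.5):
    """Fail links until a critical one fails; return (count, IS weight)."""
    alive = set(range(N_LINKS))
    weight = 1.0
    for failures in range(1, N_LINKS + 1):
        p_hit = len(alive & CRITICAL) / len(alive)  # true prob. of a critical hit
        if p_hit == 1.0:                            # only critical links remain
            return failures, weight
        if random.random() < q_hit:                 # biased proposal hits a critical link
            return failures, weight * (p_hit / q_hit)
        weight *= (1 - p_hit) / (1 - q_hit)         # reweight the miss
        alive.remove(random.choice(sorted(alive - CRITICAL)))

samples = [walk() for _ in range(20000)]
est = sum(f * w for f, w in samples) / len(samples)
print("mean failures to first loss ~", round(est, 2))  # analytic value: 21/4 = 5.25
```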

2017-09-19
Zúquete, André.  2016.  Exploitation of Dual Channel Transmissions to Increase Security and Reliability in Classic Bluetooth Piconets. Proceedings of the 12th ACM Symposium on QoS and Security for Wireless and Mobile Networks. :55–60.

In this paper we discuss several improvements to the security and reliability of a classic Bluetooth network (piconet) that arise from being able to transmit the same frame on two frequencies in each slot, instead of the current standard, which uses only one frequency. Furthermore, we build upon this possibility and show that piconet participants can explore many strategies to increase the security of their communications by confounding eavesdroppers, such as multiple hopping sequences, random selection of a hopping sequence on each transmission slot, and variable frame encryption per hopping sequence. Finally, all this can be decided independently by any piconet participant without having to agree in real time on some type of service with other participants of the same piconet.
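
A toy of the dual-frequency idea: each slot carries the same frame on two channels drawn from different hopping sequences, so a legitimate receiver needs to follow only one sequence while an eavesdropper must track both. The PRNG below stands in for Bluetooth's real hop-selection kernel; the seeds are illustrative (79 is the classic Bluetooth channel count).

```python
# Two independent hopping sequences feeding one transmission slot each.
import random

def hop_sequence(seed, channels=79):
    rng = random.Random(seed)
    while True:
        yield rng.randrange(channels)

seq_a, seq_b = hop_sequence(0xC0FFEE), hop_sequence(0xBEEF)
for slot in range(4):
    f1, f2 = next(seq_a), next(seq_b)
    print(f"slot {slot}: same frame sent on channels {f1} and {f2}")
```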

2017-11-20
Liu, R., Wu, H., Pang, Y., Qian, H., Yu, S..  2016.  A highly reliable and tamper-resistant RRAM PUF: Design and experimental validation. 2016 IEEE International Symposium on Hardware Oriented Security and Trust (HOST). :13–18.

This work presents a highly reliable and tamper-resistant design of a Physical Unclonable Function (PUF) exploiting Resistive Random Access Memory (RRAM). The RRAM PUF properties such as uniqueness and reliability are experimentally measured on 1 kb HfO2-based RRAM arrays. Firstly, our experimental results show that selection of the split reference and the offset of the split sense amplifier (S/A) significantly affect uniqueness. More dummy cells are able to generate a more accurate split reference, and relaxing the transistor sizes of the split S/A can reduce the offset, thus achieving better uniqueness. The average inter-Hamming distance (HD) of 40 RRAM PUF instances is 42%. Secondly, we propose using the sum of the read-out currents of multiple RRAM cells to generate one response bit, which statistically minimizes the risk of early retention failure of a single cell. The measurement results show that with 8 cells per bit, 0% intra-HD can be maintained for more than 50 hours at 150 °C, or equivalently 10 years at 69 °C by 1/kT extrapolation. Finally, we propose a layout obfuscation scheme in which all the S/As are randomly embedded into the RRAM array to improve the RRAM PUF's resistance against invasive tampering. The RRAM cells are uniformly placed between M4 and M5 across the array, so if an adversary attempts to invasively probe the output of an S/A, the top-level interconnect must be removed, destroying the RRAM cells between the interconnect layers; the RRAM PUF thus has a “self-destructive” feature. The hardware overhead of the proposed design strategies is benchmarked on a 64 × 128 RRAM PUF array at 65 nm; while the proposed optimization strategies increase latency, energy, and area over a naive implementation, they significantly improve the performance and security.
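
A toy model of the multi-cell response bit described above: comparing the summed read-out current of k cells against a split reference means a single drifting cell rarely flips the bit, which is what pushes early retention failures out. All current values (in uA) are illustrative.

```python
# One response bit from the summed read-out current of k RRAM cells.
K = 8
REFERENCE = K * 5.0                      # split reference for the k-cell sum
cells = [5.5, 4.8, 6.1, 5.2, 4.9, 5.7, 5.3, 5.0]   # read-out currents (uA)

def bit(currents, drift=0.0):
    return 1 if sum(currents) + drift > REFERENCE else 0

print(bit(cells))              # 1: sum 42.5 vs reference 40.0
print(bit(cells, drift=-2.0))  # still 1: one cell losing 2 uA is absorbed
```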

2018-02-02
Huang, W., Bruck, J..  2016.  Secure RAID schemes for distributed storage. 2016 IEEE International Symposium on Information Theory (ISIT). :1401–1405.

We propose secure RAID, i.e., low-complexity schemes to store information in a distributed manner that is resilient to node failures and resistant to node eavesdropping. We generalize the concept of systematic encoding to secure RAID and show that systematic schemes have significant advantages in the efficiencies of encoding, decoding and random access. For the practical high rate regime, we construct three XOR-based systematic secure RAID schemes with optimal encoding and decoding complexities, from the EVENODD codes and B codes, which are array codes widely used in the RAID architecture. These schemes optimally tolerate two node failures and two eavesdropping nodes. For more general parameters, we construct efficient systematic secure RAID schemes from Reed-Solomon codes. Our results suggest that building “keyless”, information-theoretic security into the RAID architecture is practical.
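
A minimal keyless XOR scheme in the spirit described above, scaled down to one node failure plus one eavesdropping node (the paper's EVENODD- and B-code-based schemes tolerate two of each). A fresh random share masks the data shares, and an XOR parity share restores any single lost node.

```python
# Keyless XOR-based storage: 1-failure tolerant, 1-eavesdropper secure.
import secrets

def encode(d1, d2):
    z = secrets.randbits(8)              # random mask, stored as a share
    shares = [z, d1 ^ z, d2 ^ z]
    shares.append(shares[0] ^ shares[1] ^ shares[2])   # parity share
    return shares                        # XOR of all four shares is 0

def recover(shares, lost):
    others = [s for i, s in enumerate(shares) if i != lost]
    return others[0] ^ others[1] ^ others[2]           # rebuild the lost node

shares = encode(0x4F, 0xA3)
assert recover(shares, lost=2) == shares[2]            # node repair works
print("d1 =", hex(shares[1] ^ shares[0]))              # decode: 0x4f
# Each share on its own is uniformly random (masked by z), so a single
# eavesdropping node learns nothing about d1 or d2.
```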

2017-11-27
Sayyadipour, S., Latify, M. A., Yousefi, G. R..  2016.  Vulnerability analysis of power systems during the scheduled maintenance of network facilities. 2016 Smart Grids Conference (SGC). :1–4.

This paper proposes a practical time-phased model to analyze the vulnerability of power systems over a time horizon in which the scheduled maintenance of network facilities is considered. This model is an efficient tool that system operators can use to assess how vulnerable their systems become given a set of scheduled facility outages. The final model is presented as a single-level Mixed-Integer Linear Programming (MILP) problem solvable with commercially available software. Results obtained on the well-known IEEE 24-Bus Reliability Test System (RTS) demonstrate the applicability of the model and highlight the necessity of considering scheduled facility outages when assessing the vulnerability of a power system.

2017-12-28
Obenshain, D., Tantillo, T., Babay, A., Schultz, J., Newell, A., Hoque, M. E., Amir, Y., Nita-Rotaru, C..  2016.  Practical Intrusion-Tolerant Networks. 2016 IEEE 36th International Conference on Distributed Computing Systems (ICDCS). :45–56.

As the Internet becomes an important part of the infrastructure our society depends on, it is crucial to construct networks that are able to work even when part of the network is compromised. This paper presents the first practical intrusion-tolerant network service, targeting high-value applications such as monitoring and control of global clouds and management of critical infrastructure for the power grid. We use an overlay approach to leverage the existing IP infrastructure while providing the required resiliency and timeliness. Our solution overcomes malicious attacks and compromises in both the underlying network infrastructure and in the overlay itself. We deploy and evaluate the intrusion-tolerant overlay implementation on a global cloud spanning East Asia, North America, and Europe, and make it publicly available.

2017-11-13
Venugopalan, V., Patterson, C. D., Shila, D. M..  2016.  Detecting and thwarting hardware trojan attacks in cyber-physical systems. 2016 IEEE Conference on Communications and Network Security (CNS). :421–425.

Cyber-physical system integrity requires both hardware and software security. Many of the cyber attacks are successful as they are designed to selectively target a specific hardware or software component in an embedded system and trigger its failure. Existing security measures also use attack vector models and isolate the malicious component as a counter-measure. Isolated security primitives do not provide the overall trust required in an embedded system. Trust enhancements are proposed to a hardware security platform, where the trust specifications are implemented in both software and hardware. This distribution of trust makes it difficult for a hardware-only or software-only attack to cripple the system. The proposed approach is applied to a smart grid application consisting of third-party soft IP cores, where an attack on this module can result in a blackout. System integrity is preserved in the event of an attack and the anomalous behavior of the IP core is recorded by a supervisory module. The IP core also provides a snapshot of its trust metric, which is logged for further diagnostics.

2017-05-18
Karimian, Nima, Wortman, Paul A., Tehranipoor, Fatemeh.  2016.  Evolving Authentication Design Considerations for the Internet of Biometric Things (IoBT). Proceedings of the Eleventh IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis. :10:1–10:10.

The Internet of Things (IoT) is an embedded system design paradigm that connects a variety of devices, sensors, and physical objects to a larger connected network (e.g. the Internet) without requiring human-to-human or human-to-computer interaction. While the IoT is expected to expand the user's connectivity and everyday convenience, there are serious security considerations that come into play when using the IoT for distributed authentication. Furthermore, the incorporation of biometrics into IoT design brings concerns of cost and of implementing a 'user-friendly' design. In this paper, we focus on the use of electrocardiogram (ECG) signals to implement distributed biometric authentication within an IoT system model. Our observations show that ECG biometrics are highly reliable, more secure, and easier to implement than other biometrics.

2015-11-16
Estrada, Z. J., Pham, C., Deng, F., Kalbarczyk, Z., Iyer, R. K., Yan, L..  2015.  Dynamic VM Dependability Monitoring Using Hypervisor Probes. 11th European Dependable Computing Conference - Dependability in Practice (EDCC 2015).

Many current VM monitoring approaches require guest OS modifications and are also unable to perform application level monitoring, reducing their value in a cloud setting. This paper introduces hprobes, a framework that allows one to dynamically monitor applications and operating systems inside a VM. The hprobe framework does not require any changes to the guest OS, which avoids the tight coupling of monitoring with its target. Furthermore, the monitors can be customized and enabled/disabled while the VM is running. To demonstrate the usefulness of this framework, we present three sample detectors: an emergency detector for a security vulnerability, an application watchdog, and an infinite-loop detector. We test our detectors on real applications and demonstrate that those detectors achieve an acceptable level of performance overhead with a high degree of flexibility.
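
A sketch of the application-watchdog idea from the abstract, reduced to a heartbeat monitor: the detector raises an alarm when the application stops producing heartbeats. Here the monitor is just a thread in the same process, whereas hprobes sit in the hypervisor; all intervals are illustrative.

```python
# Heartbeat watchdog: alarm when the monitored code stops checking in.
import threading, time

last_beat = time.monotonic()

def heartbeat():                 # called from probe sites in the application
    global last_beat
    last_beat = time.monotonic()

def watchdog(timeout=1.0):
    while True:
        time.sleep(timeout / 2)
        if time.monotonic() - last_beat > timeout:
            print("watchdog: application hung or looping")
            return

threading.Thread(target=watchdog, daemon=True).start()
for _ in range(3):               # healthy phase: regular heartbeats
    heartbeat(); time.sleep(0.2)
time.sleep(2.0)                  # stop beating: simulates a hang/infinite loop
```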

2017-02-27
Cómbita, L. F., Giraldo, J., Cárdenas, A. A., Quijano, N..  2015.  Response and reconfiguration of cyber-physical control systems: A survey. 2015 IEEE 2nd Colombian Conference on Automatic Control (CCAC). :1–6.

The integration of physical systems with distributed embedded computing and communication devices offers advantages in reliability, efficiency, and maintenance. At the same time, these embedded computers are susceptible to cyber-attacks that can harm the performance of the physical system, or even drive the system to an unsafe state; therefore, it is necessary to deploy security mechanisms that are able to automatically detect, isolate, and respond to potential attacks. Detection and isolation mechanisms have been widely studied for different types of attacks; however, automatic response to attacks has attracted considerably less attention. Our goal in this paper is to identify trends and recent results on how to respond to and reconfigure a system under attack, and to identify limitations and open problems. We have found two main types of attack protection: i) preventive, which identifies the vulnerabilities in a control system and then increases its resiliency by modifying either control parameters or the redundancy of devices; ii) reactive, which responds as soon as the attack is detected (e.g., modifying the non-compromised controller actions).