Biblio

Filters: Keyword is software fault tolerance
2021-02-16
Kriaa, S., Papillon, S., Jagadeesan, L., Mendiratta, V..  2020.  Better Safe than Sorry: Modeling Reliability and Security in Replicated SDN Controllers. 2020 16th International Conference on the Design of Reliable Communication Networks DRCN 2020. :1—6.
Software-defined networks (SDN), through their programmability, significantly increase network resilience by enabling dynamic reconfiguration of network topologies in response to faults and potentially malicious attacks detected in real-time. Another key trend in network softwarization is cloud-native software, which, together with SDN, will be an integral part of the core of future 5G networks. In SDN, the control plane forms the "brain" of the software-defined network and is typically implemented as a set of distributed controller replicas to avoid a single point of failure. Distributed consensus algorithms are used to ensure agreement among the replicas on key data even in the presence of faults. Security is also a critical concern in ensuring that attackers cannot compromise the SDN control plane; byzantine fault tolerance algorithms can provide protection against compromised controller replicas. However, while reliability/availability and security form key attributes of resilience, they are typically modeled separately in SDN, without consideration of the potential impacts of their interaction. In this paper we present an initial framework for a model that unifies reliability, availability, and security considerations in distributed consensus. We examine – via simulation of our model – some impacts of the interaction between accidental faults and malicious attacks on SDN and suggest potential mitigations unique to cloud-native software.
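The crash-versus-byzantine tradeoff underlying this abstract can be made concrete with the standard replication bounds (general consensus arithmetic, not specific to this paper): tolerating f crash faults requires n >= 2f + 1 controller replicas, while tolerating f byzantine (compromised) replicas requires n >= 3f + 1. A minimal sketch:

```python
def min_replicas(f: int, byzantine: bool) -> int:
    """Minimum number of controller replicas needed to tolerate f faulty ones.

    Crash-only faults need n >= 2f + 1 (majority quorums);
    byzantine (i.e. compromised) replicas need n >= 3f + 1.
    """
    return 3 * f + 1 if byzantine else 2 * f + 1

def max_tolerated(n: int, byzantine: bool) -> int:
    """Largest f that a cluster of n replicas can tolerate."""
    return (n - 1) // 3 if byzantine else (n - 1) // 2

for f in (1, 2):
    print(f"f={f}: crash needs {min_replicas(f, False)}, "
          f"byzantine needs {min_replicas(f, True)} replicas")
# A 5-replica control plane survives 2 crashed controllers but only 1 compromised one.
print(max_tolerated(5, byzantine=False), max_tolerated(5, byzantine=True))
```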
2021-03-01
Golagha, M., Pretschner, A., Briand, L. C..  2020.  Can We Predict the Quality of Spectrum-based Fault Localization? 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST). :4–15.
Fault localization and repair are time-consuming and tedious. There is a significant and growing need for automated techniques to support such tasks. Despite significant progress in this area, existing fault localization techniques are not widely applied in practice yet, and their effectiveness varies greatly from case to case. Existing work suggests new algorithms and ideas as well as adjustments to the test suites to improve the effectiveness of automated fault localization. However, important questions remain open: Why is the effectiveness of these techniques so unpredictable? What are the factors that influence the effectiveness of fault localization? Can we accurately predict fault localization effectiveness? In this paper, we try to answer these questions by collecting 70 static, dynamic, test suite, and fault-related metrics that we hypothesize are related to effectiveness. Our analysis shows that a combination of only a few static, dynamic, and test metrics enables the construction of a prediction model with excellent discrimination power between levels of effectiveness (eight metrics yielding an AUC of .86; fifteen metrics yielding an AUC of .88). The model hence yields a practically useful confidence factor that can be used to assess the potential effectiveness of fault localization. Given that these metrics are the most influential in explaining the effectiveness of fault localization, they can also be used as a guide for corrective actions on code and test suites, leading to more effective fault localization.
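A hedged sketch of the kind of prediction model described (the feature names and data below are hypothetical stand-ins, not the paper's 70 metrics): train a classifier on per-fault metrics and measure its discrimination power by AUC.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(1, 50, n),      # e.g., number of failing tests (hypothetical)
    rng.integers(10, 5000, n),   # e.g., size of covered code (hypothetical)
    rng.random(n),               # e.g., test-suite diversity score (hypothetical)
])
# Label: 1 if fault localization was "effective" on this fault, else 0 (synthetic).
y = (X[:, 0] > 10).astype(int) ^ (rng.random(n) < 0.1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}")  # the paper reports .86 to .88 with 8 to 15 real metrics
```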
2020-07-27
Liem, Clifford, Murdock, Dan, Williams, Andrew, Soukup, Martin.  2019.  Highly Available, Self-Defending, and Malicious Fault-Tolerant Systems for Automotive Cybersecurity. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :24–27.
With the growing number of electronic features in cars and their connections to the cloud, smartphones, road-side equipment, and neighboring cars, the need for effective cybersecurity is paramount. Beyond the concern of brand degradation, warranty fraud, and recalls, what keeps manufacturers up at night is the threat of malicious attacks which can affect the safety of vehicles on the road. Would any single protection technique provide the security needed over the long lifetime of a vehicle? We present a new methodology for automotive cybersecurity where the designs are made to withstand attacks in the future based on the concepts of high availability and malicious fault-tolerance through self-defending techniques. When a system has an intrusion, self-defending technologies work to contain the breach using integrity verification, self-healing, and fail-over techniques to keep the system running.
2020-04-06
Patsonakis, Christos, Samari, Katerina, Kiayias, Aggelos, Roussopoulos, Mema.  2019.  On the Practicality of a Smart Contract PKI. 2019 IEEE International Conference on Decentralized Applications and Infrastructures (DAPPCON). :109–118.
Public key infrastructures (PKIs) are one of the main building blocks for securing communications over the Internet. Currently, PKIs are under the control of centralized authorities, which is problematic as evidenced by numerous incidents where they have been compromised. The distributed, fault-tolerant log of transactions provided by blockchains and, more recently, smart contract platforms constitutes a powerful tool for the decentralization of PKIs. To verify the validity of identity records, blockchain-based identity systems store on chain either all identity records or a small (or even constant-sized) amount of data for verifying identity records stored off chain. However, as most of these systems have never been implemented, there is little information regarding the practical implications of each design's tradeoffs. In this work, we first implement and evaluate the only provably secure, smart contract based PKI of Patsonakis et al. on top of Ethereum. This construction incurs constant-sized storage at the expense of computational complexity. To explore this tradeoff, we propose and implement a second construction which eliminates the need for trusted setup, preserves the security properties of Patsonakis et al., and, as illustrated through our evaluation, is the only version with constant-sized state that can be deployed on the live chain of Ethereum. Furthermore, we compare these two systems with the simple approach of most prior works, e.g., the Ethereum Name Service, where all identity records are stored on the smart contract's state, to illustrate several shortcomings of Ethereum and its cost model. We propose several modifications for fine-tuning the model, which would be useful to consider for any smart contract platform like Ethereum so that it reaches its full potential to support arbitrary distributed applications.
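To illustrate the storage tradeoff the paper evaluates: rather than keeping every identity record in contract state (the ENS-style approach), only a constant-size commitment is stored on chain and records are verified against it. The paper's construction uses cryptographic accumulators; the sketch below substitutes a Merkle tree as a simpler stand-in for the same on-chain/off-chain split.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling hash, am-I-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

records = [b"alice:pk1", b"bob:pk2", b"carol:pk3"]   # identity records, kept off chain
root = merkle_root(records)                          # the only value stored on chain
assert verify(root, b"bob:pk2", merkle_proof(records, 1))
```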
2020-09-11
Shukla, Ankur, Katt, Basel, Nweke, Livinus Obiora.  2019.  Vulnerability Discovery Modelling With Vulnerability Severity. 2019 IEEE Conference on Information and Communication Technology. :1—6.
Web browsers are primary targets of attacks because of their extensive use and the fact that they interact with sensitive data. Vulnerabilities present in a web browser can pose serious risk to millions of users. Thus, it is pertinent to address these vulnerabilities to provide adequate protection for personally identifiable information. Research done in the past has shown that few vulnerability discovery models (VDMs) highlight the characterization of the vulnerability discovery process. In these models, severity, which is one of the most crucial properties, has not been considered. Vulnerabilities can be categorized into different levels based on their severity. The discovery process of each kind of vulnerability differs from the others. Hence, it is essential to incorporate the severity of the vulnerabilities when modelling the vulnerability discovery process. This paper proposes a model to quantitatively assess the vulnerabilities present in software with consideration for the severity of the vulnerabilities. The proposed model can be applied to approximate the number of vulnerabilities along with the vulnerability discovery rate, future occurrences of vulnerabilities, risk analysis, etc. Vulnerability data obtained from one of the major web browsers (Google Chrome) is used to examine the goodness-of-fit and predictive capability of the proposed model. Experimental results justify the fact that the model proposed herein can estimate the required information better than the existing VDMs.
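One standard VDM that such work builds on is the Alhazmi-Malaiya logistic model, Omega(t) = B / (B*C*exp(-A*B*t) + 1). A hedged sketch of fitting it to a single severity class, in the spirit of the paper's severity-aware modelling (the monthly counts below are synthetic, not the Chrome data):

```python
import numpy as np
from scipy.optimize import curve_fit

def aml(t, A, B, C):
    """Alhazmi-Malaiya logistic model: cumulative vulnerabilities at time t."""
    return B / (B * C * np.exp(-A * B * t) + 1.0)

t = np.arange(1, 25)                               # months since release
rng = np.random.default_rng(1)
# Synthetic cumulative counts for ONE severity class (e.g. "high"), with noise.
cum_high = np.clip(aml(t, 0.003, 80.0, 4.0) + rng.normal(0, 1.5, t.size), 0, None)

(A, B, C), _ = curve_fit(aml, t, cum_high, p0=(0.01, 100.0, 1.0), maxfev=20000)
print(f"estimated saturation B ~ {B:.0f} vulnerabilities")
print(f"predicted cumulative count by month 36: {aml(36, A, B, C):.0f}")
# Repeating the fit per severity class gives severity-specific discovery rates.
```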
2020-03-09
Li, Zhixin, Liu, Lei, Kong, Degang.  2019.  Virtual Machine Failure Prediction Method Based on AdaBoost-Hidden Markov Model. 2019 International Conference on Intelligent Transportation, Big Data & Smart City (ICITBS). :700–703.

Failure prediction for virtual machines (VMs) helps guarantee the reliability of cloud platforms. However, the uncertainty of the VM security state affects the reliability and task-processing capability of the entire cloud platform. In this study, a VM failure prediction method based on an AdaBoost-hidden Markov model was proposed to improve the reliability of VMs and the overall performance of cloud platforms. This method analyzed the deep relationship between the observation state and the hidden state of the VM through the hidden Markov model (HMM), proved the influence of the AdaBoost algorithm on the HMM, and realized the prediction of the VM failure state. Results show that the proposed method adapts to the complex, dynamic cloud platform environment, can effectively predict the failure state of VMs, and improves the ability to predict the VM security state.
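A hedged sketch of the HMM core of such a method (the transition and emission matrices below are hypothetical, and the paper's AdaBoost stage, which reweights and combines models, is omitted): the forward algorithm yields the posterior probability that a VM's hidden state is "failing" given its recent observations.

```python
import numpy as np

states = ["healthy", "degraded", "failing"]
obs_symbols = ["cpu_ok", "cpu_high", "io_stall"]

pi = np.array([0.9, 0.09, 0.01])            # initial state distribution (hypothetical)
A = np.array([[0.95, 0.04, 0.01],           # state transition matrix (hypothetical)
              [0.10, 0.80, 0.10],
              [0.00, 0.05, 0.95]])
B = np.array([[0.8, 0.15, 0.05],            # emission probabilities (hypothetical)
              [0.3, 0.50, 0.20],
              [0.1, 0.30, 0.60]])

def forward(obs):
    """Forward algorithm: filtered P(hidden state | observations so far)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha / alpha.sum()

seq = [0, 1, 1, 2, 2]                       # cpu_ok, cpu_high, cpu_high, io_stall, io_stall
posterior = forward(seq)
print(dict(zip(states, posterior.round(3))))
if posterior[2] > 0.5:
    print("predict imminent VM failure -> trigger migration/checkpoint")
```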

2020-06-26
Puccetti, Armand.  2019.  The European H2020 project VESSEDIA (Verification Engineering of Safety and SEcurity critical Dynamic Industrial Applications). 2019 22nd Euromicro Conference on Digital System Design (DSD). :588—591.

This paper presents an overview of the H2020 project VESSEDIA [9], aimed at verifying the security and safety of modern connected systems, also called IoT. The originality lies in using Formal Methods inherited from high-criticality application domains to analyze the source code at different levels of intensity and to gather possible faults and weaknesses. The analysis methods are mostly exhaustive and guarantee that, after analysis, the source code of the application is error-free. This paper is structured as follows: after an introductory section 1 giving some factual data, section 2 presents the aims and the problems addressed; section 3 describes the project's use-cases and section 4 describes the proposed approach for solving these problems and the results achieved until now; finally, section 5 discusses some remaining future work.

2020-02-26
Tran, Geoffrey Phi, Walters, John Paul, Crago, Stephen.  2019.  Increased Fault-Tolerance and Real-Time Performance Resiliency for Stream Processing Workloads through Redundancy. 2019 IEEE International Conference on Services Computing (SCC). :51–55.

Data analytics and telemetry have become paramount to monitoring and maintaining quality-of-service in addition to business analytics. Stream processing, a model where a network of operators receives and processes continuously arriving discrete elements, is well-suited for these needs. Current and previous studies and frameworks have focused on continuity of operations and aggregate performance metrics. However, real-time performance and tail latency are also important. Timing errors caused by either performance or failed communication faults also affect real-time performance more drastically than aggregate metrics. In this paper, we introduce redundancy in the stream data to improve the real-time performance and resiliency to timing errors caused by either performance or failed communication faults. We also address limitations in previous solutions using a fine-grained acknowledgment tracking scheme to both increase the effectiveness for resiliency to performance faults and enable effectiveness for failed communication faults. Our results show that fine-grained acknowledgment schemes can improve the tail and mean latencies by approximately 30%. We also show that these schemes can improve resiliency to performance faults compared to existing work. Our improvements result in 47.4% to 92.9% fewer missed deadlines compared to 17.3% to 50.6% for comparable topologies and redundancy levels in the state of the art. Finally, we show that redundancies of 25% to 100% can reduce the number of data elements that miss their deadline constraints by 0.76% to 14.04% for applications with high fan-out and by 7.45% up to 50% for applications with no fan-out.
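A sketch of what fine-grained acknowledgment tracking for redundant stream elements could look like (the names and policy below are hypothetical, not the paper's implementation): acknowledgments are tracked per element copy, and an element is considered late only if no replica is acknowledged by its deadline.

```python
import time
from dataclasses import dataclass, field

@dataclass
class PendingElement:
    element_id: int
    deadline: float                      # absolute time by which some ack must land
    copies_acked: set = field(default_factory=set)

class AckTracker:
    def __init__(self, redundancy: int):
        self.redundancy = redundancy     # number of redundant copies sent per element
        self.pending: dict[int, PendingElement] = {}

    def sent(self, element_id: int, deadline_s: float):
        self.pending[element_id] = PendingElement(
            element_id, time.monotonic() + deadline_s)

    def ack(self, element_id: int, copy_idx: int):
        p = self.pending.get(element_id)
        if p is not None:
            p.copies_acked.add(copy_idx)
            self.pending.pop(element_id)  # any single acked copy satisfies the element

    def overdue(self):
        """Elements with no acknowledged copy past their deadline (candidates for resend)."""
        now = time.monotonic()
        return [p.element_id for p in self.pending.values() if now > p.deadline]

tracker = AckTracker(redundancy=2)
tracker.sent(element_id=42, deadline_s=0.050)   # 50 ms latency budget
tracker.ack(42, copy_idx=1)                     # the second replica answered first
assert tracker.overdue() == []
```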

Abraham, Jacob A..  2019.  Resiliency Demands on Next Generation Critical Embedded Systems. 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS). :135–138.

Emerging intelligent systems have stringent constraints including cost and power consumption. When they are used in critical applications, resiliency becomes another key requirement. Much research into techniques for fault tolerance and dependability has been successfully applied to highly critical systems, such as those used in space, where cost is not an overriding constraint. Further, most resiliency techniques were focused on dealing with failures in the hardware and bugs in the software. The next generation of systems used in critical applications will also have to be tolerant to test escapes after manufacturing, soft errors and transients in the electronics, hardware bugs, hardware and software Trojans and viruses, as well as intrusions and other security attacks during operation. This paper assesses the impact of these threats on the results produced by a critical system and proposes solutions to each of them. It is argued that run-time checks at the application level are necessary to deal with errors in the results.
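One classic instance of application-level run-time checking is the checksum scheme of algorithm-based fault tolerance (ABFT), due to Huang and Abraham: for C = A·B, the column sums of C must equal (column sums of A)·B, so a cheap independent check catches corruption in the result. A minimal sketch:

```python
import numpy as np

def checked_matmul(A, B, tol=1e-8):
    C = A @ B
    expected = A.sum(axis=0) @ B          # checksum row computed independently of C
    if not np.allclose(C.sum(axis=0), expected, atol=tol):
        raise RuntimeError("ABFT checksum mismatch: corrupted result")
    return C

rng = np.random.default_rng(0)
A, B = rng.random((64, 64)), rng.random((64, 64))
C = checked_matmul(A, B)                  # passes the run-time check

C_bad = A @ B
C_bad[3, 7] += 1.0                        # simulate a soft error in one result element
assert not np.allclose(C_bad.sum(axis=0), A.sum(axis=0) @ B)  # the check catches it
```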

2020-07-06
Hasan, Kamrul, Shetty, Sachin, Hassanzadeh, Amin, Ullah, Sharif.  2019.  Towards Optimal Cyber Defense Remediation in Cyber Physical Systems by Balancing Operational Resilience and Strategic Risk. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :1–8.

A prioritized cyber defense remediation plan is critical for effective risk management in cyber-physical systems (CPS). The increased integration of Information Technology (IT)/Operational Technology (OT) in CPS has led to the need to identify the critical assets which, when affected, will impact resilience and safety. In this work, we propose a methodology for a prioritized cyber risk remediation plan that balances operational resilience and economic loss (safety impacts) in CPS. We present a platform for modeling and analysis of the effect of cyber threats and random system faults on the safety of CPS that could lead to catastrophic damages. We propose to develop a data-driven attack graph and fault graph-based model to characterize the exploitability and impact of threats in CPS. We develop an operational impact assessment to quantify the damages. Finally, we propose the development of a strategic response decision capability that proposes optimal mitigation actions and policies balancing the trade-off between operational resilience (Tactical Risk) and Strategic Risk.

2020-03-02
Kharchenko, Vyacheslav, Ponochovniy, Yuriy, Abdulmunem, Al-Sudani Mustafa Qahtan, Shulga, Iryna.  2019.  AvTA Based Assessment of Dependability Considering Recovery After Failures and Attacks on Vulnerabilities. 2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS). 2:1036–1040.

The paper describes a modification of the ATA (Attack Tree Analysis) technique, called AvTA (Availability Tree Analysis), for assessing the dependability (reliability, availability and cyber security) of instrumentation and control systems (ICS). The FMEA, FMECA and IMECA techniques applied to carry out preliminary semi-formal and criticality-oriented analysis before the AvTA-based assessment are described. AvTA models combine reliability and cyber security subtrees, considering probabilities of ICS recovery in case of hardware (physical) and software (design) failures and attacks on components causing failures. Successful recovery events (SREs) avert the corresponding failures in the tree via OR gates if the SRE probabilities for the assumed time exceed the required values. A case of AvTA-based dependability assessment (model, availability function and decision-making technology for the choice of component and system parameters) for a smart building ICS (Building Automation Systems, BAS) is discussed.

2020-07-27
Torkura, Kennedy A., Sukmana, Muhammad I.H., Cheng, Feng, Meinel, Christoph.  2019.  Security Chaos Engineering for Cloud Services: Work In Progress. 2019 IEEE 18th International Symposium on Network Computing and Applications (NCA). :1–3.
The majority of security breaches in cloud infrastructure in recent years are caused by human errors and misconfigured resources. Novel security models are imperative to overcome these issues. Such models must be customer-centric, continuous, not focused on traditional security paradigms like intrusion detection and adopt proactive techniques. Thus, this paper proposes CloudStrike, a cloud security system that implements the principles of Chaos Engineering to enable the aforementioned properties. Chaos Engineering is an emerging discipline employed to prevent non-security failures in cloud infrastructure via Fault Injection Testing techniques. CloudStrike employs similar techniques with a focus on injecting failures that impact security i.e. integrity, confidentiality and availability. Essentially, CloudStrike leverages the relationship between dependability and security models. Preliminary experiments provide insightful and prospective results.
2019-09-04
Lawson, M., Lofstead, J..  2018.  Using a Robust Metadata Management System to Accelerate Scientific Discovery at Extreme Scales. 2018 IEEE/ACM 3rd International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems (PDSW-DISCS). :13–23.
Our previous work, which can be referred to as EMPRESS 1.0, showed that rich metadata management provides a relatively low-overhead approach to facilitating insight from scale-up scientific applications. However, this system did not provide the functionality needed for a viable production system or address whether such a system could scale. Therefore, we have extended our previous work to create EMPRESS 2.0, which incorporates the features required for a useful production system. Through a discussion of EMPRESS 2.0, this paper explores how to incorporate rich query functionality, fault tolerance, and atomic operations into a scalable, storage system independent metadata management system that is easy to use. This paper demonstrates that such a system offers significant performance advantages over HDF5, providing metadata querying that is 150X to 650X faster, and can greatly accelerate post-processing. Finally, since the current implementation of EMPRESS 2.0 relies on an RDBMS, this paper demonstrates that an RDBMS is a viable technology for managing data-oriented metadata.
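The claim that an RDBMS is viable for data-oriented metadata can be sketched with Python's standard-library sqlite3 (the schema below is hypothetical, not EMPRESS 2.0's actual tables): attributes are stored per run, writes are atomic transactions, and rich queries come for free.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE run(run_id INTEGER PRIMARY KEY, app TEXT, timestep INTEGER);
    CREATE TABLE attr(run_id INTEGER REFERENCES run(run_id),
                      name TEXT, value REAL);
    CREATE INDEX attr_by_name ON attr(name, value);
""")
with con:                                  # atomic transaction
    con.execute("INSERT INTO run VALUES (1, 'turbulence', 400)")
    con.executemany("INSERT INTO attr VALUES (1, ?, ?)",
                    [("max_temp", 1812.5), ("min_pressure", 0.31)])

# Rich metadata query: find runs whose recorded max_temp exceeds a threshold.
rows = con.execute("""
    SELECT r.run_id, r.app, r.timestep, a.value
    FROM run r JOIN attr a USING(run_id)
    WHERE a.name = 'max_temp' AND a.value > 1500
""").fetchall()
print(rows)   # [(1, 'turbulence', 400, 1812.5)]
```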
2019-09-26
Pfeffer, T., Herber, P., Druschke, L., Glesner, S..  2018.  Efficient and Safe Control Flow Recovery Using a Restricted Intermediate Language. 2018 IEEE 27th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE). :235-240.

Approaches for the automatic analysis of security policies on the source code level cannot trivially be applied to binaries. This is due to the lacking high-level semantics of low-level object code, and the fundamental problem that control-flow recovery from binaries is difficult. We present a novel approach to recover the control flow of binaries that is both safe and efficient. The key idea of our approach is to use the information contained in security mechanisms to approximate the targets of computed branches. To achieve this, we first define a restricted control transition intermediate language (RCTIL), which restricts the number of possible targets for each branch to a finite number of given targets. Based on this intermediate language, we demonstrate how a safe model of the control flow can be recovered without data-flow analyses. Our evaluation shows that this makes our solution more efficient than existing solutions.

2020-01-29
Cheh, C., Fawaz, A., Noureddine, M. A., Chen, B., Temple, W. G., Sanders, W. H..  2018.  Determining Tolerable Attack Surfaces that Preserves Safety of Cyber-Physical Systems. 2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC). :125-134.

As safety-critical systems become increasingly interconnected, a system's operations depend on the reliability and security of the computing components and the interconnections among them. Therefore, a growing body of research seeks to tie safety analysis to security analysis. Specifically, it is important to analyze system safety under different attacker models. In this paper, we develop generic parameterizable state automaton templates to model the effects of an attack. Then, given an attacker model, we generate a state automaton that represents the system operation under the threat of the attacker model. We use a railway signaling system as our case study and consider threats to the communication protocol and the commands issued to physical devices. Our results show that while less skilled attackers are not able to violate system safety, more dedicated and skilled attackers can affect system safety. We also consider several countermeasures and show how well they can deter attacks.

2020-11-04
Zong, P., Wang, Y., Xie, F..  2018.  Embedded Software Fault Prediction Based on Back Propagation Neural Network. 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :553—558.

Predicting software faults before software testing activities can help with the rational distribution of time and resources. Software metrics are used for software fault prediction due to their close relationship with software faults. Thanks to their non-linear fitting ability, neural networks are increasingly used in prediction models. We first filter the metric set of the embedded software by statistical methods to reduce the dimensionality of the model input. Then we build a back propagation neural network with a simple structure but good performance and apply it to two practical embedded software projects. The verification results show that the model has a good ability to predict software faults.
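A minimal back-propagation network of the kind the paper applies, sketched in plain NumPy with a hypothetical metric set (the paper's own metrics, topology, and training data are not reproduced here): one hidden layer maps module metrics to a fault-proneness probability.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 4))                       # e.g. LOC, complexity, churn, fan-in (hypothetical)
y = ((X[:, 0] + 2 * X[:, 1]) > 1.4).astype(float).reshape(-1, 1)  # synthetic labels

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 0.5
for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                   # forward pass
    p = sigmoid(h @ W2 + b2)
    grad_out = (p - y) / len(X)                # dLoss/dlogits for cross-entropy loss
    grad_h = grad_out @ W2.T * h * (1 - h)     # back-propagate through the hidden layer
    W2 -= lr * h.T @ grad_out;  b2 -= lr * grad_out.sum(0)
    W1 -= lr * X.T @ grad_h;    b1 -= lr * grad_h.sum(0)

print(f"training accuracy: {((p > 0.5) == y).mean():.2%}")
```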

2018-03-19
Popov, P..  2017.  Models of Reliability of Fault-Tolerant Software Under Cyber-Attacks. 2017 IEEE 28th International Symposium on Software Reliability Engineering (ISSRE). :228–239.

This paper offers a new approach to modelling the effect of cyber-attacks on reliability of software used in industrial control applications. The model is based on the view that successful cyber-attacks introduce failure regions, which are not present in non-compromised software. The model is then extended to cover a fault tolerant architecture, such as the 1-out-of-2 software, popular for building industrial protection systems. The model is used to study the effectiveness of software maintenance policies such as patching and "cleansing" ("proactive recovery") under different adversary models ranging from independent attacks to sophisticated synchronized attacks on the channels. We demonstrate that the effect of attacks on reliability of diverse software significantly depends on the adversary model. Under synchronized attacks system reliability may be more than an order of magnitude worse than under independent attacks on the channels. These findings, although not surprising, highlight the importance of using an adequate adversary model in the assessment of how effective various cyber-security controls are.
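A toy numeric illustration (not the paper's model) of why the adversary model matters for 1-out-of-2 software: the system fails only when both channels land in a failure region on the same demand, so a synchronized attack that triggers the injected failure regions in both channels together dominates the independent-failure term.

```python
p_a, p_b = 0.01, 0.01              # per-demand failure probability of each channel

# Independent attacks: the channels fail (approximately) independently.
p_sys_independent = p_a * p_b      # 1e-4

# Synchronized attack: with probability p_sync the attacker triggers the
# injected failure regions in BOTH channels on the same demand.
p_sync = 0.005
p_sys_synchronized = p_sync + (1 - p_sync) * p_a * p_b

print(f"independent:  {p_sys_independent:.2e}")
print(f"synchronized: {p_sys_synchronized:.2e}  "
      f"({p_sys_synchronized / p_sys_independent:.0f}x worse)")
# Output shows roughly 50x degradation, consistent with the abstract's
# "more than an order of magnitude" observation.
```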

2018-06-07
Alazzawe, A., Kant, K..  2017.  Slice Swarms for HPC Application Resilience. 2017 Fifth International Symposium on Computing and Networking (CANDAR). :1–10.

Resilience in High Performance Computing (HPC) is a constraining factor for bringing applications to the upcoming exascale systems. Resilience techniques must be able to scale to handle the increasing number of expected errors in an energy efficient manner. Since the purpose of running applications on HPC systems is to perform large scale computations as quickly as possible, resilience methods should not add a large delay to the time to completion of the application. In this paper we introduce a novel technique to detect and recover from transient errors in HPC applications. One of the features of our technique is that the energy budget allocated to resilience can be adjusted depending on the operator's resilience needs. For example, on synthetic data, the technique can detect about 50% of transient errors while only using 20% of the dynamic energy required for running the application. For a 60% energy budget, an application that uses 10k cores and takes 128 hours to run will only require 10% longer to complete.

Marques, J., Andrade, J., Falcao, G..  2017.  Unreliable memory operation on a convolutional neural network processor. 2017 IEEE International Workshop on Signal Processing Systems (SiPS). :1–6.

The evolution of convolutional neural networks (CNNs) into more complex forms of organization, with additional layers, larger convolutions and increasing connections, established the state-of-the-art in terms of accuracy errors for detection and classification challenges in images. Moreover, as they evolved to a point where Gigabytes of memory are required for their operation, we have reached a stage where it becomes fundamental to understand how their inference capabilities can be impaired if data elements somehow become corrupted in memory. This paper introduces fault-injection in these systems by simulating failing bit-cells in hardware memories brought on by relaxing the 100% reliable operation assumption. We analyze the behavior of these networks calculating inference under severe fault-injection rates and apply fault mitigation strategies to improve the CNNs' resilience. For the MNIST dataset, we show that 8x less memory is required for the feature maps memory space, and that in sub-100% reliable operation, fault-injection rates up to 10^-1 (with most significant bit protection) incur only a 1% error probability degradation. Furthermore, considering the offload of the feature maps memory to an embedded dynamic RAM (eDRAM) system, using technology nodes from 65 down to 28 nm, power efficiency improvements of 73% to 80% can be obtained.
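A sketch of the fault-injection idea under stated assumptions (8-bit feature maps, a uniform bit-cell failure rate, and optional most-significant-bit protection; all values illustrative rather than the paper's setup):

```python
import numpy as np

def inject_faults(fmap_u8, rate, protect_msb=True):
    """Flip each eligible bit of an 8-bit feature map independently with prob. `rate`."""
    rng = np.random.default_rng(42)
    bits = 7 if protect_msb else 8                 # MSB (bit 7) spared if protected
    flips = rng.random((*fmap_u8.shape, bits)) < rate
    out = fmap_u8.copy()
    for b in range(bits):
        out ^= (flips[..., b].astype(np.uint8) << b)
    return out

fmap = np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)
for rate in (1e-3, 1e-2, 1e-1):
    faulty = inject_faults(fmap, rate, protect_msb=True)
    err = np.abs(faulty.astype(int) - fmap.astype(int)).mean()
    print(f"rate={rate:.0e}  mean abs error per activation: {err:.2f}")
```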

2018-02-06
Resch, S., Paulitsch, M..  2017.  Using TLA+ in the Development of a Safety-Critical Fault-Tolerant Middleware. 2017 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). :146–152.

Creating and implementing fault-tolerant distributed algorithms is a challenging task in highly safety-critical industries. Using formal methods supports design and development of complex algorithms. However, formal methods are often perceived as an unjustifiable overhead. This paper presents the experience and insights when using TLA+ and PlusCal to model and develop fault-tolerant and safety-critical modules for TAS Control Platform, a platform for railway control applications up to safety integrity level (SIL) 4. We show how formal methods helped us improve the correctness of the algorithms, improved development efficiency and how part of the gap between model and implementation has been closed by translation to C code. Additionally, we describe how we gained trust in the formal model and tools by following a specific design process called property-driven design, which also implicitly addresses software quality metrics such as code coverage metrics.

2018-04-04
Majumder, R., Som, S., Gupta, R..  2017.  Vulnerability prediction through self-learning model. 2017 International Conference on Infocom Technologies and Unmanned Systems (Trends and Future Directions) (ICTUS). :400–402.

Vulnerability is a buzzword of modern times and among the most important concepts related to software and operating systems. Because software is continually developed with loopholes and incompleteness introduced in the development phase, a vulnerability always remains that can surface at any time. Detecting a vulnerability is one thing; predicting its occurrence over the course of time is another. If we learn the vulnerabilities of any software over the course of time, this acts as an active alarm for developers to build sound and improved software the second time. The proposal describes an implementation of this idea using an artificial neural network, where different data sets are given as input for further analysis toward successful results. While models already exist for studying the vulnerabilities in software and networks, this proposal, in addition to the current work, throws light on the predictability of vulnerabilities over the course of time.

2017-12-12
August, M. A., Diallo, M. H., Graves, C. T., Slayback, S. M., Glasser, D..  2017.  AnomalyDetect: Anomaly Detection for Preserving Availability of Virtualized Cloud Services. 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems (FAS*W). :334–340.

In this paper, we present AnomalyDetect, an approach for detecting anomalies in cloud services. A cloud service consists of a set of interacting applications/processes running on one or more interconnected virtual machines. AnomalyDetect uses the Kalman Filter as the basis for predicting the states of virtual machines running cloud services. It uses the cloud service's virtual machine historical data to forecast potential anomalies. AnomalyDetect has been integrated with the AutoMigrate framework and serves as the means for detecting anomalies to automatically trigger live migration of cloud services to preserve their availability. AutoMigrate is a framework for developing intelligent systems that can monitor and migrate cloud services to maximize their availability in case of cloud disruption. We conducted a number of experiments to analyze the performance of the proposed AnomalyDetect approach. The experimental results highlight the feasibility of AnomalyDetect as an approach to autonomic cloud availability.
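A hedged sketch of the Kalman-filter idea under simplifying assumptions (a one-dimensional random-walk state model and illustrative parameters, not AutoMigrate's actual configuration): an observation whose innovation falls far outside the predicted uncertainty is flagged as an anomaly.

```python
import numpy as np

def kalman_anomalies(z, q=1e-4, r=0.05**2, nsigma=4.0):
    """Flag observations whose innovation exceeds nsigma * predicted std."""
    x, p = z[0], 1.0                      # state estimate and its variance
    flags = []
    for zk in z[1:]:
        p = p + q                         # predict (random-walk process model)
        s = p + r                         # innovation variance
        innov = zk - x
        flags.append(abs(innov) > nsigma * np.sqrt(s))
        k = p / s                         # Kalman gain
        x = x + k * innov                 # update state estimate
        p = (1 - k) * p
    return np.array(flags)

rng = np.random.default_rng(3)
cpu = 0.40 + 0.02 * rng.standard_normal(300)   # VM CPU utilization around 40%
cpu[200] = 0.95                                # sudden spike: candidate anomaly
flags = kalman_anomalies(cpu)
print("anomalous samples at:", np.flatnonzero(flags) + 1)   # -> [200]
# In the AnomalyDetect setting, such a flag would trigger live migration.
```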

Pacheco, J., Zhu, X., Badr, Y., Hariri, S..  2017.  Enabling Risk Management for Smart Infrastructures with an Anomaly Behavior Analysis Intrusion Detection System. 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems (FAS*W). :324–328.

The Internet of Things (IoT) connects not only computers and mobile devices, but it also interconnects smart buildings, homes, and cities, as well as electrical grids, gas and water networks, automobiles, airplanes, etc. However, IoT applications introduce grand security challenges due to the increase in the attack surface. Current security approaches do not handle cybersecurity from a holistic point of view; hence a systematic cybersecurity mechanism needs to be adopted when designing IoT-based applications. In this work, we present a risk management framework to deploy secure IoT-based applications for smart infrastructures at design time and at runtime. At design time, we propose a risk management method that is appropriate for smart infrastructures. At runtime, our framework relies on the Anomaly Behavior Analysis (ABA) methodology, enabled by the Autonomic Computing paradigm, and an intrusion detection system to detect any threat that can compromise IoT infrastructures. Our preliminary experimental results show that our framework can be used to detect threats and protect IoT premises and services.

2017-11-27
Jyotiyana, D., Saxena, V. P..  2016.  Fault attack for scalar multiplication over finite field (E(Fq)) on Elliptic Curve Digital Signature Algorithm. 2016 International Conference on Recent Advances and Innovations in Engineering (ICRAIE). :1–4.

Elliptic curve cryptosystems are highly susceptible to physical attacks. This paper aims at correctly implementing the fault injection attack against the Elliptic Curve Digital Signature Algorithm. More specifically, the proposed algorithm concerns a fault attack implemented to sufficiently alter the signature against a vigilant periodic sequence algorithm that supports the efficient speed-up and security perspectives with the most prominent and well-known scalar multiplication algorithm for ECDSA. The purpose is to properly inject the attack and determine whether any probable countermeasure threatening the pseudocode is identified by the attack model according to the predefined methodologies. We show the results of our experiment with bits acquired from the targeted implementation to determine the reliability of our attack.
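A toy illustration of the general fault model on a textbook curve (y^2 = x^3 + 2x + 2 over F_17, group order 19; not a real parameter set and not the paper's exact attack): flipping one bit of the secret scalar k during double-and-add yields a predictably offset point, and comparing the faulty output against the correct one leaks that bit of k.

```python
p, a = 17, 2                       # curve y^2 = x^3 + 2x + 2 over F_17
G = (5, 1)                         # generator of a subgroup of order 19

def add(P, Q):
    """Affine point addition, with None as the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def neg(P):
    return None if P is None else (P[0], (-P[1]) % p)

def scalar_mult(k, P, fault_bit=None):
    if fault_bit is not None:
        k ^= 1 << fault_bit                       # injected fault: flip one bit of k
    R = None
    for bit in bin(k)[2:]:                        # left-to-right double-and-add
        R = add(R, R)
        if bit == "1":
            R = add(R, P)
    return R

k = 13                                            # secret scalar, 0b1101
correct = scalar_mult(k, G)
recovered = 0
for b in range(4):
    faulty = scalar_mult(k, G, fault_bit=b)
    step = scalar_mult(1 << b, G)
    if faulty == add(correct, neg(step)):         # flip cleared bit b, so it was 1
        recovered |= 1 << b
assert recovered == k
print(f"recovered k = {recovered} from single-bit faults")
```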

2017-12-28
Duan, S., Li, Y., Levitt, K..  2016.  Cost sensitive moving target consensus. 2016 IEEE 15th International Symposium on Network Computing and Applications (NCA). :272–281.

Consensus is a fundamental approach to implementing fault-tolerant services through replication. It is well known that there exists a tradeoff between the cost and the resilience. For instance, Crash Fault Tolerant (CFT) protocols have a low cost but can only handle crash failures while Byzantine Fault Tolerant (BFT) protocols handle arbitrary failures but have a higher cost. Hybrid protocols enjoy the benefits of both high performance without failures and high resiliency under failures by switching among different subprotocols. However, it is challenging to determine which subprotocols should be used. We propose a moving target approach to switch among protocols according to the existing system and network vulnerability. At the core of our approach is a formalized cost model that evaluates the vulnerability and performance of consensus protocols based on real-time Intrusion Detection System (IDS) signals. Based on the evaluation results, we demonstrate that a safe, cheap, and unpredictable protocol is always used and a high IDS error rate can be tolerated.
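A sketch of the moving-target idea with hypothetical thresholds and costs (the paper's formalized cost model is richer): pick the cheapest consensus protocol whose resilience matches the vulnerability level currently estimated from IDS signals, discounting alerts by the IDS error rate so a noisy IDS does not pin the system in the expensive protocol.

```python
from dataclasses import dataclass

@dataclass
class Protocol:
    name: str
    tolerates_byzantine: bool
    relative_cost: float          # e.g. message complexity / latency factor (illustrative)

PROTOCOLS = [
    Protocol("CFT (Paxos-like)", tolerates_byzantine=False, relative_cost=1.0),
    Protocol("BFT (PBFT-like)",  tolerates_byzantine=True,  relative_cost=3.0),
]

def choose_protocol(ids_alert_rate: float, ids_error_rate: float,
                    alert_threshold: float = 0.05) -> Protocol:
    """Switch to a byzantine-tolerant protocol when the IDS suggests compromise."""
    effective = ids_alert_rate * (1.0 - ids_error_rate)
    candidates = [proto for proto in PROTOCOLS
                  if proto.tolerates_byzantine or effective < alert_threshold]
    return min(candidates, key=lambda proto: proto.relative_cost)

print(choose_protocol(ids_alert_rate=0.00, ids_error_rate=0.2).name)  # cheap CFT
print(choose_protocol(ids_alert_rate=0.30, ids_error_rate=0.2).name)  # resilient BFT
```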