Biblio

Found 12046 results

Filters: Keyword is Resiliency
2018-03-19
Heckman, M. R., Schell, R. R., Reed, E. E..  2015.  A Multi-Level Secure File Sharing Server and Its Application to a Multi-Level Secure Cloud. MILCOM 2015 - 2015 IEEE Military Communications Conference. :1224–1229.
Contemporary cloud environments are built on low-assurance components, so they cannot provide a high level of assurance about the isolation and protection of information. A "multi-level" secure cloud environment thus typically consists of multiple, isolated clouds, each of which handles data of only one security level. Not only are such environments duplicative and costly, but data "sharing" must also be implemented by massive, wasteful copying of data from low-level domains to high-level domains. The requirements for certifiable, scalable, multi-level cloud security are threefold: 1) to have trusted, high-assurance components available for use in creating a multi-level secure cloud environment; 2) to design a cloud architecture that efficiently uses the high-assurance components in a scalable way; and 3) to compose the secure components within the scalable architecture while still verifiably maintaining the system security properties. This paper introduces a trusted, high-assurance file server and architecture that satisfies all three requirements. The file server is built on mature technology that was previously certified and deployed across domains from TS/SCI to Unclassified and that supports high-performance, low-to-high and high-to-low file sharing with verifiable security.
2022-12-01
Kao, Chia-Nan, Chang, Yung-Cheng, Huang, Nen-Fu, Salim S, I, Liao, I.-Ju, Liu, Rong-Tai, Hung, Hsien-Wei.  2015.  A predictive zero-day network defense using long-term port-scan recording. 2015 IEEE Conference on Communications and Network Security (CNS). :695–696.
A zero-day attack is a critical network attack. The zero-day attack period (ZDAP) is the period from the release of a malware/exploit until a patch becomes available. IDS/IPS cannot effectively block zero-day attacks because they generally rely on pattern-based signatures. This paper proposes a Prophetic Defender (PD) by which the ZDAP can be minimized. Prior to an actual attack, hackers scan networks to identify hosts with vulnerable ports. If this port scanning can be detected early, zero-day attacks become detectable. The PD architecture makes use of a honeypot-based pseudo server deployed to detect malicious port scans. We operated a port-scanning honeypot for six years, from 2009 to 2015. Analysis of the six-year port-scanning log data shows that PD is effective for detecting and blocking zero-day attacks; the block rate of the proposed architecture is 98.5%.
2017-05-18
Ahsan, Muhammad, Meter, Rodney Van, Kim, Jungsang.  2015.  Designing a Million-Qubit Quantum Computer Using a Resource Performance Simulator. J. Emerg. Technol. Comput. Syst.. 12:39:1–39:25.

The optimal design of a fault-tolerant quantum computer involves finding an appropriate balance between the burden of large-scale integration of noisy components and the load of improving the reliability of hardware technology. This balance can be evaluated by quantitatively modeling the execution of quantum logic operations on realistic quantum hardware with limited computational resources. In this work, we report a complete performance simulation software tool capable of (1) searching the hardware design space by varying resource architecture and technology parameters, (2) synthesizing and scheduling a fault-tolerant quantum algorithm within the hardware constraints, (3) quantifying performance metrics such as the execution time and the failure probability of the algorithm, and (4) analyzing the breakdown of these metrics to highlight the performance bottlenecks and visualizing resource utilization to evaluate the adequacy of the chosen design. Using this tool, we investigate a vast design space for implementing key building blocks of Shor’s algorithm to factor a 1,024-bit number with a baseline budget of 1.5 million qubits. We show that a trapped-ion quantum computer designed with twice as many qubits and one-tenth of the baseline infidelity of the communication channel can factor a 2,048-bit integer in less than 5 months.

2020-01-20
Clark, Shane S., Paulos, Aaron, Benyo, Brett, Pal, Partha, Schantz, Richard.  2015.  Empirical Evaluation of the A3 Environment: Evaluating Defenses Against Zero-Day Attacks. 2015 10th International Conference on Availability, Reliability and Security. :80–89.

A3 is an execution management environment that aims to make network-facing applications and services resilient against zero-day attacks. A3 recently underwent two adversarial evaluations of its defensive capabilities. In one, A3 defended an App Store used in a Capture the Flag (CTF) tournament, and in the other, a tactically relevant network service in a red team exercise. This paper describes the A3 defensive technologies evaluated, the evaluation results, and the broader lessons learned about evaluations for technologies that seek to protect critical systems from zero-day attacks.

2017-02-27
Lokesh, M. R., Kumaraswamy, Y. S..  2015.  Healing process towards resiliency in cyber-physical system: A modified danger theory based artificial immune recognization2 algorithm approach. 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS). :226–232.

The healing process plays a major role in developing resiliency in cyber-physical systems, where the environment is diverse in nature. The cyber-physical system is modelled with a multi-agent paradigm and a biologically inspired, Danger Theory based Artificial Immune Recognization2 algorithm methodology for developing the healing process. The proposed methodology is implemented in a simulation environment, and the convergence rates achieved in reaching an accurate healing process towards resiliency in the cyber-physical system environment are reported.

2018-07-06
Zhang, F., Chan, P. P. K., Tang, T. Q..  2015.  L-GEM based robust learning against poisoning attack. 2015 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR). :175–178.

A poisoning attack, in which an adversary misleads the learning process by manipulating its training set, significantly affects the performance of classifiers in security applications. This paper proposes a robust learning method which reduces the influence of attack samples on learning. The sensitivity, defined as the fluctuation of the output under small perturbations of the input, in the Localized Generalization Error Model (L-GEM) is measured for each training sample. The classifier's output on attack samples may be sensitive and inaccurate since these samples differ from other untainted samples. An importance score is assigned to each sample according to its localized generalization error bound. The classifier is trained using a new training set obtained by resampling the samples according to their importance scores. An RBFNN is applied as the classifier in the experimental evaluation. The proposed model outperforms the traditional one under the well-known label-flip poisoning attacks, including nearest-first and farthest-first flip attacks.
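
The resampling step described above can be illustrated with a short sketch. The following minimal Python example is an assumption-laden illustration, not the authors' L-GEM formulation: the mapping from sensitivity to an importance score and the score-to-probability normalization are placeholders.

import numpy as np

def importance_resample(X, y, scores, rng=None, n_samples=None):
    """Resample a training set so that samples with higher importance
    scores (e.g., lower sensitivity / tighter error bound) are kept
    more often. The score-to-probability mapping is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    n_samples = len(X) if n_samples is None else n_samples
    probs = scores / scores.sum()            # normalize scores to sampling probabilities
    idx = rng.choice(len(X), size=n_samples, replace=True, p=probs)
    return X[idx], y[idx]

# Hypothetical usage: down-weight suspicious (highly sensitive) samples.
X = np.random.rand(100, 5)
y = np.random.randint(0, 2, size=100)
sensitivity = np.random.rand(100)            # placeholder for the L-GEM sensitivity
scores = 1.0 / (1e-6 + sensitivity)          # assumed: less sensitive -> higher importance
X_r, y_r = importance_resample(X, y, scores)

In the paper's setting, the scores would come from the localized generalization error bound rather than the placeholder used here.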

2020-03-09
Knirsch, Fabian, Engel, Dominik, Frincu, Marc, Prasanna, Viktor.  2015.  Model-Based Assessment for Balancing Privacy Requirements and Operational Capabilities in the Smart Grid. 2015 IEEE Power Energy Society Innovative Smart Grid Technologies Conference (ISGT). :1–5.

The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among participating parties. Heterogeneous systems therefore have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed to achieve that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy; in addition, certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase in privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by finding an algorithm that determines the optimum balance by forward-mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, where feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.

2017-05-18
Das, Subhasis, Aamodt, Tor M., Dally, William J..  2015.  Reuse Distance-Based Probabilistic Cache Replacement. ACM Trans. Archit. Code Optim.. 12:33:1–33:22.

This article proposes the Probabilistic Replacement Policy (PRP), a novel replacement policy that evicts the line with the minimum estimated hit probability under optimal replacement, instead of the line with the maximum expected reuse distance. The latter is optimal under the independent reference model of programs, which does not hold for last-level caches (LLC). PRP requires 7% and 2% metadata overheads in the cache and DRAM, respectively. Using a sampling scheme makes the DRAM overhead negligible, with minimal performance impact. With detailed overhead modeling and equal cache areas, PRP outperforms SHiP, a state-of-the-art LLC replacement algorithm, by 4% for memory-intensive SPEC CPU2006 benchmarks.
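
As a rough illustration of evicting the line with the lowest estimated hit probability, the sketch below uses a simple reuse-distance histogram as the estimator. That estimator, and the bookkeeping around it, are assumptions for illustration; they are not the paper's PRP implementation or its metadata layout.

from collections import defaultdict

class ProbabilisticCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}                      # address -> age (accesses since last touch)
        self.reuse_hist = defaultdict(int)   # observed reuse distances

    def _hit_probability(self, age):
        # P(hit) ~ fraction of observed reuses occurring at a distance greater than age
        total = sum(self.reuse_hist.values()) or 1
        future = sum(c for d, c in self.reuse_hist.items() if d > age)
        return future / total

    def access(self, addr):
        for a in self.lines:
            self.lines[a] += 1               # age every resident line
        if addr in self.lines:
            self.reuse_hist[self.lines[addr]] += 1
            self.lines[addr] = 0
            return True                      # hit
        if len(self.lines) >= self.capacity:
            # evict the line with the lowest estimated hit probability
            victim = min(self.lines, key=lambda a: self._hit_probability(self.lines[a]))
            del self.lines[victim]
        self.lines[addr] = 0
        return False                         # miss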

Tan, Li, Chen, Zizhong, Song, Shuaiwen Leon.  2015.  Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology. ACM Trans. Archit. Code Optim.. 12:35:1–35:27.

The ever-growing performance of supercomputers brings demanding requirements for energy efficiency and resilience, due to the rapidly expanding size and duration of use of large-scale computing systems. Many application- and architecture-dependent parameters that individually determine energy efficiency and resilience have causal effects on each other, which directly affect the trade-offs among performance, energy efficiency, and resilience at scale. To enable high-efficiency management of today's large-scale High Performance Computing (HPC) systems, the entangled effects among performance, energy efficiency, and resilience must therefore be understood quantitatively. While previous work has focused on exploring energy-saving and resilience-enhancing opportunities separately, little has been done to theoretically and empirically investigate the interplay between energy efficiency and resilience at scale. In this article, by extending Amdahl’s Law and the Karp-Flatt Metric to take resilience into consideration, we quantitatively model the integrated energy efficiency in terms of performance per Watt and showcase the trade-offs among typical HPC parameters, such as the number of cores, frequency/voltage, and failure rates. Experimental results for a wide spectrum of HPC benchmarks on two HPC systems show that the proposed models are accurate in extrapolating resilience-aware performance and energy efficiency, and capable of capturing the interplay among various energy-saving and resilience factors. Moreover, the models can help find the optimal HPC configuration for the highest integrated energy efficiency in the presence of failures and applied resilience techniques.
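
For reference, the two standard results the abstract says are extended are Amdahl's Law and the Karp-Flatt metric; the paper's resilience-aware, performance-per-Watt extension is not reproduced here.

\[
S(N) = \frac{1}{(1-p) + p/N},
\qquad
e = \frac{1/\psi - 1/N}{1 - 1/N}
\]

where p is the parallelizable fraction of the work, N is the number of cores, \psi is the measured speedup on N cores, S(N) is the Amdahl speedup bound, and e is the experimentally determined serial fraction (Karp-Flatt).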

2018-07-06
Mozaffari-Kermani, M., Sur-Kolay, S., Raghunathan, A., Jha, N. K..  2015.  Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare. IEEE Journal of Biomedical and Health Informatics. 19:1893–1905.

Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus, any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. Hindrance of a correct diagnosis may have life-threatening consequences and could cause distrust. On the other hand, not only may a false diagnosis prompt users to distrust the machine-learning algorithm and even abandon the entire system, but such a false-positive classification may also cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class) or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although with lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.
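
The countermeasure direction mentioned at the end (tracking deviations in accuracy metrics) can be sketched generically. The following Python fragment is a hypothetical illustration, not the paper's benchmarked defense; the model_factory interface and the tolerance value are assumptions.

import numpy as np

def accuracy(model, X, y):
    return float(np.mean(model.predict(X) == y))

def poisoning_suspected(model_factory, X_train, y_train, X_new, y_new,
                        X_val, y_val, tol=0.05):
    """Retrain with and without the new batch of data; a large drop in
    held-out accuracy is treated as a sign of possible poisoning.
    'tol' is an assumed threshold, not a calibrated value."""
    base = model_factory()
    base.fit(X_train, y_train)
    base_acc = accuracy(base, X_val, y_val)

    aug = model_factory()
    aug.fit(np.vstack([X_train, X_new]), np.concatenate([y_train, y_new]))
    aug_acc = accuracy(aug, X_val, y_val)

    return (base_acc - aug_acc) > tol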

2023-03-31
Chibba, Michelle, Cavoukian, Ann.  2015.  Privacy, consumer trust and big data: Privacy by design and the 3 C'S. 2015 ITU Kaleidoscope: Trust in the Information Society (K-2015). :1–5.
The growth of ICTs and the resulting data explosion could pave the way for surveillance of our lives at an unimaginable scale and diminish our democratic freedoms. Consumer mistrust of organizations' ability to safeguard their data is at an all-time high, and this has negative implications for Big Data. The timing is right to be proactive about designing privacy into technologies, business processes, and networked infrastructures. Inclusiveness of all objectives can be achieved through consultation, co-operation, and collaboration (the 3 C's). If privacy is the default, without diminishing functionality or other legitimate interests, then trust will be preserved and innovation will flourish.
2017-02-27
Sun, H., Luo, H., Wu, T. Y., Obaidat, M. S..  2015.  A PSNR-Controllable Data Hiding Algorithm Based on LSBs Substitution. 2015 IEEE Global Communications Conference (GLOBECOM). :1–7.

More and more systems use mobile devices to perform sensing tasks, but these increase the risk of leaking personal data and compromising privacy. Data hiding is an important technique for information security. Even though many data hiding algorithms have worked on providing more hiding capacity or higher PSNR, few algorithms can control PSNR effectively while ensuring hiding capacity. In this paper, we first propose PSNR-Controllable Data Hiding (PCDH), a novel encoding scheme for data hiding with controllable PSNR based on LSB substitution. In PCDH, we use the remainder algorithm to calculate the hidden information and hide the secret information in the last x LSBs of every pixel. Theoretical proof shows that this method can control the deviation of the stego image from the cover image, and control PSNR by adjusting parameters in the remainder calculation. We then design encoding and decoding algorithms with low computational complexity. Experimental results show that PCDH can control the PSNR within a given range while ensuring high hiding capacity. In addition, it resists some steganalysis techniques well. Compared to other algorithms, PCDH achieves a better tradeoff among PSNR, hiding capacity, and computational complexity.
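
A minimal sketch of plain LSB substitution may help make the setting concrete. The Python below hides a bit string in the last k LSBs of each pixel and reports PSNR; it is a generic illustration under stated assumptions, not PCDH's remainder-based encoding or its PSNR-control mechanism.

import numpy as np

def embed_lsb(cover, bits, k=2):
    """Hide a bit string in the k least-significant bits of each pixel.
    Generic LSB substitution for illustration only."""
    flat = cover.flatten()
    mask = (1 << k) - 1
    chunks = [bits[i:i + k] for i in range(0, len(bits), k)]
    assert len(chunks) <= flat.size, "payload too large for cover image"
    for i, chunk in enumerate(chunks):
        value = int(chunk.ljust(k, "0"), 2)
        flat[i] = (int(flat[i]) & ~mask) | value   # replace the k low bits
    return flat.reshape(cover.shape)

def psnr(cover, stego):
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Hypothetical usage on a random 8-bit grayscale "image".
cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_lsb(cover, "0100100001101001", k=2)   # payload: "Hi" as bits
print("PSNR (dB):", psnr(cover, stego))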

2021-02-08
Xu, P., Miao, Q., Liu, T., Chen, X..  2015.  Multi-direction Edge Detection Operator. 2015 11th International Conference on Computational Intelligence and Security (CIS). :187–190.

Due to noise in images, the edges extracted from noisy images by traditional operators are often discontinuous and inaccurate. In order to solve these problems, this paper proposes a multi-direction edge detection operator to detect edges in noisy images. The new operator is designed by introducing the shear transformation into the traditional operator. On the one hand, the shear transformation provides more favorable treatment of directions, which allows the new operator to detect edges in different directions and overcome the directional limitation of the traditional operator. On the other hand, all the single-pixel edge images in different directions can be fused, so that the edge information from different directions complements each other. The experimental results indicate that the new operator is superior to the traditional ones in terms of the effectiveness of edge detection and the ability to reject noise.

2020-03-09
Xie, Yuanpeng, Jiang, Yixin, Liao, Runfa, Wen, Hong, Meng, Jiaxiao, Guo, Xiaobin, Xu, Aidong, Guan, Zewu.  2015.  User Privacy Protection for Cloud Computing Based Smart Grid. 2015 IEEE/CIC International Conference on Communications in China - Workshops (CIC/ICCC). :7–11.

The smart grid aims to improve the efficiency, reliability, and safety of the electric system via modern communication systems, and it is necessary to utilize cloud computing to process and store the resulting data. In fact, integrating the smart grid with cloud computing is a promising paradigm. However, access to the cloud computing system also brings data security issues. This paper focuses on the protection of user privacy in a smart meter system based on data combination privacy and a trusted third party. The paper discusses the security issues for the smart grid communication system and for cloud computing respectively, and illustrates the security issues arising from their integration. We introduce data chunk storage and chunk relationship confusion to protect user privacy, and we also propose a chunk information list system for inserting and searching data.

2021-04-08
Venkitasubramaniam, P., Yao, J., Pradhan, P..  2015.  Information-Theoretic Security in Stochastic Control Systems. Proceedings of the IEEE. 103:1914–1931.
Infrastructural systems such as the electricity grid, healthcare, and transportation networks today rely increasingly on the joint functioning of networked information systems and physical components, in short, on cyber-physical architectures. Despite tremendous advances in cryptography, physical-layer security and authentication, information attacks, both passive such as eavesdropping, and active such as unauthorized data injection, continue to thwart the reliable functioning of networked systems. In systems with joint cyber-physical functionality, the ability of an adversary to monitor transmitted information or introduce false information can lead to sensitive user data being leaked or result in critical damages to the underlying physical system. This paper investigates two broad challenges in information security in cyber-physical systems (CPSs): preventing retrieval of internal physical system information through monitored external cyber flows, and limiting the modification of physical system functioning through compromised cyber flows. A rigorous analytical framework grounded on information-theoretic security is developed to study these challenges in a general stochastic control system abstraction, a theoretical building block for CPSs, with the objectives of quantifying the fundamental tradeoffs between information security and physical system performance, and through the process, designing provably secure controller policies. Recent results are presented that establish the theoretical basis for the framework, in addition to practical applications in timing analysis of anonymous systems, and demand response systems in a smart electricity grid.
Tyagi, H., Vardy, A..  2015.  Universal Hashing for Information-Theoretic Security. Proceedings of the IEEE. 103:1781–1795.
The information-theoretic approach to security entails harnessing the correlated randomness available in nature to establish security. It uses tools from information theory and coding and yields provable security, even against an adversary with unbounded computational power. However, the feasibility of this approach in practice depends on the development of efficiently implementable schemes. In this paper, we review a special class of practical schemes for information-theoretic security that are based on 2-universal hash families. Specific cases of secret key agreement and wiretap coding are considered, and general themes are identified. The scheme presented for wiretap coding is modular and can be implemented easily by including an extra preprocessing layer over the existing transmission codes.
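
As a concrete example of the kind of hash family the abstract refers to, the Carter-Wegman construction h_{a,b}(x) = ((a*x + b) mod p) mod m is 2-universal for prime p. The Python sketch below only illustrates drawing and applying one member of that family in a privacy-amplification-style role; the parameters and usage are assumptions, not the specific schemes reviewed in the paper.

import secrets

def make_universal_hash(p, m):
    """Draw one member of the Carter-Wegman 2-universal family
    h_{a,b}(x) = ((a*x + b) mod p) mod m, with prime p larger than any input."""
    a = 1 + secrets.randbelow(p - 1)   # a in [1, p-1]
    b = secrets.randbelow(p)           # b in [0, p-1]
    return lambda x: ((a * x + b) % p) % m

# Hypothetical privacy-amplification style use: compress a shared value about
# which an eavesdropper has partial knowledge into a shorter key.
p = (1 << 61) - 1                      # a Mersenne prime larger than the inputs
h = make_universal_hash(p, m=1 << 16)
shared_value = 0x123456789ABC          # placeholder shared randomness
key = h(shared_value)
print(hex(key))
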
2017-05-18
Haitzer, Thomas, Navarro, Elena, Zdun, Uwe.  2015.  Architecting for Decision Making About Code Evolution. Proceedings of the 2015 European Conference on Software Architecture Workshops. :52:1–52:7.

During software evolution, it is important to evolve not only the source code, but also its architecture to prevent architecture drift and architecture erosion. This is a complex activity, especially for large software projects with multiple development teams that might be located in different countries or on different continents. To ease this kind of evolution, we have developed a domain-specific language for making decisions about the evolution. It supports the definition of architectural changes based on multiple implementation tasks that can have temporal dependencies among each other. Then, by means of a model-to-model transformation, we automatically create a constraint model that we use to generate, by means of the Alloy model analyzer, the possible alternative decisions for executing the implementation tasks. The tight integration with architecture abstractions enables architects to automatically check the changes related to an implementation task against the architecture description. This helps keep architecture and code in sync, avoiding drift and erosion.

2021-04-08
Cao, Z., Deng, H., Lu, L., Duan, X..  2014.  An information-theoretic security metric for future wireless communication systems. 2014 XXXIth URSI General Assembly and Scientific Symposium (URSI GASS). :1–4.
Quantitative analysis of security properties in wireless communication systems is an important issue; it helps us get a comprehensive view of security and can be used to compare the security performance of different systems. This paper analyzes the security of future wireless communication systems from an information-theoretic point of view and proposes an overall security metric. We demonstrate that the proposed metric is more reasonable than some existing metrics; it is highly sensitive to some basic parameters and thus helpful for fine-grained tuning of security performance.
Liu, S., Hong, Y., Viterbo, E..  2014.  On measures of information theoretic security. 2014 IEEE Information Theory Workshop (ITW 2014). :309–310.
While information-theoretic security is stronger than computational security, it has long been considered impractical. In this work, we provide new insights into the design of practical information-theoretic cryptosystems. Firstly, from a theoretical point of view, we give a brief introduction to the existing information-theoretic security criteria, such as the notions of Shannon's perfect/ideal secrecy in cryptography and the concept of strong secrecy in coding theory. Secondly, from a practical point of view, we propose the concept of ideal secrecy outage and define an outage probability. Finally, we show how such a probability can be made arbitrarily small in a practical cryptosystem.
2018-07-06
Biggio, Battista, Rieck, Konrad, Ariu, Davide, Wressnegger, Christian, Corona, Igino, Giacinto, Giorgio, Roli, Fabio.  2014.  Poisoning Behavioral Malware Clustering. Proceedings of the 2014 Workshop on Artificial Intelligent and Security Workshop. :27–36.
Clustering algorithms have become a popular tool in computer security to analyze the behavior of malware variants, identify novel malware families, and generate signatures for antivirus systems. However, the suitability of clustering algorithms for security-sensitive settings has recently been questioned by work showing that they can be significantly compromised if an attacker can exercise some control over the input data. In this paper, we revisit this problem by focusing on behavioral malware clustering approaches, and investigate whether and to what extent an attacker may be able to subvert these approaches through a careful injection of samples with poisoning behavior. To this end, we present a case study on Malheur, an open-source tool for behavioral malware clustering. Our experiments not only demonstrate that this tool is vulnerable to poisoning attacks, but also that it can be significantly compromised even if the attacker can only inject a very small percentage of attacks into the input data. As a remedy, we discuss possible countermeasures and highlight the need for more secure clustering algorithms.
2020-01-21
Azimi, Mahdi, Sami, Ashkan, Khalili, Abdullah.  2014.  A Security Test-Bed for Industrial Control Systems. Proceedings of the 1st International Workshop on Modern Software Engineering Methods for Industrial Automation. :26–31.

Industrial Control Systems (ICS) such as Supervisory Control And Data Acquisition (SCADA), Distributed Control Systems (DCS), and Distributed Automation Systems (DAS) control and monitor critical infrastructures. In recent years, the proliferation of cyber-attacks on ICS has revealed that a large number of security vulnerabilities exist in such systems. Numerous security solutions have been proposed to remove the vulnerabilities and improve the security of ICS. However, to the best of our knowledge, none of them has presented or developed a security test-bed, which is vital for evaluating the security of ICS tools and products. In this paper, a test-bed is proposed for evaluating the security of industrial applications by providing different metrics for static testing, dynamic testing, and network testing in industrial settings. Using these metrics and the results of the three tests, industrial applications can be compared with each other from a security point of view. Experimental results on several real-world applications indicate that the proposed test-bed can be successfully employed to evaluate and compare the security level of industrial applications.

2022-04-20
Qingxue, Meng, Jiajun, Lin.  2014.  The Modeling and Simulation of Vehicle Distance Control Based on Cyber-Physical System. 2014 IEEE 7th Joint International Information Technology and Artificial Intelligence Conference. :341–345.
With the advent of motorization, traffic systems have become more congested. How to make the traffic system more effective while also taking safety into account, namely by building an intelligent transportation system, has become a topic of broad interest. The vehicle distance control system studied in this paper is an important function in an intelligent transportation system. By introducing cyber-physical system (CPS) technology and setting up a system model, the vehicles maintain a preset safety distance, which helps improve the effective utilization of the traffic system and also helps avoid collisions caused by speed changes. Finally, Simulink software is used to simulate and analyze the performance of the system; the results show that the model can effectively cope with distance changes due to speed changes and ensure that the vehicles maintain a preset safety distance within a short period of time.
2021-03-04
Sun, H., Liu, L., Feng, L., Gu, Y. X..  2014.  Introducing Code Assets of a New White-Box Security Modeling Language. 2014 IEEE 38th International Computer Software and Applications Conference Workshops. :116–121.

This paper proposes a new conceptual modeling language for White-Box (WB) security analysis. In the WB security domain, an attacker may have access to the inner structure of an application or even the entire binary code. It becomes easy for attackers to inspect, reverse engineer, and tamper with the application using the information they obtain. The basis of this paper is the 14 patterns developed by a leading provider of software protection technologies and solutions. We present part of a new modeling language named i-WBS (White-Box Security) to better describe WB security problems. The essence of the WB security problem is code security, so the new modeling language focuses on code more than ever before. In this way, developers who are not security experts can easily understand what they really need to protect.

2015-04-30
Shila, D.M., Venugopal, V..  2014.  Design, implementation and security analysis of Hardware Trojan Threats in FPGA. Communications (ICC), 2014 IEEE International Conference on. :719–724.

Hardware Trojan Threats (HTTs) are stealthy components embedded inside integrated circuits (ICs) with an intention to attack and cripple the IC, similar to viruses infecting the human body. Previous efforts have focused essentially on systems being compromised using HTTs and on the effectiveness of physical parameters, including power consumption, timing variation, and utilization, for detecting HTTs. We propose a novel metric for hardware Trojan detection, coined the HTT detectability metric (HDM), that uses a weighted combination of normalized physical parameters. HTTs are identified by comparing the HDM with an optimal detection threshold; if the monitored HDM exceeds the estimated optimal detection threshold, the IC is tagged as malicious. As opposed to existing efforts, this work investigates both a system model from a designer perspective, for increasing the security of the device, and an adversary model from an attacker perspective, for exposing and exploiting the vulnerabilities in the device. Using existing Trojan implementations and Trojan taxonomy as a baseline, seven HTTs were designed and implemented on an FPGA testbed; these Trojans pose a variety of threats, ranging from sensitive information leakage and denial of service to defeating the Root of Trust (RoT). Security analysis of the implemented Trojans showed that existing detection techniques based on physical characteristics such as power consumption, timing variation, or utilization alone do not necessarily capture the existence of HTTs, and only a maximum of 57% of the designed HTTs were detected. On the other hand, 86% of the implemented Trojans were detected with HDM. We further carry out analytical studies to determine the optimal detection threshold that minimizes the sum of the false-alarm and missed-detection probabilities.
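
The HDM decision rule described above (a weighted combination of normalized physical parameters compared against a threshold) can be sketched as follows. The weights, normalization, and threshold in this Python fragment are placeholders, not the paper's calibrated values.

def hdm(measured, golden, weights):
    """measured/golden: dicts of physical parameters (e.g., power, timing,
    utilization). Returns the weighted sum of normalized deviations from
    a golden (Trojan-free) reference."""
    score = 0.0
    for name, w in weights.items():
        deviation = abs(measured[name] - golden[name]) / max(golden[name], 1e-9)
        score += w * deviation
    return score

# Assumed example values, for illustration only.
weights = {"power_mw": 0.5, "timing_ns": 0.3, "utilization": 0.2}
golden = {"power_mw": 120.0, "timing_ns": 8.4, "utilization": 0.62}
measured = {"power_mw": 131.5, "timing_ns": 8.9, "utilization": 0.66}

THRESHOLD = 0.05   # assumed optimal detection threshold
if hdm(measured, golden, weights) > THRESHOLD:
    print("IC tagged as malicious")
else:
    print("IC passes the HDM check")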