Mailloux, L. O., Span, M., Mills, R. F., Young, W.
2019.
A Top Down Approach for Eliciting Systems Security Requirements for a Notional Autonomous Space System. 2019 IEEE International Systems Conference (SysCon). :1–7.
Today's highly interconnected and technology-reliant environment places great emphasis on the need for secure cyber-physical systems. This work addresses this need by detailing a top-down systems security requirements analysis approach for understanding and eliciting security requirements for a notional space system. More specifically, the System-Theoretic Process Analysis approach for Security (STPA-Sec) is used to understand and elicit systems security requirements during the conceptual stage of development. This work employs STPA-Sec on a notional space system to detail the development of functional-level security requirements, design-level engineering considerations, and architectural-level security specifications early in the system life cycle, when the solution trade-space is largest, rather than merely examining components and adding protections during system operation, maintenance, or sustainment. Lastly, this approach employs a holistic viewpoint which aligns with the systems and software engineering processes detailed in ISO/IEC/IEEE 15288 and NIST SP 800-160 Volume 1. This work seeks to advance the science of systems security by providing insight into a viable systems security requirements analysis approach which results in traceable security, safety, and resiliency requirements that can be designed-for, built-to, and verified with confidence.
Hagan, M., Siddiqui, F., Sezer, S.
2019.
Enhancing Security and Privacy of Next-Generation Edge Computing Technologies. 2019 17th International Conference on Privacy, Security and Trust (PST). :1–5.
The advent of high-performance fog and edge computing and high-bandwidth connectivity has brought about changes to Internet-of-Things (IoT) service architectures, allowing greater quantities of high-quality information to be extracted from their environments for processing. However, recently introduced international regulations, along with heightened awareness among consumers, have strengthened requirements to ensure data security, with significant financial and reputational penalties for organisations that fail to protect customers' data. This paper proposes leveraging fog and edge computing to facilitate the processing of confidential user data, reducing the quantity and availability of raw confidential data at various levels of the IoT architecture. This ultimately reduces the attack surface; it also increases the efficiency of the architecture by distributing processing amongst nodes and transmitting only processed data. However, such an approach is vulnerable to device-level attacks. To address this issue, a proposed System Security Manager continuously monitors system resources and ensures confidential data is confined only to the parts of the device that require it. In the event of an attack, critical data can be isolated and the system informed, to prevent a breach of data confidentiality.
Ullah, S., Shetty, S., Hassanzadeh, A.
2018.
Towards Modeling Attacker’s Opportunity for Improving Cyber Resilience in Energy Delivery Systems. 2018 Resilience Week (RWS). :100–107.
Cyber resiliency of Energy Delivery Systems (EDS) is critical for secure and resilient cyber infrastructure. Defense-in-depth architecture forces attackers to conduct lateral propagation until the target is compromised. Researchers have developed techniques based on graph spectral matrices to model lateral propagation, but these techniques ignore host criticality, which is critical in EDS. In this paper, we model the attacker's opportunity by developing three criticality metrics for each host along the path to the target. The first metric captures the attacker's opportunity before they penetrate the infrastructure. The second metric measures the opportunity a host provides by allowing attackers to propagate through the network. Along with vulnerabilities, we also take into account the attributes of hosts and links within each path. We then derive a third criticality metric to reflect the information-flow dependency from each host to the target. Finally, we provide a system design for instantiating the proposed metrics in real network scenarios in EDS. We present simulation results that illustrate the effectiveness of the metrics for efficient defense deployment in EDS cyber infrastructure.
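The abstract does not give the metrics' formulas. As an illustration only, a toy path-based criticality score might combine a host's vulnerability (how easily it is penetrated), its fan-out (the propagation opportunity it offers), and its remaining distance to the target; every name and weighting below is hypothetical, not the paper's actual metric:

```python
# Hypothetical sketch of path-based host criticality scoring, assuming each
# host carries a vulnerability score in [0, 1] and the network is a directed
# graph of lateral-movement edges toward the target.

def path_criticality(graph, vuln, path):
    """Score each host on an attack path ending at the target.

    graph: dict mapping host -> set of hosts reachable from it
    vuln:  dict mapping host -> vulnerability score in [0, 1]
    path:  ordered list of hosts, last element is the target
    """
    scores = {}
    for i, host in enumerate(path):
        exposure = vuln[host]               # how easily the host is compromised
        fan_out = len(graph.get(host, ()))  # propagation opportunity it offers
        depth = len(path) - 1 - i           # remaining hops to the target
        scores[host] = exposure * (1 + fan_out) / (1 + depth)
    return scores

# Toy EDS-like topology: DMZ host -> operator HMI -> PLC (the target).
graph = {"dmz": {"hmi"}, "hmi": {"plc"}, "plc": set()}
vuln = {"dmz": 0.8, "hmi": 0.5, "plc": 0.9}
print(path_criticality(graph, vuln, ["dmz", "hmi", "plc"]))
```

Hosts closer to the target and hosts that open many onward paths score higher, which matches the abstract's intuition that defenses should be placed on the most critical hosts along attack paths.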
Feth, P., Adler, R., Schneider, D.
2018.
A Context-Aware, Confidence-Disclosing and Fail-Operational Dynamic Risk Assessment Architecture. 2018 14th European Dependable Computing Conference (EDCC). :190–194.
Future automotive systems will be highly automated and they will cooperate to optimize important system qualities and performance. Established safety assurance approaches and standards have been designed with manually controlled stand-alone systems in mind and are thus not fit to ensure safety of this next generation of systems. We argue that, given frequent dynamic changes and unknown contexts, systems need to be enabled to dynamically assess and manage their risks. In doing so, systems become resilient from a safety perspective, i.e. they are able to maintain a state of acceptable risk even when facing changes. This work presents a Dynamic Risk Assessment architecture that implements the concepts of context-awareness, confidence-disclosure and fail-operational. In particular, we demonstrate the utilization of these concepts for the calculation of automotive collision risk metrics, which are at the heart of our architecture.
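The abstract does not specify which collision risk metrics the architecture computes. As a hedged illustration of the kind of quantity a dynamic risk assessment component might evaluate at runtime, the sketch below uses time-to-collision (TTC), a standard automotive surrogate metric; the 3-second threshold is an assumption, not a value from the paper:

```python
# Illustrative only: time-to-collision (TTC) for a simple car-following
# scenario, assuming constant speeds. Not the paper's metric.

def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """Seconds until impact with the lead vehicle at constant speeds;
    returns None when the gap is not closing."""
    closing = ego_speed_mps - lead_speed_mps
    if closing <= 0:
        return None
    return gap_m / closing

def risk_level(ttc, threshold_s=3.0):
    """Assumed threshold: flag high risk when TTC falls below 3 s."""
    if ttc is None:
        return "low"
    return "high" if ttc < threshold_s else "low"

ttc = time_to_collision(20.0, 25.0, 15.0)  # 20 m gap, closing at 10 m/s
print(ttc, risk_level(ttc))                # prints: 2.0 high
```

A context-aware monitor would recompute such a metric continuously and could also disclose its confidence, e.g. by widening the risk estimate when sensor uncertainty grows.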
Anju, J., Shreelekshmi, R.
2019.
Modified Feature Descriptors to enhance Secure Content-based Image Retrieval in Cloud. 2019 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT). 1:674–680.
With the emergence of the cloud, content-based image retrieval (CBIR) in the encrypted domain gains enormous importance due to the ever-increasing need for ensuring confidentiality, authentication, integrity, and privacy of data. CBIR on outsourced encrypted images can be done by extracting features from unencrypted images and generating a searchable encrypted index based on them. Visual descriptors such as color, shape, and texture descriptors are employed for similarity search. Since the visual descriptors used to represent an image play a crucial role in retrieving the most similar results, this paper attempts to combine them. The effect of combining different visual descriptors on retrieval precision in the secure CBIR scheme proposed by Xia et al. is analyzed. Experimental results show that combining visual descriptors can significantly enhance the retrieval precision of the secure CBIR scheme.
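The abstract does not detail how the descriptors are combined. One common, minimal approach, shown here as an assumption rather than the paper's method, is to normalise each descriptor independently and concatenate the results, so that no single feature type dominates the similarity computation:

```python
# Illustrative sketch (not the paper's scheme) of combining visual
# descriptors before building a searchable index: each descriptor is
# L2-normalised, then all are concatenated into one feature vector.

import math

def l2_normalize(v):
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def combine_descriptors(*descriptors):
    """Concatenate independently normalised descriptors (e.g. a colour
    histogram, a texture descriptor, a shape descriptor)."""
    combined = []
    for d in descriptors:
        combined.extend(l2_normalize(d))
    return combined

color = [3.0, 4.0]            # toy colour histogram
texture = [1.0, 0.0, 0.0]     # toy texture descriptor
feat = combine_descriptors(color, texture)
print(feat)                   # prints: [0.6, 0.8, 1.0, 0.0, 0.0]
```

In a secure scheme like the one analysed, the combined vector (not the image) would then be protected, e.g. indexed under encryption, so the server can rank similarity without seeing image content.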
Choudhury, O., Sylla, I., Fairoza, N., Das, A.
2019.
A Blockchain Framework for Ensuring Data Quality in Multi-Organizational Clinical Trials. 2019 IEEE International Conference on Healthcare Informatics (ICHI). :1–9.
The cost and complexity of conducting multi-site clinical trials have significantly increased over time, with site monitoring, data management, and Institutional Review Board (IRB) amendments being key drivers. Trial sponsors, such as pharmaceutical companies, are also increasingly outsourcing trial management to multiple organizations. Enforcing compliance with standard operating procedures, such as preserving data privacy for human subject protection, is crucial for upholding the integrity of a study and its findings. Current efforts to ensure the quality of data collected at multiple sites and by multiple organizations lack a secure, trusted, and efficient framework for fragmented data capture. To address this challenge, we propose a novel data management infrastructure based on a permissioned blockchain with private channels, smart contracts, and distributed ledgers. We use an example multi-organizational clinical trial to design and implement a blockchain network: we generate activity-specific private channels to segregate data flow for confidentiality, write channel-specific smart contracts to enforce regulatory guidelines, monitor the immutable transaction log to detect protocol breaches, and auto-generate an audit trail. Through a comprehensive experimental study, we demonstrate that our system handles high-throughput transactions, exhibits low latency, and constitutes a trusted, scalable solution.
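The paper builds on a permissioned blockchain platform; the toy sketch below is not blockchain code, but it illustrates two of the abstract's ideas in miniature: channel membership that segregates data flow between organizations, and an append-only, hash-chained transaction log whose audit detects tampering. All class and field names are hypothetical:

```python
# Conceptual toy (not a real blockchain) showing per-activity channels
# with membership checks and a hash-chained, auditable transaction log.

import hashlib
import json

class Channel:
    def __init__(self, name, members):
        self.name = name
        self.members = set(members)
        self.log = []  # append-only transaction log

    def submit(self, org, record):
        """Channel-level access control: only members may write."""
        if org not in self.members:
            raise PermissionError(f"{org} is not a member of {self.name}")
        prev = self.log[-1]["hash"] if self.log else "0" * 64
        entry = {"org": org, "record": record, "prev": prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.log.append(entry)

    def audit(self):
        """Recompute the hash chain; any tampering breaks a link."""
        prev = "0" * 64
        for e in self.log:
            body = {k: e[k] for k in ("org", "record", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True
```

In the paper's setting, a channel per trial activity would keep, say, enrollment data visible only to the organizations involved in enrollment, while the immutable log supports breach detection and audit-trail generation.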
Shen, N., Yeh, J., Chen, C., Chen, Y., Zhang, Y.
2019.
Ensuring Query Completeness in Outsourced Database Using Order-Preserving Encryption. 2019 IEEE Intl Conf on Parallel Distributed Processing with Applications, Big Data Cloud Computing, Sustainable Computing Communications, Social Computing Networking (ISPA/BDCloud/SocialCom/SustainCom). :776–783.
Nowadays database outsourcing has become business owners' preferred option, and they benefit from its flexibility, reliability, and low cost. However, because database service providers cannot always be fully trusted and data owners no longer have direct control over their own data, how to secure the outsourced data has become a hot research topic. From the data integrity protection aspect, the client wants to make sure the data returned is correct, complete, and up-to-date. Previous research in the literature has put more effort into data correctness, while data completeness remains a challenging problem. Some existing works have tried to protect the completeness of data; unfortunately, these solutions are considered to fall short because of their high communication or computation overhead. The implementations and limitations of existing works are further discussed in this paper. From the data confidentiality protection aspect, order-preserving encryption (OPE) is a widely used encryption scheme for protecting data confidentiality. It allows the client to perform range queries and some other operations, such as GROUP BY and ORDER BY, over OPE-encrypted data. Therefore, it is worthwhile to develop a solution that allows the user to verify query completeness for an OPE-encrypted database, so that both data confidentiality and completeness are protected. Motivated by this, we propose a new data-completeness protecting scheme that inserts fake tuples into databases. Both the real and fake tuples are OPE encrypted, and thus the cloud server cannot distinguish among them. While our new scheme is much more efficient than all existing approaches, the level of security protection remains the same.
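The paper's OPE construction is not reproduced here. As a minimal sketch of the fake-tuple idea, with a keyed hash standing in for the paper's deterministic generation step, the code below shows how a data owner can re-derive which fakes must appear in any range, without storing them, and flag an incomplete query result:

```python
# Illustrative sketch only: a keyed hash (HMAC) substitutes for the
# paper's OPE-based deterministic fake-tuple generation. The key, domain,
# and 1-in-4 fake density are all assumptions for the example.

import hashlib
import hmac

KEY = b"owner-secret"    # assumed data-owner secret
DOMAIN = range(0, 100)   # assumed plaintext key domain

def is_fake(value):
    """Deterministically mark roughly 1 in 4 domain values as fake slots."""
    digest = hmac.new(KEY, str(value).encode(), hashlib.sha256).digest()
    return digest[0] % 4 == 0

def expected_fakes(lo, hi):
    """Regenerate (never store) the fakes that belong in [lo, hi]."""
    return {v for v in DOMAIN if lo <= v <= hi and is_fake(v)}

def check_completeness(lo, hi, returned_values):
    """A range result is complete only if every expected fake is present;
    a server that silently drops tuples will almost surely drop a fake."""
    return expected_fakes(lo, hi) <= set(returned_values)

print(check_completeness(0, 30, list(range(31))))  # prints: True
```

Because the fakes are OPE-encrypted alongside real tuples in the actual scheme, the server cannot tell which tuples are safe to omit; the sketch above only captures the regenerate-and-match bookkeeping on the owner's side.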
Roisum, H., Urizar, L., Yeh, J., Salisbury, K., Magette, M.
2019.
Completeness Integrity Protection for Outsourced Databases Using Semantic Fake Data. 2019 4th International Conference on Communication and Information Systems (ICCIS). :222–228.
As cloud storage and computing gain popularity, data entrusted to the cloud has the potential to be exposed to more people and is thus more vulnerable to attacks. It is important to develop mechanisms to protect data privacy and integrity so that clients can safely outsource their data to the cloud. We present a method for ensuring data completeness, which is one facet of the data integrity problem. Our approach converts a standard database to a Completeness Protected Database (CPDB) by inserting some semantic fake data before outsourcing it to the cloud. These fake data are initially produced using our generating function, which uses Order Preserving Encryption; this allows the user to regenerate the fake data and match them against the fake data returned from a range query to check for completeness. The CPDB is innovative in the following ways: (1) fake data is deterministically generated but is semantically indistinguishable from other existing data; (2) since fake data is generated by deterministic functions, data owners do not need to locally store the fake data that have been inserted; instead, they can re-generate the fake data using the functions; (3) no costly data encryption/signature is used in our scheme, compared to previous work which encrypts/signs the entire database.