Bibliography
In recent years, we have seen the advent of software attestation defenses for embedded systems, which aim to detect tampering with a device's running program. Faced with the persistent threat of an increasingly powerful attacker with physical access to the device, attestation approaches have become more deeply rooted in the device's hardware, with some approaches even changing the underlying microarchitecture. These drastic changes to the hardware make the proposed defenses hard to apply to new systems. In this paper, we present and evaluate LAHEL as a means to study the implementation and pitfalls of a hardware-based attestation mechanism. We limit LAHEL to utilizing existing technologies, without demanding any hardware changes. We implement LAHEL as a hardware IP core that interfaces with the CoreSight Debug Architecture available in modern ARM cores. We show how LAHEL can be integrated into system-on-chip designs, allowing microcontroller vendors to easily add our defense to their products. We present and test our prototype on a Zynq-7000 SoC, evaluating the security of LAHEL against powerful time-of-check-to-time-of-use (TOCTOU) attacks while demonstrating improved performance over existing attestation schemes.
More and more applications are hosted as services on cloud platforms, co-existing with other services in a mutually untrusted environment. Facilities such as virtual machines, containers, and encrypted communication channels aim to isolate the various applications and protect sensitive user data. However, such techniques cannot always provide a secure execution environment for sensitive applications, nor do they guarantee that data are not monitored by an honest-but-curious provider once they reach the cloud infrastructure. Recent advancements in trusted execution environments within commodity processors, such as Intel SGX, provide a secure reverse sandbox in which code and data are isolated even from the underlying operating system. Moreover, Intel SGX provides a remote attestation mechanism, allowing the communicating parties to verify each other's identity and to prove that code is executed within hardware-assisted software enclaves. Many approaches try to ensure code and data integrity or enforce channel encryption schemes such as TLS; however, without hardware assistance these techniques are not enough to achieve complete isolation and secure communication, or they are inefficient in terms of performance. In this work, we design and implement a practical attestation system that allows the service provider to offer a seamless attestation service between the hosted applications and the end clients. Furthermore, we implement a novel caching system that is capable of eliminating the latencies introduced by the remote attestation process. Our approach allows the parties to attest one another before each communication attempt, with improved performance compared to a standard TLS handshake.
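To make the caching idea concrete, here is a minimal sketch of an attestation cache keyed by enclave measurement, so that repeated connections to an already-verified enclave can skip the expensive remote round trip. All names and the TTL policy are illustrative assumptions, not the paper's actual design.

    import time

    class AttestationCache:
        """Cache attestation verdicts keyed by enclave measurement (e.g. MRENCLAVE)."""
        def __init__(self, ttl_seconds=300):
            self.ttl = ttl_seconds
            self._entries = {}  # measurement -> (verdict, timestamp)

        def lookup(self, measurement):
            entry = self._entries.get(measurement)
            if entry is None:
                return None
            verdict, stamp = entry
            if time.time() - stamp > self.ttl:  # stale entries force re-attestation
                del self._entries[measurement]
                return None
            return verdict

        def store(self, measurement, verdict):
            self._entries[measurement] = (verdict, time.time())

    def attest(measurement, cache, remote_attest):
        """Return a cached verdict when fresh; fall back to remote attestation."""
        verdict = cache.lookup(measurement)
        if verdict is None:
            verdict = remote_attest(measurement)  # expensive round trip to the attestation service
            cache.store(measurement, verdict)
        return verdict

    cache = AttestationCache()
    attest("9f86d081deadbeef", cache, lambda m: True)  # stub verifier standing in for the real IAS/DCAP round trip

The time-to-live bound is one plausible way to reconcile caching with freshness; a production design would also need to invalidate entries when attestation collateral (e.g. TCB info) is updated.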
The root causes of many security vulnerabilities include a pernicious combination of two problems, often regarded as inescapable aspects of computing. First, the protection mechanisms provided by the mainstream processor architecture and C/C++ language abstractions, dating back to the 1970s and before, provide only coarse-grain virtual-memory-based protection. Second, mainstream system engineering relies almost exclusively on test-and-debug methods, with (at best) prose specifications. These methods have historically sufficed commercially for much of the computer industry, but they fail to prevent large numbers of exploitable bugs, and the security problems that this causes are becoming ever more acute. In this paper we show how more rigorous engineering methods can be applied to the development of a new security-enhanced processor architecture, with its accompanying hardware implementation and software stack. We use formal models of the complete instruction-set architecture (ISA) at the heart of the design and engineering process, both in lightweight ways that support and improve normal engineering practice - as documentation, in emulators used as a test oracle for hardware and for running software, and for test generation - and for formal verification. We formalise key intended security properties of the design, and establish that these hold with mechanised proof. This is for the same complete ISA models (complete enough to boot operating systems), without idealisation. We do this for CHERI, an architecture with hardware capabilities that supports fine-grained memory protection and scalable secure compartmentalisation, while offering a smooth adoption path for existing software. CHERI is a maturing research architecture, developed since 2010, with work now underway on an Arm industrial prototype to explore its possible adoption in mass-market commercial processors. The rigorous engineering work described here has been an integral part of its development to date, enabling more rapid and confident experimentation, and boosting confidence in the design.
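As a deliberately simplified software model of the capability idea, the sketch below shows a bounds-and-permissions check on every load. This illustrates fine-grained memory protection only; it is not CHERI's actual ISA semantics, which also include tag bits, sealing, and compressed bounds.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Capability:
        base: int         # lowest address the capability authorizes
        length: int       # size of the authorized region
        perms: frozenset  # e.g. {"load", "store"}

    def checked_load(memory, cap, addr):
        """Load one byte, but only if the capability authorizes it."""
        if "load" not in cap.perms:
            raise PermissionError("capability lacks load permission")
        if not (cap.base <= addr < cap.base + cap.length):
            raise MemoryError("address outside capability bounds")
        return memory[addr]

    memory = bytearray(256)
    cap = Capability(base=16, length=32, perms=frozenset({"load"}))
    checked_load(memory, cap, 20)    # allowed: within [16, 48) with load permission
    # checked_load(memory, cap, 100) # would raise: out of bounds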
Despite huge real-world demand, large-scale confidential computing still cannot be supported by today's Trusted Execution Environments (TEEs), due to the lack of scalable and effective protection for high-throughput accelerators such as GPUs, FPGAs, and TPUs. Although attempts have been made recently to extend the CPU-like enclave to GPUs, these solutions require changes to the CPU or GPU chips, may introduce new security risks due to side-channel leaks in CPU-GPU communication, and remain bound by the resource constraints of today's CPU TEEs. To address these problems, we present the first Heterogeneous TEE design that can truly support large-scale compute- or data-intensive (CDI) computing, without any chip-level change. Our approach, called HETEE, is a device for centralized management of all computing units (e.g., GPUs and other accelerators) of a server rack. It is uniquely designed to work with today's data centres and clouds, leveraging modern resource pooling technologies to dynamically compartmentalize computing tasks, enforce strong isolation, and reduce the TCB through hardware support. More specifically, HETEE utilizes the PCIe ExpressFabric to allocate its accelerators to server nodes on the same rack for non-sensitive CDI tasks, and to move them back into a secure enclave in response to demand for confidential computing. Our design runs a thin TCB stack for security management on a security controller (SC), while leaving a large set of software (e.g., AI runtime, GPU driver, etc.) to the integrated microservers that operate enclaves. An enclave is physically isolated from others through hardware and verified by the SC at its inception. Its microserver and computing units are restored to a secure state upon termination. We implemented HETEE on a real hardware system and evaluated it with popular neural network inference and training tasks. Our evaluations show that HETEE can easily support CDI tasks at real-world scale, incurring a maximum throughput overhead of 2.17% for inference and 0.95% for training on ResNet152.
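A toy sketch of the pooling control loop described above: a security controller moves accelerators between a shared pool and isolated enclaves, scrubbing them on release. The class, method names, and scrub step are hypothetical illustrations, not the paper's actual control logic.

    class SecurityController:
        def __init__(self, accelerators):
            self.shared_pool = set(accelerators)
            self.enclaves = {}  # enclave_id -> set of allocated accelerators

        def allocate_enclave(self, enclave_id, count):
            """Move accelerators from the shared pool into an isolated enclave."""
            units = {self.shared_pool.pop() for _ in range(count)}
            self.enclaves[enclave_id] = units
            return units

        def terminate_enclave(self, enclave_id):
            """Restore units to a secure state before they rejoin the pool."""
            for unit in self.enclaves.pop(enclave_id):
                self._scrub(unit)  # placeholder for device reset / memory clearing
                self.shared_pool.add(unit)

        def _scrub(self, unit):
            pass  # hypothetical: in HETEE this is enforced in hardware, not software

    sc = SecurityController(["gpu0", "gpu1", "gpu2", "gpu3"])
    sc.allocate_enclave("training-job-7", count=2)
    sc.terminate_enclave("training-job-7")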
Language-based information flow control (IFC) aims to provide guarantees about information propagation in computer systems with multiple security levels. Existing IFC systems extend Denning's lattice model, enforcing transitive security policies by tracking information flows over a partially ordered set of security levels. They yield a transitive noninterference property of either confidentiality or integrity. In this paper, we explore IFC for security policies that are not necessarily transitive. Such nontransitive security policies avoid unwanted or unexpected information flows implied by transitive policies and naturally accommodate high-level, coarse-grained security requirements in modern component-based software. We present a novel security type system for enforcing nontransitive security policies. Unlike traditional security type systems, which verify information propagation by subtyping the security levels of a transitive policy, our type system relaxes strong transitivity by inferring the information flow history through security levels and ensuring that these histories respect the nontransitive policy in effect. Such a type system yields a new nontransitive noninterference property that admits the more flexible information flow relations induced by security policies that need not be transitive, thereby generalizing conventional transitive noninterference. This enables us to directly reason about the extent of information flows in a program and to restrict interactions between security-sensitive and untrusted components.
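A small dynamic sketch of the history-based check: instead of relying on a transitive ordering, data carries the set of levels it has flowed through, and each transfer consults the (possibly nontransitive) allowed-flow relation directly. The policy and function names are illustrative assumptions, not the paper's type system.

    # Nontransitive policy: A may flow to B, and B may flow to C,
    # but A may NOT flow to C, neither directly nor laundered via B.
    ALLOWED = {("A", "B"), ("B", "C")}

    def may_flow(src, dst):
        return src == dst or (src, dst) in ALLOWED

    def transfer(history, dst):
        """Move data to level dst only if every level in its flow
        history is individually allowed to flow to dst."""
        if all(may_flow(lvl, dst) for lvl in history):
            return history | {dst}
        raise PermissionError(f"flow of {sorted(history)} data to {dst} violates policy")

    h = transfer({"A"}, "B")  # ok: A -> B is allowed; history becomes {A, B}
    # transfer(h, "C")        # rejected: history still contains A, and A -> C is forbidden

Note how the history check blocks the transitive laundering path A -> B -> C that a lattice-based monitor would silently permit; the paper's contribution is achieving this statically, with a type system rather than a runtime monitor.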
Cloud, Software-Defined Networking (SDN), and Network Function Virtualization (NFV) technologies have introduced a new era of cybersecurity threats and challenges. To protect cloud infrastructure, in our earlier work we proposed the Software Defined Security Service (SDS2), which tackles security challenges centered around a new policy-based interaction model. The security architecture consists of three main components: a Security Controller, Virtual Security Functions (VSFs), and the Sec-Manage Protocol. The architecture requires an agile, purpose-built protocol to transfer interaction parameters and security messages between its components, whereas OpenFlow is considered mainly a network routing protocol. The Sec-Manage protocol has therefore been designed specifically for conveying policy-based interaction parameters among cloud entities, between the Security Controller and its VSFs. This paper focuses on the design and implementation of the Sec-Manage protocol and demonstrates its use in setting, monitoring, and conveying relevant policy-based interaction security parameters.
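For orientation only, here is a hypothetical message layout for a Sec-Manage-style exchange between the Security Controller and a VSF. The field names and serialization are invented for illustration; the abstract does not specify the protocol's actual wire format.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class SecManageMessage:
        msg_type: str     # e.g. "SET_POLICY", "MONITOR", "REPORT" (hypothetical verbs)
        vsf_id: str       # target virtual security function
        policy_id: str
        parameters: dict  # policy-based interaction parameters

    msg = SecManageMessage("SET_POLICY", "vsf-firewall-03", "tenant-a-ingress",
                           {"action": "drop", "src_cidr": "10.0.0.0/8"})
    wire = json.dumps(asdict(msg)).encode()  # serialized for the controller-to-VSF channel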
Measuring software complexity is key to managing the software lifecycle and controlling its maintenance. While there are well-established and comprehensive metrics for measuring the complexity of software code, assessment of the complexity of software designs remains elusive. Moreover, there are no clear guidelines to help software designers choose alternatives that reduce design complexity, improve design comprehensibility, and improve the maintainability of the software. This paper outlines a language-independent approach to measuring software design complexity using objective and deterministic metrics. The paper outlines the metrics for two major software design notations: UML Class Diagrams and UML State Machines. The approach is based on the analysis of the design elements and their mutual interactions, and it can be extended to cover other UML design notations.
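One way such an objective, deterministic metric could look is a weighted count over class-diagram elements and their interactions, as sketched below. The element kinds and weights are illustrative assumptions; the paper's actual metrics are not reproduced here.

    # Hypothetical weighted count over class-diagram elements; the weights
    # are illustrative, not the paper's calibrated values.
    WEIGHTS = {"class": 1.0, "attribute": 0.3, "operation": 0.5, "association": 0.8}

    def design_complexity(diagram):
        """diagram: dict mapping element kind -> number of occurrences."""
        return sum(WEIGHTS[kind] * count for kind, count in diagram.items())

    example = {"class": 5, "attribute": 12, "operation": 20, "association": 7}
    print(design_complexity(example))  # 5*1.0 + 12*0.3 + 20*0.5 + 7*0.8 = 24.2

Because the inputs are element counts and the weights are fixed, two evaluators measuring the same diagram always get the same score, which is what makes the metric objective and deterministic.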
With the advent of Cloud Computing, a new era of computing has come into existence. There are undoubtedly numerous advantages associated with Cloud Computing, but there is another side to the picture: the challenges associated with it demand a more convincing answer as far as the security of data at rest, in process, and in transit is concerned. This paper puts forth a cloud computing model that addresses these data security questions in terms of four cryptographic techniques, namely Homomorphic Encryption (HE), Verifiable Computation (VC), Secure Multi-Party Computation (SMPC), and Functional Encryption (FE). The paper surveys these important existing cryptographic tools/techniques through a proposed cloud computation model that can be used for Big Data applications. The tools are also considered in terms of the CIA triad, and are then analyzed by comparing them on the basis of certain parameters of concern.
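To illustrate the first of these techniques, below is a textbook Paillier sketch demonstrating the additive homomorphism that lets a cloud add encrypted values without seeing them: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The parameters are toy-sized and this code is for illustration only, not a secure implementation.

    import math, random

    # Textbook Paillier with toy parameters -- illustrative only, NOT secure.
    p, q = 293, 433                      # small primes; real keys use ~1024-bit primes
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                            # standard simplified choice of generator

    def L(x):
        return (x - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

    def encrypt(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:       # r must be invertible mod n
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (L(pow(c, lam, n2)) * mu) % n

    c1, c2 = encrypt(7), encrypt(35)
    assert decrypt((c1 * c2) % n2) == 42  # Enc(7) * Enc(35) decrypts to 7 + 35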
This work examines metrics that can be used to measure the ability of agile software development methods to meet the security and privacy requirements of communications applications. Many implementations of communication protocols, including those in vehicular networks, occur within regulated environments where agile development methods are traditionally discouraged. We propose a framework and metrics to measure adherence to security, quality, and software-effectiveness regulations when developers desire the cost and schedule benefits of agile methods. After providing an overview of the specific challenges that a regulated environment imposes on communications software development, we examine the 12 agile principles and how they relate to a regulatory environment. From this review we identify two metrics to measure performance on three key regulatory attributes of software for communications applications, and then recommend the approach - tools, agile methods, or DevOps - best positioned to satisfy the attributes of its regulated environment. By considering the recommendations in this paper, managers of software-dominant communications programs in a regulated environment can gain insight into leveraging the benefits of agile methods.
The use of Automatic Dependent Surveillance - Broadcast (ADS-B) for aircraft tracking and flight management operations is widespread today. However, ADS-B is prone to several cyber-security threats due to its lack of data authentication and encryption. Recently, Blockchain has emerged as a new paradigm that can provide promising solutions in decentralized systems, and software containers and Microservices facilitate the scaling of Blockchain implementations within cloud computing environments. When fused together, these technologies could help improve Air Traffic Control (ATC) processing of ADS-B data. In this paper, a Blockchain implementation within a Microservices framework for ADS-B data verification is proposed. The aim of this work is to enable data feeds coming from third-party receivers to be processed and correlated with those of the ATC ground station receivers. The proposed framework could mitigate the ADS-B security issues of message spoofing and anomalous traffic data, and hence minimize the cost of ATC infrastructure through third-party support.
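A minimal sketch of the underlying idea: batches of ADS-B reports are hash-chained into blocks, so any later tampering with a stored report breaks the chain and is detectable during verification. The block fields and the example messages are hypothetical, not the paper's schema.

    import hashlib, json, time

    def make_block(prev_hash, messages):
        """Bundle a batch of ADS-B reports and chain it to the previous block."""
        block = {
            "prev_hash": prev_hash,
            "timestamp": time.time(),
            "messages": messages,  # e.g. decoded ADS-B position reports
        }
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    def verify_chain(chain):
        """Recompute every hash; any tampered report breaks the chain."""
        for i, block in enumerate(chain):
            body = {k: v for k, v in block.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != block["hash"]:
                return False
            if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
                return False
        return True

    genesis = make_block("0" * 64, [{"icao": "A1B2C3", "lat": 40.64, "lon": -73.78}])
    chain = [genesis, make_block(genesis["hash"], [{"icao": "A1B2C3", "lat": 40.70, "lon": -73.90}])]
    assert verify_chain(chain)

In the proposed framework, cross-checking third-party feeds against ground-station feeds before a block is committed is what would flag spoofed or anomalous reports; the chain itself only guarantees that committed data cannot be silently altered afterwards.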
The growing prevalence of Internet-of-Things (IoT) technology has led to an increase in the development of heterogeneous smart applications. Smart applications may involve collaborative participation between IoT devices. A device's participation in a specific application requires a tamper-proof identity to be generated and stored, both to completely represent the device and to eliminate the possibility of identity spoofing and the presence of rogue devices in a network. In this paper, we present a composite Identity-of-Things (IDoT) approach for IoT devices, with a permissioned blockchain implementation, as a distributed identity management model. Our proposed approach considers both the application and device domains in generating the composite identity. In addition, the use of a permissioned blockchain for identity storage and verification renders the identity immutable. A simulation has been carried out to demonstrate the application of the proposed identity management model.
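A minimal sketch of what deriving such a composite identity could look like: attributes from the device domain and the application domain are combined and hashed, so the resulting identifier changes if either domain's attributes are tampered with. The specific attributes chosen here are illustrative assumptions, not the paper's identity schema.

    import hashlib

    def composite_identity(device_attrs, app_attrs):
        """Derive a tamper-evident composite ID from device- and
        application-domain attributes (attribute names are illustrative)."""
        material = "|".join([
            device_attrs["serial"],
            device_attrs["mac"],
            app_attrs["app_id"],
            app_attrs["role"],
        ])
        return hashlib.sha256(material.encode()).hexdigest()

    dev = {"serial": "SN-0042", "mac": "DE:AD:BE:EF:00:01"}
    app = {"app_id": "smart-lighting", "role": "sensor"}
    print(composite_identity(dev, app))  # stored on the permissioned ledger for later verification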
Humans are a key part of software development, including customers, designers, coders, testers and end users. In this keynote talk I explain why incorporating human-centric issues into software engineering for next-generation applications is critical. I use several examples from our recent and current work on handling human-centric issues when engineering various `smart living' cloud- and edge-based software systems. This includes using human-centric, domain-specific visual models for non-technical experts to specify and generate data analysis applications; personality impact on aspects of software activities; incorporating end user emotions into software requirements engineering for smart homes; incorporating human usage patterns into emerging edge computing applications; visualising smart city-related data; reporting diverse software usability defects; and human-centric security and privacy requirements for smart living systems. I assess the usefulness of these approaches, highlight some outstanding research challenges, and briefly discuss our current work on new human-centric approaches to software engineering for smart living applications.
This paper describes the most popular IoT protocols used in IoT embedded systems and examines their advantages and disadvantages. The hardware stage used in the experiments is also described: an ESP32 programmed in C. Choosing the right IoT protocol is essential, and the choice is determined by the purpose, hardware, and software of the system. Many different IoT protocols exist because they cover varying requirements for varying use cases.
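The paper's point that protocol choice follows from the system's purpose, hardware, and software can be sketched as a simple rule-of-thumb selector. The thresholds and decision order below are invented for illustration and are not measured results from the paper (the ESP32's roughly 520 KB of SRAM comfortably clears the constrained-node threshold used here).

    # Hypothetical rule-of-thumb selector; thresholds are illustrative.
    def choose_protocol(ram_kb, battery_powered, needs_pubsub):
        if ram_kb < 64:
            return "CoAP"        # UDP-based, small footprint for constrained nodes
        if needs_pubsub and battery_powered:
            return "MQTT-SN"     # sleep-friendly MQTT variant for sensor networks
        if needs_pubsub:
            return "MQTT"        # lightweight broker-based publish/subscribe
        return "HTTP/REST"       # simplest to integrate when resources allow

    print(choose_protocol(ram_kb=520, battery_powered=False, needs_pubsub=True))  # ESP32-class node -> MQTT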
A critical need exists for collaboration and action by government, industry, and academia to address cyber weaknesses and vulnerabilities inherent to embedded and cyber-physical systems (CPS). These vulnerabilities are introduced as we leverage technologies, methods, products, and services from the global supply chain throughout a system's lifecycle. As adversaries exploit these weaknesses as access points for malicious purposes, solutions for system security and resilience have become a priority call to action. The SAE G-32 Cyber Physical Systems Security Committee has been convened to address this complex challenge. The SAE G-32 will take a holistic systems-engineering approach, integrating system security considerations to develop a Cyber Physical System Security Framework. This framework is intended to bring together multiple industries and to develop a method and common language that will enable us to communicate a risk, cost, and performance trade space more effectively, efficiently, and consistently. The standard will allow System Integrators to make decisions using a common framework and language to develop affordable, trustworthy, resilient, and secure systems.
Malware threats often go undetected at first, because attackers can camouflage themselves well within the system; users typically realize the infection only after their devices stop working and cause harm. One way malware authors deceive malicious-content detection is by using packers. Malware analysis is the activity of gaining knowledge about malware, and reverse engineering is a technique used to identify and deal with new viruses and to understand malware behavior. This technique is therefore a suitable choice for conducting malware analysis, especially for packed malware. The results of the analysis serve as the source for creating indicators of compromise in the YARA rule format; YARA rules are then used as a component for detecting malware using the indicators obtained during the analysis process.
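For readers unfamiliar with the format, here is what such a rule can look like, compiled and run with the yara-python bindings. The rule name, strings, and sample data below are invented placeholders, not real malware indicators from the paper.

    import yara  # pip install yara-python

    # Hypothetical rule built from indicators recovered during unpacking;
    # every string below is a placeholder, not a real indicator.
    RULE = r"""
    rule SuspectedPackedDropper
    {
        meta:
            description = "Indicators extracted via reverse engineering"
        strings:
            $mutex = "Global\\xy2_unpack_mtx"
            $c2    = "hxxp://example-c2.invalid/beacon" ascii
            $stub  = { 60 E8 00 00 00 00 5D }  // common packer stub prologue (PUSHAD; CALL $+5; POP EBP)
        condition:
            2 of them
    }
    """

    rules = yara.compile(source=RULE)
    data = b"...Global\\xy2_unpack_mtx...hxxp://example-c2.invalid/beacon..."
    matches = rules.match(data=data)
    print(matches)  # non-empty when at least two indicators are present

The "2 of them" condition is a common way to trade false positives against false negatives: no single indicator convicts the sample, but any two together do.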