Biblio

Filters: Keyword is Picture archiving and communication systems
El-Korashy, Akram, Tsampas, Stelios, Patrignani, Marco, Devriese, Dominique, Garg, Deepak, Piessens, Frank.  2021.  CapablePtrs: Securely Compiling Partial Programs Using the Pointers-as-Capabilities Principle. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1–16.
Capability machines such as CHERI provide memory capabilities that can be used by compilers to provide security benefits for compiled code (e.g., memory safety). The existing C to CHERI compiler, for example, achieves memory safety by following a principle called “pointers as capabilities” (PAC). Informally, PAC says that a compiler should represent a source language pointer as a machine code capability. But the security properties of PAC compilers are not yet well understood. We show that memory safety is only one aspect, and that PAC compilers can provide significant additional security guarantees for partial programs: the compiler can provide security guarantees for a compilation unit, even if that compilation unit is later linked to attacker-provided machine code. As such, this paper is the first to study the security of PAC compilers for partial programs formally. We prove for a model of such a compiler that it is fully abstract. The proof uses a novel proof technique (dubbed TrICL, read trickle), which should be of broad interest because it reuses the whole-program compiler correctness relation for full abstraction, thus saving work. We also implement our scheme for C on CHERI, show that we can compile legacy C code with minimal changes, and show that the performance overhead of compiled code is roughly proportional to the number of cross-compilation-unit function calls.
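
To make the pointers-as-capabilities idea concrete, here is a minimal conceptual sketch in Python, not the paper's formal model and not actual CHERI code: the Capability class, its base/length/permissions fields, and the fault behaviour are illustrative assumptions about how a pointer carried as a capability gets every access bounds-checked.

```python
# Conceptual sketch only: models "pointer as capability" in Python.
# A real PAC compiler emits CHERI capability instructions; the names below
# (Capability, CapabilityFault) are illustrative, not from the paper.

class CapabilityFault(Exception):
    """Raised when an access falls outside the capability's bounds or permissions."""

class Capability:
    def __init__(self, memory, base, length, perms=("load", "store")):
        self.memory = memory      # backing store (a bytearray here)
        self.base = base          # lowest address the capability grants access to
        self.length = length      # number of accessible bytes
        self.perms = set(perms)   # e.g. load/store permissions

    def _check(self, offset, perm):
        if perm not in self.perms or not (0 <= offset < self.length):
            raise CapabilityFault(f"{perm} at offset {offset} outside [0, {self.length})")

    def load(self, offset):
        self._check(offset, "load")
        return self.memory[self.base + offset]

    def store(self, offset, value):
        self._check(offset, "store")
        self.memory[self.base + offset] = value

mem = bytearray(64)
buf = Capability(mem, base=16, length=8)   # what e.g. `char buf[8]` might compile to
buf.store(0, 0x41)                          # in-bounds access: allowed
try:
    buf.load(8)                             # one byte past the end: trapped
except CapabilityFault as e:
    print("trapped:", e)
```

The point of the sketch is only that an out-of-bounds access by linked, attacker-provided code faults instead of reading adjacent memory; the paper's contribution is proving what such a compiler guarantees for partial programs, which the sketch does not capture.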
Santos, Nelson, Younis, Waleed, Ghita, Bogdan, Masala, Giovanni.  2021.  Enhancing Medical Data Security on Public Cloud. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :103–108.

Cloud computing, supported by advancements in virtualisation and distributed computing, has become the default option for implementing the IT infrastructure of organisations. Medical data, and in particular medical images, have increasing storage space and remote access requirements. Cloud computing satisfies these requirements, but unclear safeguards on data security can expose sensitive data to possible attacks. Furthermore, recent changes in legislation imposed additional security constraints on technology to ensure the privacy of individuals and the integrity of data when stored in the cloud. In contrast with this trend, current data security methods, based on encryption, create an additional performance overhead, and often they are not allowed on public cloud servers. Hence, this paper proposes a mechanism that combines data fragmentation, to protect medical images on public cloud servers, with a NoSQL database to ensure an efficient organisation of such data. Results of this paper indicate that the latency of the proposed method is significantly lower than that of AES, one of the most adopted data encryption mechanisms. Therefore, the proposed method is an optimal trade-off in environments with low latency requirements or limited resources.
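As a rough illustration of the fragmentation idea only: the paper does not publish its exact scheme, and the fragment size, shuffling step, and document layout below are assumptions. An image could be split into fixed-size chunks, shuffled, and stored as separate NoSQL-style documents, with the secret ordering index kept off the public cloud.

```python
# Hedged sketch of fragmentation-based protection for a medical image.
# Fragment size, shuffling, and the per-fragment document layout are
# illustrative assumptions, not the scheme evaluated in the paper.
import os
import secrets

FRAGMENT_SIZE = 4096  # bytes per fragment (assumed)

def fragment_image(data: bytes):
    """Split an image into fragments and shuffle them for cloud storage."""
    fragments = [data[i:i + FRAGMENT_SIZE] for i in range(0, len(data), FRAGMENT_SIZE)]
    order = list(range(len(fragments)))
    index = []                     # index[k] = original position of fragment stored at slot k
    while order:
        index.append(order.pop(secrets.randbelow(len(order))))
    shuffled = [fragments[pos] for pos in index]
    # One document per fragment; the payload alone reveals neither order nor siblings.
    documents = [{"_id": f"frag-{k}", "payload": frag} for k, frag in enumerate(shuffled)]
    # `index` is the secret map back to image order; it stays with the client side.
    return documents, index

def reassemble(documents, index):
    """Restore the original byte stream from the fragments plus the secret index."""
    ordered = [b""] * len(documents)
    for storage_pos, original_pos in enumerate(index):
        ordered[original_pos] = documents[storage_pos]["payload"]
    return b"".join(ordered)

if __name__ == "__main__":
    image = os.urandom(20_000)            # stand-in for a DICOM image
    docs, idx = fragment_image(image)
    assert reassemble(docs, idx) == image
```

The design intuition matches the abstract's latency claim: splitting and reordering bytes is far cheaper than block encryption of the whole image, at the cost of weaker, dispersal-based rather than cryptographic confidentiality.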

Ganji, F., Amir, S., Tajik, S., Forte, D., Seifert, J.-P..  2020.  Pitfalls in Machine Learning-based Adversary Modeling for Hardware Systems. 2020 Design, Automation Test in Europe Conference Exhibition (DATE). :514–519.

The concept of the adversary model has been widely applied in the context of cryptography. When designing a cryptographic scheme or protocol, the adversary model plays a crucial role in the formalization of the capabilities and limitations of potential attackers. These models further enable the designer to verify the security of the scheme or protocol under investigation. Although well established for conventional cryptanalysis attacks, adversary models associated with attackers enjoying the advantages of machine learning techniques have not yet been developed thoroughly. In particular, when it comes to composed hardware, which is often security-critical, the lack of such models has become increasingly noticeable in the face of advanced, machine learning-enabled attacks. This paper aims at exploring adversary models from the machine learning perspective. In this regard, we provide examples of machine learning-based attacks against hardware primitives, e.g., obfuscation schemes and hardware root-of-trust, that had been claimed to be infeasible. We demonstrate, however, that this infeasibility assumption becomes invalid because inaccurate adversary models have been considered in the literature.
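Purely as an illustration of the kind of adversary model at issue, a black-box attacker with query access who fits a predictive model to observed input-output behaviour, here is a small sketch; the linear stand-in primitive, the parameters, and the use of logistic regression are assumptions for illustration and do not reproduce the paper's case studies.

```python
# Hedged illustration of an ML-based modeling attack on a black-box hardware
# primitive. The "secret" linear threshold primitive and the chosen learner
# are assumptions, not taken from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

N_BITS = 32        # challenge width (assumed)
N_TRAIN = 2000     # challenge-response pairs the adversary is allowed to observe

# Stand-in primitive: a secret linear threshold function of the challenge bits.
secret_weights = rng.normal(size=N_BITS)

def primitive_response(challenges: np.ndarray) -> np.ndarray:
    """Black-box 1-bit response; the adversary sees only challenges and responses."""
    return (challenges @ secret_weights > 0).astype(int)

# Adversary model assumed here: query access only, no physical or invasive access.
train_c = rng.integers(0, 2, size=(N_TRAIN, N_BITS)) * 2 - 1   # challenges in {-1, +1}
train_r = primitive_response(train_c)

attack_model = LogisticRegression(max_iter=1000).fit(train_c, train_r)

test_c = rng.integers(0, 2, size=(1000, N_BITS)) * 2 - 1
print("prediction accuracy on unseen challenges:",
      attack_model.score(test_c, primitive_response(test_c)))
```

The sketch only shows why a security claim proved against a non-learning adversary can fail once the adversary model grants query access plus standard machine learning tools, which is the pitfall the paper examines.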