Biblio
Cloud computing, supported by advances in virtualisation and distributed computing, has become the default option for implementing the IT infrastructure of organisations. Medical data, and medical images in particular, have growing storage and remote-access requirements. Cloud computing satisfies these requirements, but unclear safeguards on data security can expose sensitive data to attacks. Furthermore, recent changes in legislation impose additional security constraints on technology to ensure the privacy of individuals and the integrity of data stored in the cloud. In contrast with this trend, current data security methods based on encryption add performance overhead and are often not permitted on public cloud servers. Hence, this paper proposes a mechanism that combines data fragmentation, to protect medical images on public cloud servers, with a NoSQL database, to ensure an efficient organisation of such data. The results indicate that the latency of the proposed method is significantly lower than that of AES, one of the most widely adopted data encryption mechanisms. The proposed method therefore offers an optimal trade-off in environments with low-latency requirements or limited resources.
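The abstract does not specify the fragmentation scheme, so the following is only a minimal sketch of the general idea: the image byte stream is split into fixed-size fragments, each fragment is uploaded under a random, unlinkable key to a public key-value/NoSQL-style store, and the reassembly map is kept on the trusted side. The chunk size, the dictionary standing in for the NoSQL store, and the function names are all illustrative assumptions, not the paper's actual design.

```python
import os
import random
import uuid

CHUNK_SIZE = 4096  # hypothetical fragment size, not taken from the paper


def fragment_image(image_bytes: bytes, cloud_store: dict) -> list:
    """Split an image into fixed-size fragments, upload each under a random
    key in shuffled order, and return the private reassembly map."""
    chunks = [image_bytes[i:i + CHUNK_SIZE]
              for i in range(0, len(image_bytes), CHUNK_SIZE)]
    order = list(range(len(chunks)))
    random.shuffle(order)                # scatter fragments across uploads
    mapping = [None] * len(chunks)
    for idx in order:
        key = uuid.uuid4().hex           # unlinkable fragment identifier
        cloud_store[key] = chunks[idx]   # stands in for a NoSQL put/insert
        mapping[idx] = key
    return mapping                       # kept only on the trusted side


def reassemble_image(mapping: list, cloud_store: dict) -> bytes:
    """Fetch fragments in their original order and concatenate them."""
    return b"".join(cloud_store[key] for key in mapping)


if __name__ == "__main__":
    store = {}                                 # stand-in for a public NoSQL store
    image = os.urandom(3 * CHUNK_SIZE + 123)   # fake medical-image payload
    index = fragment_image(image, store)
    assert reassemble_image(index, store) == image
```

Without the mapping, the public store holds only unordered, unlabelled fragments, which is the property the fragmentation approach relies on instead of encrypting the payload; the avoided cipher work is where the latency advantage over AES would come from.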
The concept of the adversary model has been widely applied in the context of cryptography. When designing a cryptographic scheme or protocol, the adversary model plays a crucial role in the formalization of the capabilities and limitations of potential attackers. These models further enable the designer to verify the security of the scheme or protocol under investigation. Although well established for conventional cryptanalysis, adversary models for attackers who enjoy the advantages of machine learning techniques have not yet been developed thoroughly. In particular, when it comes to composed hardware, which is often security-critical, the lack of such models has become increasingly noticeable in the face of advanced, machine learning-enabled attacks. This paper explores adversary models from the machine learning perspective. In this regard, we provide examples of machine learning-based attacks, previously claimed to be infeasible, against hardware primitives such as obfuscation schemes and hardware roots of trust. We demonstrate that this claim of infeasibility does not hold because inaccurate adversary models have been considered in the literature.
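To make the notion of a machine learning-enabled adversary concrete, the sketch below shows a textbook modeling attack on a simulated arbiter PUF, a common hardware root-of-trust primitive: the attacker observes challenge-response pairs and fits a linear (logistic-regression) model in the PUF's parity-feature space. This is an illustrative example under an idealised, noise-free simulation, not one of the specific attacks reported in the paper; all parameter values and function names are assumptions.

```python
import numpy as np


def parity_features(challenges: np.ndarray) -> np.ndarray:
    """Map 0/1 challenge bits to the parity features of the additive delay
    model: phi_i = prod_{j>=i} (1 - 2*c_j), plus a constant bias term."""
    signs = 1 - 2 * challenges                              # 0/1 -> +1/-1
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]       # suffix products
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])


rng = np.random.default_rng(0)
n_bits, n_train, n_test = 64, 5000, 2000

# Idealised arbiter PUF: the response is the sign of a secret linear model.
w_true = rng.normal(size=n_bits + 1)


def puf_response(c: np.ndarray) -> np.ndarray:
    return (parity_features(c) @ w_true > 0).astype(int)


train_c = rng.integers(0, 2, size=(n_train, n_bits))
test_c = rng.integers(0, 2, size=(n_test, n_bits))
X, y = parity_features(train_c), puf_response(train_c)

# Attacker: logistic regression trained by plain gradient descent on CRPs.
w = np.zeros(n_bits + 1)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n_train

pred = (parity_features(test_c) @ w > 0).astype(int)
print("prediction accuracy:", (pred == puf_response(test_c)).mean())
```

An adversary model that rules out such an attacker implicitly assumes the primitive's responses cannot be learned from observable data; once a learning-capable adversary is admitted, assumptions of this kind must be justified rather than taken for granted.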