Bibliography
Throughout the life cycle of any technical project, the enterprise needs to assess the risks associated with its development, commissioning, operation, and decommissioning. This article defines the task of researching risks related to the operation of a data storage subsystem in the cloud infrastructure of a geographically distributed company, along with the tools required for this. Analysts note that, compared to 2018, in 2019 there were 3.5 times more cases of confidential information leaking from storage on unprotected servers in cloud services (freely accessible due to incorrect configuration). The total number of compromised records of personal data and payment information increased 5.4 times compared to 2018 and exceeded 8.35 billion. Moreover, while the share of payment-information leaks decreased, the share of personal-data leaks grew and now accounts for almost 90% of all leaks from cloud storage. On average, each unsecured service identified resulted in the leak of 33.7 million personal data records. Leaks stem mainly from misconfiguration of services and stored resources, as well as from human factors. These impacts can be minimized by improving the skills of cloud storage administrators and by regularly auditing storage, as sketched below. Despite its seeming insecurity, the cloud is a reliable way of storing data; nevertheless, leaks continue to occur. According to Kaspersky Lab, every tenth data leak from the cloud (11%) was made possible by the actions of the provider, while about a third of all cyber incidents in the cloud (31% in Russia and 33% worldwide) resulted from the gullibility of company employees who fell for social engineering techniques. Minimizing the risks associated with the storage of personal data is one of the main tasks in operating a company's cloud infrastructure.
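To make the auditing recommendation concrete, the following is a minimal sketch of a periodic storage audit, assuming an AWS S3 deployment and the boto3 SDK; the specific checks and the alerting logic are illustrative assumptions, since the article does not prescribe a particular tool.

```python
# Minimal S3 misconfiguration audit sketch (assumes AWS credentials are
# configured for boto3; the checks below are illustrative, not exhaustive).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    # 1. Verify that the bucket's public-access block is fully enabled.
    try:
        cfg = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())
    except ClientError:
        fully_blocked = False  # no configuration at all: treat as open
    # 2. Check the ACL for grants to the anonymous "AllUsers" group.
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        g for g in acl["Grants"]
        if g["Grantee"].get("URI", "").endswith("/global/AllUsers")
    ]
    if not fully_blocked or public_grants:
        print(f"WARNING: bucket {name} may be publicly accessible")
```

Run on a schedule (e.g., from a cron job), such a script catches the misconfiguration pattern behind the leak statistics above: storage that is reachable by anonymous users without the administrator realizing it.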
Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, so any security breach would be catastrophic, and the integrity of the results computed by machine learning is therefore of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. A missed or delayed diagnosis may have life-threatening consequences, while a false positive classification may cause patient distress, prompt users to distrust the machine-learning algorithm, and even lead them to abandon the entire system. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data that, when added to the training set, either causes the results of machine learning to contain targeted errors (e.g., an increased likelihood of classification into a specific class) or simply introduces arbitrary errors (incorrect classification). These attacks can be applied to both fixed and evolving datasets. They can be mounted even when only statistics of the training dataset are available or, in some cases, without any access to the training dataset at all, albeit at lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks, based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.
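As an illustration of the class of attack this abstract describes, here is a minimal, hypothetical sketch of a label-flipping poisoning attack together with an accuracy-tracking countermeasure, written in Python with scikit-learn. The classifier, dataset, poisoning rate, and alert threshold are all assumptions chosen for demonstration; the paper's own attack procedure is algorithm-independent and more general than simple label flipping.

```python
# Label-flipping poisoning sketch plus an accuracy-drift countermeasure.
# All concrete choices (dataset, model, rate, threshold) are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

def poison_labels(labels, rate):
    """Flip the labels of a random fraction of the training points."""
    flipped = labels.copy()
    idx = rng.choice(len(flipped), size=int(rate * len(flipped)),
                     replace=False)
    flipped[idx] = 1 - flipped[idx]  # binary labels: flip 0 <-> 1
    return flipped

def fit_and_score(train_labels):
    """Train on (possibly poisoned) labels; report held-out accuracy."""
    model = make_pipeline(StandardScaler(),
                          LogisticRegression(max_iter=1000))
    model.fit(X_train, train_labels)
    return accuracy_score(y_test, model.predict(X_test))

clean_acc = fit_and_score(y_train)
poisoned_acc = fit_and_score(poison_labels(y_train, rate=0.2))

# Countermeasure sketch: compare each retraining run's held-out accuracy
# against the clean baseline and alert when the drop exceeds a tolerance
# (the threshold value here is an assumption).
THRESHOLD = 0.05
print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
if clean_acc - poisoned_acc > THRESHOLD:
    print("Alert: accuracy drop exceeds tolerance; possible poisoning.")
```

The countermeasure mirrors the paper's idea of tracking deviations in accuracy metrics: a poisoned retraining run shifts held-out accuracy away from the clean baseline, and the deviation itself serves as the detection signal.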