Biblio
Smart contracts have been widely used on Ethereum to enable business services across various application domains. However, they are prone to different forms of security attacks due to the dynamic and non-deterministic blockchain runtime environment. In this work, we highlight a general class of miner-side exploits, called concurrency exploits, which attack smart contracts by generating malicious transaction sequences. Moreover, we design a systematic algorithm to automatically detect such exploits. In a preliminary evaluation, our approach identified real vulnerabilities that existing tools in the literature cannot detect.
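To make the notion of a concurrency exploit concrete, the following Python sketch flags order-dependence in a toy auction-style contract by replaying permutations of pending transactions; the contract, transaction format, and detection loop are simplified illustrations, not the detection algorithm of the work above.

```python
# Illustrative sketch only: a toy model of order-dependence detection, not the
# paper's actual algorithm. The contract and transactions are hypothetical.
from itertools import permutations

class ToyAuctionContract:
    """A minimal contract whose outcome depends on transaction order."""
    def __init__(self):
        self.highest_bid = 0
        self.winner = None

    def bid(self, sender, amount):
        # A later bid only wins if it is strictly higher, so ordering matters.
        if amount > self.highest_bid:
            self.highest_bid = amount
            self.winner = sender

def final_state(tx_sequence):
    contract = ToyAuctionContract()
    for sender, amount in tx_sequence:
        contract.bid(sender, amount)
    return (contract.highest_bid, contract.winner)

def is_order_dependent(pending_txs):
    """Flag a potential concurrency exploit: a miner who reorders the pending
    transactions can drive the contract into different final states."""
    states = {final_state(order) for order in permutations(pending_txs)}
    return len(states) > 1

pending = [("alice", 5), ("bob", 5)]  # equal bids: whoever is mined first wins
print(is_order_dependent(pending))    # True -> reordering changes the winner
```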
Machine learning, and specifically deep learning, is becoming a key technology component in application domains such as identity management, finance, automotive, and healthcare, to name a few. Proprietary machine learning models (Machine Learning IP) are developed and deployed at the network edge, on end devices, and in the cloud to maximize user experience. With the proliferation of applications embedding Machine Learning IP, machine learning models and hyper-parameters become attractive targets for attackers and require protection. Major players in the semiconductor industry provide on-device mechanisms to protect the IP, at rest and during execution, from being copied, altered, reverse engineered, or abused by attackers. In this work we explore system security architecture mechanisms and their application to Machine Learning IP protection.
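As one hedged illustration of protecting Machine Learning IP at rest, the Python sketch below encrypts serialized model weights so that a copied model file is unusable without the key. The file names, toy weights, and the use of the third-party cryptography package are assumptions for the example, not the mechanisms explored in the work above; real deployments would anchor the key in hardware-backed storage.

```python
# Minimal sketch of one building block for protecting ML IP at rest:
# encrypting serialized model weights so a copied file is useless without the key.
import pickle
from cryptography.fernet import Fernet  # third-party 'cryptography' package

model_weights = {"layer1": [0.12, -0.34], "layer2": [0.56]}  # toy stand-in

key = Fernet.generate_key()   # in practice: provisioned to a secure enclave or keystore
cipher = Fernet(key)

encrypted_blob = cipher.encrypt(pickle.dumps(model_weights))
with open("model.bin.enc", "wb") as fh:
    fh.write(encrypted_blob)

# At load time, only a device holding the key can recover the plaintext model.
with open("model.bin.enc", "rb") as fh:
    restored = pickle.loads(cipher.decrypt(fh.read()))
assert restored == model_weights
```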
The world is witnessing the emerging role of the Internet of Things (IoT) as a technology that is transforming industries, the global community, and its economy. Currently, a plethora of interconnected smart devices have been deployed for diverse pervasive applications and services, and billions more are expected to be connected to the Internet in the near future. The potential benefits of IoT include improved quality of life, convenience, enhanced energy efficiency, and greater productivity. Alongside these potential benefits, however, come increased security risks and potential for abuse. Arguably, this is partly because many IoT start-ups and electronics hobbyists lack security expertise, and some established companies do not make security a priority in their designs; as a result, they produce IoT devices that are often ill-equipped in terms of security. In this paper, we discuss different IoT application areas and identify security threats in the IoT architecture. We consider security requirements and present typical security threats for each application domain. Finally, we present several possible security countermeasures and introduce the IoT Hardware Platform Security Advisor (IoT-HarPSecA) framework, which is still under development. IoT-HarPSecA aims to facilitate the design and prototyping of secure IoT devices.
To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but requires increasingly sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, relatively few studies so far have investigated the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we present the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong, quantitative evidence of previously unreported operational behavior: for example, autoscaling policies perform differently across application domains, and allocation and provisioning policies should be co-designed.
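For intuition, the sketch below shows the kind of simple threshold-based autoscaling policy such a study might compare, evaluated in a toy trace-driven loop; the trace, thresholds, and node limits are hypothetical and do not correspond to the policies or traces used in the work above.

```python
# Illustrative sketch of a threshold-based autoscaling policy evaluated against
# a toy workload trace. Parameters and trace values are made up for the example.
def threshold_autoscaler(queued_tasks, provisioned, scale_out_at=10, scale_in_at=2,
                         min_nodes=1, max_nodes=50):
    """Return the number of provisioned nodes for the next time step."""
    per_node_load = queued_tasks / max(provisioned, 1)
    if per_node_load > scale_out_at:
        provisioned += 1      # demand per node too high: provision one more node
    elif per_node_load < scale_in_at:
        provisioned -= 1      # nodes mostly idle: release one
    return min(max(provisioned, min_nodes), max_nodes)

# Toy trace: number of queued workflow tasks observed at each time step.
trace = [3, 8, 25, 40, 60, 30, 12, 5, 2, 1]

nodes = 1
for t, queued in enumerate(trace):
    nodes = threshold_autoscaler(queued, nodes)
    print(f"t={t:2d} queued={queued:3d} nodes={nodes}")
```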
Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. A hindered diagnosis may have life-threatening consequences, while a false diagnosis may not only prompt users to distrust the machine-learning algorithm, and even abandon the entire system, but may also cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class) or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks, based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.
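As a hedged illustration of this style of attack (not the authors' exact procedure), the Python sketch below injects a small set of mislabeled points into a synthetic training set and measures the resulting drop in test accuracy; the dataset, model, and poison-crafting rule are arbitrary choices for the example.

```python
# Illustrative poisoning-attack sketch: add mislabeled points to the training
# set and compare test accuracy before and after. Not the paper's procedure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy(train_X, train_y):
    clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)
    return clf.score(X_test, y_test)

# Craft poison points: copy real class-0 samples, perturb them slightly, and
# label them as class 1, pulling the learned boundary toward misclassification.
n_poison = 30
poison_X = X_train[y_train == 0][:n_poison] + rng.normal(0, 0.05, size=(n_poison, X.shape[1]))
poison_y = np.ones(n_poison, dtype=int)

clean_acc = accuracy(X_train, y_train)
poisoned_acc = accuracy(np.vstack([X_train, poison_X]),
                        np.concatenate([y_train, poison_y]))
print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```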