Bibliography
Despite being a promising technology that will make our lives a lot easier, IoT is not safe from online threats and attacks. Thus, along with the growth of IoT, we also need to work on its security. Given the limited resources of these devices, security mechanisms must be correspondingly lightweight and must not hinder the actual functionality of the device. In this paper, we propose an ECC-based lightweight authentication scheme for IoT devices that deploy RFID tags at the physical layer. ECC is a very efficient public-key cryptography mechanism, as it provides privacy and security with low computational overhead. We also present a security and performance analysis to verify the strength of our proposed approach.
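As a hedged illustration of the general idea (this is a generic ECC challenge-response, not the paper's actual protocol), the following Python sketch uses the pyca/cryptography package: the tag holds an EC private key, and the reader authenticates it by verifying a signature over a fresh nonce.

    # Generic ECC challenge-response sketch (not the paper's scheme).
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Provisioning: the tag holds an EC private key; the reader knows the public key.
    tag_key = ec.generate_private_key(ec.SECP256R1())
    reader_known_pubkey = tag_key.public_key()

    # Reader -> tag: fresh challenge (nonce) to prevent replay.
    challenge = os.urandom(16)

    # Tag -> reader: signature over the challenge.
    signature = tag_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # Reader verifies; raises InvalidSignature on failure.
    reader_known_pubkey.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("tag authenticated")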
The recent emergence of smartphones, cloud computing, and the Internet of Things has brought about an explosion of data creation. By collating and merging these enormous data with other information, services that use information become more sophisticated and advanced. At the same time, however, consideration of the privacy violations caused by such merging is indispensable. Various anonymization methods have been proposed to preserve privacy, but conventional perturbation-based anonymization of location data adds comparatively large noise, which makes it difficult to utilize the data effectively for secondary use. In this research, to solve these problems, we first clarify the definition of privacy preservation and then propose TMk-anonymity according to that definition.
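The abstract does not specify TMk-anonymity itself, but the utility problem it targets is easy to see numerically: the sketch below (plain NumPy, with made-up locations and scales) shows how larger perturbation noise degrades the positional accuracy available for secondary use.

    # Utility cost of perturbation-based location anonymization:
    # larger noise scales mean larger mean displacement of points.
    import numpy as np

    rng = np.random.default_rng(0)
    true_points = rng.uniform(0, 1000, size=(1000, 2))  # locations in metres

    for scale in (10, 100, 500):  # noise scale in metres
        noisy = true_points + rng.laplace(0, scale, size=true_points.shape)
        err = np.linalg.norm(noisy - true_points, axis=1).mean()
        print(f"noise scale {scale:4d} m -> mean displacement {err:7.1f} m")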
Personal privacy is an important issue when publishing social network data. An attacker may possess information that allows re-identification of private data, so many researchers have developed anonymization techniques such as k-anonymity, k-isomorphism, and l-diversity. In this paper, we focus on graph k-degree anonymity by editing edges. Our method is divided into two steps. First, we propose an efficient algorithm to find a new degree sequence with theoretically minimal edit cost. Second, we insert and delete edges based on the new degree sequence to achieve k-degree anonymity.
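To make step 1 concrete, here is a simple greedy sketch (not the paper's cost-optimal algorithm) of producing a k-anonymous degree sequence: sort degrees in descending order, cut them into groups of at least k, and raise every degree in a group to the group's maximum.

    # Greedy k-anonymous degree sequence (illustrative only; the paper's
    # algorithm minimizes edit cost optimally).
    def k_anonymous_degree_sequence(degrees, k):
        d = sorted(degrees, reverse=True)
        out = []
        i = 0
        while i < len(d):
            # Take at least k degrees; absorb the tail if fewer than k remain.
            j = i + k if len(d) - (i + k) >= k else len(d)
            out.extend([d[i]] * (j - i))  # d[i] is the group's maximum
            i = j
        return out

    print(k_anonymous_degree_sequence([5, 4, 4, 3, 2, 2, 1], k=2))
    # -> [5, 5, 4, 4, 2, 2, 2]: every degree now occurs at least twice

Step 2 then edits edges so each vertex reaches its assigned degree.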
This work investigates the fundamental constraints of anonymous communication (AC) protocols. We analyze the relationship between bandwidth overhead, latency overhead, and sender or recipient anonymity against the global passive (network-level) adversary. We confirm the trilemma that an AC protocol can achieve only two of the following three properties: strong anonymity (i.e., anonymity up to a negligible chance), low bandwidth overhead, and low latency overhead. We further study anonymity against a stronger global passive adversary that can additionally passively compromise some of the AC protocol nodes. For a given number of compromised nodes, we derive necessary constraints between bandwidth and latency overhead whose violation makes it impossible for an AC protocol to achieve strong anonymity. We analyze prominent AC protocols from the literature and show to what extent they satisfy our necessary constraints. Our fundamental necessary constraints offer a guideline not only for improving existing AC systems but also for designing novel AC protocols with non-traditional bandwidth and latency overhead choices.
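As a rough paraphrase of the shape such constraints take (hedged: the exact constants, model parameters, and negligible terms are in the paper, not reproduced here), with latency overhead $\ell$ (rounds a message may be delayed) and bandwidth overhead $\beta$ (noise messages per user per round), strong anonymity against the network adversary requires approximately

    $$ 2\,\ell\,\beta \;\ge\; 1 - \epsilon(\eta), $$

where $\epsilon(\eta)$ is negligible in the security parameter $\eta$. If both $\ell$ and $\beta$ are made small enough that the product violates such a bound, strong anonymity becomes unachievable, which is the trilemma.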
Social robots may make use of social abilities such as persuasion, commanding obedience, and lying. Meanwhile, the field of computer security and privacy has shown that these interpersonal skills can be applied by humans to perform social engineering attacks. Social engineering attacks are the deliberate application of manipulative social skills by an individual in an attempt to achieve a goal by convincing others to do or say things that may or may not be in their best interests. In our work we argue that robot social engineering attacks are already possible and that defenses should be developed to protect against these attacks. We do this by defining what a robot social engineer is, outlining how previous research has demonstrated robot social engineering, and discussing the risks that can accompany robot social engineering attacks.
The term "artificial intelligence" is a buzzword today and is heavily used to market products, services, research, conferences, and more. It is scientifically disputed which types of products and services do actually qualify as "artificial intelligence" versus simply advanced computer technologies mimicking aspects of natural intelligence. Yet it is undisputed that, despite often inflationary use of the term, there are mainstream products and services today that for decades were only thought to be science fiction. They range from industrial automation, to self-driving cars, robotics, and consumer electronics for smart homes, workspaces, education, and many more contexts. Several technological advances enable what is commonly referred to as "artificial intelligence". It includes connected computers and the Internet of Things (IoT), open and big data, low cost computing and storage, and many more. Yet regardless of the definition of the term artificial intelligence, technological advancements in this area provide immense potential, especially for people with disabilities. In this paper we explore some of these potential in the context of web accessibility. We review some existing products and services, and their support for web accessibility. We propose accessibility conformance evaluation as one potential way forward, to accelerate the uptake of artificial intelligence, to improve web accessibility.
User testing is often used to inform the development of user interfaces (UIs). But what if an interface needs to be developed for a system that does not yet exist? In that case, existing datasets can provide valuable input for UI development. We apply a data-driven approach to the development of a privacy-setting interface for Internet-of-Things (IoT) devices. Applying machine learning techniques to an existing dataset of users' sharing preferences in IoT scenarios, we develop a set of "smart" default profiles. Our resulting interface asks users to choose among these profiles, which capture their preferences with an accuracy of 82%, a 14% improvement over a naive default setting and a 12% improvement over a single smart default setting for all users.
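One plausible way to derive such profiles (hedged: the authors' exact features, dataset, and algorithm are not stated here) is to cluster users' allow/deny decisions and take each cluster's majority vote as a candidate default profile, as in this scikit-learn sketch on synthetic data.

    # Deriving "smart" default profiles by clustering preference vectors.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X = (rng.random((200, 12)) > 0.5).astype(int)  # toy matrix: users x scenarios

    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
    # Each profile is the per-scenario majority vote of its cluster's members.
    profiles = (kmeans.cluster_centers_ > 0.5).astype(int)
    print(profiles)  # 4 candidate default profiles users can choose among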
The Internet of Things provides household device users with an ability to connect and manage numerous devices over a common platform. However, the sheer number of possible privacy settings creates issues such as choice overload. This article outlines a data-driven approach to understand how users make privacy decisions in household IoT scenarios. We demonstrate that users are not just influenced by the specifics of the IoT scenario, but also by aspects immaterial to the decision, such as the default setting and its framing.
Personalization, recommendations, and user modeling can be powerful tools to improve people's experiences with technology and to help them find information. However, we also know that people underestimate how much of their personal information is used by our technology, and they generally do not understand how much algorithms can discover about them. Both privacy and ethical technology have issues of consent at their heart. While many personalization systems assume most users would consent to the way they employ personal data, research shows this is not necessarily the case. This talk will look at how to consider issues of privacy and consent when users cannot explicitly state their preferences ("the creepy factor"), and how to balance users' concerns with the benefits personalized technology can offer.
The advent and widespread adoption of wearable cameras and autonomous robots raises important issues related to privacy. The mobile cameras on these systems record and may re-transmit enormous amounts of video data that can then be used to identify, track, and characterize the behavior of the general populace. This paper presents a preliminary computational architecture designed to preserve specific types of privacy over a video stream by identifying categories of individuals, places, and things that require higher than normal privacy protection. This paper describes the architecture as a whole as well as preliminary results testing aspects of the system. Our intention is to implement and test the system on ground robots and small UAVs and demonstrate that the system can provide selective low-level masking or deletion of data requiring higher privacy protection.
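The masking step alone can be illustrated in a few lines (hedged: this is a generic face-blur with OpenCV's bundled Haar cascade, not the paper's architecture, which also covers places and things).

    # Selective low-level masking of one protected category (faces).
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # camera or robot video feed
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            # Blur each detected region before the frame leaves the device.
            frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
        cv2.imshow("masked", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()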
The recent breakthroughs in Artificial Intelligence (AI) have allowed individuals to rely on automated systems for a variety of reasons. Among these systems are the currently popular voice-enabled systems such as Amazon's Echo and Google's Home, also called Intelligent Personal Assistants (IPAs). Though there are rising concerns about privacy and ethical implications, users of these IPAs seem to continue using them. We aim to investigate to what extent users are concerned about privacy and how they handle these concerns while using IPAs. Drawing on reviews posted online along with responses to a survey, this paper provides a set of insights about detected markers related to user interests and privacy challenges. The insights suggest that users of these systems, irrespective of their privacy concerns, are generally positive about utilizing IPAs in their everyday lives. However, a significant percentage of users are concerned about privacy and take further actions to address those concerns. Some users stated that they had no privacy concerns, but when they learned about the "always listening" feature of these devices, their concern about privacy increased.
We provide a systematization of knowledge of the recent progress made in addressing the crucial problem of deep learning on encrypted data. The problem is important due to the prevalence of deep learning models across various applications and privacy concerns over the exposure of deep learning IP and users' data. Our focus is on provably secure methodologies that rely on cryptographic primitives rather than trusted third parties or platforms. The computational intensity of the learning models, together with the complexity of realizing the cryptographic algorithms, makes practical implementation a challenge. We provide a summary of the state of the art, a comparison of the existing solutions, and future challenges and opportunities.
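One small, concrete instance of the cryptographic (non-trusted-third-party) approach is scoring a linear model over additively homomorphic ciphertexts; the sketch below uses the python-paillier library (weights and inputs are made up). Deeper networks need the heavier tools such surveys cover, e.g. fully homomorphic encryption or secure multiparty computation.

    # Linear-model inference over Paillier-encrypted features: the server
    # computes on ciphertexts only and never sees the client's inputs.
    from phe import paillier

    pub, priv = paillier.generate_paillier_keypair(n_length=2048)

    weights, bias = [0.25, -0.5, 1.0], 0.1
    features = [1.0, 2.0, 3.0]               # client's private input
    enc_features = [pub.encrypt(x) for x in features]

    # Additive homomorphism: ciphertext + ciphertext, scalar * ciphertext.
    enc_score = sum(w * ex for w, ex in zip(weights, enc_features)) + bias
    print(priv.decrypt(enc_score))           # client decrypts: 2.35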
Deep learning models are vulnerable to adversarial inputs: samples modified in order to maximize the error of the system. We hereby introduce Spartan Networks, deep learning models that are inherently more resistant to adversarial examples, without any input preprocessing outside the network or adversarial training. These networks have an adversarial layer within the network designed to starve the network of information, using a new activation function to discard data. This layer trains the neural network to filter out usually irrelevant parts of its input. These models thus have slightly lower precision, but report higher robustness under attack than unprotected models.
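A hedged PyTorch sketch of the information-starving idea follows; the paper's actual activation function and training scheme are not given in the abstract, so this layer simply zeroes low-magnitude activations as a stand-in.

    # A toy "starving" layer: discard activations below a threshold.
    import torch
    import torch.nn as nn

    class StarveLayer(nn.Module):
        def __init__(self, threshold=0.5):
            super().__init__()
            self.threshold = threshold

        def forward(self, x):
            # Keep only activations whose magnitude exceeds the threshold;
            # a real implementation would care about gradient flow here.
            return torch.where(x.abs() > self.threshold, x, torch.zeros_like(x))

    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 256), nn.ReLU(),
        StarveLayer(threshold=0.5),        # the information-filtering layer
        nn.Linear(256, 10),
    )
    print(model(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 10])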
Human behavior is increasingly sensed and recorded, and used to create models that accurately predict the behavior of consumers, employees, and citizens. While behavioral models are important in many domains, the ability to predict individuals' behavior is the focus of growing privacy concerns. The legal and technological measures for privacy do not adequately recognize and address the ability to infer behavior and traits. In this position paper, we first analyze the shortcomings of existing privacy theories in addressing AI's inferential abilities. We then point to legal and theoretical frameworks that can adequately describe the potential of AI to negatively affect people's privacy. Finally, we present a technical privacy measure that can help bridge the divide between legal and technical thinking with respect to AI and privacy.
Nowadays, vehicular ad hoc networks (VANETs) confront many challenges in terms of security and privacy, because transmitted data are diffused in an open-access environment. Most drivers want to keep their information discreet and protected, and they do not want to share their confidential information. Therefore, the private information of drivers distributed in this network must be protected against various threats that may damage their privacy. That is why confidentiality, integrity, and availability are the important security requirements in VANETs. This paper focuses on security threats in vehicular networks, especially threats to the availability of the network. We consider a rational attacker who decides whether to launch an attack based on its adversary's strategy, so as to maximize its own attack payoff. Our aim is to provide reliability and privacy for the VANET system by preventing attackers from violating and endangering the network. To ensure this objective, we adopt a tree structure called an attack tree to model the attacker's potential attack strategies, and we join countermeasures to the attack tree in order to build an attack-defense tree for defending against these attacks.
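A toy attack-defense tree can be sketched directly as a data structure (hedged: node names, payoffs, and the min-for-AND convention below are illustrative choices, not the paper's VANET trees).

    # OR/AND attack nodes, leaves with attacker payoff, countermeasures
    # that neutralize leaves; value() is the rational attacker's best payoff.
    class Node:
        def __init__(self, name, kind="leaf", children=(), payoff=0, countered=False):
            self.name, self.kind = name, kind
            self.children, self.payoff, self.countered = list(children), payoff, countered

        def value(self):
            if self.kind == "leaf":
                return 0 if self.countered else self.payoff
            vals = [c.value() for c in self.children]
            # OR: attacker picks the best branch; AND: limited by the weakest.
            return max(vals) if self.kind == "or" else min(vals)

    jam = Node("jam channel", payoff=8)
    flood = Node("flood RSU with requests", payoff=6)
    dos = Node("deny availability", kind="or", children=[jam, flood])
    print(dos.value())      # 8: rational attacker picks jamming
    jam.countered = True    # deploy an anti-jamming countermeasure
    print(dos.value())      # 6: attacker falls back to flooding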
Vulnerabilities in hypervisors are crucial in multi-tenant clouds and attractive for attackers, because a vulnerability in the hypervisor can undermine the security of all virtual machines (VMs). This paper focuses on vulnerabilities in instruction emulators inside hypervisors. Vulnerabilities in instruction emulators are not rare: CVE-2017-2583, CVE-2016-9756, CVE-2015-0239, and CVE-2014-3647, to name a few. For backward compatibility with legacy x86 CPUs, conventional hypervisors emulate arbitrary instructions at any time if requested. This design leads to a large attack surface, making it hard to get rid of vulnerabilities in the emulator. This paper proposes FWinst, which narrows the attack surface against vulnerabilities in the emulator. The key insight behind FWinst is that the emulator should emulate only a small subset of instructions, depending on the underlying CPU micro-architecture and the hypervisor configuration. FWinst recognizes the emulation context in which the instruction emulator is invoked, and identifies a legitimate subset of instructions that are allowed to be emulated in the current context. By filtering out illegitimate instructions, FWinst narrows the attack surface. In particular, FWinst is effective on recent x86 micro-architectures, because the legitimate subset becomes very small. Our experimental results demonstrate that FWinst prevents existing vulnerabilities in the emulator from being exploited on the Westmere micro-architecture, and its runtime overhead is negligible.
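The filtering idea can be modeled in a few lines (hedged: this is a conceptual Python model; the context names and opcode sets are invented placeholders, not the paper's actual per-context tables, and the real mechanism lives inside the hypervisor).

    # Context-aware instruction allowlist: refuse to emulate anything
    # outside the legitimate subset for the current emulation context.
    ALLOWED = {
        "mmio": {"mov", "movzx", "stos"},          # placeholder subset
        "pio":  {"in", "out", "ins", "outs"},      # placeholder subset
    }

    def emulate(context, mnemonic):
        if mnemonic not in ALLOWED.get(context, set()):
            raise PermissionError(f"{mnemonic!r} not legitimate in context {context!r}")
        print(f"emulating {mnemonic} in {context}")

    emulate("mmio", "mov")        # allowed
    emulate("mmio", "sysenter")   # filtered out -> PermissionError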
Software-Defined Networking (SDN) is a novel architecture created to address the issues of traditional, vertically integrated networks. To increase cost-effectiveness and enable logical control, SDN provides high programmability and a centralized view of the network through separation of network traffic delivery (the "data plane") from network configuration (the "control plane"). SDN controllers and related protocols are rapidly evolving to address the demands of scaling in complex enterprise networks. Because modern SDN technologies are still evolving, production networks employing SDN are prone to several security vulnerabilities, and the rate at which SDN frameworks evolve continues to outpace attempts to address their security issues. According to our study, existing defense mechanisms, particularly SDN-based firewalls, face new and SDN-specific challenges in successfully enforcing security policies in the underlying network. In this paper, we identify problems associated with SDN-based firewalls, such as ambiguous flow path calculations and poor scalability in large networks. We survey existing SDN-based firewall designs and their shortcomings in protecting a dynamically scaling network like a data center. We extend our study by evaluating one such SDN-specific security solution, FlowGuard, and identifying new attack vectors and vulnerabilities. We also present corresponding threat detection techniques and respective mitigation strategies.
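One of the named pitfalls can be shown in miniature (hedged sketch with made-up prefixes): an SDN firewall must detect when a newly pushed flow rule's match space overlaps a deny policy. Real systems such as FlowGuard must additionally track header rewrites along the flow path; this checks only direct overlap of IPv4 prefixes.

    # Direct header-space overlap check between a flow rule and a policy.
    from ipaddress import ip_network

    def overlaps(rule_prefix, policy_prefix):
        return ip_network(rule_prefix).overlaps(ip_network(policy_prefix))

    deny_policy = "10.0.5.0/24"                  # traffic that must be blocked
    new_flow_rule = "10.0.0.0/16"                # forward rule pushed by an app
    print(overlaps(new_flow_rule, deny_policy))  # True: rule may bypass policy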
Side-channel attacks, such as Spectre and Meltdown, that leverage speculative execution pose a serious threat to computing systems. Worse yet, such attacks can be perpetrated by compromised operating system (OS) kernels to bypass defenses that protect applications from the OS kernel. This work evaluates the performance impact of three different defenses against in-kernel speculation side-channel attacks within the context of Virtual Ghost, a system that protects user data from compromised OS kernels: Intel MPX bounds checks, which require a memory fence; address bit-masking and testing, which creates a dependence between the bounds check and the load/store; and the use of separate virtual address spaces for applications, the OS kernel, and the Virtual Ghost virtual machine, forcing a speculation boundary. Our results indicate that an instrumentation-based bit-masking approach to protection incurs the least overhead by minimizing speculation boundaries. Our work also highlights possible improvements to Intel MPX that could help mitigate speculation side-channel attacks at a lower cost.
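The address bit-masking defense is concrete enough to model (hedged: sketched here in Python purely to show the logic; in a real kernel it is a couple of machine instructions, as in Linux's array_index_nospec()). The mask makes the memory access data-dependent on the bounds computation instead of a branch the CPU can speculate past.

    # Model of index masking: even a mispredicted access stays in bounds.
    TABLE_SIZE = 64            # power of two, so mask = size - 1

    def masked_read(table, idx):
        idx &= TABLE_SIZE - 1  # mask creates a data dependence, no branch
        return table[idx]

    table = list(range(TABLE_SIZE))
    print(masked_read(table, 5))       # 5
    print(masked_read(table, 1000))    # wraps to 1000 & 63 = 40, never out of bounds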
The concept of Internet of Things (IoT) has received considerable attention and development in recent years. There have been significant studies on access control models for IoT in academia, while companies have already deployed several cloud-enabled IoT platforms. However, there is no consensus on a formal access control model for cloud-enabled IoT. The access-control oriented (ACO) architecture was recently proposed for cloud-enabled IoT, with virtual objects (VOs) and cloud services in the middle layers. Building upon ACO, operational and administrative access control models have been published for virtual object communication in cloud-enabled IoT, illustrated by a use case of sensing speeding cars as a running example. In this paper, we study AWS IoT as a major commercial cloud-IoT platform and investigate its suitability for implementing the aforementioned academic models of ACO and VO communication control. While AWS IoT has a notion of digital shadows closely analogous to VOs, it lacks explicit capability for VO communication and thereby for VO communication control. Thus there is a significant mismatch between AWS IoT and these academic models. The principal contribution of this paper is to reconcile this mismatch by showing how to use the mechanisms of AWS IoT to effectively implement VO communication models. To this end, we develop an access control model for virtual objects (shadows) communication in AWS IoT called AWS-IoT-ACMVO. We develop a proof-of-concept implementation of the speeding cars use case in AWS IoT under guidance of this model, and provide selected performance measurements. We conclude with a discussion of possible alternate implementations of this use case in AWS IoT.
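For readers unfamiliar with shadows, the mechanism the paper builds on looks roughly like this boto3 sketch (hedged: thing names, region, and payload are hypothetical; AWS-IoT-ACMVO's actual policies and topology are defined in the paper).

    # VO-style communication via AWS IoT device shadows.
    import json
    import boto3

    iot_data = boto3.client("iot-data", region_name="us-east-1")

    # A "sensor VO" reports a speeding car into its shadow's reported state.
    iot_data.update_thing_shadow(
        thingName="speed_sensor_01",
        payload=json.dumps({"state": {"reported": {"speeding": True, "mph": 92}}}),
    )

    # Another service reads the shadow; this read/update path is where
    # VO-to-VO communication control must be enforced.
    resp = iot_data.get_thing_shadow(thingName="speed_sensor_01")
    print(json.load(resp["payload"]))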
Third-party software daemons called host agents are increasingly responsible for a modern host's security, automation, and monitoring tasks. Because of their location within the host, these agents are at risk of manipulation by malware and users. Additionally, in virtualized environments where multiple adjacent guests each run their own set of agents, the cumulative resources that agents consume add up rapidly. Consolidating agents onto the hypervisor can address these problems, but places a technical burden on agent developers. This work presents a development methodology to re-engineer a host agent into a hyperagent, an out-of-guest agent that gains unique hypervisor-based advantages while retaining its original in-guest capabilities. This three-phase methodology makes integrating Virtual Machine Introspection (VMI) functionality into existing code easier and more accessible, minimizing an agent developer's re-engineering effort. The benefits of hyperagents are illustrated by porting the GRR live forensics agent, which retains 89% of its codebase, uses 40% less memory than its in-guest counterparts, and enables a 4.9x speedup for a representative data-intensive workload. This work shows that a conventional off-the-shelf host agent can feasibly be transformed into a hyperagent, providing a powerful, efficient tool for defending virtualized systems.
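The core re-engineering move can be sketched abstractly (hedged: class and method names below are hypothetical, and the VMI backend is a stub; real introspection would go through a library such as libVMI): put an abstraction layer between agent logic and its data source, so the same logic runs in-guest or from the hypervisor.

    # Backend-agnostic agent logic behind a data-source abstraction.
    from abc import ABC, abstractmethod

    class HostSource(ABC):
        @abstractmethod
        def read_file(self, path: str) -> bytes: ...

    class InGuestSource(HostSource):
        def read_file(self, path):
            with open(path, "rb") as f:       # ordinary in-guest access
                return f.read()

    class VMISource(HostSource):
        def __init__(self, vm_name):
            self.vm_name = vm_name
        def read_file(self, path):
            # Stub: a hyperagent would map guest memory and parse the guest
            # filesystem from the hypervisor here.
            raise NotImplementedError(f"VMI read of {path} on {self.vm_name}")

    def collect_forensics(src: HostSource):
        return src.read_file("/etc/hostname")  # agent logic never changes

    print(collect_forensics(InGuestSource()))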
The diagnosis of performance issues in cloud environments is a challenging problem, due to the different levels of virtualization and the diversity of applications interacting on the same physical host. Moreover, because of privacy, security, ease of deployment, and execution overhead, an agent-less method, which limits its data collection to the physical host level, is often the only acceptable solution. In this paper, a precise host-based method to recover the wait states of processes inside a given Virtual Machine (VM) is proposed. The virtual Process State Detection (vPSD) algorithm computes the state of processes through host kernel tracing. The state of a virtual process (vProcess) is displayed in an interactive trace viewer (Trace Compass) for further inspection. Our VM trace analysis algorithm has been open-sourced for further enhancement and for the benefit of other developers. Experimental evaluations were conducted using a mix of workload types (CPU, disk, and network) with different applications such as Hadoop, MySQL, and Apache. Being based on host hypervisor tracing, vPSD incurs lower overhead (around 0.03%) than other approaches.
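A toy version of the underlying idea (hedged: event names and the state mapping below are simplified stand-ins for the paper's host-kernel tracepoints) is a small state machine over host-level scheduling and VM-exit events.

    # Infer vCPU/process state from host trace events alone.
    events = [
        ("sched_switch_in",  "vcpu0"),   # vCPU gets a physical CPU
        ("kvm_exit_io",      "vcpu0"),   # VM exit for I/O -> likely I/O wait
        ("kvm_entry",        "vcpu0"),   # back to guest execution
        ("sched_switch_out", "vcpu0"),   # vCPU preempted -> host-induced wait
    ]

    state = {}
    for name, vcpu in events:
        if name in ("sched_switch_in", "kvm_entry"):
            state[vcpu] = "RUNNING"
        elif name.startswith("kvm_exit"):
            state[vcpu] = "WAIT_IO" if name.endswith("io") else "IN_HOST"
        elif name == "sched_switch_out":
            state[vcpu] = "PREEMPTED"
        print(f"{name:16s} -> {vcpu} {state[vcpu]}")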
Nowadays, cloud computing has brought remarkable change to companies, organizations, firms, institutions, and more. The IT industry benefits from low investment in infrastructure and maintenance thanks to the growth of cloud computing, and virtualization is regarded as the key enabling technique of cloud computing. Although cloud computing has many benefits, the main drawback of the cloud computing environment is ensuring security. Security means that the cloud service provider must ensure integrity, availability, privacy, confidentiality, authentication, and authorization in data storage, virtual machine security, and so on. In this paper, we present a Local Outlier Factor (LOF) mechanism that can help detect Distributed Denial of Service (DDoS) attacks in a cloud computing environment. Since a DDoS attack grows stronger with the passing of time, its impact can be reduced if it is detected early, so we focus on detecting DDoS attacks to secure the cloud environment. In addition, our scheme is able to identify possible attack sources, giving cloud computing administrators important clues for spotting the outliers. Using WEKA (Waikato Environment for Knowledge Analysis), we compared our scheme with other clustering algorithms on the basis of detection rate and false alarm rate. DR-LOF can serve as a better DDoS detection tool, helping to improve the security framework of cloud computing.
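The paper works in WEKA; as an accessible stand-in, scikit-learn's LocalOutlierFactor applies the same LOF idea to toy traffic features (hedged: the feature choice and data below are synthetic, not the paper's).

    # LOF over per-source traffic features: flood sources stand out.
    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    rng = np.random.default_rng(0)
    normal = rng.normal([50, 500], [5, 50], size=(200, 2))   # req/s, pkt size
    attack = rng.normal([400, 80], [20, 10], size=(5, 2))    # flood sources
    X = np.vstack([normal, attack])

    lof = LocalOutlierFactor(n_neighbors=20)
    labels = lof.fit_predict(X)            # -1 = outlier, 1 = inlier
    print(np.where(labels == -1)[0])       # indices of suspected DDoS sources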
Data privacy and security is a leading concern for providers and customers of cloud computing, where Virtual Machines (VMs) can co-reside within the same underlying physical machine. Side-channel attacks within multi-tenant virtualized cloud environments are an established problem, where attackers are able to monitor and exfiltrate data from co-resident VMs. Virtualization services have attempted to mitigate such attacks by preventing VM-to-VM interference on shared hardware, providing logical resource isolation between co-located VMs via an internal virtual network. However, such approaches are also insecure, with attackers capable of performing network channel attacks that bypass these mitigation strategies using vectors such as ARP spoofing, TCP/IP steganography, and DNS poisoning. In this paper we identify a new vulnerability within the internal cloud virtual network, showing that through a combination of TAP impersonation and mirroring, a malicious VM can successfully redirect and monitor the network traffic of VMs co-located within the same physical machine. We demonstrate the feasibility of this attack in a prominent cloud platform, OpenStack, under various security requirements and system conditions, and propose countermeasures for mitigation.
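One detection direction for the first attack vector named above can be sketched with scapy (hedged: the IP/MAC bindings are placeholders, and a cloud deployment would monitor the virtual switch rather than a single host): flag ARP replies whose MAC contradicts a known-good binding.

    # Watch for ARP replies that contradict trusted IP -> MAC bindings.
    from scapy.all import ARP, sniff

    known = {"10.0.0.1": "fa:16:3e:aa:bb:cc"}   # trusted bindings (placeholder)

    def check(pkt):
        if pkt.haslayer(ARP) and pkt[ARP].op == 2:   # op 2 = is-at (reply)
            ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
            if ip in known and known[ip] != mac:
                print(f"possible ARP spoofing: {ip} claimed by {mac}")

    sniff(filter="arp", prn=check, store=False)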
ARM devices (mobile phones, IoT devices) are increasingly popular in our daily lives due to their low power consumption and cost. These devices carry a large amount of users' private information, which attracts attackers' attention and increases the security risk. Operating systems (e.g., Android, Linux) implement many memory data protection strategies for users' private information. However, a monolithic OS may contain security vulnerabilities that are exploited by an attacker to gain root or even kernel privilege. Once kernel privilege is obtained by the attacker, all data protection strategies are defeated and users' private information can be taken away. In this paper, we propose a hardened memory data protection framework called H-Securebox to defeat kernel-level memory data theft. H-Securebox leverages the ARM hardware virtualization technique to protect data in memory at hypervisor privilege. We design three types of H-Securebox for developers to use. Even if an attacker has kernel privilege, she cannot touch private data inside H-Securebox, since hypervisor privilege is higher than kernel privilege. With an implementation of the H-Securebox system, assisted by a tiny hypervisor, on a Raspberry Pi 2 development board, we measure the performance overhead of our system and perform security evaluations. The results positively show that the overhead is negligible and that a malicious application with root or kernel privilege cannot access the private data protected by our system.
The recently enacted General Data Protection Regulation (GDPR) aims to protect all EU citizens from privacy and data breaches in an increasingly data-driven world. Consequently, this deeply affects the factory domain and its human-centric automation paradigm. In particular, collaboration between humans and machines, as well as individualized support, is enabled and enhanced by processing audio and video data, e.g., by algorithms that re-identify humans or analyse human behaviour. We introduce at a glance the most significant impacts of this recent legal change on the automation domain. Furthermore, we introduce a representative scenario from production and deduce its legal implications under the GDPR, resulting in a privacy-aware software architecture. This architecture covers modern virtualization techniques along with authorization and end-to-end encryption to ensure secure communication between distributed services and databases for distinct purposes.
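The end-to-end encryption requirement alone can be illustrated minimally (hedged sketch using symmetric Fernet tokens from the pyca/cryptography package; the payload is invented, and the authorization half of the architecture, i.e., who receives the key, is out of scope here).

    # Service-to-service payload encryption with a shared symmetric key.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # provisioned only to authorized services
    f = Fernet(key)

    # Producer service encrypts an analysis result before transport.
    token = f.encrypt(b'{"worker_id": null, "event": "assist_requested"}')

    # Only a service holding the key (and thus authorized) can decrypt.
    print(f.decrypt(token))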