Bibliography
Traditional Intrusion Detection Systems (IDSes) are generally implemented on vendor-proprietary appliances or middleboxes, which usually lack a general programming interface and offer little versatility or flexibility. Emerging Network Function Virtualization (NFV) technology can virtualize IDSes and elastically scale them to deal with variations in attack traffic. However, existing NFV solutions treat a virtualized IDS as a monolithic piece of software, which can lead to inflexibility and significant waste of resources. In this paper, we propose a novel approach that virtualizes IDSes as microservices, so that the virtualized IDSes can be customized on demand and the underlying microservices can be shared and scaled independently. We also conduct experiments, which demonstrate that virtualizing IDSes as microservices yields greater flexibility and resource efficiency.
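As a rough illustration of the decomposition this abstract describes, the following Python sketch splits an IDS into a parsing stage and a signature-matching stage that two tenant-specific virtual IDSes share. The stage names, rule format, and tenants are our own hypothetical choices, not the paper's actual design.

```python
# A minimal sketch (not the paper's decomposition) of an IDS built from
# independently scalable microservices. Each stage is a stateless function,
# so extra replicas could be added behind a dispatcher.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

def parse(packet: Packet) -> dict:
    """Protocol-parsing microservice: extract fields for rule matching."""
    return {"src": packet.src, "dst": packet.dst, "payload": packet.payload}

def match_rules(event: dict, rules: list) -> list:
    """Signature-matching microservice: flag payloads containing a rule pattern."""
    return [f"ALERT {event['src']}->{event['dst']}: {r!r}"
            for r in rules if r in event["payload"]]

# Two virtual IDS instances share the same parse/match services but use
# their own rule sets, so each underlying service can scale on its own.
rules_tenant_a = [b"/etc/passwd"]
rules_tenant_b = [b"SELECT * FROM"]

pkt = Packet("10.0.0.5", "10.0.0.9", b"GET /etc/passwd HTTP/1.1")
print(match_rules(parse(pkt), rules_tenant_a))  # tenant A alert fires
print(match_rules(parse(pkt), rules_tenant_b))  # tenant B sees nothing
```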
Cybersecurity analysts are often presented with suspicious machine activity that does not conclusively indicate compromise, resulting in undetected incidents or costly investigations into the most appropriate remediation actions. There are many reasons for this: deficiencies in the number and quality of the security products that are deployed, poor configuration of those security products, and incomplete reporting of product-security telemetry. Managed Security Service Providers (MSSPs), which are tasked with detecting security incidents on behalf of multiple customers, are confronted with these data quality issues, but also possess a wealth of cross-product security data that enables innovative solutions. We use MSSP data to develop Virtual Product, which addresses the aforementioned data challenges by predicting what security events would have been triggered by a security product if it had been present. This benefits analysts by providing more (albeit probabilistic) context for existing security incidents and by making questionable security incidents more conclusive. We achieve up to 99% AUC in predicting the incidents that some products would have detected had they been present.
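To make the prediction task concrete, here is a hedged sketch of the Virtual Product idea on synthetic data: a classifier is trained on event counts from the products a customer does have to predict whether an absent product would have alerted, evaluated by AUC as in the abstract. The features, labels, and model choice are illustrative assumptions, not the paper's pipeline.

```python
# Hedged sketch: predict whether a missing security product would have
# alerted, given events from the products that were deployed.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Rows: incidents. Columns: event counts from deployed products (synthetic).
X = rng.poisson(2.0, size=(1000, 20))
# Label: would the absent product have alerted on this incident?
# Here the label is synthetically tied to two feature columns.
y = ((X[:, 0] + X[:, 3]) > 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```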
In this paper we make the case for IoT edge offloading, which strives to exploit the resources on edge computing devices by offloading fine-grained computation tasks from the cloud closer to the users and data generators (i.e., IoT devices). The key motive is to enhance performance, security, and privacy for IoT services. Our proposal bridges the gap between cloud computing and IoT by applying a divide-and-conquer approach over the multi-level (cloud, edge, and IoT) information pipeline. To validate the design of IoT edge offloading, we developed a unikernel-based prototype and evaluated the system under various hardware and network conditions. Our experimentation has shown promising results and revealed the limitations of existing IoT hardware and virtualization platforms, shedding light on future research in edge computing and IoT.
As workloads and data move to the cloud, it is essential that software writers are able to protect their applications from untrusted hardware, systems software, and co-tenants. Intel® Software Guard Extensions (SGX) enables a new mode of execution that is protected from attacks in such an environment with strong confidentiality, integrity, and replay protection guarantees. Though SGX supports memory oversubscription via paging, virtualizing the protected memory presents a significant challenge to Virtual Machine Monitor (VMM) writers and comes with a high performance overhead. This paper introduces SGX Oversubscription Extensions that add additional instructions and virtualization support to the SGX architecture so that cloud service providers can oversubscribe secure memory in a less complex and more performant manner.
When transferring sensitive data to a non-trusted party, end-users require that the data be kept private. Mobile and IoT application developers want to leverage the sensitive data to provide better user experience and intelligent services. Unfortunately, existing programming abstractions make it impossible to reconcile these two seemingly conflicting objectives. In this paper, we present a novel programming mechanism for distributed managed execution environments that hides sensitive user data, while enabling developers to build powerful and intelligent applications, driven by the properties of the sensitive data. Specifically, the sensitive data is never revealed to clients, being protected by the runtime system. Our abstractions provide declarative and configurable data query interfaces, enforced by a lightweight distributed runtime system. Developers define when and how clients can query the sensitive data's properties (e.g., how long the data remains accessible, how many times its properties can be queried, and which data query methods apply). Based on our evaluation, we argue that integrating our novel mechanism with the Java Virtual Machine (JVM) can address some of the most pertinent privacy problems of IoT and mobile applications.
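A minimal sketch of the query-interface idea, under the simplifying assumption of a single-process runtime: the raw value stays inside a wrapper, and clients may only invoke declared property queries subject to an expiry time and a query budget. The class and query names are hypothetical, not the paper's API.

```python
# Minimal sketch: clients never see the raw sensitive value, only answers
# to declared property queries, bounded by a TTL and a query budget.

import time

class SensitiveData:
    def __init__(self, value, ttl_seconds, max_queries, allowed_queries):
        self._value = value                      # never exposed directly
        self._expires = time.time() + ttl_seconds
        self._budget = max_queries
        self._allowed = allowed_queries          # name -> function on the value

    def query(self, name, *args):
        if time.time() > self._expires:
            raise PermissionError("data no longer accessible")
        if self._budget <= 0:
            raise PermissionError("query budget exhausted")
        if name not in self._allowed:
            raise PermissionError(f"query '{name}' not permitted")
        self._budget -= 1
        return self._allowed[name](self._value, *args)

# The developer declares which properties of a location may be queried.
home = (48.137, 11.575)
loc = SensitiveData(home, ttl_seconds=3600, max_queries=3,
                    allowed_queries={"in_lat_band": lambda v, band:
                                     band[0] <= v[0] <= band[1]})
print(loc.query("in_lat_band", (48.0, 48.3)))  # True, coordinates stay hidden
```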
Ultra-dense networks are attracting significant interest due to their ability to provide next-generation 5G cellular networks with high data rates, low delay, and seamless coverage. Several factors, such as interference, energy constraints, and backhaul bottlenecks, may limit wireless network densification. In this paper, we study the effect of mobile node densification, access node densification, and their aggregation into virtual entities, referred to as virtual cells, on location privacy. Simulations show that the number of tracked mobile nodes can be statistically reduced by up to 10 percent by implementing virtual cells. Moreover, the experiments highlight that the success of tracking attacks is inversely related to the number of moving nodes. The present paper is a preliminary attempt to analyse the effectiveness of cell virtualization in mitigating location privacy threats in ultra-dense networks.
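The inverse relationship between tracking success and node density can be illustrated with a toy simulation (our construction, not the paper's model): an observer who only sees which virtual cell each node occupies can re-identify a target only when the target is alone in its cell, so success drops as nodes multiply.

```python
# Toy simulation: tracking success vs. number of moving nodes when nodes
# are aggregated into virtual cells and only cell membership is observable.

import random

def tracking_success_rate(n_nodes, n_cells, trials=10000):
    hits = 0
    for _ in range(trials):
        cells = [random.randrange(n_cells) for _ in range(n_nodes)]
        # The target is node 0; the attacker re-identifies it only when no
        # other node shares its cell (otherwise the linkage is ambiguous).
        if cells.count(cells[0]) == 1:
            hits += 1
    return hits / trials

for n in (2, 10, 50):
    print(n, "nodes:", tracking_success_rate(n, n_cells=20))
```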
Users today enjoy access to a wealth of services that rely on user-contributed data, such as recommendation services, prediction services, and services that help classify and interpret data. The quality of such services inescapably relies on trustworthy contributions from users. However, validating the trustworthiness of contributions may rely on privacy-sensitive contextual data about the user, such as a user's location or usage habits, creating a conflict between privacy and trust: users benefit from a higher-quality service that identifies and removes illegitimate user contributions, but, at the same time, they may be reluctant to let the service access their private information to achieve this high quality. We argue that this conflict can be resolved with a pragmatic Glimmer of Trust, which allows services to validate user contributions in a trustworthy way without forfeiting user privacy. We describe how trustworthy hardware such as Intel's SGX can be used on the client side (in contrast to much recent work exploring SGX in cloud services) to realize the Glimmer architecture, and demonstrate how this realization is able to resolve the tension between privacy and trust in a variety of cases.
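The following sketch illustrates the Glimmer-style flow under heavy simplification: a client-side trusted component checks a contribution against private context and returns only an authenticated verdict, so the service validates the contribution without ever seeing the user's location. Real SGX attestation and key provisioning are far more involved; all names here are hypothetical.

```python
# Simplified client-side-trust flow: the "enclave" MACs a verdict about a
# contribution, never the private context used to reach that verdict.

import hmac, hashlib, json

ENCLAVE_KEY = b"provisioned-via-remote-attestation"  # placeholder secret

def enclave_validate(contribution: dict, private_location: tuple):
    """Runs inside trusted hardware: checks that the report is plausible
    given the private location, then MACs the verdict (not the location)."""
    plausible = abs(contribution["lat"] - private_location[0]) < 0.01
    verdict = json.dumps({"id": contribution["id"], "valid": plausible})
    tag = hmac.new(ENCLAVE_KEY, verdict.encode(), hashlib.sha256).hexdigest()
    return verdict, tag

def service_verify(verdict: str, tag: str) -> bool:
    """Runs at the service: trusts the verdict iff the MAC checks out."""
    expected = hmac.new(ENCLAVE_KEY, verdict.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

verdict, tag = enclave_validate({"id": 7, "lat": 48.137}, (48.138, 11.575))
print(service_verify(verdict, tag), verdict)
```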
The Internet of Things (IoT) brings together connections between sensors and devices. These smart devices have been upgraded from standalone devices, each handling only a specific task at a time, to interactive devices that can handle multiple tasks. However, this technology is exposed to many vulnerabilities, especially malicious attacks on the devices. Given the constraints of the IoT and the weak security mechanisms applied, malicious attacks can exploit sensor vulnerabilities to provide wrong data, which can lead to wrong interpretation and actuation for users. Due to these problems, this short paper presents an event-based access control framework that considers integrity, privacy, and authenticity in IoT devices.
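As one possible reading of such a framework (the paper does not specify this design), the sketch below gates actuation on three checks per event: authenticity via a per-sensor MAC, integrity via a plausibility bound on the reported value, and a least-privilege rule on which actuators an event may trigger.

```python
# Hypothetical event-based access control check before actuation.
# Key management and the policy language are simplified assumptions.

import hmac, hashlib

SENSOR_KEYS = {"thermo-1": b"per-device-secret"}
POLICY = {"thermo-1": {"min": -40.0, "max": 85.0, "may_trigger": {"hvac"}}}

def authentic(sensor_id, value, tag):
    msg = f"{sensor_id}:{value}".encode()
    good = hmac.new(SENSOR_KEYS[sensor_id], msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, good)

def authorize(sensor_id, value, tag, actuator):
    rule = POLICY.get(sensor_id)
    return (rule is not None
            and authentic(sensor_id, value, tag)      # authenticity
            and rule["min"] <= value <= rule["max"]   # integrity/plausibility
            and actuator in rule["may_trigger"])      # least privilege

tag = hmac.new(SENSOR_KEYS["thermo-1"], b"thermo-1:21.5", hashlib.sha256).hexdigest()
print(authorize("thermo-1", 21.5, tag, "hvac"))    # True
print(authorize("thermo-1", 999.0, tag, "hvac"))   # False: tag and bound fail
```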
The present study's primary objective is to determine whether gender, combined with the educational background of Internet users, has an effect on the way online privacy is perceived and practiced within cloud services, specifically in social networking, e-commerce, and online banking. An online questionnaire was distributed through e-mail and social media (Facebook, LinkedIn, and Google+). Our primary hypothesis is that an interrelationship may exist among a user's gender, educational background, and the way the user perceives and acts regarding online privacy. An analysis of a representative sample of Greek Internet users revealed that gender affects online users' awareness of online privacy, as well as the way they act upon it. Furthermore, we found that a correlation also exists between the users' educational background and online privacy.
The growth in cloud-based services tailored to users means more and more personal data is being exploited, and with this comes the need to better handle user privacy. Software technologies concentrating on privacy preservation typically present a one-size-fits-all solution. However, users have different views of what privacy means to them; therefore, configurable and dynamic privacy-preserving solutions have the potential to create useful and tailored services without breaching any user's privacy. In this paper, we present a model of user-centered privacy that can be used to analyse a service's behaviour against user preferences, such that a user can be informed of the privacy implications of that service and what fine-grained actions they can take to maintain their privacy. We show through a study that the user-based privacy model can: i) provide customizable privacy aligned with user needs; and ii) identify potential privacy breaches.
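A minimal sketch of how such a model might flag mismatches between a service's observed data flows and a user's preferences; the preference vocabulary and flow labels are our own illustrative assumptions, not the paper's model.

```python
# Audit a service's observed behaviour against per-user privacy preferences,
# reporting each potential breach and a fine-grained remedial action.

USER_PREFS = {"location": "never", "contacts": "on_device_only", "email": "ok"}

SERVICE_BEHAVIOUR = [  # (data type, where it flows)
    ("location", "third_party_analytics"),
    ("contacts", "on_device"),
    ("email", "service_cloud"),
]

def audit(prefs, behaviour):
    for data, flow in behaviour:
        rule = prefs.get(data, "never")
        if rule == "never" or (rule == "on_device_only" and flow != "on_device"):
            yield (data, flow, f"deny '{data}' permission for this service")

for breach in audit(USER_PREFS, SERVICE_BEHAVIOUR):
    print("potential breach:", breach)
```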
The exponential increase in Internet of Things (IoT) devices has resulted in a range of new and unanticipated vulnerabilities associated with their use. IoT devices, from smart homes to smart enterprises, can easily be compromised. One of the major problems associated with the IoT is maintaining security; the vulnerable nature of IoT devices poses a challenge to many aspects of security, including security testing and analysis. It is far from trivial to perform security analysis for IoT devices to understand their loopholes and the very nature of the devices themselves. Given these issues, there has been less emphasis on security testing and analysis of the IoT. In this paper, we show our preliminary efforts in the area of security analysis for IoT devices and introduce a security IoT testbed for performing such analysis. We also discuss the necessary design, requirements, and architecture to support our security analysis conducted via the proposed testbed.
Major online messaging services such as Facebook Messenger and WhatsApp are starting to provide users with real-time information about when people read their messages. While useful, this feature has the potential to negatively impact privacy as well as cause concern over access to self. We report on two surveys using Mechanical Turk which looked at senders' (N=402) use of and reactions to the 'message seen' feature, and recipients' (N=316) privacy and signaling behaviors in the face of such visibility. Our findings indicate that senders experience a range of emotions when their message is not read, or is read but not answered immediately. Recipients likewise engage in various signaling behaviors in the face of visibility, both by replying and by not replying immediately.
Augmented reality (AR) technologies, such as Microsoft's HoloLens head-mounted display and AR-enabled car windshields, are rapidly emerging. AR applications provide users with immersive virtual experiences by capturing input from a user's surroundings and overlaying virtual output on the user's perception of the real world. These applications enable users to interact with and perceive virtual content in fundamentally new ways. However, the immersive nature of AR applications raises serious security and privacy concerns. Prior work has focused primarily on input privacy risks stemming from applications with unrestricted access to sensor data. However, the risks associated with malicious or buggy AR output remain largely unexplored. For example, an AR windshield application could intentionally or accidentally obscure oncoming vehicles or safety-critical output of other AR applications. In this work, we address the fundamental challenge of securing AR output in the face of malicious or buggy applications. We design, prototype, and evaluate Arya, an AR platform that controls application output according to policies specified in a constrained yet expressive policy framework. In doing so, we identify and overcome numerous challenges in securing AR output.
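To illustrate the flavor of output policies like Arya's (the policy below is our own example, not one from the paper), consider a rule that caps the opacity of any virtual object overlapping a detected vehicle before the frame is drawn:

```python
# Sketch of output-policy enforcement: each policy inspects a virtual object
# against the scene and returns a (possibly modified) object; the platform
# applies all policies before drawing. The object model is a simplification.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ArObject:
    app: str
    bbox: tuple      # (x, y, w, h) in screen space
    opacity: float

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def dont_obscure_vehicles(obj, detected_vehicles):
    """If a virtual object overlaps a detected vehicle, force transparency."""
    if any(overlaps(obj.bbox, v) for v in detected_vehicles):
        return replace(obj, opacity=min(obj.opacity, 0.2))
    return obj

vehicles = [(100, 100, 80, 60)]
ad = ArObject("ads-app", bbox=(120, 110, 50, 50), opacity=1.0)
print(dont_obscure_vehicles(ad, vehicles))  # opacity capped at 0.2
```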
The evolution of cloud gaming systems has substantially changed the security requirements for computer games. Although online game development often utilizes artificial intelligence and human-computer interaction, game developers and providers often do not pay much attention to security techniques. In cloud gaming, location-based games are augmented reality games that take the original principles of a game and apply them to the real world; in other words, they use the real world to shape the game experience. Because the execution of such games is distributed in cloud computing, users cannot be certain where their input and output data are managed. This introduces the possibility of injecting incorrect data into the exchange between the gamer's terminal and the gaming platform. In this context, we propose a new gaming concept for augmented reality and location-based games in order to solve the aforementioned cheating problem. The merit of our approach is that it establishes an accurate and verifiable proof that the gamer reached the goal or found the target. The major novelty of our method is that it allows the gamer to submit an authenticated proof related to the game result without compromising the privacy of positioning data.
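One way such a proof might be checked without disclosing raw positions (our assumption; the paper's actual protocol may differ) is a salted commitment to the target's location cell, which the platform, already knowing the target, can verify without learning anything else about the gamer's trajectory:

```python
# Sketch: the gamer commits to the target cell; the platform recomputes the
# commitment for the known target only, so no coordinates are transmitted.

import hashlib, os

def commit(cell_id: str, salt: bytes) -> str:
    return hashlib.sha256(salt + cell_id.encode()).hexdigest()

# Gamer side: prove presence in the target cell without sending coordinates.
salt = os.urandom(16)
proof = commit("grid-48.137-11.575", salt)

# Platform side: check the proof against the known target cell.
def verify(proof: str, salt: bytes, target_cell: str) -> bool:
    return proof == commit(target_cell, salt)

print(verify(proof, salt, "grid-48.137-11.575"))  # True: goal reached
print(verify(proof, salt, "grid-52.520-13.405"))  # False: wrong target
```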
Robots operating alongside humans in field environments have the potential to greatly increase the situational awareness of their human teammates. A significant challenge, however, is the efficient conveyance of what the robot perceives to the human in order to achieve improved situational awareness. We believe augmented reality (AR), which allows a human to simultaneously perceive the real world and digital information situated virtually in the real world, has the potential to address this issue. We propose to demonstrate that augmented reality can be used to enable human-robot cooperative search, where the robot can both share search results and assist the human teammate in navigating to a search target.
Immersive augmented reality (AR) technologies are becoming a reality. Prior works have identified security and privacy risks raised by these technologies, primarily considering individual users or AR devices. However, we make two key observations: (1) users will not always use AR in isolation, but also in ecosystems of other users, and (2) since immersive AR devices have only recently become available, the risks of AR have been largely hypothetical to date. To provide a foundation for understanding and addressing the security and privacy challenges of emerging AR technologies, grounded in the experiences of real users, we conduct a qualitative lab study with an immersive AR headset, the Microsoft HoloLens. We conduct our study in pairs (22 participants across 11 pairs), wherein participants engage in paired and individual (but physically co-located) HoloLens activities. Through semi-structured interviews, we explore participants' security, privacy, and other concerns, surfacing key findings. For example, we find that despite the HoloLens's limitations, participants were easily immersed, treating virtual objects as real (e.g., stepping around them for fear of tripping). We also uncover numerous security, privacy, and safety concerns unique to AR (e.g., deceptive virtual objects misleading users about the real world), and a need for access control among users to manage shared physical spaces and virtual content embedded in those spaces. Our findings give us the opportunity to identify broader lessons and key challenges to inform the design of emerging single- and multi-user AR technologies.
In the context of Industry 4.0, Augmented Reality (AR) is frequently mentioned as the upcoming interface technology for human-machine communication and collaboration. Many prototypes have already emerged in both the consumer market and the industrial sector. According to numerous experts, it will take only a few years until AR reaches the maturity level required for deployment in productive applications. Especially for industrial use, it is necessary to assess the security risks and challenges this new technology entails. We focus on plant operators, Original Equipment Manufacturers (OEMs), and component vendors as stakeholders. Starting from several industrial AR use cases and the structure of contemporary AR applications, in this paper we identify security assets worthy of protection and derive the corresponding security goals. Afterwards, we elaborate on the threats industrial AR applications are exposed to and develop an edge computing architecture for future AR applications that encompasses various measures to reduce security risks for our stakeholders.
This paper explores the potential of enabling SDN security and monitoring services by piggybacking on SDN reactive routing. As a case study, we implement and evaluate a piggybacking based intrusion prevention system called SDN-Defense. Our study of university WiFi traffic traces reveals that up to 73% of malicious flows can be detected by inspecting just the first three packets of a flow, and 90% of malicious flows from the first four packets. Using such empirical insights, we propose to forward the first K packets of each new flow to an augmented SDN controller for security inspection, where K is a dynamically configurable parameter. We characterize the cost-benefit trade-offs of SDN-Defense using real wireless traces and discuss potential scalability issues. Finally, we discuss other applications which can be enhanced by using our proposed piggybacking approach.
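The forwarding rule at the heart of SDN-Defense is easy to sketch; below is a hedged Python illustration in which a switch-side counter sends the first K packets of each new flow to the controller for inspection and then short-circuits to the fast path. The flow key and action names are hypothetical, not the system's actual API.

```python
# Sketch of the first-K-packets inspection rule piggybacked on SDN
# reactive routing. K is the dynamically configurable parameter.

K = 4  # packets per new flow to inspect (the abstract's 90%-detection point)
flow_counts = {}  # flow key -> packets seen so far

def handle_packet(pkt):
    key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
    seen = flow_counts.get(key, 0)
    flow_counts[key] = seen + 1
    if seen < K:
        return "forward_to_controller"   # inspected by the augmented controller
    return "fast_path"                   # flow rule installed, bypasses the IDS

flow = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 5555, "dport": 80, "proto": 6}
print([handle_packet(flow) for _ in range(6)])
# ['forward_to_controller'] * 4 followed by ['fast_path'] * 2
```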
Software permeates every aspect of our world, from our homes to the infrastructure that provides mission-critical services. As the size and complexity of software systems increase, the number and sophistication of software security flaws increase as well. The analysis of these flaws began as a manual approach, but it soon became apparent that a manual approach alone cannot scale, and that tools were necessary to assist human experts in this task, resulting in a number of techniques and approaches that automated certain aspects of the vulnerability analysis process. Recently, DARPA carried out the Cyber Grand Challenge, a competition among autonomous vulnerability analysis systems designed to push the tool-assisted human-centered paradigm into the territory of complete automation, with the hope that, by removing the human factor, the analysis would be able to scale to new heights. However, when the autonomous systems were pitted against human experts it became clear that certain tasks, albeit simple, could not be carried out by an autonomous system, as they require an understanding of the logic of the application under analysis. Based on this observation, we propose a shift in the vulnerability analysis paradigm, from tool-assisted human-centered to human-assisted tool-centered. In this paradigm, the automated system orchestrates the vulnerability analysis process, and leverages humans (with different levels of expertise) to perform well-defined sub-tasks, whose results are integrated in the analysis. As a result, it is possible to scale the analysis to a larger number of programs, and, at the same time, optimize the use of expensive human resources. In this paper, we detail our design for a human-assisted automated vulnerability analysis system, describe its implementation atop an open-sourced autonomous vulnerability analysis system that participated in the Cyber Grand Challenge, and evaluate and discuss the significant improvements that non-expert human assistance can offer to automated analysis approaches.
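The orchestration loop the paper proposes can be caricatured as follows (a sketch under our own assumptions about task types, not the authors' implementation): the automated system drives the analysis queue, and when a tool gets stuck it escalates one narrow, well-defined question to a human and folds the answer back into the automated exploration.

```python
# Sketch of a human-assisted, tool-centered analysis loop. Task kinds and
# the example question/answer are illustrative placeholders.

from collections import deque

def automated_step(task):
    """Run the task with tools; return None when the system is stuck."""
    return None if task["kind"] == "fuzz" else {"done": True}

def human_step(question):
    """A focused sub-task a non-expert can answer without full context."""
    if question == "identify_input_format":
        return {"format": "length-prefixed records"}  # example answer
    return {}

queue = deque([{"kind": "fuzz", "target": "parser"}])
knowledge = {}
while queue:
    task = queue.popleft()
    if automated_step(task) is None:
        # Escalate one narrow question, not the whole program.
        knowledge.update(human_step("identify_input_format"))
        # Re-queue the task, now seeded with the human's answer.
        queue.append({"kind": "fuzz_seeded", "target": task["target"], **knowledge})
print(knowledge)
```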