Biblio
Cloud-backed file systems provide on-demand, high-availability, scalable storage. Their security may be improved with techniques such as erasure codes and secret sharing, which fragment files and encryption keys across several clouds. Attacking the server side of such systems involves penetrating one or more clouds, which can be extremely difficult. Despite all these benefits, a weak side remains: the client side. Client devices store user credentials that, if stolen or compromised, may lead to confidentiality, integrity, and availability violations. In this paper we propose RockFS, a cloud-backed file system framework that aims to make the client side of such systems resilient to attacks. RockFS protects data in the client device and allows undoing unintended file modifications.
When vertically aligned carbon nanotube arrays (CNT forests) are heated by optical, electrical, or other means, heat confinement in the lateral directions (i.e. perpendicular to the CNTs' axes), which stems from the anisotropic structure of the forest, is expected to play an important role. It has been found that, although the forest is primarily conductive along the CNTs' axes, focusing a laser beam on the sidewall of a CNT forest can lead to a highly localized hot region (an effect known as ``Heat Trap'') and efficient thermionic emission. This unusual heat confinement phenomenon has applications where the spread of heat must be minimized but electrical conduction is required, notably in energy conversion (e.g. vacuum thermionics and thermoelectrics). However, despite its strong scientific and practical importance, the existence and role of lateral heat confinement in the Heat Trap effect have so far remained elusive. In this work, for the first time, by using a rotating elliptical laser beam, we directly observe the existence of this lateral heat confinement and its corresponding effects on the unusual temperature rise during the Heat Trap effect.
The Internet of Things provides household device users with the ability to connect and manage numerous devices over a common platform. However, the sheer number of possible privacy settings creates issues such as choice overload. This article outlines a data-driven approach to understanding how users make privacy decisions in household IoT scenarios. We demonstrate that users are influenced not just by the specifics of the IoT scenario, but also by aspects immaterial to the decision, such as the default setting and its framing.
We explore a new security model for secure computation on large datasets. We assume that two servers have been employed to compute on private data collected from many users, and, in order to improve the efficiency of their computation, we establish a new tradeoff with privacy. Specifically, instead of claiming that the servers learn nothing about the input values, we claim that what they do learn from the computation preserves the differential privacy of the input. Leveraging this relaxation of the security model allows us to build a protocol that leaks some information in the form of access patterns to memory, while also providing a formal bound on what is learned from the leakage. We then demonstrate that this leakage is useful in a broad class of computations. We show that computations such as histograms, PageRank and matrix factorization, which can be performed in common graph-parallel frameworks such as MapReduce or Pregel, benefit from our relaxation. We implement a protocol for securely executing graph-parallel computations, and evaluate its performance on the three examples mentioned above. We demonstrate marked improvement over prior implementations for these computations.
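As a rough illustration of this relaxed model (a simplified sketch, not the paper's protocol): for a histogram, what the servers may observe is how many records are touched per bucket, and padding each bucket with a noisy number of dummy records keeps that leaked access pattern differentially private while the true counts remain protected inside the secure computation. The function names, the noise shift, and the clipping below are illustrative assumptions.

```python
import math
import random
from collections import Counter

def sample_laplace(scale):
    """Draw Laplace noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_padded_access_counts(values, epsilon, shift=20):
    """Toy model of DP-bounded access-pattern leakage for a histogram.

    The true per-bucket counts stay hidden (in the real protocol they live
    inside the secure two-server computation); what an observer sees is the
    padded number of memory accesses per bucket, which includes a noisy
    number of dummy records.  The shift keeps the pad non-negative in all
    but rare cases; parameter values are illustrative only.
    """
    true_counts = Counter(values)
    observed = {}
    for bucket, count in true_counts.items():
        pad = max(0, round(sample_laplace(1.0 / epsilon) + shift))
        observed[bucket] = count + pad   # leaked access count, not the true count
    return true_counts, observed

true, leaked = dp_padded_access_counts(["A", "A", "B", "C", "C", "C"], epsilon=0.5)
print(true, leaked)
```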
In Smart Grids (SGs), the data aggregation process is essential for limiting packet size, data transmission volume, and data storage requirements. This paper presents a novel Domingo-Ferrer additive-privacy-based Secure Data Aggregation (SDA) scheme for Fog Computing based SGs (FCSG). The proposed protocol achieves end-to-end confidentiality while ensuring low communication and storage overhead. Data aggregation is performed at the fog layer to reduce the amount of data to be processed and stored at cloud servers. As a result, the proposed protocol achieves better response time and lower computational overhead compared to existing solutions. Moreover, due to the hierarchical architecture of FCSG and the additive homomorphic encryption, consumer privacy is protected from third parties. Theoretical analysis evaluates the effects of packet size and the number of packets on transmission overhead and on the amount of data stored in the cloud server. In parallel with the theoretical analysis, our performance evaluation results show a significant improvement in data transmission and storage efficiency. Moreover, security analysis proves that the proposed scheme successfully ensures the privacy of the collected data.
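The abstract does not spell out the Domingo-Ferrer construction; as a stand-in, the sketch below uses textbook Paillier (another additively homomorphic scheme) to show the aggregation pattern: the fog node multiplies ciphertexts, which adds the underlying meter readings, and only the key holder at the cloud/utility side decrypts the sum. The tiny primes and the sample readings are illustrative assumptions, not values from the paper.

```python
import math
import random

# Toy Paillier keypair (never use such small parameters in practice).
p, q = 1009, 1013
n = p * q
n_sq = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):                      # Paillier's L function
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (L(pow(c, lam, n_sq)) * mu) % n

def fog_aggregate(ciphertexts):
    """Fog-layer aggregation: the product of ciphertexts encrypts the sum."""
    acc = 1
    for c in ciphertexts:
        acc = (acc * c) % n_sq
    return acc

readings = [523, 812, 640, 701]                  # hypothetical meter readings (Wh)
aggregate = fog_aggregate(encrypt(m) for m in readings)
assert decrypt(aggregate) == sum(readings)       # only the key holder sees the total
print("aggregated consumption:", decrypt(aggregate))
```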
Protocols for securely testing the equality of two encrypted integers are common building blocks for a number of proposals in the literature that aim for privacy preservation. Because such tests are used repeatedly in many cryptographic protocols, designing efficient equality testing protocols is important in terms of computation and communication overhead. In this work, we consider a scenario with two parties where party A has two integers encrypted using an additively homomorphic scheme and party B has the decryption key. Party A would like to obtain an encrypted bit that shows whether the integers are equal or not, but nothing more. We propose three secure equality testing protocols, which are more efficient in terms of communication, computation, or both compared to existing work. To support our claims, we present experimental results, which show that our protocols achieve up to 99% computation-wise improvement compared to state-of-the-art protocols in a fair experimental set-up.
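The three proposed protocols are not described in the abstract; as background, here is a minimal sketch of the classic blinded-difference building block over an additively homomorphic scheme (exponential ElGamal as a stand-in): party A homomorphically computes Enc(r*(a-b)) for a random nonzero r, and the key holder B only learns whether the plaintext is zero. Unlike the protocols the paper proposes, this simple version reveals the equality bit to B; the group parameters and values are illustrative assumptions.

```python
import random

# Toy group: safe prime p = 2q + 1 with q = 1019, and g = 4 generating the
# order-q subgroup.  Illustration only; real deployments use far larger groups.
q = 1019
p = 2 * q + 1
g = 4

x = random.randrange(1, q)          # B's secret key
h = pow(g, x, p)                    # public key

def enc(m):
    """Exponential ElGamal: Enc(m) = (g^r, g^m * h^r), additively homomorphic."""
    r = random.randrange(1, q)
    return (pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p)

def sub(c1, c2):
    """Enc(m1 - m2): multiply c1 by the componentwise inverse of c2."""
    return ((c1[0] * pow(c2[0], -1, p)) % p, (c1[1] * pow(c2[1], -1, p)) % p)

def scalar_mul(c, k):
    """Enc(k * m): raise both components to the power k."""
    return (pow(c[0], k, p), pow(c[1], k, p))

def b_checks_zero(c):
    """B decrypts to g^m and tests m == 0, i.e. g^m == 1 (no discrete log needed)."""
    gm = (c[1] * pow(pow(c[0], x, p), -1, p)) % p
    return gm == 1

# Party A holds Enc(a) and Enc(b), blinds their difference, and sends it to B.
a, b = 42, 42
blinded = scalar_mul(sub(enc(a), enc(b)), random.randrange(1, q))
print("equal" if b_checks_zero(blinded) else "not equal")
```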
Nowadays, vehicular ad hoc networks (VANETs) confront many challenges in terms of security and privacy, because transmitted data are diffused in an open-access environment. Most drivers want to keep their information discreet and protected, and they do not want to share their confidential information. Therefore, the private information of the drivers distributed across this network must be protected against various threats that may damage their privacy, which is why confidentiality, integrity, and availability are the important security requirements in VANETs. This paper focuses on security threats in vehicular networks, especially threats to the availability of the network. We then consider a rational attacker who decides to launch an attack based on its adversary's strategy in order to maximize its own attack payoff. Our aim is to provide reliability and privacy for the VANET system by preventing attackers from violating and endangering the network. To achieve this objective, we adopt a tree structure called an attack tree to model the attacker's potential attack strategies, and we attach countermeasures to the attack tree in order to build an attack-defense tree for defending against these attacks.
In the past decade, the revolution in miniaturization (microprocessors, batteries, cameras, etc.) and the manufacturing of new types of sensors have resulted in a new class of applications based on smart objects, collectively called the IoT. The majority of such applications or services aim to ease human life and/or to set up efficient processes in automated environments. However, this convenience comes with new challenges related to data security and human privacy. The objects in the IoT are resource-constrained devices and cannot implement a fool-proof security framework. These end devices work like eyes and ears that interact with the physical world and collect data for analytics to make expedient decisions. The storage and analysis of the collected data are done remotely using cloud computing. The transfer of data from the IoT to the computing clouds can introduce privacy issues and network delays. Some applications need real-time decisions and cannot tolerate the delays and jitter in the network. Here, edge computing or fog computing plays its role in settling these issues by providing cloud-like facilities near the end devices. In this paper, we discuss the IoT, fog computing, the relationship between IoT and fog computing, their security issues, and solutions proposed by different researchers. We summarize the attack surface related to each layer of this paradigm, which will help in proposing new security solutions to increase its acceptability among end users. We also propose a risk-based trust management model for a smart healthcare environment to cope with security and privacy-related issues in this highly unpredictable, heterogeneous ecosystem.
Recently, the IoT, 5G mobile networks, big data, and artificial intelligence are increasingly used in the real world. These technologies converge in Cyber-Physical Systems (CPS). CPS technology requires core technologies to ensure reliability, real-time operation, safety, autonomy, and security. A CPS is a system that connects cyberspace and physical space. Attacks in cyberspace spill over into the real world and can cause a great deal of damage. The personal information handled in a CPS is highly confidential, so policies and techniques are needed to guard against attacks in advance. If there is an attack on a CPS, not only personal information but also national confidential data can be leaked. In order to prevent this, risk is measured using the Factor Analysis of Information Risk (FAIR) model, which can measure risk by element for situational awareness in the CPS environment. To reduce risk by preventing attacks in the CPS, this paper measures risk after applying the concept of Crime Prevention Through Environmental Design (CPTED).
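For context, FAIR decomposes risk into loss event frequency (threat event frequency times vulnerability) and loss magnitude, typically estimated from calibrated ranges via Monte Carlo sampling. The sketch below only illustrates that decomposition; every numeric range is a hypothetical placeholder, not a value from the paper, and the CPTED adjustment the paper applies is not modeled.

```python
import random

def estimate(low, mode, high):
    """Sample a calibrated range estimate (min / most likely / max)."""
    return random.triangular(low, high, mode)

def fair_annual_loss(trials=100_000):
    """Toy FAIR-style Monte Carlo: risk ~ loss event frequency x loss magnitude."""
    losses = []
    for _ in range(trials):
        tef = estimate(1, 4, 12)                    # threat events per year (hypothetical)
        vulnerability = estimate(0.05, 0.2, 0.5)    # P(threat event becomes a loss event)
        lef = tef * vulnerability                   # loss events per year
        loss_magnitude = estimate(10_000, 80_000, 500_000)   # cost per loss event
        losses.append(lef * loss_magnitude)
    losses.sort()
    return {"mean": sum(losses) / trials, "p90": losses[int(0.9 * trials)]}

print(fair_annual_loss())
```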
We introduce a new sub-linear space sketch—the Weight-Median Sketch—for learning compressed linear classifiers over data streams while supporting the efficient recovery of large-magnitude weights in the model. This enables memory-limited execution of several statistical analyses over streams, including online feature selection, streaming data explanation, relative deltoid detection, and streaming estimation of pointwise mutual information. Unlike related sketches that capture the most frequently-occurring features (or items) in a data stream, the Weight-Median Sketch captures the features that are most discriminative of one stream (or class) compared to another. The Weight-Median Sketch adopts the core data structure used in the Count-Sketch, but, instead of sketching counts, it captures sketched gradient updates to the model parameters. We provide a theoretical analysis that establishes recovery guarantees for batch and online learning, and demonstrate empirical improvements in memory-accuracy trade-offs over alternative memory-budgeted methods, including count-based sketches and feature hashing.
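A minimal sketch of the idea as described above (a Count-Sketch-style table that accumulates sketched gradient updates instead of counts, with weights recovered as medians of signed counters). The hashing scheme, dimensions, and learning rate below are illustrative assumptions, not the authors' exact construction.

```python
import math
import random
from statistics import median

class WeightMedianSketch:
    """Count-Sketch-style table that accumulates gradient updates to model weights."""

    def __init__(self, depth=5, width=1024, seed=0):
        self.depth, self.width = depth, width
        self.table = [[0.0] * width for _ in range(depth)]
        rng = random.Random(seed)
        self._salts = [(rng.getrandbits(32), rng.getrandbits(32)) for _ in range(depth)]

    def _bucket_sign(self, row, feature):
        bucket_salt, sign_salt = self._salts[row]
        bucket = hash((bucket_salt, feature)) % self.width
        sign = 1.0 if hash((sign_salt, feature)) & 1 else -1.0
        return bucket, sign

    def update(self, sparse_grad):
        """Apply a sparse gradient update {feature: value} to the sketch."""
        for feature, value in sparse_grad.items():
            for row in range(self.depth):
                bucket, sign = self._bucket_sign(row, feature)
                self.table[row][bucket] += sign * value

    def weight(self, feature):
        """Estimate a weight as the median of its signed counters."""
        return median(sign * self.table[row][bucket]
                      for row in range(self.depth)
                      for bucket, sign in [self._bucket_sign(row, feature)])

def online_logistic_step(sketch, features, label, lr=0.1):
    """One online logistic-regression step using the sketched weights."""
    score = sum(v * sketch.weight(f) for f, v in features.items())
    prob = 1.0 / (1.0 + math.exp(-score))
    sketch.update({f: lr * (label - prob) * v for f, v in features.items()})

sketch = WeightMedianSketch()
online_logistic_step(sketch, {"word:free": 1.0, "word:offer": 1.0}, label=1)
print(sketch.weight("word:free"))
```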
This paper deals with the modeling and control of the NEREIDA wave generation power plant installed in Mutriku, Spain. This kind of Oscillating Water Column (OWC) plant usually employs a Wells turbine coupled to a Doubly Fed Induction Generator (DFIG). The stalling behavior of the Wells turbine limits the generated power. In this context, a sliding mode rotational speed control is proposed to help avoid this phenomenon. It regulates the speed by means of the Rotor Side Converter (RSC) of the back-to-back converter governing the generator. The results of the comparative study show that the proposed control provides higher generated power compared to the uncontrolled case.
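As an illustration of the generic sliding-mode principle only, the sketch below applies a switching control law to a first-order rotational model; this is not the paper's DFIG/RSC design, and the inertia, load torque, and gain values are made up.

```python
import math

# Toy rotational dynamics: J * dw/dt = T_control - T_load (all values hypothetical).
J = 50.0          # inertia (kg m^2)
T_LOAD = 200.0    # bounded, unmeasured load torque (N m)
K = 400.0         # switching gain, chosen above the load torque bound
DT = 0.01         # integration step (s)

def sliding_mode_speed_control(w_ref, steps=5000):
    """Drive rotor speed w to w_ref with the switching law u = K * sign(s), s = w_ref - w."""
    w = 0.0
    for _ in range(steps):
        s = w_ref - w                       # sliding surface: speed tracking error
        u = K * math.copysign(1.0, s)       # switching control torque
        w += DT * (u - T_LOAD) / J          # integrate the plant one step forward
    return w

print(sliding_mode_speed_control(w_ref=150.0))   # chatters tightly around 150 rad/s
```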
Our prototype app, Pocket Penjing, built using Unity3D, takes its name from the Chinese "Penjing." These tray plantings of miniature trees pre-date bonsai, often including miniature benches or figures to allude to people's relationship to the tree. App users choose a species, then create and name their tree. Swiping rotates a 3D globe showing flagged locations. Each flag represents a live online air quality monitoring station data stream that the app can scrape. Data is pulled in from the selected station and the AR window loads. The AR tree grows in real-time 3D. Its L-Systems form is determined by the selected live air quality data. We used this prototype as the basis of a two-part formative participatory design workshop with 63 participants.
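A minimal sketch of how a live air-quality reading could drive an L-Systems rewrite (the grammar, the AQI-to-depth mapping, and the sample value below are invented for illustration; the app's actual rules are not described here):

```python
# Classic bracketed L-system alphabet: F = grow a segment, [ ] = push/pop a
# branch, + / - = turn.  Cleaner air (lower AQI) allows more rewrite passes,
# so the rendered tree grows fuller.
RULES = {"F": "FF+[+F-F-F]-[-F+F+F]"}
AXIOM = "F"

def expand(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def iterations_from_aqi(aqi, max_iter=5):
    """Map an air-quality index (0 good .. 300+ hazardous) to growth depth."""
    return max(1, max_iter - int(aqi // 60))

aqi = 42   # would come from the selected monitoring station's live feed
tree_string = expand(AXIOM, RULES, iterations_from_aqi(aqi))
print(len(tree_string), "symbols to interpret as 3D turtle-drawing commands")
```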
Side-channel attacks, such as Spectre and Meltdown, that leverage speculative execution pose a serious threat to computing systems. Worse yet, such attacks can be perpetrated by compromised operating system (OS) kernels to bypass defenses that protect applications from the OS kernel. This work evaluates the performance impact of three different defenses against in-kernel speculation side-channel attacks within the context of Virtual Ghost, a system that protects user data from compromised OS kernels: Intel MPX bounds checks, which require a memory fence; address bit-masking and testing, which creates a dependence between the bounds check and the load/store; and the use of separate virtual address spaces for applications, the OS kernel, and the Virtual Ghost virtual machine, forcing a speculation boundary. Our results indicate that an instrumentation-based bit-masking approach to protection incurs the least overhead by minimizing speculation boundaries. Our work also highlights possible improvements to Intel MPX that could help mitigate speculation side-channel attacks at a lower cost.
The increasing deployment of smart meters at individual households has significantly improved people's experience with electricity bill payments and energy savings. It is, however, still challenging to guarantee the accurate detection of attacked meters' behaviors as well as the effective preservation of users' privacy information. In addition, few existing research studies jointly consider both of these aspects. In this paper, we propose a Privacy-Preserving energy Theft Detection scheme (PPTD) to address energy theft behaviors and information privacy issues in the smart grid. Specifically, we use a recursive filter based on state estimation to estimate the user's energy consumption and detect abnormal data. During data transmission, we use the lightweight NTRU algorithm to encrypt the user's data to achieve privacy preservation. Security analysis demonstrates that in the PPTD scheme only authorized units can transmit/receive data, and data privacy is also preserved. The performance evaluation results illustrate that our PPTD scheme can significantly reduce the communication and computation costs, and effectively detect abnormal users.
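A rough sketch of the detection idea only: a scalar recursive filter tracks expected consumption and flags readings whose residual is far from the running estimate. The fixed gain, threshold, and sample readings are illustrative assumptions, simpler than the paper's state-estimation model, and the NTRU encryption layer is omitted.

```python
def detect_abnormal_readings(readings, gain=0.3, threshold=3.0, residual_scale=0.2):
    """Recursively estimate consumption and flag readings with large residuals."""
    estimate = readings[0]
    anomalies = []
    for t, reading in enumerate(readings[1:], start=1):
        residual = reading - estimate
        if abs(residual) > threshold * residual_scale:
            anomalies.append((t, reading, round(estimate, 2)))
            # Limit how far a suspected reading can pull the estimate.
            estimate += gain * threshold * residual_scale * (1 if residual > 0 else -1)
        else:
            estimate += gain * residual                       # recursive update
            residual_scale = 0.9 * residual_scale + 0.1 * abs(residual)
    return anomalies

# Hypothetical half-hourly readings (kWh); the drop at the end mimics tampering.
readings = [1.2, 1.3, 1.1, 1.4, 1.2, 1.3, 0.2, 0.1, 0.2]
print(detect_abnormal_readings(readings))
```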
Of the three basic paradigms for implementing steganography, the concept of realising information hiding by modifying preexisting cover objects (i.e. steganography by modification) is by far the dominant one in the scientific work in this field, while the other two paradigms (steganography by cover selection or cover synthesis) are marginalised, although they inherently create stego objects that are closer to the statistical properties of unmodified covers and would therefore create better (i.e. harder to detect) stego channels. Here, we revisit the paradigm of steganography by synthesis to discuss its benefits and limitations, using face morphing in images as an interesting example of a synthesis method. The reason to reject steganography by modification as no longer suitable lies in the current trend of steganography being used in modern malicious software (malware) families like Stuxnet, Duqu or Duqu 2. As a consequence, we discuss the resulting shift in detection assumptions from cover-only attacks to cover-stego attacks (or even further), which automatically renders even the most sophisticated steganography-by-modification methods useless. In this paper we use the example of face morphing to demonstrate the necessary conditions 'undetectability' as well as 'plausibility and indeterminism' for characterizing suitable synthesis methods. The widespread usage of face morphing, the content-dependent and complex nature of the image manipulations required, and the fact that morphs have been shown to be very hard to detect, or to tell apart from other (assumedly innocent) image manipulations, ensure that face morphing can successfully fulfil these necessary conditions. As a result, it could be used as a core for driving steganography-by-synthesis schemes that are inherently resistant against cover-stego attacks.
We address the problem of substring searchable encryption. A single user produces a big stream of data and later wants to learn the positions in the string at which certain patterns occur. Although current techniques exploit auxiliary data structures to achieve efficient substring search on the server side, the cost at the user side may be prohibitive. We revisit the work on substring searchable encryption in order to reduce the storage cost of the auxiliary data structures. Our solution entails a suffix-array-based index design, which allows optimal storage cost $O(n)$ with a small hidden factor in the size of the string n. Moreover, we implemented our scheme and the state-of-the-art protocol of Chase et al. to demonstrate the performance advantage of our solution with precise benchmark results.
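The abstract only names the index; for background, the sketch below shows the plaintext version of a suffix-array substring search (the index is just the O(n) array of sorted suffix positions, and queries binary-search it). The encryption layer of the actual scheme is omitted, and the naive construction is for illustration only.

```python
import bisect

def build_suffix_array(text):
    """Starting positions of the suffixes of `text`, in lexicographic order.

    Naive O(n^2 log n) construction for clarity; linear-time algorithms exist,
    but the stored index is still just this O(n) array of positions.
    """
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, suffix_array, pattern):
    """Binary-search the suffix array for every position where `pattern` occurs."""
    key = lambda i: text[i:i + len(pattern)]       # requires Python 3.10+ for key=
    lo = bisect.bisect_left(suffix_array, pattern, key=key)
    hi = bisect.bisect_right(suffix_array, pattern, key=key)
    return sorted(suffix_array[lo:hi])

text = "banana"
sa = build_suffix_array(text)
print(find_occurrences(text, sa, "ana"))   # -> [1, 3]
```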
Personalization, recommendations, and user modeling can be powerful tools to improve people's experiences with technology and to help them find information. However, we also know that people underestimate how much of their personal information is used by our technology and that they generally do not understand how much algorithms can discover about them. Both privacy and ethical technology have issues of consent at their heart. While many personalization systems assume most users would consent to the way they employ personal data, research shows this is not necessarily the case. This talk will look at how to consider issues of privacy and consent when users cannot explicitly state their preferences, at 'The Creepy Factor', and at how to balance users' concerns with the benefits personalized technology can offer.
Smart meters provide fine-grained electricity consumption reporting to electricity providers. This intrudes on the privacy of consumers and has raised many privacy concerns. Although billing requires attributable consumption reporting, consumption reporting for operational monitoring and control measures can be non-attributable. However, the privacy-preserving AMS schemes in the literature tend to address these two categories disjointly, possibly due to their somewhat contradictory characteristics. In this paper, we propose an efficient two-party privacy-preserving cryptographic scheme that addresses operational control measures and billing jointly. It is computationally efficient because it is based on symmetric cryptographic primitives, and no online trusted third party (TTP) is required.
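The abstract does not give the construction; purely as background, the sketch below shows one standard symmetric-primitive building block used for non-attributable reporting (pairwise PRF masks that cancel in the aggregate, so the operator learns only the total). This is a generic illustration, not the paper's two-party scheme, and the keys, round identifier, and readings are hypothetical.

```python
import hashlib
import hmac

MODULUS = 2**32   # readings are aggregated modulo a fixed bound

def prf(key, round_id):
    """Derive a per-round mask from a shared symmetric key (HMAC-SHA256 as PRF)."""
    return int.from_bytes(hmac.new(key, round_id.encode(), hashlib.sha256).digest()[:4], "big")

def masked_reading(meter_id, reading, pairwise_keys, round_id):
    """Add +mask toward higher-numbered peers and -mask toward lower-numbered ones."""
    value = reading
    for peer_id, key in pairwise_keys.items():
        mask = prf(key, round_id)
        value = (value + mask) % MODULUS if peer_id > meter_id else (value - mask) % MODULUS
    return value

# Three meters sharing pairwise keys (established out of band); masks cancel in the sum.
k12, k13, k23 = b"key-1-2", b"key-1-3", b"key-2-3"
keys = {1: {2: k12, 3: k13}, 2: {1: k12, 3: k23}, 3: {1: k13, 2: k23}}
readings = {1: 530, 2: 710, 3: 645}                 # hypothetical readings (Wh)
round_id = "2018-06-01T10:30"

masked = [masked_reading(m, readings[m], keys[m], round_id) for m in readings]
total = sum(masked) % MODULUS                       # only the aggregate is revealed
assert total == sum(readings.values())
print("aggregate consumption:", total)
```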
The latent behavior of an information system that can exhibit extreme events, such as system faults or cyber-attacks, is complex. Recently, the invariant network has been shown to be a powerful way of characterizing complex system behaviors. Structures and evolutions of the invariant network, in particular the vanishing correlations, can shed light on identifying causal anomalies and performing system diagnosis. However, due to the dynamic and complex nature of real-world information systems, learning a reliable invariant network in a new environment often requires continuously collecting and analyzing system surveillance data for several weeks or even months. Although the invariant networks learned from old environments share some entities and entity relationships with the new environment, these networks cannot be directly reused due to the domain variety problem. To avoid this prohibitively time- and resource-consuming network building process, we propose TINET, a knowledge-transfer-based model for accelerating invariant network construction. In particular, we first propose an entity estimation model to estimate the probability that each source-domain entity will be included in the final invariant network of the target domain. Then, we propose a dependency construction model for constructing unbiased dependency relationships by solving a two-constraint optimization problem. Extensive experiments on both synthetic and real-world datasets demonstrate the effectiveness and efficiency of TINET. We also apply TINET to a real enterprise security system for intrusion detection. TINET achieves superior detection performance at least 20 days in advance, with more than 75% accuracy.
The recent emergence of smartphones, cloud computing, and the Internet of Things has brought about an explosion of data creation. By collating and merging these enormous data sets with other information, services that use information become more sophisticated and advanced. At the same time, however, consideration of the privacy violations caused by such merging is indispensable. Various anonymization methods have been proposed to preserve privacy. Conventional perturbation-based anonymization methods for location data add comparatively large noise, which makes it difficult to utilize the data effectively for secondary use. In this research, to solve these problems, we first clarify the definition of privacy preservation and then propose TMk-anonymity according to that definition.
The popularity of Android, not only in handsets but also in IoT devices, makes it a very attractive target for malware threats, which are expanding at a significant rate. The state of the art in malware mitigation mainly focuses on the detection of malicious Android apps, using dynamic and static analysis features to segregate malicious apps from benign ones. Nevertheless, there is little coverage of the Internet/network dimension of malicious Android apps. In this paper, we present ToGather, an automatic investigation framework that takes Android malware samples as input and produces insights about the underlying malicious cyber infrastructures. ToGather leverages state-of-the-art graph theory techniques to generate actionable, relevant, and granular intelligence to mitigate the threat effects induced by the malicious Internet activity of Android malware apps. We evaluate ToGather on a large dataset of real malware samples from various Android families, and the obtained results are both interesting and promising.
Human behavior is increasingly sensed and recorded and used to create models that accurately predict the behavior of consumers, employees, and citizens. While behavioral models are important in many domains, the ability to predict individuals' behavior is the focus of growing privacy concerns. The legal and technological measures for privacy do not adequately recognize and address the ability to infer behavior and traits. In this position paper, we first analyze the shortcomings of existing privacy theories in addressing AI's inferential abilities. We then point to legal and theoretical frameworks that can adequately describe the potential of AI to negatively affect people's privacy. Finally, we present a technical privacy measure that can help bridge the divide between legal and technical thinking with respect to AI and privacy.
In rapid continuous software development, time- and cost-effective prototyping techniques are beneficial because they enable software designers to quickly explore and evaluate different design concepts. For low-fidelity prototyping of augmented reality (AR) applications, software designers have so far been restricted to non-digital prototypes, which enable the visualization of first design concepts but can be laborious when it comes to capturing interactivity. The lack of empirical values and standards for designing user interactions in AR software leads to a particular need for applying end-user feedback to software refinement. In this paper we present the concept of a tool for rapid digital prototyping of augmented reality applications, enabling software designers to quickly create AR prototypes without requiring programming skills. The prototyping tool focuses on modeling multimodal interactions, especially interaction with physical objects, as well as on performing user-based studies to integrate valuable end-user feedback into the refinement of software aspects.
Immersive augmented reality (AR) technologies are becoming a reality. Prior works have identified security and privacy risks raised by these technologies, primarily considering individual users or AR devices. However, we make two key observations: (1) users will not always use AR in isolation, but also in ecosystems of other users, and (2) since immersive AR devices have only recently become available, the risks of AR have been largely hypothetical to date. To provide a foundation for understanding and addressing the security and privacy challenges of emerging AR technologies, grounded in the experiences of real users, we conduct a qualitative lab study with an immersive AR headset, the Microsoft HoloLens. We conduct our study in pairs (22 participants across 11 pairs), wherein participants engage in paired and individual (but physically co-located) HoloLens activities. Through semi-structured interviews, we explore participants' security, privacy, and other concerns, raising key findings. For example, we find that despite the HoloLens's limitations, participants were easily immersed, treating virtual objects as real (e.g., stepping around them for fear of tripping). We also uncover numerous security, privacy, and safety concerns unique to AR (e.g., deceptive virtual objects misleading users about the real world), and a need for access control among users to manage shared physical spaces and virtual content embedded in those spaces. Our findings give us the opportunity to identify broader lessons and key challenges to inform the design of emerging single- and multi-user AR technologies.
A tracking flow is a flow between an end user and a Web tracking service. We develop an extensive measurement methodology for quantifying at scale the amount of tracking flows that cross data protection borders, be they national or international, such as the EU28 border within which the General Data Protection Regulation (GDPR) applies. Our methodology uses a browser extension to fully render advertising and tracking code, various lists and heuristics to extract well-known trackers, passive DNS replication to obtain all the IP ranges of trackers, and state-of-the-art geolocation. We apply our methodology to a dataset from 350 real users of the browser extension over a period of more than four months, and then generalize our results by analyzing billions of web tracking flows from more than 60 million broadband and mobile users of 4 large European ISPs. We show that the majority of tracking flows cross national borders in Europe but, contrary to popular belief, are fairly well confined within the larger GDPR jurisdiction. Simple DNS redirection and PoP mirroring can increase national confinement while sealing almost all tracking flows within Europe. Last, we show that cross-border tracking is prevalent even in sensitive and hence protected data categories and groups, including health, sexual orientation, minors, and others.