Biblio
Biometric authentication offers promise for mobile security, but its adoption can be controversial from both a usability and a security perspective. We describe a preliminary study comparing recollections of biometric adoption by computer security experts and non-experts, collected in semi-structured interviews. Initial decisions and thought processes around biometric adoption were recalled, as well as changes in those views over time. These findings should serve to better inform security education across differing levels of technical experience. Preliminary findings indicate that both user groups were influenced by similar sources of information; however, expert users differed in having more professional requirements affecting their choices (e.g., BYOD). Furthermore, experts often added biometric authentication methods opportunistically during device updates, despite describing higher security concern and caution. Non-experts struggled with setting up fingerprint biometrics, leading to poor adoption. Further interviews are still being conducted.
ARM devices (mobile phones, IoT devices) are becoming increasingly popular in daily life due to their low power consumption and cost. These devices carry a great deal of users' private information, which attracts attackers' attention and increases the security risk. Operating systems (e.g., Android, Linux) implement many memory data protection strategies for users' private information. However, a monolithic OS may contain security vulnerabilities that attackers exploit to gain root or even kernel privilege. Once the attacker obtains kernel privilege, all data protection strategies are defeated and users' private information can be stolen. In this paper, we propose a hardened memory data protection framework called H-Securebox to defeat kernel-level memory data theft attacks. H-Securebox leverages ARM hardware virtualization to protect data in memory at hypervisor privilege. We design three types of H-Securebox for application developers to use. Even if an attacker gains kernel privilege, she cannot touch private data inside an H-Securebox, since hypervisor privilege is higher than kernel privilege. Using an implementation of the H-Securebox system, assisted by a tiny hypervisor, on a Raspberry Pi 2 development board, we measure the performance overhead of our system and perform security evaluations. The results show that the overhead is negligible and that a malicious application with root or kernel privilege cannot access the private data protected by our system.
Third-party software daemons called host agents are increasingly responsible for a modern host's security, automation, and monitoring tasks. Because of their location within the host, these agents are at risk of manipulation by malware and users. Additionally, in virtualized environments where multiple adjacent guests each run their own set of agents, the cumulative resources that agents consume add up rapidly. Consolidating agents onto the hypervisor can address these problems, but places a technical burden on agent developers. This work presents a development methodology to re-engineer a host agent into a hyperagent, an out-of-guest agent that gains unique hypervisor-based advantages while retaining its original in-guest capabilities. This three-phase methodology makes integrating Virtual Machine Introspection (VMI) functionality into existing code easier and more accessible, minimizing an agent developer's re-engineering effort. The benefits of hyperagents are illustrated by porting the GRR live forensics agent, which retains 89% of its codebase, uses 40% less memory than its in-guest counterparts, and enables a 4.9x speedup for a representative data-intensive workload. This work shows that a conventional off-the-shelf host agent can feasibly be transformed into a hyperagent, providing a powerful, efficient tool for defending virtualized systems.
Smartphones have evolved over the years from simple communication devices into fully functional portable computers that, despite comparatively limited computational power, host a multitude of applications. With the smartphone revolution, the value of personal data has increased. As technological complexity increases, so do the vulnerabilities in the system, and smartphones have become the latest target for attacks. Android, being an open-source platform and the most widely used smartphone OS, draws the attention of many malware writers who exploit its vulnerabilities, fool users, and misuse their data. Malware has come a long way from simple worms to sophisticated DDoS attacks using botnets, and the latest trend in computer malware is toward distribution, in order to evade the many anti-virus apps developed to counter generic viruses and Trojans. A recent trend on Android is a combination of applications that together act as malware: each application is benign individually, but when grouped they can carry out a malicious activity. This paper proposes a new category of distributed malware in the Android system, shows how it can be used to evade current security mechanisms, and describes how it can be detected with the help of a graph matching algorithm.
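Since the abstract does not describe the graph construction, the following Python sketch only illustrates the general graph-matching idea: a group of individually benign apps is modelled as a directed communication graph, and a known malicious collaboration pattern is searched for as a subgraph. The node names, edge labels, and the pattern itself are hypothetical, not the paper's model.

```python
# Hypothetical sketch: detect a distributed-malware pattern as a subgraph match.
import networkx as nx
from networkx.algorithms import isomorphism

# Combined graph of an installed app group: nodes are apps, edges are
# inter-app communication channels (intents, shared storage, sockets, ...).
app_graph = nx.DiGraph()
app_graph.add_edge("AppA", "AppB", channel="intent")       # AppA collects data, sends to AppB
app_graph.add_edge("AppB", "AppC", channel="shared_file")  # AppB stages data for AppC
app_graph.add_edge("AppC", "net",  channel="socket")       # AppC exfiltrates over the network

# Known malicious collaboration pattern: collector -> relay -> sender -> network.
pattern = nx.DiGraph()
pattern.add_edge("collector", "relay", channel="intent")
pattern.add_edge("relay", "sender", channel="shared_file")
pattern.add_edge("sender", "net", channel="socket")

matcher = isomorphism.DiGraphMatcher(
    app_graph, pattern,
    edge_match=lambda e1, e2: e1["channel"] == e2["channel"])
print("suspicious app group" if matcher.subgraph_is_isomorphic() else "no match")
```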
In the current computing era, humans constantly interact with computers and networks, yet there is no complete solution to security problems. Security threats undermine trust in the Internet, and users sometimes lose confidence in server-based access. Even though many cryptographic algorithms exist to provide integrity and authenticity for data, unreliable threats continue to penetrate networks through inconsistent vulnerabilities. Vulnerable sites can take control of a user's computer and perform harmful actions without the user's permission. Although firewalls and protocols can support our browsers by enforcing certain rules, the system still cannot guarantee data reliability and confidentiality. Since these problems arise from network access, we consider TCP/IP parameters as a dataset for analysis. By preprocessing TCP/IP packets, we can build an autonomous model of the dataset and cluster it. The dataset is then classified into regular traffic patterns and anomalous patterns using the KNN classification algorithm. Based on the patterns obtained for normal and threat datasets, security devices and systems can set rules and guidelines and learn to take the needed action. This paper analyses how a computer can learn security actions from datasets of previously observed events.
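As a minimal illustration of the classification step (the abstract does not name the TCP/IP features used), the sketch below preprocesses a few toy flow features and classifies a new flow as regular or anomalous with scikit-learn's KNN classifier; the features and values are assumptions.

```python
# Minimal sketch of the KNN step, assuming simple numeric flow features
# (packet count, mean packet size, duration, distinct destination ports);
# the real feature set used in the paper is not specified in the abstract.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

# Toy training data: rows are flows, label 0 = regular traffic, 1 = anomalous.
X = np.array([
    [120, 540.0, 3.2,  2],   # regular
    [ 98, 610.0, 2.9,  1],   # regular
    [ 15,  60.0, 0.3, 45],   # anomalous (port-scan-like)
    [ 20,  55.0, 0.4, 60],   # anomalous
], dtype=float)
y = np.array([0, 0, 1, 1])

scaler = StandardScaler().fit(X)            # preprocessing of TCP/IP features
clf = KNeighborsClassifier(n_neighbors=3).fit(scaler.transform(X), y)

new_flow = np.array([[18, 58.0, 0.35, 52]])
print("anomalous" if clf.predict(scaler.transform(new_flow))[0] == 1 else "regular")
```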
Reuse of pre-existing industry datasets for research purposes requires a multi-stakeholder solution that balances the researcher's analysis objectives with the need to engage the industry data custodian, whilst respecting the privacy rights of human data subjects. Current methods place the burden on the data custodian, who may not be sufficiently trained to fully appreciate the nuances of data de-identification. Through modelling of functional, quality, and emotional goals, we propose a de-identification-in-the-cloud approach whereby the researcher proposes analyses along with the extraction and de-identification operations, while engaging the industry data custodian with secure control over authorising the proposed analyses. We demonstrate our approach through the implementation of a de-identification portal for sports club data.
We describe a new way of compressing two-party communication protocols to get protocols with potentially smaller communication. We show that every communication protocol that communicates $C$ bits and reveals $I$ bits of information about the participants' private inputs to an observer that watches the communication, can be simulated by a new protocol that communicates at most $\mathrm{poly}(I) \cdot \log\log C$ bits. Our result is tight up to polynomial factors, as it matches the recent work separating communication complexity from external information cost.
The success and widespread adoption of the Internet of Things (IoT) has increased manifold over the last few years. Industries, technologists and home users recognise the importance of IoT in their lives. Essentially, IoT has brought a vast industrial revolution and has helped automate many processes within organisations and homes. However, the rapid growth of IoT is also a cause for significant concern. IoT is not only plagued with security, authentication and access control issues, it also does not integrate as well as it should with the fourth industrial revolution, commonly known as Industry 4.0. The absence of effective regulation and standards, together with weak governance, has led to a continual downward trend in the security of IoT networks and devices, as well as giving rise to a broad range of privacy issues. This paper examines the IoT industry and discusses the urgent need for standardisation, the benefits of governance, and the issues affecting the IoT sector due to the absence of regulation. Additionally, through this paper, we introduce an IoT security framework (IoTSFW) for organisations to bridge the current lack of guidelines in the IoT industry. Implementation of the guidelines defined in the proposed framework will assist organisations in achieving security, privacy, sustainability and scalability within their IoT networks.
The utility of mediated environments increases when environmental scale (size and distance) is perceived accurately. We present the use of perceived affordances (judgments of action capabilities) as an objective way to assess space perception in an augmented reality (AR) environment. The current study extends the previous use of this methodology in virtual reality (VR) to AR. We tested two locomotion-based affordance tasks. In the first experiment, observers judged whether they could pass through a virtual aperture presented at different widths and distances, and also judged the distance to the aperture. In the second experiment, observers judged whether they could step over a virtual gap on the ground. In both experiments, the virtual objects were displayed with the HoloLens in a real laboratory environment. We demonstrate that affordances for passing through, and perceived distance to the aperture, are similar in AR to those measured in the real world, but that judgments of gap-crossing in AR were underestimated. These differences across two affordances may result from the different spatial characteristics of the virtual objects (on the ground versus extending off the ground).
The use of typing biometrics (the characteristic typing patterns of individual keyboard users) has been studied extensively in the context of enhancing multi-factor authentication services. The key starting point for such work has been the collection of high-fidelity local timing data, and the key (implicit) security assumption has been that such biometrics could not be obtained by other means. We show the latter assumption to be false, and that it is entirely feasible to obtain useful typing biometric signatures from third-party timing logs. Specifically, we show that the logs produced by real-time collaboration services during their normal operation are of sufficient fidelity to successfully impersonate a user using remote data only. Since the logs are routinely shared as a byproduct of the services' operation, this creates an entirely new avenue of attack that few users would be aware of. As a proof of concept, we construct successful biometric attacks using only the log-based structure (complete editing history) of a shared Google Docs or Zoho Writer document, which is readily available to all contributing parties. Using the largest available public data set of typing biometrics, we are able to create successful forgeries 100% of the time against a commercial biometric service. Our results suggest that typing biometrics are not robust against practical forgeries and should not be given the same weight as other authentication factors. Another important implication is that the routine collection of detailed timing logs by various online services inherently (and implicitly) captures biometrics. This not only raises obvious privacy concerns, but may also undermine the effectiveness of network anonymization solutions, such as Tor, when used with existing services.
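The sketch below illustrates, under the simplifying assumption that the collaboration log reduces to (character, timestamp) pairs, how per-digraph keystroke latencies could be extracted from such a log to build a typing-rhythm profile for replay; it is not the authors' attack pipeline, and real revision histories are richer than this toy format.

```python
# Sketch: derive a typing-rhythm signature from a (character, timestamp) edit log.
# The log format below is a simplification; per-keystroke timing is the essential signal.
from statistics import mean, stdev

edit_log = [  # (character inserted, timestamp in milliseconds)
    ("h", 0), ("e", 142), ("l", 275), ("l", 388), ("o", 540),
    (" ", 700), ("w", 812), ("o", 930), ("r", 1055), ("l", 1170), ("d", 1290),
]

def digraph_latencies(log):
    """Map each pair of consecutive characters to the delays observed between them."""
    feats = {}
    for (c1, t1), (c2, t2) in zip(log, log[1:]):
        feats.setdefault((c1, c2), []).append(t2 - t1)
    return feats

profile = {k: (mean(v), stdev(v) if len(v) > 1 else 0.0)
           for k, v in digraph_latencies(edit_log).items()}

# A forger replays text with these per-digraph delays to mimic the victim's rhythm.
for (c1, c2), (mu, sigma) in profile.items():
    print(f"{c1}{c2}: mean {mu:.0f} ms (sd {sigma:.0f} ms)")
```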
The Internet of Things (IoT) offers new opportunities for business, technology and science, but it also raises new challenges in terms of security and privacy, mainly because of the inherent characteristics of this environment: IoT devices come from a variety of manufacturers and operators, and they suffer from constrained resources in terms of computation, communication and storage. In this paper, we address the problem of trust establishment for IoT and propose a security solution that consists of a secure bootstrap mechanism for device identification as well as a message attestation mechanism for aggregate response validation. To achieve both security requirements, we approach the problem in a confined environment, named SubNets of Things (SNoT), on which various devices depend. In this context, devices are uniquely and securely identified thanks to their environment and their role within it. Additionally, the underlying message authentication technique features signature aggregation and hence generates one compact response on behalf of all devices in the subnet.
RFID-based communication between objects within the framework of IoT is potentially very efficient in terms of power requirements and system complexity. New designs incorporating the emerging chipless RFID tags have the potential to make such systems even more efficient and simple. However, these systems are prone to privacy and security risks, and the challenges associated with them have not been addressed appropriately in the broader IoT framework. In this context, a lightweight collision-free algorithm based on an n-bit pseudo-random number generator, an XOR hash function, and rotations for chipless RFID systems is presented. The algorithm has been implemented on a system based on an 8-bit open-loop resonator chipless RFID tag and is validated on a BASYS 2 FPGA board platform. The proposed scheme is shown to provide security against attacks such as Denial of Service (DoS) and tag impersonation, while preserving tag/reader anonymity.
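The abstract names the building blocks but not the protocol itself; the toy sketch below only illustrates those blocks (an n-bit pseudo-random challenge, a bit rotation, and an XOR) in a hypothetical challenge-response exchange, and the actual message flow in the paper would differ.

```python
# Toy sketch of the building blocks named in the abstract (n-bit pseudo-random
# numbers, XOR mixing, bit rotations); the protocol shown here is an assumption.
import secrets

N = 8  # the cited design uses an 8-bit chipless tag

def rotl(x, r, n=N):
    """Rotate an n-bit value x left by r positions."""
    r %= n
    return ((x << r) | (x >> (n - r))) & ((1 << n) - 1)

def xor_rot_hash(tag_id, nonce, n=N):
    """Lightweight response: rotate the ID by a nonce-derived amount, then XOR."""
    return rotl(tag_id, nonce % n, n) ^ nonce

# Reader side: send a fresh nonce, check the tag's response against the stored ID.
tag_id = 0b1011_0010
nonce = secrets.randbelow(1 << N)          # n-bit pseudo-random challenge
response = xor_rot_hash(tag_id, nonce)     # computed by the tag
assert response == xor_rot_hash(tag_id, nonce)  # reader verification
print(f"nonce={nonce:08b} response={response:08b}")
```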
A distinguisher is employed by an adversary to explore the privacy property of a cryptographic primitive. A cryptographic primitive is said to be private if there is no distinguisher algorithm that an adversary can use to distinguish the encodings generated by this primitive with non-negligible advantage. Recently, two privacy-preserving matrix transformations, first proposed by Salinas et al., have been widely used to achieve matrix-related verifiable (outsourced) computation in data protection. Salinas et al. proved that these transformations are private (in terms of indistinguishability). In this paper, we first propose the concept of a linear distinguisher and two constructions of linear distinguisher algorithms. Then, we take those two matrix transformations (including Salinas et al.'s original work and Yu et al.'s modification) as example targets and analyze their privacy property when our linear distinguisher algorithms are employed by adversaries. The results show that those transformations are not private, even against passive eavesdropping.
Ultra-dense networks are attracting significant interest due to their ability to provide next-generation 5G cellular networks with a high data rate, low delay, and seamless coverage. Several factors, such as interference, energy constraints, and backhaul bottlenecks, may limit wireless network densification. In this paper, we study the effect of mobile node densification, access node densification, and their aggregation into virtual entities, referred to as virtual cells, on location privacy. Simulations show that the number of tracked mobile nodes can be statistically reduced by up to 10 percent by implementing virtual cells. Moreover, experiments highlight that the success of tracking attacks has an inverse relationship to the number of moving nodes. The present paper is a preliminary attempt to analyse the effectiveness of cell virtualization in mitigating location privacy threats in ultra-dense networks.
Routing security plays an important role in Mobile Ad hoc Networks (MANETs). Despite many attempts to improve its security, the routing procedure of MANETs remains vulnerable to attacks. Existing approaches offer support for detecting attacks or debugging in different routing phases, but many of them do not consider the privacy of the nodes during anomaly detection, and they depend on a central control program or a third party to supervise the whole network. In this paper, we present an approach called LAD, which uses the raw logs of routers to construct a control flow graph and find the existing communication rules in MANETs. With the reasoning rules, LAD can detect both active and passive attacks launched during the routing phase. LAD can also protect the privacy of the nodes in the verification phase with a dedicated Merkle hash tree. Without deploying any special nodes to assist the verification, LAD can detect multiple malicious nodes by itself. To show that our approach can be used to guarantee the security of MANETs, we deploy our experiments in NS3 as well as in a practical router environment. LAD can improve the accuracy rate from 2.28% to 29.22%. The results show that LAD incurs limited time and memory usage and achieves high detection with low false positives.
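As an illustration of the Merkle-hash-tree mechanism mentioned above (the exact tree construction used by LAD is not described in the abstract), the sketch below commits to a node's routing log and verifies that a single entry belongs to it without revealing the other entries.

```python
# Sketch of a Merkle hash tree over routing-log entries: a node can prove that a
# specific entry is part of its committed log without disclosing the other entries.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Return the sibling hashes needed to recompute the root from leaves[index]."""
    level, idx, proof = [h(x) for x in leaves], index, []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = idx ^ 1
        proof.append((level[sibling], sibling < idx))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

log = [b"fwd pkt 17 -> nodeB", b"recv RREQ from nodeC", b"drop pkt 21", b"fwd pkt 22 -> nodeD"]
root = merkle_root(log)
proof = inclusion_proof(log, 2)
print(verify(log[2], proof, root))   # True: entry 2 is part of the committed log
```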
Physical Unclonable Functions (PUFs) have been designed for many security applications such as identification, authentication of devices, and key generation, especially for lightweight electronics. Traditional approaches to enhancing security, such as hash functions, may be expensive and resource dependent. However, modelling attacks using machine learning (ML) show the vulnerability of most PUFs. In this paper, a combination of a 32-bit current mirror PUF and a 16-bit arbiter PUF in 65nm CMOS technology is proposed to improve resilience against modelling attacks. Both PUFs are individually vulnerable to machine learning attacks, and the combination reduces the output prediction rate from 99.2% and 98.8% for the individual PUFs to 60%.
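For context, the sketch below shows a standard ML modelling attack on a simulated arbiter PUF using the usual linear delay model and parity feature map; the parameters are illustrative and unrelated to the 65nm design evaluated in the paper.

```python
# Sketch of a standard ML modelling attack on a simulated k-stage arbiter PUF:
# responses follow sign(w . phi(c)) for a parity feature vector phi(c), so a
# linear model such as logistic regression can clone the PUF from CRPs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
k, n_train, n_test = 16, 4000, 1000
w = rng.normal(size=k + 1)                     # hidden stage delay differences

def parity_features(challenges):
    """Classic arbiter-PUF feature map: phi_i = product of (1 - 2*c_j) for j >= i."""
    signs = 1 - 2 * challenges                 # map bits {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

def puf_response(challenges):
    return (parity_features(challenges) @ w > 0).astype(int)

C_train = rng.integers(0, 2, size=(n_train, k))
C_test = rng.integers(0, 2, size=(n_test, k))

model = LogisticRegression(max_iter=2000).fit(parity_features(C_train), puf_response(C_train))
acc = model.score(parity_features(C_test), puf_response(C_test))
print(f"prediction rate of the cloned PUF: {acc:.1%}")   # typically close to 99%
```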
In this paper, we report our work on using machine learning techniques to predict back-bending activity based on field data acquired in a local nursing home. The data are recorded by a privacy-aware compliance tracking system (PACTS). The objective of PACTS is to detect back-bending activities and issue real-time alerts to the participant when she bends her back excessively, which we hope could help the participant form good habits of using proper body mechanics when performing lifting/pulling tasks. We show that our algorithms can differentiate nursing staff's baseline and high-level bending activities by using human skeleton data without any expert rules.
The increasing publication of large amounts of theoretically anonymous data can lead to a number of attacks on people's privacy. Publishing sensitive data without exposing the data owners is generally not among software developers' concerns. Regulations on privacy-preserving data handling create an appropriate scenario to focus on privacy from the perspective of the data use or exploration that takes place in an organization. The increasing number of sanctions for privacy violations motivates a systematic comparison of three well-known machine learning algorithms in order to measure the usefulness of privacy-preserved data. The scope of the evaluation is extended by comparing them against a known privacy-preservation metric, using different parameter scenarios and privacy levels. The use of publicly available implementations, the presentation of the methodology, and the explanation and analysis of the experiments provide a framework for working on the problem of privacy preservation. Problems are shown in measuring the usefulness of the data and its relationship with privacy preservation. The findings motivate the need to create metrics optimized for the privacy preferences of the data owners, since the risk of predicting sensitive attributes by means of machine learning techniques is not usually eliminated; it is also shown that full privacy preservation may exist but cannot be measured, while the organization publishing the data must still ensure adequate performance of the machine learning models it is interested in.
We present the first formalisation of a blockchain-based distributed consensus protocol with a proof of its consistency mechanised in an interactive proof assistant. Our development includes a reference mechanisation of the block forest data structure, necessary for implementing provably correct per-node protocol logic. We also define a model of a network, implementing the protocol in the form of a replicated state-transition system. The protocol's executions are modeled via a small-step operational semantics for asynchronous message passing, in which packages can be rearranged or duplicated. In this work, we focus on the notion of global system safety, proving a form of eventual consistency. To do so, we provide a library of theorems about a pure functional implementation of block forests, define an inductive system invariant, and show that, in a quiescent system state, it implies a global agreement on the state of per-node transaction ledgers. Our development is parametric with respect to implementations of several security primitives, such as hash-functions, a notion of a proof object, a Validator Acceptance Function, and a Fork Choice Rule. We precisely characterise the assumptions, made about these components for proving the global system consensus, and discuss their adequacy. All results described in this paper are formalised in Coq.
The integrity and reliability of speech data are important for its probative use. Watermarking technology supplies an alternative to digital signatures for guaranteeing the authenticity of such data. This work proposes a novel digital watermarking scheme based on a reversible compression algorithm with sample scanning to detect tampering in the time domain. In order to detect tampering precisely, the digital speech data is divided into fixed-length frames, and content-based hash information for each frame is calculated and embedded into the speech data for verification. A Huffman compression algorithm is applied to the four least significant bits of each sample after pulse-code modulation processing to achieve low distortion and high capacity for the hidden payload. Experiments on audio quality, detection precision and robustness against attacks were carried out, and the results show the effectiveness of tampering detection, with an error of around 0.032 s for a 10 s speech clip. Distortion is imperceptible, with an average signal-to-noise ratio of 22.068 dB for the Huffman-based method and 24.139 dB for the intDCT-based method, and with an average MOS of 3.478 for the Huffman-based and 4.378 for the intDCT-based method. The bit error rate (BER) between the stego data and the attacked stego data in both the time domain and the frequency domain is approximately 28.6% on average, which indicates the robustness of the proposed hiding method.
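The toy sketch below illustrates only the frame-hash-and-embed idea (fixed-length frames, a content hash per frame, payload in the low-order bits); the Huffman/intDCT compression of the four LSBs that the paper uses to create embedding capacity is not reproduced, and the frame size and payload length are assumptions.

```python
# Toy sketch of the tamper-detection idea: split PCM samples into fixed-length
# frames, hash each frame's content bits, and embed the hash bits into the LSBs
# of the same frame, so later tampering breaks the hash-vs-payload match.
import hashlib

FRAME = 160          # samples per frame (fixed length, illustrative)
PAYLOAD_BITS = 32    # truncated hash bits embedded per frame

def frame_hash_bits(samples):
    """Hash the content bits (everything above the LSB) of one frame."""
    content = bytes((s >> 1) & 0xFF for s in samples)
    digest = hashlib.sha256(content).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(PAYLOAD_BITS)]

def embed(samples):
    marked = list(samples)
    for start in range(0, len(marked) - FRAME + 1, FRAME):
        bits = frame_hash_bits(marked[start:start + FRAME])
        for i, b in enumerate(bits):                       # overwrite LSBs with payload
            marked[start + i] = (marked[start + i] & ~1) | b
    return marked

def verify(samples):
    """Return indices of frames whose embedded hash no longer matches the content."""
    tampered = []
    for idx, start in enumerate(range(0, len(samples) - FRAME + 1, FRAME)):
        frame = samples[start:start + FRAME]
        if frame_hash_bits(frame) != [s & 1 for s in frame[:PAYLOAD_BITS]]:
            tampered.append(idx)
    return tampered

audio = [((i * 37) % 4096) for i in range(1600)]   # stand-in for 16-bit PCM samples
stego = embed(audio)
stego[500] ^= 2                                     # tamper with a content bit in frame 3
print(verify(stego))                                # -> [3]
```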
Consent is a key measure for privacy protection and needs to be `meaningful' to give people informational power. It is increasingly important that individuals are provided with real choices and are empowered to negotiate for meaningful consent. Meaningful consent is an important area for consideration in IoT systems, since privacy is a significant factor impacting adoption of IoT, and obtaining meaningful consent is becoming increasingly challenging in IoT environments. It is proposed that an ``apparency, pragmatic/semantic transparency model'' adopted for data management could make consent more meaningful, that is, visible, controllable and understandable. The model has illustrated the `why' and `what' issues regarding data management for potential meaningful consent [1]. In this paper, we focus on the `how' issue, i.e. how to implement the model in IoT systems. We discuss apparency by focusing on the interactions and data actions in the IoT system; pragmatic transparency by centring on the privacy risks and threats of data actions; and semantic transparency by focusing on the terms and language used by individuals and the experts. We believe that our discussion will elicit more research on the `apparency model' in IoT for meaningful consent.
Community Health Workers (CHWs) have been using Mobile Health Data Collection Systems (MDCSs) to support the delivery of primary healthcare and to carry out public health surveys, feeding national-level databases with families' personal data. Such systems are used for public surveillance and manage sensitive data (i.e., health data), so addressing the privacy issues is crucial for successfully deploying MDCSs. In this paper we present a comprehensive privacy threat analysis for MDCSs, discuss the privacy challenges and provide recommendations that are especially useful to health managers and developers. We ground our analysis on a large-scale MDCS used for primary care (GeoHealth) and a well-known Privacy Impact Assessment (PIA) methodology. The threat analysis is based on a compilation of relevant privacy threats from the literature as well as brainstorming sessions with privacy and security experts. Among the main findings, we observe that existing MDCSs do not employ adequate controls for achieving transparency and intervenability, thus threatening fundamental privacy principles such as data quality, the right to access and the right to object. Furthermore, it is noticeable that although there has been significant research dealing with data security issues, attention to privacy in its multiple dimensions is prominently lacking.
With the growth of smartphone sales and app usage, fingerprinting and identification of smartphone apps have become a considerable threat to user security and privacy. Traffic analysis is one of the most common methods for identifying apps. Traditional countermeasures against traffic analysis include traffic morphing and multipath routing. The basic idea of multipath routing is to increase the difficulty for an adversary to eavesdrop on all traffic by splitting traffic into several subflows and transmitting them through different routes. Previous works on multipath routing mainly focus on Wireless Sensor Networks (WSNs) or Mobile Ad Hoc Networks (MANETs). In this paper, we propose a multipath routing scheme for smartphones with edge network assistance to mitigate traffic analysis attacks. We consider an adversary with limited capability, that is, one who can only intercept the traffic of one node with a certain attack probability, and we aim to minimize the traffic the adversary can intercept. We formulate our design as a flow routing optimization problem and propose a heuristic algorithm to solve it. Finally, we present simulation results for our scheme and show that it can effectively protect smartphones from traffic analysis attacks.
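As a hypothetical stand-in for the flow routing optimization (the paper's heuristic is not described in the abstract), the sketch below splits an app's traffic across paths so that the expected amount of intercepted traffic stays low, given assumed per-path interception probabilities and capacities; it is an illustration of the problem shape, not the proposed algorithm.

```python
# Hypothetical greedy allocation: fill the paths with the lowest interception
# probability first, subject to per-path capacity, and report the expected
# amount of traffic an adversary tapping one relay node would intercept.

def split_traffic(total_kbps, paths):
    """paths: list of (name, intercept_probability, capacity_kbps)."""
    allocation, remaining = {}, total_kbps
    for name, p, cap in sorted(paths, key=lambda x: x[1]):   # safest paths first
        take = min(cap, remaining)
        if take > 0:
            allocation[name] = take
            remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds total path capacity")
    expected_intercept = sum(p * allocation.get(name, 0) for name, p, _ in paths)
    return allocation, expected_intercept

paths = [("wifi-edge", 0.30, 400), ("lte-direct", 0.10, 300), ("edge-relay", 0.05, 250)]
alloc, exposure = split_traffic(600, paths)
print(alloc)                     # {'edge-relay': 250, 'lte-direct': 300, 'wifi-edge': 50}
print(f"expected intercepted traffic: {exposure:.0f} kbps")
```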
SDN (Software Defined Networking) with multiple controllers is drawing more attention because of the increasing scale of networks. This architecture can handle what SDN with a single controller is not able to address. In order to understand precisely what this architecture can accomplish and what challenges it faces, we analyze it with formal methods. In this paper, we apply CSP (Communicating Sequential Processes) to model the routing service of SDN under the HyperFlow architecture based on the OpenFlow protocol. Using the model checker PAT (Process Analysis Toolkit), we verify that the models satisfy three properties: deadlock freedom, consistency and fault tolerance.
Many smartphone security mechanisms prompt users to decide on sensitive resource requests. This approach fails if the corresponding implications are not understood. Prior work identified ineffective user interfaces as a cause of insufficient comprehension and proposed augmented dialogs. We hypothesize that, prior to interface design, efficient security dialogs require an underlying permission model based on user demands. We believe that only an implementation which corresponds to users' mental models, in terms of the handling, granularity and grouping of permission requests, allows for informed decisions. In this work, we propose a study design which leverages materialization for the extraction of these mental models. We present preliminary results of three focus groups. The findings indicate that the materialization provided sufficient support for non-experts to understand and discuss this complex topic. In addition, the results indicate that current permission approaches do not match users' demands for information and control.