Biblio

Found 2348 results

Filters: Keyword is privacy
2018-02-15
Wang, Junjue, Amos, Brandon, Das, Anupam, Pillai, Padmanabhan, Sadeh, Norman, Satyanarayanan, Mahadev.  2017.  A Scalable and Privacy-Aware IoT Service for Live Video Analytics. Proceedings of the 8th ACM on Multimedia Systems Conference. :38–49.

We present OpenFace, our new open-source face recognition system that approaches state-of-the-art accuracy. Integrating OpenFace with inter-frame tracking, we build RTFace, a mechanism for denaturing video streams that selectively blurs faces according to specified policies at full frame rates. This enables privacy management for live video analytics while providing a secure approach for handling retrospective policy exceptions. Finally, we present a scalable, privacy-aware architecture for large camera networks using RTFace.
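
As a rough illustration of the denaturing idea (selective face blurring at the frame level), the sketch below uses OpenCV's stock Haar-cascade detector in place of OpenFace's neural pipeline; the policy_requires_blur callback and file names are hypothetical stand-ins, not RTFace's API.

```python
import cv2

# Hypothetical policy: blur every detected face (a real policy might
# exempt enrolled, consenting users identified by a recognizer).
def policy_requires_blur(face_region):
    return True

def denature_frame(frame, cascade):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        if policy_requires_blur(roi):
            # Heavy Gaussian blur renders the face unrecognizable.
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("frame.jpg")  # one frame of the video stream (assumed to exist)
cv2.imwrite("denatured.jpg", denature_frame(frame, cascade))
```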

Klow, Jeffrey, Proby, Jordan, Rueben, Matthew, Sowell, Ross T., Grimm, Cindy M., Smart, William D..  2017.  Privacy, Utility, and Cognitive Load in Remote Presence Systems. Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. :167–168.
As teleoperated robot technology becomes cheaper, more powerful, and more reliable, remotely-operated telepresence robots will become more prevalent in homes and businesses, allowing visitors and business partners to be present without the need to travel. Hindering adoption is the issue of privacy: an Internet-connected telepresence robot has the ability to spy on its local area, either for the remote operator or a third party with access to the video data. Additionally, since the remote operator may move about and manipulate objects without local-user intervention, certain typical privacy-protecting techniques such as moving objects to a different room or putting them in a cabinet may prove insufficient. In this paper, we examine the effects of three whole-image filters on the remote operator's ability to discern details while completing a navigation task.
Bittner, Daniel M., Sarwate, Anand D., Wright, Rebecca N..  2017.  Differentially Private Noisy Search with Applications to Anomaly Detection (Abstract). Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. :53–53.
We consider the problem of privacy-sensitive anomaly detection - screening to detect individuals, behaviors, areas, or data samples of high interest. What defines an anomaly is context-specific; for example, a spoofed rather than genuine user attempting to log in to a web site, a fraudulent credit card transaction, or a suspicious traveler in an airport. The unifying assumption is that the number of anomalous points is quite small with respect to the population, so that deep screening of all individual data points would potentially be time-intensive, costly, and unnecessarily invasive of privacy. Such privacy violations can raise concerns due to the sensitive nature of the data being used, raise fears about violations of data use agreements, and make people uncomfortable with anomaly detection methods. Anomaly detection is well studied, but methods that provide anomaly detection along with privacy are less well studied. Our overall goal in this research is to provide a framework for identifying anomalous data while guaranteeing quantifiable privacy in a rigorous sense. Once identified, such anomalies could warrant further data collection and investigation, depending on the context and relevant policies. In this research, we focus on privacy protection during the deployment of anomaly detection. Our main contribution is a differentially private access mechanism for finding anomalies using a search algorithm based on adaptive noisy group testing. To achieve this, we take as our starting point the notion of group testing [1], which was most famously used to screen US military draftees for syphilis during World War II. In group testing, individuals are tested in groups to limit the number of tests. Using multiple rounds of screenings, a small number of positive individuals can be detected very efficiently. Group testing has the added benefit of providing privacy to individuals through plausible deniability - since the group tests use aggregate data, individual contributions to the test are masked by the group. We build on these concepts by demonstrating a search model utilizing adaptive queries on aggregated group data. Our work takes the first steps toward strengthening and formalizing these privacy concepts by achieving differential privacy [2]. Differential privacy is a statistical measure of disclosure risk that captures the intuition that an individual's privacy is protected if the results of a computation have at most a very small and quantifiable dependence on that individual's data. In the last decade, differential privacy has matured from theory toward practice, with practical adoption underway by high-profile companies such as Apple, Google, and Uber. In order to make differential privacy meaningful in the context of a task that seeks to specifically identify some (anomalous) individuals, we introduce the notion of anomaly-restricted differential privacy. Using ideas from information theory, we show that noise can be added to group query results in a way that provides differential privacy for non-anomalous individuals and still enables efficient and accurate detection of the anomalous individuals. Our method uses differentially private aggregation over groups of points, providing privacy to individuals within each group while refining the group selection until we can probabilistically narrow attention to a small number of individuals or samples for further attention. To summarize: We introduce a new notion of anomaly-restricted differential privacy, which may be of independent interest.
We provide a noisy group-based search algorithm that satisfies the anomaly-restricted differential privacy definition. We provide both theoretical and empirical analysis of our noisy search algorithm, showing that it performs well in some cases and exhibits the usual privacy/accuracy tradeoff of differentially private mechanisms. Potential anomaly detection applications for our work include spatial search for outliers: this would rely on new sensing technologies that can perform queries in aggregate to reveal and isolate anomalous outliers. For example, this could lead to privacy-sensitive methods for searching for outlying cell phone activity patterns or Internet activity patterns in a geographic location.
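
As a concrete illustration of the mechanism's flavor, the sketch below runs adaptive noisy group testing: each group query releases a Laplace-perturbed anomaly count, and noisy-positive groups are split and re-tested. It is a hedged reconstruction under simplifying assumptions (the threshold and budget handling are illustrative, and composition of epsilon across rounds is ignored), not the authors' algorithm.

```python
import numpy as np

def noisy_group_search(data, epsilon, threshold=0.5, min_size=1):
    """Adaptively search for anomalous points via noisy group tests.

    data: boolean array, True marks a (rare) anomalous point.
    Each group test releases a Laplace-perturbed anomaly count, so
    non-anomalous individuals in a group retain plausible deniability.
    """
    suspects, queue = [], [np.arange(len(data))]
    while queue:
        group = queue.pop()
        # Differentially private group test: count + Laplace(1/eps) noise.
        noisy_count = data[group].sum() + np.random.laplace(0, 1.0 / epsilon)
        if noisy_count <= threshold:
            continue                     # group looks clean; stop here
        if len(group) <= min_size:
            suspects.append(group[0])    # narrowed down to one sample
        else:
            mid = len(group) // 2        # split and re-test both halves
            queue += [group[:mid], group[mid:]]
    return suspects

rng = np.random.default_rng(0)
population = np.zeros(1024, dtype=bool)
population[rng.choice(1024, size=3, replace=False)] = True
print(noisy_group_search(population, epsilon=0.5))
```
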
Vu, Xuan-Son, Jiang, Lili, Brändström, Anders, Elmroth, Erik.  2017.  Personality-based Knowledge Extraction for Privacy-preserving Data Analysis. Proceedings of the Knowledge Capture Conference. :44:1–44:4.
In this paper, we present a differential-privacy-preserving approach that extracts personality-based knowledge to support privacy-guaranteed data analysis on sensitive personal data. Based on this approach, we further implement an end-to-end privacy-guarantee system, KaPPA, that provides researchers with iterative data analysis on sensitive data. The key challenge for differential privacy is determining a reasonable privacy budget that balances privacy preservation and data utility. Most previous work applies a uniform privacy budget to all individuals' data, which leads to insufficient privacy protection for some individuals while over-protecting others. In KaPPA, the proposed personality-based privacy-preserving approach automatically calculates a privacy budget for each individual. Our experimental evaluation demonstrates an effective balance between sufficient privacy protection and data utility.
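
The core departure from uniform budgeting can be sketched as follows; the mapping from a personality-derived privacy-concern score to a budget is a hypothetical placeholder (KaPPA derives it from extracted personality knowledge), and only the Laplace mechanism itself is standard.

```python
import numpy as np

def personal_budget(privacy_concern, eps_min=0.1, eps_max=1.0):
    # Hypothetical mapping: more privacy-concerned users (score near 1)
    # get a smaller epsilon, i.e. stronger protection.
    return eps_max - privacy_concern * (eps_max - eps_min)

def private_sum(values, concerns, sensitivity=1.0):
    # Perturb each contribution with noise calibrated to that
    # individual's own budget, then aggregate (a local-DP-style
    # simplification of per-individual budgeting).
    noisy = [v + np.random.laplace(0, sensitivity / personal_budget(c))
             for v, c in zip(values, concerns)]
    return sum(noisy)

values = [3, 7, 2, 9]            # each individual's sensitive value
concerns = [0.9, 0.2, 0.5, 0.7]  # e.g. derived from personality traits
print(private_sum(values, concerns))
```
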
van Do, Thanh, Engelstad, Paal, Feng, Boning, Do, Van Thuan.  2017.  A Near Real Time SMS Grey Traffic Detection. Proceedings of the 6th International Conference on Software and Computer Applications. :244–249.
Lately, mobile operators have experienced threats from SMS grey routes, which fraudsters use to evade SMS fees, denying operators millions in revenue. More serious still are the threats to users' security and privacy, and consequently to the operator's reputation. It is therefore crucial for operators to have adequate solutions to protect both their network and their customers against this kind of fraud. Unfortunately, so far there is no sufficiently efficient countermeasure against grey routes. This paper proposes a near-real-time SMS grey traffic detection scheme that uses Counting Bloom Filters combined with blacklists and whitelists to detect and block SMS grey traffic on the fly. The proposed detection has been implemented and has proved quite efficient. The paper also provides a comprehensive explanation of SMS grey routes and the challenges in their detection, and describes the implementation and verification thoroughly.
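
A Counting Bloom Filter supports the constant-time add/count operations that such on-the-fly screening needs. A minimal sketch, with a made-up per-sender volume threshold standing in for the paper's detection rules:

```python
import hashlib

class CountingBloomFilter:
    def __init__(self, size=8192, hashes=4):
        self.size, self.hashes = size, hashes
        self.counters = [0] * size

    def _indices(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for idx in self._indices(item):
            self.counters[idx] += 1

    def count(self, item):
        # Minimum across counters; may overestimate, never underestimate.
        return min(self.counters[idx] for idx in self._indices(item))

cbf = CountingBloomFilter()
whitelist, blacklist = {"+4790000001"}, {"+4790000666"}
THRESHOLD = 100  # hypothetical: max messages per sender per window

def is_grey_traffic(sender):
    if sender in whitelist:
        return False
    if sender in blacklist:
        return True
    cbf.add(sender)
    return cbf.count(sender) > THRESHOLD  # bulk sender: likely grey route
```
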
Chanyaswad, T., Al, M., Chang, J. M., Kung, S. Y..  2017.  Differential mutual information forward search for multi-kernel discriminant-component selection with an application to privacy-preserving classification. 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP). :1–6.

In machine learning, feature engineering has been a pivotal stage in building a high-quality predictor. In particular, this work explores the multiple Kernel Discriminant Component Analysis (mKDCA) feature-map and its variants. However, seeking the right subset of kernels for the mKDCA feature-map can be challenging. Therefore, we consider the problem of kernel selection and propose an algorithm based on Differential Mutual Information (DMI) and incremental forward search. DMI serves as an effective metric for selecting kernels, as is theoretically supported by mutual information and Fisher's discriminant analysis. On the other hand, incremental forward search plays a role in removing redundancy among kernels. Finally, we illustrate the potential of the method via an application in privacy-aware classification, and show on three mobile-sensing datasets that selecting an effective set of kernels for mKDCA feature-maps can enhance utility classification performance while successfully preserving data privacy. Specifically, the results show that the proposed DMI forward search method can perform better than the state-of-the-art and, with much smaller computational cost, can perform as well as the optimal, yet computationally expensive, exhaustive search.
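
Incremental forward search itself is a simple greedy loop; the sophistication lies in the scoring function (DMI in the paper). A sketch with a toy stand-in score:

```python
def forward_search(kernels, score, budget):
    """Greedily grow a kernel subset, keeping each addition only if it
    improves the score (a stand-in for the paper's Differential Mutual
    Information criterion)."""
    selected, best = [], float("-inf")
    while len(selected) < budget:
        gains = [(score(selected + [k]), k)
                 for k in kernels if k not in selected]
        top_score, top_kernel = max(gains)
        if top_score <= best:          # no kernel adds information: stop
            break
        best, selected = top_score, selected + [top_kernel]
    return selected

# Toy demonstration: the score just rewards diverse, narrow kernel widths.
kernels = [0.1, 0.5, 1.0, 2.0, 5.0]
print(forward_search(kernels, score=lambda s: len(set(s)) - 0.01 * sum(s),
                     budget=3))
```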

Phan, N., Wu, X., Hu, H., Dou, D..  2017.  Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning. 2017 IEEE International Conference on Data Mining (ICDM). :385–394.

In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) the privacy budget consumption is totally independent of the number of training steps; (2) it has the ability to adaptively inject noise into features based on the contribution of each to the output; and (3) it can be applied to a variety of different deep neural networks. To achieve this, we perturb the affine transformations of neurons and the loss functions used in deep neural networks. In addition, our mechanism intentionally adds "more noise" into features which are "less relevant" to the model output, and vice versa. Our theoretical analysis further derives the sensitivities and error bounds of our mechanism. Rigorous experiments conducted on the MNIST and CIFAR-10 datasets show that our mechanism is highly effective and outperforms existing solutions.
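
The "more noise into less relevant features" idea can be pictured as relevance-weighted budget allocation. In the sketch below the relevance scores are assumed given (the paper derives them from each feature's contribution to the output), and the proportional budget split is an illustrative choice:

```python
import numpy as np

def adaptive_laplace(features, relevance, epsilon, sensitivity=1.0):
    """Allocate more of the privacy budget to more relevant features,
    so they receive proportionally less Laplace noise."""
    relevance = np.asarray(relevance, dtype=float)
    weights = relevance / relevance.sum()       # per-feature budget shares
    per_feature_eps = epsilon * weights
    noise = np.random.laplace(0, sensitivity / per_feature_eps)
    return np.asarray(features, dtype=float) + noise

x = [0.8, 0.1, 0.4, 0.9]        # one training example's features
r = [0.5, 0.05, 0.15, 0.3]      # assumed relevance to the model output
print(adaptive_laplace(x, r, epsilon=1.0))
```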

Yonetani, R., Boddeti, V. N., Kitani, K. M., Sato, Y..  2017.  Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption. 2017 IEEE International Conference on Computer Vision (ICCV). :2059–2069.

We propose a privacy-preserving framework for learning visual classifiers by leveraging distributed private image data. This framework is designed to aggregate multiple classifiers updated locally using private data and to ensure that no private information about the data is exposed during and after its learning procedure. We utilize a homomorphic cryptosystem that can aggregate the local classifiers while they are encrypted and thus kept secret. To overcome the high computational cost of homomorphic encryption of high-dimensional classifiers, we (1) impose sparsity constraints on local classifier updates and (2) propose a novel efficient encryption scheme named doubly-permuted homomorphic encryption (DPHE), which is tailored to sparse high-dimensional data. DPHE (i) decomposes sparse data into its constituent non-zero values and their corresponding support indices, (ii) applies homomorphic encryption only to the non-zero values, and (iii) employs double permutations on the support indices to make them secret. Our experimental evaluation on several public datasets shows that the proposed approach achieves comparable performance against state-of-the-art visual recognition methods while preserving privacy, and significantly outperforms other privacy-preserving methods.
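
The decomposition step of DPHE is straightforward to sketch: split a sparse vector into non-zero values and support indices, encrypt only the values under an additively homomorphic scheme, and permute the indices. The sketch below substitutes the third-party phe (Paillier) package for the paper's cryptosystem and a single random permutation for its double-permutation construction:

```python
import numpy as np
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def encrypt_sparse(w, rng):
    """Split a sparse vector into (encrypted values, permuted indices)."""
    support = np.flatnonzero(w)
    values = [public_key.encrypt(float(w[i])) for i in support]
    perm = rng.permutation(len(support))  # stand-in for the paper's
    return [values[p] for p in perm], support[perm]  # double permutation

rng = np.random.default_rng(0)
w = np.zeros(1000)
w[[3, 250, 999]] = [0.5, -1.2, 2.0]
enc_values, indices = encrypt_sparse(w, rng)

# A server can aggregate encrypted classifier updates without decrypting:
doubled = [v + v for v in enc_values]
print([private_key.decrypt(v) for v in doubled])
```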

2018-02-14
Zuo, C., Shao, J., Liu, Z., Ling, Y., Wei, G..  2017.  Hidden-Token Searchable Public-Key Encryption. 2017 IEEE Trustcom/BigDataSE/ICESS. :248–254.

In this paper, we propose a variant of searchable public-key encryption named hidden-token searchable public-key encryption, with two new security properties: token anonymity and one-token-per-trapdoor. With the former security notion, the client can obtain the search token from the data owner without revealing any information about the underlying keyword. Meanwhile, under the latter security notion, the client cannot derive more than one token from one trapdoor generated by the data owner. Furthermore, we present a concrete hidden-token searchable public-key encryption scheme together with security proofs in the random oracle model.

Ayed, H. Kaffel-Ben, Boujezza, H., Riabi, I..  2017.  An IDMS approach towards privacy and new requirements in IoT. 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC). :429–434.
Identities are among the most sensitive pieces of information. With the increasing number of connected objects and identities (a connected object may have one or many identities), computing and communication capabilities have evolved to manage these connected devices and meet the needs of this progress. Therefore, new IoT Identity Management System (IDMS) requirements have been introduced. In this work, we propose an IDMS approach that protects private information and supports domain changes in IoT for mobile clients using a personal authentication device. First, we present basic concepts, existing requirements, and the limits of related work. We also propose new requirements and explain our motivations. Next, we describe our proposal. Finally, we present the validation of our security approach, perspectives, and some concluding remarks.
Kravitz, D. W., Cooper, J..  2017.  Securing user identity and transactions symbiotically: IoT meets blockchain. 2017 Global Internet of Things Summit (GIoTS). :1–6.
Swarms of embedded devices provide new challenges for privacy and security. We propose Permissioned Blockchains as an effective way to secure and manage these systems of systems. A long view of blockchain technology yields several requirements absent in extant blockchain implementations. Our approach to Permissioned Blockchains meets the fundamental requirements for longevity, agility, and incremental adoption. Distributed Identity Management is an inherent feature of our Permissioned Blockchain and provides for resilient user and device identity and attribute management.
Raju, S., Boddepalli, S., Gampa, S., Yan, Q., Deogun, J. S..  2017.  Identity management using blockchain for cognitive cellular networks. 2017 IEEE International Conference on Communications (ICC). :1–6.
Cloud-centric cognitive cellular networks utilize dynamic spectrum access and opportunistic network access technologies as a means to mitigate spectrum crunch and network demand. However, furnishing a carrier with personally identifiable information for user setup increases the risk of profiling in cognitive cellular networks, wherein users seek secondary access at various times with multiple carriers. Moreover, network access provisioning - assertion, authentication, authorization, and accounting - as implemented in conventional cellular networks is inadequate in the cognitive space, as it is neither spontaneous nor scalable. In this paper, we propose a privacy-enhancing user identity management system using blockchain technology which places due importance on both anonymity and attribution, and supports end-to-end management from user assertion to usage billing. The setup enables network access using pseudonymous identities, hindering the reconstruction of a subscriber's identity. Our test results indicate that this approach diminishes access provisioning duration by up to 4x, decreases network signaling traffic by almost 40%, and enables near-real-time user billing that may lead to an approximately 3x reduction in payment settlement time.
2018-02-06
Li, X., Smith, J. D., Thai, M. T..  2017.  Adaptive Reconnaissance Attacks with Near-Optimal Parallel Batching. 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS). :699–709.

In assessing privacy on online social networks, it is important to investigate their vulnerability to reconnaissance strategies, in which attackers lure targets into being their friends by exploiting the social graph in order to extract victims' sensitive information. As the network topology is only partially revealed after each successful friend request, attackers need to employ an adaptive strategy. Existing work only considered a simple strategy in which attackers sequentially acquire one friend at a time, which causes tremendous delay in waiting for responses before sending the next request, and which lacks the ability to retry failed requests after the network has changed. In contrast, we investigate an adaptive and parallel strategy, in which attackers can simultaneously send multiple friend requests in a batch and recover from failed requests by retrying after topology changes, thereby significantly reducing the time to reach the targets and greatly improving robustness. We cast this approach as an optimization problem, Max-Crawling, and show that it is inapproximable within (1 - 1/e + ε). We first design our core algorithm PM-AReST, which has an approximation ratio of (1 - e^{-(1-1/e)}) using adaptive monotonic submodular properties. We next tighten our algorithm to provide a near-optimal solution, i.e. having a ratio of (1 - 1/e), via a two-stage stochastic programming approach. We further establish a gap bound of (1 - e^{-(1-1/e)^2}) between batch strategies and the optimal sequential one. We experimentally validate our theoretical results, finding that our algorithm performs near-optimally in practice and that this is robust under a variety of problem settings.
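
The batch strategy can be pictured as greedy selection done batch_size requests at a time, with failed requests staying eligible for later rounds. The toy loop below conveys that structure only; the marginal-gain and acceptance functions are placeholders, not PM-AReST:

```python
import random

def batched_crawl(candidates, gain, accept_prob, batch_size, rounds):
    """Send friend requests in batches; retry failures in later rounds."""
    friends, pending = set(), set(candidates)
    for _ in range(rounds):
        # Greedily pick the batch with the highest marginal gain.
        batch = sorted(pending, key=lambda c: gain(c, friends),
                       reverse=True)[:batch_size]
        for c in batch:
            if random.random() < accept_prob(c):
                friends.add(c)
                pending.discard(c)  # success; failures stay in 'pending'
    return friends

random.seed(1)
cands = list(range(20))
print(batched_crawl(cands,
                    gain=lambda c, f: c - 0.1 * len(f),  # toy marginal gain
                    accept_prob=lambda c: 0.6,
                    batch_size=4, rounds=5))
```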

Boukoros, Spyros, Katzenbeisser, Stefan.  2017.  Measuring Privacy in High Dimensional Microdata Collections. Proceedings of the 12th International Conference on Availability, Reliability and Security. :15:1–15:8.

Microdata is collected by companies in order to enhance their quality of service as well as the accuracy of their recommendation systems. These data often become publicly available after they have been sanitized. Recent re-identification attacks on publicly available, sanitized datasets illustrate the privacy risks involved in microdata collections. Currently, users have to trust the provider that their data will be safe in case data is published or if a privacy breach occurs. In this work, we empower users by developing a novel, user-centric tool for privacy measurement and a new lightweight privacy metric. The goal of our tool is to estimate users' privacy level prior to sharing their data with a provider, so that users can consciously decide whether to contribute their data. Our tool estimates an individual's privacy level based on published popularity statistics regarding the items in the provider's database, and on the user's microdata. In this work, we describe the architecture of our tool as well as a novel privacy metric, which is necessary in our setting where we do not have access to the provider's database. Our tool is user friendly, relying on smart visual results that raise privacy awareness. We evaluate our tool using three real-world datasets collected from major providers. We demonstrate strong correlations between the average anonymity set per user and the privacy score obtained by our metric. Our results illustrate that our tool, which uses minimal information from the provider, estimates users' privacy levels comparably well, as if it had access to the actual database.
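
One plausible shape for a popularity-based metric is self-information: rare items identify a user far more strongly than popular ones. The scoring below is a hedged guess at the metric's flavor, not the paper's definition:

```python
import math

def privacy_score(user_items, popularity, population):
    """Higher score = more identifiable. Each item contributes its
    self-information -log2(p), where p is the fraction of the provider's
    users who hold that item (from published popularity statistics)."""
    bits = 0.0
    for item in user_items:
        p = popularity.get(item, 1) / population
        bits += -math.log2(p)
    return bits

popularity = {"aspirin": 400_000, "rare_drug_x": 12}  # users per item
print(privacy_score({"aspirin"}, popularity, 1_000_000))      # low risk
print(privacy_score({"rare_drug_x"}, popularity, 1_000_000))  # high risk
```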

Camenisch, J., Chen, L., Drijvers, M., Lehmann, A., Novick, D., Urian, R..  2017.  One TPM to Bind Them All: Fixing TPM 2.0 for Provably Secure Anonymous Attestation. 2017 IEEE Symposium on Security and Privacy (SP). :901–920.

The Trusted Platform Module (TPM) is an international standard for a security chip that can be used for the management of cryptographic keys and for remote attestation. The specification of the most recent TPM 2.0 interfaces for direct anonymous attestation unfortunately has a number of severe shortcomings. First of all, they do not allow for security proofs (indeed, the published proofs are incorrect). Second, they provide a Diffie-Hellman oracle w.r.t. the secret key of the TPM, weakening the security and preventing forward anonymity of attestations. Fixes to these problems have been proposed, but they create new issues: they enable a fraudulent TPM to encode information into an attestation signature, which could be used to break anonymity or to leak the secret key. Furthermore, all proposed ways to remove the Diffie-Hellman oracle either strongly limit the functionality of the TPM or would require significant changes to the TPM 2.0 interfaces. In this paper we provide a better specification of the TPM 2.0 interfaces that addresses these problems and requires only minimal changes to the current TPM 2.0 commands. We then show how to use the revised interfaces to build q-SDH- and LRSW-based anonymous attestation schemes, and prove their security. We finally discuss how to obtain other schemes addressing different use cases, such as key-binding for U-Prove and e-cash.

Sain, M., Bruce, N., Kim, K. H., Lee, H. J..  2017.  A Communication Security Protocol for Ubiquitous Sensor Networks. 2017 19th International Conference on Advanced Communication Technology (ICACT). :228–231.

Data accessibility anytime and anywhere is nowadays a key feature of information technology, enabled by ubiquitous network systems for a huge range of applications. However, security and privacy are perceived as primary obstacles to its wide adoption when applied to end-user applications. When sharing sensitive information, personal data protection is the paramount requirement for security and privacy and for ensuring the trustworthiness of the service provider. To this end, this paper proposes a communication security protocol to achieve data protection when a user sends sensitive data to the network through a gateway. We design a cipher content and key exchange computation process. Finally, the performance analysis of the proposed scheme ensures the honesty of the gateway service provider, since the user has the ability to control who has access to the data by issuing a cryptographic access credential to data users.
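
A modern instantiation of a "key exchange plus cipher content" process could combine an ephemeral ECDH exchange with an AEAD cipher. The sketch below uses the cryptography package (X25519 + HKDF + AES-GCM) as a generic construction, not the protocol the paper specifies:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key exchange: user and gateway each contribute an ephemeral key pair.
user_priv = X25519PrivateKey.generate()
gateway_priv = X25519PrivateKey.generate()
shared = user_priv.exchange(gateway_priv.public_key())

# Derive a symmetric session key from the shared secret.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"sensor-session").derive(shared)

# Cipher content: the user's sensitive reading, sealed with AES-GCM.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"heart_rate=82", None)

# The gateway derives the same key from its side and decrypts.
key_gw = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
              info=b"sensor-session").derive(
                  gateway_priv.exchange(user_priv.public_key()))
print(AESGCM(key_gw).decrypt(nonce, ciphertext, None))
```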

Chakraborty, N., Kalaimannan, E..  2017.  Minimum Cost Security Measurements for Attack Tree Based Threat Models in Smart Grid. 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON). :614–618.

In this paper, we focus on the security issues and challenges in the smart grid. Smart grid security features must address not only expected deliberate attacks, but also inadvertent compromises of the information infrastructure due to user errors, equipment failures, and natural disasters. An important component of the smart grid is the advanced metering infrastructure (AMI), which is critical to supporting two-way communication of real-time information for better electricity generation, distribution, and consumption. These reasons make security a factor of prominent importance to AMI. In recent times, attacks on the smart grid have been modelled using attack trees. The attack tree has been extensively used as an efficient and effective tool to model security threats and vulnerabilities in systems where the ultimate goal of an attacker can be divided into a set of multiple concrete or atomic sub-goals. The sub-goals are related to each other as either AND-siblings or OR-siblings, which essentially depicts whether some or all of the sub-goals must be attained for the attacker to reach the goal. On the other hand, a security professional needs to find the most effective way to address the security issues in the system under consideration. It is imperative to assume that each attack prevention strategy incurs some cost, and the utility company would always look to minimize it. We present a cost-effective mechanism to identify a minimum number of potential atomic attacks in an attack tree.
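
Under the standard AND/OR semantics the cheapest countermeasure set has a clean recursive structure: an OR node is blocked only if every alternative path is blocked, while an AND node is blocked by breaking any one required sub-goal. A small sketch under that reading (the paper's exact formulation may differ):

```python
def min_defense_cost(node):
    """node: ('leaf', cost) | ('AND', [children]) | ('OR', [children]).
    Returns the cheapest cost to prevent the attack rooted at node."""
    kind = node[0]
    if kind == "leaf":
        return node[1]                 # cost of countering this atomic step
    costs = [min_defense_cost(c) for c in node[1]]
    if kind == "AND":
        return min(costs)  # breaking one required sub-goal stops the attack
    return sum(costs)      # OR: every alternative path must be blocked

# Hypothetical AMI example: compromise via meter tampering OR data spoofing.
tamper_meter = ("AND", [("leaf", 5), ("leaf", 12)])
spoof_data = ("leaf", 8)
compromise_ami = ("OR", [tamper_meter, spoof_data])
print(min_defense_cost(compromise_ami))  # 5 + 8 = 13
```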

Mehrpouyan, H., Azpiazu, I. M., Pera, M. S..  2017.  Measuring Personality for Automatic Elicitation of Privacy Preferences. 2017 IEEE Symposium on Privacy-Aware Computing (PAC). :84–95.

The increasing complexity and ubiquity of user connectivity, computing environments, information content, and software, mobile, and web applications transfer the responsibility of privacy management to individuals, making it extremely difficult for users to maintain the intelligent and targeted level of privacy protection that they need and desire while simultaneously maintaining their ability to function optimally. Thus, there is a critical need to develop intelligent, automated, and adaptable privacy management systems that can assist users in managing and protecting their sensitive data in the increasingly complex situations and environments in which they find themselves. This work is a first step in exploring the development of such a system, specifically how user personality traits and other characteristics can be used to help automate the determination of user sharing preferences for a variety of user data and situations. The Big-Five personality traits of openness, conscientiousness, extroversion, agreeableness, and neuroticism are examined and used as inputs into several popular machine learning algorithms in order to assess their ability to elicit and predict user privacy preferences. Our results show that the Big-Five personality traits can be used to significantly improve the prediction of user privacy preferences in a number of contexts and situations, and so using machine learning approaches to automate the setting of user privacy preferences has the potential to greatly reduce the burden on users while simultaneously improving the accuracy of their privacy preferences and security.
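
The prediction task reduces to mapping five trait scores to a sharing decision, a standard supervised setup. A hedged scikit-learn sketch on synthetic data (the labeling rule is invented purely for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic Big-Five scores in [0, 1]: openness, conscientiousness,
# extroversion, agreeableness, neuroticism.
X = rng.random((500, 5))
# Hypothetical ground truth: extroverted, agreeable, low-neuroticism
# users are more willing to share.
y = ((X[:, 2] + X[:, 3] - X[:, 4]) > 0.8).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # share/deny accuracy
```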

Heifetz, A., Mugunthan, V., Kagal, L..  2017.  Shade: A Differentially-Private Wrapper for Enterprise Big Data. 2017 IEEE International Conference on Big Data (Big Data). :1033–1042.

Enterprises usually provide strong controls to prevent cyberattacks and inadvertent leakage of data to external entities. However, where employees and data scientists have legitimate access to analyze and derive insights from the data, there are insufficient controls, and employees are usually permitted access to all information about the customers of the enterprise, including sensitive and private information. Though it is important to be able to identify useful patterns in one's customers for better customization and service, customers' privacy must not be sacrificed to do so. We propose an alternative: a framework that allows privacy-preserving data analytics over big data. In this paper, we present an efficient and scalable framework for Apache Spark, a cluster computing framework, that provides strong privacy guarantees for users even in the presence of an informed adversary, while still providing high utility for analysts. The framework, titled Shade, includes two mechanisms: SparkLAP, which provides Laplacian perturbation based on a user's query, and SparkSAM, which uses the contents of the database itself to calculate the perturbation. We show that the performance of Shade is substantially better than earlier differential privacy systems without loss of accuracy, particularly when run on datasets small enough to fit in memory, and find that SparkSAM can even exceed the performance of an identical non-private Spark query.
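
SparkLAP-style query-time perturbation can be approximated in a few PySpark lines; the epsilon, sensitivity, and query here are illustrative, and the real system performs calibrated budget accounting rather than this one-off noise draw:

```python
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("shade-sketch").getOrCreate()
df = spark.createDataFrame(
    [("alice", 120), ("bob", 80), ("carol", 300)], ["user", "spend"])

# Counting query with sensitivity 1: adding or removing one user
# changes the count by at most 1.
true_count = df.filter(df.spend > 100).count()
epsilon = 0.5
noisy_count = true_count + np.random.laplace(0, 1.0 / epsilon)
print(round(noisy_count))  # released to the analyst instead of the truth
spark.stop()
```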

Palanisamy, B., Li, C., Krishnamurthy, P..  2017.  Group Privacy-Aware Disclosure of Association Graph Data. 2017 IEEE International Conference on Big Data (Big Data). :1043–1052.

In the age of Big Data, we are witnessing a huge proliferation of digital data capturing our lives and our surroundings. Data privacy is a critical barrier to data analytics, and privacy-preserving data disclosure becomes a key aspect of leveraging large-scale data analytics due to serious privacy risks. Traditional privacy-preserving data publishing solutions have focused on protecting individuals' private information while considering all aggregate information about individuals as safe for disclosure. This paper presents a new privacy-aware data disclosure scheme that considers group privacy requirements of individuals in bipartite association graph datasets (e.g., graphs that represent associations between entities such as customers and products bought from a pharmacy store), where even aggregate information about groups of individuals may be sensitive and need protection. We propose the notion of εg-Group Differential Privacy, which protects sensitive information of groups of individuals at various defined group protection levels, enabling data users to obtain the level of information entitled to them. Based on the notion of group privacy, we develop a suite of differentially private mechanisms that protect group privacy in bipartite association graphs at different group privacy levels based on specialization hierarchies. We evaluate our proposed techniques through extensive experiments on three real-world association graph datasets, and our results demonstrate that the proposed techniques are effective and efficient, and provide the required guarantees on group privacy.

Shi, Y., Piao, C., Zheng, L..  2017.  Differential-Privacy-Based Correlation Analysis in Railway Freight Service Applications. 2017 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC). :35–39.

With the development of the modern logistics industry, railway freight enterprises, as the mainstay of traditional logistics enterprises, face many problems in their service mode. In the era of big data, coordinated development and sharing of information resources have become requirements of the times for railway freight enterprises, while how to protect citizens' privacy has become one of the focal issues for the public. To prevent the disclosure or abuse of citizens' private information, that privacy needs to be preserved in the process of information opening and sharing. However, most existing privacy-preserving models cannot resist attacks with continuously growing background knowledge. This paper presents a method of applying differential privacy to protect associated data, which can be shared in railway freight service association information. First, the original service data are sliced using an optimal shard length; then a differential privacy method and the Apriori algorithm are used to add Laplace noise to the candidate sets. Thus, citizens' private information is protected even if the attacker acquires strong background knowledge. Finally, the associated data are shared with railway information resource partners. The steps and usefulness of the discussed privacy preservation method are illustrated by an example.
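
The noising step the abstract describes can be sketched on top of a plain Apriori-style pass: each candidate itemset's support count receives Laplace noise before the pruning decision. Epsilon, the support threshold, and the toy freight records are illustrative:

```python
from itertools import combinations
import numpy as np

def dp_frequent_itemsets(transactions, epsilon, min_support, max_len=2):
    """Apriori-style search where each candidate's support count is
    released with Laplace noise before the pruning decision."""
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    for k in range(1, max_len + 1):
        for cand in combinations(items, k):
            support = sum(set(cand) <= t for t in transactions)
            noisy = support + np.random.laplace(0, 1.0 / epsilon)
            if noisy >= min_support:
                frequent[cand] = noisy
    return frequent

freight = [{"coal", "north"}, {"coal", "north"}, {"grain", "south"},
           {"coal", "south"}, {"coal", "north"}]
print(dp_frequent_itemsets(freight, epsilon=1.0, min_support=2))
```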

Zebboudj, S., Brahami, R., Mouzaia, C., Abbas, C., Boussaid, N., Omar, M..  2017.  Big Data Source Location Privacy and Access Control in the Framework of IoT. 2017 5th International Conference on Electrical Engineering - Boumerdes (ICEE-B). :1–5.

In recent years, we have observed the development of several connected and mobile devices intended for daily use. This development has come with many risks that might not be perceived by users. These threats become compromising when an unauthorized entity gains access to the private big data generated through users' objects in the Internet of Things. In the literature, many solutions have been proposed in order to protect big data, but security remains a challenging issue. This work is carried out with the aim of providing a solution for access control to big data and for securing the localization of the objects that generate them. The proposed models are based on Attribute-Based Encryption, the CHORD protocol, and μTESLA. Through simulations, we compare our solutions to competing protocols and show their efficiency in terms of relevant criteria.

Nosouhi, M. R., Pham, V. V. H., Yu, S., Xiang, Y., Warren, M..  2017.  A Hybrid Location Privacy Protection Scheme in Big Data Environment. GLOBECOM 2017 - 2017 IEEE Global Communications Conference. :1–6.

Location privacy has become a significant challenge of big data. In particular, given the availability of big data handling tools, huge volumes of location data can be managed and processed easily by an adversary to obtain users' private information from Location-Based Services (LBS). So far, many methods have been proposed to preserve user location privacy for these services. Among them, dummy-based methods have various advantages in terms of implementation and low computation costs. However, they suffer from the spatiotemporal correlation issue when users submit consecutive requests. To solve this problem, a practical hybrid location privacy protection scheme is presented in this paper. The proposed method filters out correlated fake location data (dummies) before submission, so the adversary cannot identify the user's real location. Evaluations and experiments show that our proposed filtering technique significantly improves the performance of existing dummy-based methods and enables them to effectively protect the user's location privacy in a big data environment.
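
The filtering idea is spatiotemporal plausibility: a dummy that could not have been reached from any dummy in the previous request at realistic speed would betray itself, so it is discarded before submission. A toy sketch with a made-up speed bound:

```python
import math
import random

MAX_SPEED = 15.0  # hypothetical m/s bound a real user could travel

def plausible(prev, cand, dt):
    dist = math.hypot(cand[0] - prev[0], cand[1] - prev[1])
    return dist <= MAX_SPEED * dt

def next_dummies(prev_dummies, n, dt, span=1000.0):
    """Generate dummies, keeping only those reachable from some dummy
    in the previous request within the elapsed time dt (seconds)."""
    kept = []
    while len(kept) < n:
        cand = (random.uniform(0, span), random.uniform(0, span))
        if any(plausible(p, cand, dt) for p in prev_dummies):
            kept.append(cand)  # correlated-looking, hence believable
    return kept

random.seed(0)
previous = [(100.0, 100.0), (400.0, 250.0)]
print(next_dummies(previous, n=3, dt=60.0))
```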

Hassoon, I. A., Tapus, N., Jasim, A. C..  2017.  Enhance Privacy in Big Data and Cloud via Diff-Anonym Algorithm. 2017 16th RoEduNet Conference: Networking in Education and Research (RoEduNet). :1–5.

The main issue with big data in the cloud is that processing and use always involve a third party. It is very important for data owners or clients to trust, and to have guarantees of privacy for, the information stored in the cloud or analyzed as big data. Privacy models studied in previous research show that privacy infringement for big data happens because of limitations in the privacy guarantee rate or the dissemination of accurate data obtainable in the data set. In addition, there are various privacy models. In order to determine the best and most appropriate model to be applied in the future, one which also guarantees big data privacy, it is necessary to invest in research and study. We therefore survey several privacy models in order to determine the advantages and disadvantages of each model in assuring privacy for big data in the cloud. The present study also proposes a combined Diff-Anonym algorithm (k-anonymity and differential privacy models) to provide data anonymity while keeping a balance between the ambiguity of private data and the clarity of general data.
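
One illustrative reading of the combination is: generalize quasi-identifiers until every group has at least k members (k-anonymity), then release only Laplace-noised group statistics (differential privacy). The sketch below follows that reading and is not the authors' algorithm:

```python
from collections import Counter
import numpy as np

def generalize_age(age, width):
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def diff_anonym(records, k, epsilon):
    """Widen age bands until k-anonymity holds, then release
    Laplace-noised group counts."""
    width = 5
    while True:
        groups = Counter(generalize_age(a, width) for a in records)
        if all(c >= k for c in groups.values()):
            break
        width *= 2                     # coarser bands -> larger groups
    return {g: c + np.random.laplace(0, 1.0 / epsilon)
            for g, c in groups.items()}

ages = [23, 24, 27, 31, 33, 35, 36, 62, 64, 66]
print(diff_anonym(ages, k=3, epsilon=1.0))
```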

2018-02-02
Akram, R. N., Markantonakis, K., Mayes, K., Habachi, O., Sauveron, D., Steyven, A., Chaumette, S..  2017.  Security, privacy and safety evaluation of dynamic and static fleets of drones. 2017 IEEE/AIAA 36th Digital Avionics Systems Conference (DASC). :1–12.

Interconnected everyday objects, connected either via public or private networks, are gradually becoming a reality in modern life - often referred to as the Internet of Things (IoT) or Cyber-Physical Systems (CPS). One stand-out example is systems based on Unmanned Aerial Vehicles (UAVs). Fleets of such vehicles (drones) are predicted to assume multiple roles, from mundane applications, such as prompt pizza or shopping deliveries to the home, to highly sensitive ones, such as deployment on battlefields for combat missions. Drones, which we refer to as UAVs in this paper, can operate either individually (solo missions) or as part of a fleet (group missions), with or without constant connection to a base station. The base station acts as the command centre to manage the drones' activities; however, an independent, localised and effective fleet control is necessary, potentially based on swarm intelligence, for several reasons: 1) an increase in the number of drone fleets; 2) fleet sizes that might reach tens of UAVs; 3) the need for such fleets to make time-critical decisions in the wild; 4) potential communication congestion and latency; and 5) in some cases, operation in challenging terrain that hinders or mandates limited communication with a control centre, e.g. operations spanning long periods of time or military usage of fleets in enemy territory. This self-aware, mission-focused and independent fleet of drones may utilise swarm intelligence for a) air-traffic or flight control management, b) obstacle avoidance, c) self-preservation (while maintaining the mission criteria), d) autonomous collaboration with other fleets in the wild, and e) assuring the security, privacy and safety of physical (the drones themselves) and virtual (data, software) assets. In this paper, we investigate the challenges faced by fleets of drones and propose a potential course of action for how to overcome them.