Biblio

Filters: Keyword is private data
2021-04-08
Althebyan, Q..  2019.  A Mobile Edge Mitigation Model for Insider Threats: A Knowledgebase Approach. 2019 International Arab Conference on Information Technology (ACIT). :188–192.
Securing the cloud is a major issue that needs to be carefully considered and solved for individuals as well as organizations. On the one hand, organizations expect trust from employees as well as customers; on the other hand, cloud users expect their private data to be maintained and secured. Although this should be the case, some malicious outsiders of the cloud, as well as malicious insiders who are internal cloud users, tend to disclose private data for their own malicious purposes. While outsiders are a concern, the more serious problems come from insiders, whose malicious actions are more damaging and severe. Hence, insider threats in the cloud should be the topmost problem to be tackled and resolved. This paper aims to find a proper solution to the insider threat problem in the cloud. It presents a Mobile Edge Computing (MEC) mitigation model as a solution suited to the specialized nature of this problem, where the defense needs to be very close to the place where insiders reside. This gives real-time responses to attacks and, hence, reduces the overhead in the cloud.
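As a rough illustration of the knowledgebase idea, the sketch below shows an edge node matching a user action against a local set of known insider-attack patterns so that a block decision can be made at the edge, without a round trip to the cloud. The pattern entries, function name, and action names are invented for illustration and are not taken from the paper.

```python
# Hypothetical knowledge base of (action, context) insider-attack patterns
# kept locally at the edge node; entries are invented examples.
KNOWLEDGEBASE = {
    ("bulk_download", "outside_hours"),
    ("privilege_escalation", "unapproved_host"),
}

def edge_check(user: str, action: str, context: str) -> str:
    """Decide at the edge, in real time, whether an action matches a known pattern."""
    if (action, context) in KNOWLEDGEBASE:
        return f"block {user}: matches known insider pattern"
    return "allow"

print(edge_check("emp42", "bulk_download", "outside_hours"))
```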
2021-03-29
Gupta, S., Buduru, A. B., Kumaraguru, P..  2020.  imdpGAN: Generating Private and Specific Data with Generative Adversarial Networks. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :64–72.
Generative Adversarial Networks (GANs) and their variants have shown promising results in generating synthetic data. However, the issues with GANs are: (i) learning happens around the training samples, and the model often ends up memorizing them, consequently compromising the privacy of individual samples - this becomes a major concern when GANs are applied to training data that includes personally identifiable information; (ii) randomness in the generated data - there is no control over the specificity of the generated samples. To address these issues, we propose imdpGAN, an information-maximizing differentially private Generative Adversarial Network. It is an end-to-end framework that simultaneously achieves privacy protection and learns latent representations. With experiments on the MNIST dataset, we show that imdpGAN preserves the privacy of individual data points and learns latent codes to control the specificity of the generated samples. We perform binary classification on digit pairs to show the utility-versus-privacy trade-off. The classification accuracy decreases as we increase the privacy level in the framework. We also show experimentally that the training process of imdpGAN is stable but experiences a 10-fold increase in training time compared with other GAN frameworks. Finally, we extend the imdpGAN framework to the CelebA dataset to show how the privacy and learned representations can be used to control the specificity of the output.
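A minimal sketch of one ingredient such a framework relies on: making the discriminator update differentially private by clipping gradients and adding Gaussian noise. The network sizes, `clip_norm`, and `sigma` below are illustrative assumptions, not the paper's architecture or hyperparameters, and the latent-code (information-maximization) part is omitted.

```python
# Sketch of a differentially private discriminator step (clip + Gaussian noise).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 784
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
clip_norm, sigma = 1.0, 0.5          # more noise -> more privacy, less utility

real = torch.rand(64, data_dim)      # stand-in for a real data batch
fake = G(torch.randn(64, latent_dim)).detach()

opt_D.zero_grad()
loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
loss.backward()
torch.nn.utils.clip_grad_norm_(D.parameters(), clip_norm)  # bound sensitivity
for p in D.parameters():             # Gaussian mechanism on the gradients
    p.grad += sigma * clip_norm * torch.randn_like(p.grad)
opt_D.step()
```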
2020-10-26
Eryonucu, Cihan, Ayday, Erman, Zeydan, Engin.  2018.  A Demonstration of Privacy-Preserving Aggregate Queries for Optimal Location Selection. 2018 IEEE 19th International Symposium on "A World of Wireless, Mobile and Multimedia Networks" (WoWMoM). :1–3.
In recent years, service providers, such as mobile operators providing wireless services, have collected location data on an enormous scale as mobile phone usage has increased. Vertical businesses, such as banks, may want to use this location information for their own scenarios. However, service providers cannot directly hand these private data to vertical businesses because of privacy and legal issues. In this demo, we show how privacy-preserving solutions can support such location-based queries without revealing any organization's sensitive data. In our demonstration, we use a partially homomorphic cryptosystem in our protocols and show the practicality and feasibility of our proposed solution.
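The additive homomorphism that makes such aggregate queries possible can be sketched with the python-paillier (`phe`) library: the key holder decrypts only the aggregate, never the individual values. The region names and counts below are invented for illustration.

```python
# Aggregate a set of encrypted counts without decrypting any single one.
from phe import paillier

pub, priv = paillier.generate_paillier_keypair()
region_counts = {"branch_a": 120, "branch_b": 340, "branch_c": 95}

encrypted = [pub.encrypt(c) for c in region_counts.values()]
encrypted_total = sum(encrypted[1:], encrypted[0])  # homomorphic addition

print(priv.decrypt(encrypted_total))  # 555; only the key holder learns it
```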
2020-10-12
Foreman, Zackary, Bekman, Thomas, Augustine, Thomas, Jafarian, Haadi.  2019.  PAVSS: Privacy Assessment Vulnerability Scoring System. 2019 International Conference on Computational Science and Computational Intelligence (CSCI). :160–165.
Currently, the guidelines for business entities to collect and use consumer information from online sources are the Fair Information Practice Principles set forth by the Federal Trade Commission in the United States. These guidelines are inadequate, outdated, and provide little protection for consumers. Moreover, there are many techniques to anonymize the stored data collected by large companies and governments. What does not exist, however, is a framework capable of evaluating and scoring the effects of this information in the event of a data breach. In this work, a framework for scoring and evaluating the vulnerability of private data is presented. The framework is designed to be used in parallel with currently adopted frameworks that score and evaluate other areas of software deficiency, including CVSS and CWSS. It is dubbed the Privacy Assessment Vulnerability Scoring System (PAVSS) and quantifies the privacy-breach vulnerability an individual takes on when using an online platform. The framework is based on a set of hypotheses about user behavior, inherent properties of an online platform, and the usefulness of available data in performing a cyber attack. The weight each of these metrics has within our model is determined by surveying cybersecurity experts. Finally, we test the validity of our user-behavior-based hypotheses, and indirectly our model, by analyzing user posts from a large Twitter data set.
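A weighted-sum scoring model of this kind can be sketched as follows; the metric names, weights, and 0-10 scale are hypothetical placeholders, not the published PAVSS parameters.

```python
# Hypothetical PAVSS-style score: expert-derived weights over normalized metrics.
WEIGHTS = {                      # assumed to come from the expert survey
    "data_sensitivity": 0.35,
    "platform_exposure": 0.25,
    "attack_usefulness": 0.25,
    "user_behavior_risk": 0.15,
}

def pavss_score(metrics: dict) -> float:
    """Weighted sum of metric values in [0, 1], scaled to a 0-10 score."""
    return round(10 * sum(WEIGHTS[n] * v for n, v in metrics.items()), 1)

print(pavss_score({"data_sensitivity": 0.9, "platform_exposure": 0.6,
                   "attack_usefulness": 0.7, "user_behavior_risk": 0.4}))
```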
2020-04-06
Ahmadi, S. Sareh, Rashad, Sherif, Elgazzar, Heba.  2019.  Machine Learning Models for Activity Recognition and Authentication of Smartphone Users. 2019 IEEE 10th Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON). :0561–0567.
Technological advancements have enabled smartphones to provide a wide range of applications, letting users perform many of their tasks easily and conveniently, anytime and anywhere. For this reason, many users tend to store their private data on their smartphones. Since conventional smartphone security methods, such as passwords, personal identification numbers, and pattern locks, are prone to many attacks, this paper proposes a novel method for authenticating smartphone users based on seven different daily physical activities as behavioral biometrics, using data from sensors embedded in the smartphone. This authentication scheme builds a machine learning model that recognizes users by the way they perform those daily activities. Experimental results demonstrate the effectiveness of the proposed framework.
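A hedged sketch of the general approach: train a classifier on windowed sensor features and authenticate a user only if the predicted identity matches the claimed one. Synthetic data stands in for the smartphone sensor readings, and the random forest is just one plausible model choice, not necessarily the paper's.

```python
# Behavioral-biometric authentication sketch on synthetic "sensor" features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, windows_per_user, n_features = 5, 200, 12  # e.g. mean/std/energy per axis
X = np.vstack([rng.normal(loc=u, scale=1.0, size=(windows_per_user, n_features))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), windows_per_user)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

claimed_user = 3
window = X_te[y_te == claimed_user][0].reshape(1, -1)
authenticated = clf.predict(window)[0] == claimed_user
print("accuracy:", clf.score(X_te, y_te), "authenticated:", authenticated)
```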
2020-03-23
Aguilar, Eryn, Dancel, Jevis, Mamaud, Deysaree, Pirosch, Dorothy, Tavacoli, Farin, Zhan, Felix, Pearce, Robbie, Novack, Margaret, Keehu, Hokunani, Lowe, Benjamin et al..  2019.  Highly Parallel Seedless Random Number Generation from Arbitrary Thread Schedule Reconstruction. 2019 IEEE International Conference on Big Knowledge (ICBK). :1–8.
Security is a universal concern across a multitude of sectors involved in the transfer and storage of computerized data. In the realm of cryptography, random number generators (RNGs) are integral to the creation of encryption keys that protect private data, and the production of uniform probability outcomes is a revenue source for certain enterprises (most notably the casino industry). Arbitrary thread schedule reconstruction of compare-and-swap operations is used to generate input traces for the Blum-Elias algorithm as a method for constructing random sequences, provided the compare-and-swap operations avoid cache locality. Threads accessing shared memory at the memory controller are a true random source that can be polled indirectly through our algorithm with unlimited parallelism. A theoretical and experimental analysis of the observation and reconstruction algorithm is presented. The quality of the random number generator is experimentally analyzed using two standard test suites, DieHarder and ENT, on three data sets.
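The idea of harvesting entropy from scheduler nondeterminism can be sketched crudely as below: two threads interleave appends to a shared trace, and a von Neumann extractor debiases the result. This is a simplified stand-in; the paper reconstructs compare-and-swap schedules and applies the Blum-Elias algorithm, neither of which is shown here.

```python
# Crude scheduler-entropy sketch: interleaving order is the random source.
import threading

trace = []

def worker(tid: int, n: int) -> None:
    for _ in range(n):
        trace.append(tid)       # append order depends on the OS scheduler

threads = [threading.Thread(target=worker, args=(t, 5000)) for t in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()

bits = []
for a, b in zip(trace[::2], trace[1::2]):   # von Neumann debiasing
    if a != b:
        bits.append(a)
print("".join(map(str, bits[:64])))
```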
2020-01-21
Shehu, Abubakar-Sadiq, Pinto, António, Correia, Manuel E..  2019.  Privacy Preservation and Mandate Representation in Identity Management Systems. 2019 14th Iberian Conference on Information Systems and Technologies (CISTI). :1–6.
The growth in Internet usage has increased the use of electronic services that require users to register their identity on each service they subscribe to, resulting in redundant user data spread across different services. To protect and regulate users' access to these services, identity management systems (IdMs) are put in place. IdMs use frameworks and standards, e.g., SAML, OAuth, and Shibboleth, to manage users' digital identities in the identification and authentication process for a service provider. However, current IdMs have not been able to address privacy issues (unauthorised and fine-grained access) that relate to protecting users' identity and private data on web services. Many implementations of these frameworks are only concerned with the identification and authentication of users, not with authorisation. They mostly give full control of users' digital identities and data to identity and service providers, with little or no user participation, which results in less privacy-enhanced solutions for managing users' data in the electronic space. This article proposes a user-centred mandate representation system that empowers resource owners to take full control of their digital data, and to determine and delegate access rights using their mobile phone, giving users autonomous power over their resources to grant access to authenticated entities at will. Our solution is based on the OpenID Connect framework for the authorisation service. To evaluate the proposal, we compared it with some related works and against the privacy-requirements yardstick outlined in the GDPR regulation [1] and [2]. Compared to other systems that use OAuth 2.0 or SAML, our solution adds an additional layer of security in which the data owner assumes full control over the disclosure of their identity data through an assertion issued from their mobile phone to the authorisation server (AS), which in turn issues an access token. This enables data owners to assert the authenticity of a request, while service providers and requestors also benefit from the correctness and freshness of the identity data disclosed to them.
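A minimal sketch of the kind of signed assertion a data owner's phone could send to the authorisation server, using the PyJWT library. The claim names, key, and endpoints are invented placeholders, not the paper's actual protocol messages.

```python
# Owner-issued assertion sketch: the phone signs a short-lived delegation
# claim; the AS verifies it before issuing an access token.
import time
import jwt  # PyJWT

SIGNING_KEY = "owner-signing-key"   # placeholder shared secret

assertion = jwt.encode({
    "iss": "owner-device",
    "aud": "https://as.example.org",
    "scope": "profile:read",
    "delegate": "service-xyz",      # entity the owner grants access to
    "iat": int(time.time()),
    "exp": int(time.time()) + 300,  # freshness: valid for 5 minutes
}, SIGNING_KEY, algorithm="HS256")

# The AS side: verify signature, audience, and expiry before issuing a token.
claims = jwt.decode(assertion, SIGNING_KEY, algorithms=["HS256"],
                    audience="https://as.example.org")
print(claims["delegate"], claims["scope"])
```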
2020-01-07
Hammami, Hamza, Brahmi, Hanen, Ben Yahia, Sadok.  2018.  Secured Outsourcing towards a Cloud Computing Environment Based on DNA Cryptography. 2018 International Conference on Information Networking (ICOIN). :31-36.

Cloud computing denotes an IT infrastructure where data and software are stored and processed remotely in a cloud provider's data center and are accessible via an Internet service. This paradigm increasingly attracts the attention of companies and has revolutionized today's marketplace owing to several factors, in particular its cost-effective architectures covering transmission, storage, and intensive data computing. However, like any new technology, cloud computing brings new security problems, which represent the main restraint on adopting this paradigm. For this reason, users are reluctant to move to the cloud because of concerns about security and the protection of private data, as well as a lack of trust in cloud service providers. This paper familiarizes readers with the field of security in the cloud computing paradigm while presenting our contribution in this context. The security scheme we propose allows a remote user to ensure a completely secure migration of all their data anywhere in the cloud through DNA cryptography. The experiments we carried out show that our security solution outperforms its competitors in terms of data integrity and confidentiality.
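A common textbook DNA-encoding construction (2 bits per nucleotide, combined with an XOR keystream) can illustrate the general idea; this is a hedged sketch and not the authors' exact scheme.

```python
# Toy DNA cryptography: XOR keystream, then map each byte to 4 nucleotides.
BASES = "ACGT"  # A=00, C=01, G=10, T=11

def dna_encrypt(data: bytes, key: bytes) -> str:
    out = []
    for i, byte in enumerate(data):
        x = byte ^ key[i % len(key)]      # keystream XOR
        for shift in (6, 4, 2, 0):        # 2 bits -> 1 nucleotide
            out.append(BASES[(x >> shift) & 0b11])
    return "".join(out)

def dna_decrypt(strand: str, key: bytes) -> bytes:
    data = bytearray()
    for i in range(0, len(strand), 4):
        x = 0
        for ch in strand[i:i + 4]:
            x = (x << 2) | BASES.index(ch)
        data.append(x ^ key[(i // 4) % len(key)])
    return bytes(data)

strand = dna_encrypt(b"cloud", b"k3y")
print(strand, dna_decrypt(strand, b"k3y"))
```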

2019-11-11
Kunihiro, Noboru, Lu, Wen-jie, Nishide, Takashi, Sakuma, Jun.  2018.  Outsourced Private Function Evaluation with Privacy Policy Enforcement. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :412–423.
We propose a novel framework for outsourced private function evaluation with privacy policy enforcement (OPFE-PPE). Suppose an evaluator evaluates a function with private data contributed by a data contributor, and a client obtains the result of the evaluation. OPFE-PPE enables a data contributor to enforce two different kinds of privacy policies on the function evaluation process: an evaluator policy and a client policy. An evaluator policy restricts the entities that can conduct function evaluation with the data; a client policy restricts the entities that can obtain the result of the evaluation. We demonstrate our construction with three applications: personalized medication, genetic epidemiology, and prediction by machine learning. Experimental results show that the overhead caused by enforcing the two privacy policies is less than 10% compared to function evaluation by homomorphic encryption without any privacy policy enforcement.
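The two policy checks can be sketched as simple gates around the evaluation call; the entity names and policy structure below are invented, and the real construction runs the evaluation under homomorphic encryption rather than in the clear.

```python
# Sketch: contributor-defined evaluator and client policies gate evaluation.
from dataclasses import dataclass, field

@dataclass
class PrivacyPolicy:
    allowed_evaluators: set = field(default_factory=set)
    allowed_clients: set = field(default_factory=set)

def evaluate(policy: PrivacyPolicy, evaluator: str, client: str, f, data):
    if evaluator not in policy.allowed_evaluators:
        raise PermissionError("evaluator policy violation")
    if client not in policy.allowed_clients:
        raise PermissionError("client policy violation")
    return f(data)  # in OPFE-PPE this step runs under homomorphic encryption

policy = PrivacyPolicy({"hospital-lab"}, {"dr-smith"})
print(evaluate(policy, "hospital-lab", "dr-smith", sum, [1, 2, 3]))
```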
2019-02-14
Schuette, J., Brost, G. S..  2018.  LUCON: Data Flow Control for Message-Based IoT Systems. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :289-299.

Today's emerging Industrial Internet of Things (IIoT) scenarios are characterized by the exchange of data between services across enterprises. Traditional access and usage control mechanisms can only determine whether data may be used by a subject, but lack an understanding of how it may be used. The ability to control the way data is processed is, however, crucial for enterprises to guarantee (and provide evidence of) compliant processing of critical data, as well as for users who need to control whether their private data may be analyzed or linked with additional information - a major concern in IoT applications that process personal information. In this paper, we introduce LUCON, a data-centric security policy framework for distributed systems that considers data flows by controlling how messages may be routed across services and how they are combined and processed. LUCON policies prevent information leaks, bind data usage to obligations, and enforce data flows across services. Policy enforcement is based on dynamic taint analysis at runtime and an upfront static verification of message routes against policies. We discuss the semantics of these two complementary enforcement models and illustrate how LUCON policies are compiled from a simple policy language into a first-order logic representation. We demonstrate the practical application of LUCON in a real-world IoT middleware and discuss its integration into Apache Camel. Finally, we evaluate the runtime impact of LUCON and discuss performance and scalability aspects.
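In the spirit of LUCON's dynamic enforcement, the sketch below propagates taint labels with each message and blocks routes whose labels a policy forbids. The labels, services, and policy rule are invented for illustration and do not reflect LUCON's actual policy language.

```python
# Taint-label routing sketch: labels travel with messages; a policy forbids
# certain labels at certain endpoints; declassifying services remove labels.
from dataclasses import dataclass, field

@dataclass
class Message:
    payload: dict
    labels: set = field(default_factory=set)   # taint labels travel with data

def anonymize(msg: Message) -> Message:
    msg.labels.discard("private")              # declassifying service
    return msg

POLICY = {"external-analytics": {"private"}}   # forbidden labels per endpoint

def route(msg: Message, endpoint: str) -> None:
    if msg.labels & POLICY.get(endpoint, set()):
        raise PermissionError(f"policy blocks {msg.labels} -> {endpoint}")
    print(f"delivered to {endpoint}")

m = Message({"temp": 21.5, "owner": "sensor-7"}, labels={"private"})
route(anonymize(m), "external-analytics")      # allowed after declassification
```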

2019-01-21
Zhang, Z., Li, Z., Xia, C., Cui, J., Ma, J..  2018.  H-Securebox: A Hardened Memory Data Protection Framework on ARM Devices. 2018 IEEE Third International Conference on Data Science in Cyberspace (DSC). :325–332.

ARM devices (mobile phones, IoT devices) are becoming more popular in our daily lives due to their low power consumption and cost. These devices carry a huge amount of users' private information, which attracts attackers' attention and increases the security risk. Operating systems (e.g., Android, Linux) implement many memory data protection strategies for users' private information. However, a monolithic OS may contain security vulnerabilities that can be exploited by an attacker to gain root or even kernel privilege. Once kernel privilege is obtained, all data protection strategies are void and users' private information can be taken away. In this paper, we propose a hardened memory data protection framework called H-Securebox to defeat kernel-level memory data theft. H-Securebox leverages ARM hardware virtualization techniques to protect data in memory at hypervisor privilege. We designed three types of H-Securebox for application developers to use. Even if attackers hold kernel privilege, they cannot touch private data inside an H-Securebox, since hypervisor privilege is higher than kernel privilege. With an implementation of the H-Securebox system assisted by a tiny hypervisor on a Raspberry Pi 2 development board, we measure the performance overhead of our system and conduct security evaluations. The results show that the overhead is negligible and that a malicious application with root or kernel privilege cannot access the private data protected by our system.

2018-03-26
Aslan, Ö., Samet, R..  2017.  Mitigating Cyber Security Attacks by Being Aware of Vulnerabilities and Bugs. 2017 International Conference on Cyberworlds (CW). :222–225.

Because the Internet makes human lives easier, many devices are connected to it daily. The private data of individuals and large companies, including health-related data, user bank accounts, and military and manufacturing data, are increasingly accessible via the Internet. Because almost all data is now accessible through the Internet, protecting these valuable assets has become a major concern. The goal of cyber security is to protect such assets from unauthorized use. Attackers use automated tools and manual techniques to penetrate systems by exploiting existing vulnerabilities and software bugs. To provide adequate security, attack methodologies, vulnerability concepts, and defence strategies should be thoroughly investigated. The main purpose of this study is to show that the patches released for existing vulnerabilities at the operating system (OS) level and in software programs do not completely prevent cyber-attacks. Instead, producing specific patches for each company and fixing software bugs with awareness of the software running on each specific system can provide better results. This study also demonstrates that firewalls, antivirus software, Windows Defender, and other prevention techniques are not sufficient to prevent attacks. Instead, this study examines different aspects of penetration testing to determine vulnerable applications and hosts using the Nmap and Metasploit frameworks. As a test case, a virtualized system is used that includes different versions of Windows and Linux OS.
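The discovery step of such a penetration test can be sketched by invoking Nmap for service and version detection and filtering for open ports; `-sV` and `-p` are standard Nmap flags, while the target range below is a placeholder lab network.

```python
# Service discovery sketch: run Nmap version detection and list open ports,
# giving candidate services to match against exploit modules.
import subprocess

result = subprocess.run(
    ["nmap", "-sV", "-p", "1-1024", "192.168.56.0/24"],
    capture_output=True, text=True, check=True)

for line in result.stdout.splitlines():
    if "open" in line:
        print(line)
```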

2018-02-15
Yonetani, R., Boddeti, V. N., Kitani, K. M., Sato, Y..  2017.  Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption. 2017 IEEE International Conference on Computer Vision (ICCV). :2059–2069.

We propose a privacy-preserving framework for learning visual classifiers by leveraging distributed private image data. This framework is designed to aggregate multiple classifiers updated locally using private data and to ensure that no private information about the data is exposed during and after its learning procedure. We utilize a homomorphic cryptosystem that can aggregate the local classifiers while they are encrypted and thus kept secret. To overcome the high computational cost of homomorphic encryption of high-dimensional classifiers, we (1) impose sparsity constraints on local classifier updates and (2) propose a novel efficient encryption scheme named doubly-permuted homomorphic encryption (DPHE), which is tailored to sparse high-dimensional data. DPHE (i) decomposes sparse data into its constituent non-zero values and their corresponding support indices, (ii) applies homomorphic encryption only to the non-zero values, and (iii) employs double permutations on the support indices to keep them secret. Our experimental evaluation on several public datasets shows that the proposed approach achieves comparable performance to state-of-the-art visual recognition methods while preserving privacy, and significantly outperforms other privacy-preserving methods.
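The sparsity trick at the heart of DPHE can be sketched with the python-paillier (`phe`) library: encrypt only the non-zero weights and permute their support indices. The real scheme uses double permutations and a full aggregation protocol; this simplified single-permutation sketch only illustrates the idea.

```python
# Encrypt only non-zero entries of a sparse vector; hide their positions
# behind a random permutation of the index space.
import random
from phe import paillier

pub, priv = paillier.generate_paillier_keypair()
w = [0.0, 0.0, 1.4, 0.0, -0.7, 0.0]            # sparse local classifier
support = [i for i, v in enumerate(w) if v != 0.0]

perm = random.sample(range(len(w)), len(w))    # hides true index positions
payload = [(perm[i], pub.encrypt(w[i])) for i in support]

# Only a party holding both the key and the permutation can reconstruct.
recovered = {perm.index(j): priv.decrypt(c) for j, c in payload}
print(recovered)                               # {2: 1.4, 4: -0.7}
```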

2018-02-06
Hassoon, I. A., Tapus, N., Jasim, A. C..  2017.  Enhance Privacy in Big Data and Cloud via Diff-Anonym Algorithm. 2017 16th RoEduNet Conference: Networking in Education and Research (RoEduNet). :1–5.

The main issue with big data in the cloud is that it always needs to be processed or used by a third party. It is very important for data owners and clients to have trust and a guarantee of privacy for the information stored in the cloud or analyzed as big data. Privacy models studied in previous research showed that privacy infringement for big data happens because of limitations, the privacy guarantee rate, or the dissemination of accurate data obtainable in the data set. In addition, there are various privacy models, and determining the best and most appropriate model to apply in the future, one that also guarantees big data privacy, requires further research and study. We survey some of the privacy models in order to determine the advantages and disadvantages of each model for privacy assurance of big data in the cloud. The present study also proposes a combined Diff-Anonym algorithm (K-anonymity and differential privacy models) to provide data anonymity with a guarantee of keeping a balance between the ambiguity of private data and the clarity of general data.
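A hedged sketch of the two ingredients being combined: generalize quasi-identifiers until every group has at least k records (k-anonymity), then answer an aggregate query with Laplace noise (differential privacy). The records, parameters, and the simplified sensitivity below are illustrative assumptions, not the paper's algorithm.

```python
# k-anonymity by zip-code generalization, plus a Laplace-noised aggregate.
import numpy as np

records = [("10001", 34), ("10002", 36), ("10005", 41), ("10007", 44)]
k, epsilon = 2, 0.5

def generalize(zip_code: str, level: int) -> str:
    """Drop the last `level` digits of a quasi-identifier."""
    return zip_code[: len(zip_code) - level] + "*" * level

level, groups = 0, {}
while True:
    groups = {}
    for z, age in records:
        groups.setdefault(generalize(z, level), []).append(age)
    if all(len(v) >= k for v in groups.values()):   # every group >= k
        break
    level += 1

true_mean = np.mean([age for _, age in records])
# scale = sensitivity / epsilon; sensitivity simplified to 1 for illustration
noisy_mean = true_mean + np.random.laplace(0, 1.0 / epsilon)
print(level, groups, round(noisy_mean, 1))
```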