Annotated Bibliography
In the modern age, credential-based authentication systems no longer provide the level of security that many organisations and their services require. Trust in passwords has plummeted in recent years, with waves of cyber attacks predicated on compromised and stolen credentials. This method of authentication is also heavily reliant on the individual user's choice of password. Additional layers of security can be built on top of credential-based authentication using a risk-based approach, which preserves the seamless authentication experience for the end user. One method of adding such security to a risk-based authentication framework is keystroke dynamics. Monitoring how users type produces a digital signature unique to each individual. Learning this behaviour allows anomalous typing patterns, such as those produced by attackers using stolen credentials, to be dynamically flagged as a potential risk of fraud. Methods from statistics and machine learning have been explored to implement such solutions. This paper examines an autoencoder model for learning the keystroke dynamics of specific users. The results show an improvement over traditional, tried-and-tested statistical approaches, with an Equal Error Rate (EER) of 6.51%, along with the additional benefits of relatively low training times and less reliance on feature engineering.
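The following minimal sketch illustrates the general autoencoder approach to keystroke anomaly detection: a network is trained to reconstruct timing vectors from genuine sessions, and samples with high reconstruction error are flagged. The architecture, features, and threshold below are illustrative assumptions, not the paper's model (Python, using scikit-learn).

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: one row per typing sample, columns are
# timing features (e.g. key hold times and inter-key latencies, in seconds).
X_genuine = rng.normal(loc=0.12, scale=0.02, size=(500, 20))

# The autoencoder is approximated as an MLP trained to reproduce its own
# input through a narrow hidden layer (the bottleneck).
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ae.fit(X_genuine, X_genuine)

def reconstruction_error(X):
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

# Assumed decision rule: flag samples whose reconstruction error exceeds
# the 95th percentile observed on genuine data.
threshold = np.percentile(reconstruction_error(X_genuine), 95)

X_attacker = rng.normal(loc=0.20, scale=0.05, size=(10, 20))
print(reconstruction_error(X_attacker) > threshold)  # mostly True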
Keystroke dynamics studies the way in which users input text via their keyboards, which is unique to each individual and can form a component of a behavioral biometric system to improve existing account security. Keystroke dynamics systems for free-text data use n-graphs, which measure the timing between consecutive keystrokes, to distinguish between users. Many algorithms require 500, 1,000, or more keystrokes to achieve EERs below 10%. In this paper, we propose an instance-based graph comparison algorithm to reduce the number of keystrokes required to authenticate users. Commonly used features such as monographs and digraphs are investigated. Feature importance is determined and used to construct a fused classifier. Detection error tradeoff (DET) curves are produced for different numbers of keystrokes. The fused classifier outperforms the state of the art with EERs of 7.9%, 5.7%, 3.4%, and 2.7% for test samples of 50, 100, 200, and 500 keystrokes.
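As a rough illustration of the digraph features discussed above, the sketch below extracts press-to-press latencies for consecutive key pairs and compares two samples by mean latency difference. The log format and distance measure are assumptions for illustration, not the paper's instance-based algorithm.

from collections import defaultdict

# Hypothetical keystroke log: (key, press_time_ms) in typing order.
events = [("t", 0), ("h", 95), ("e", 180), ("t", 400), ("h", 510)]

def digraph_latencies(events):
    """Map each ordered key pair to its observed press-to-press latencies."""
    table = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        table[(k1, k2)].append(t2 - t1)
    return table

ref = digraph_latencies(events)                    # enrollment sample
test = digraph_latencies([("t", 0), ("h", 120)])   # short test sample

# Compare shared digraphs by mean absolute latency difference; in an
# instance-based scheme, smaller distances suggest the same typist.
shared = set(ref) & set(test)
dist = sum(abs(sum(ref[d]) / len(ref[d]) - sum(test[d]) / len(test[d]))
           for d in shared) / max(len(shared), 1)
print(dist)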
Generally, the methods of authentication and identification used to assert users' credentials directly affect the security of the offered services. In a federated environment, service owners must trust external credentials and make access control decisions based on Assurance Information received from remote Identity Providers (IdPs). Standards bodies (e.g. NIST and the IETF) have tried to provide a coherent and justifiable architecture for evaluating Assurance Information and defining Assurance Levels (ALs). Expensive deployment, service owners' limited authority to define their own requirements, and the lack of compatibility between heterogeneous existing standards are some of the unsolved concerns that hinder developers from openly adopting published work. By assessing the advantages and disadvantages of well-known models, a comprehensive, flexible, and compatible solution is proposed to evaluate and deploy assurance levels through a central entity called a Proxy.
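A hedged sketch of what such a central Proxy might do when normalizing heterogeneous assurance information into a single assurance level: the attribute names and thresholds below are invented for illustration and do not reproduce the paper's evaluation rules.

def assurance_level(assertion: dict) -> int:
    """Map IdP-supplied assurance attributes to an AL in 1..4 (illustrative)."""
    level = 1
    if assertion.get("identity_proofing") == "in_person":
        level = max(level, 2)
    if assertion.get("auth_factors", 1) >= 2:
        level = max(level, 3)
    if assertion.get("hardware_token", False):
        level = max(level, 4)
    return level

# A service owner can then express its own requirement as a minimum AL.
claim = {"identity_proofing": "in_person", "auth_factors": 2}
print(assurance_level(claim) >= 3)  # True: meets a min-AL-3 policy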
Research on keystroke dynamics has great potential to offer continuous authentication that complements conventional authentication methods in combating insider threats and identity theft before more harm is done to genuine users. Unfortunately, the large amounts of data required by free-text keystroke authentication often contain personally identifiable information (PII) and personally sensitive information, such as a user's first and last name, the username and password for an account, bank card numbers, and social security numbers. As a result, there are privacy risks associated with keystroke data that must be mitigated before the data are shared with other researchers. We conduct a systematic study to remove PII from a recent large keystroke dataset. We find substantial amounts of PII in the dataset, including names, usernames and passwords, social security numbers, and bank card numbers, which, if leaked, may lead to various harms to the user, including personal embarrassment, blackmail, financial loss, and identity theft. We thoroughly evaluate the effectiveness of our detection program for each kind of PII. We demonstrate that our PII detection program can achieve near-perfect recall at the expense of losing some useful information (lower precision). Finally, we demonstrate that the removal of PII from the original dataset has only a negligible impact on the detection error tradeoff of the free-text authentication algorithm by Gunetti and Picardi. We hope that this experience report will be useful in informing the design of privacy removal in future keystroke-dynamics-based user authentication systems.
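The sketch below shows the kind of regex-based detection such a study relies on for structured PII (here, SSNs and card numbers). The patterns are deliberately simple and will over-match, mirroring the recall-over-precision trade-off reported above; they are illustrative, not the paper's detection program.

import re

# Simplified patterns for two kinds of structured PII. Real detectors
# would be broader (names, usernames) and tuned for near-perfect recall.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

print(redact("my ssn is 123-45-6789 and card 4111 1111 1111 1111"))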
Location-based services (LBSs) provide services based on a user's current or past location. Over the past decade, LBSs have become more popular as a result of the widespread use of mobile devices with positioning functions. Location information is secondary information that can provide personal insight into one's life, an issue associated with the sharing of location data in the cloud. For example, a hospital is a public space, and the actual location of the hospital does not carry any sensitive information; however, it may become sensitive if the specialty of the hospital is analyzed. In this paper, we propose a design that combines methods for providing data privacy protection for LBSs that use cloud services. The work is built on a zero-trust basis, and access to the system is managed through different levels. The proposal is based on a model that stores user location data in supplementary servers rather than in untrusted third-party applications. The approach of the present research is to analyze the privacy protection possibilities of data partitioning. The data collected from the different sources are distributed across different servers according to a partitioning model based on a multi-level policy. Third-party applications are granted access only to designated servers, and the privacy of the user profile is also ensured in each server, as the servers themselves are not trusted.
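As a toy illustration of the partitioning idea, the sketch below routes location records to different servers by sensitivity level and strips direct identity from sensitive records. The labels and routing rule are assumptions for illustration, not the paper's multi-level policy.

RECORDS = [
    {"user": "u1", "place": "park", "sensitivity": "low"},
    {"user": "u1", "place": "oncology_clinic", "sensitivity": "high"},
]

# Each sensitivity level maps to its own server, so no single server
# (or the third-party application querying it) sees the full profile.
SERVERS = {"low": [], "high": []}

for rec in RECORDS:
    stored = dict(rec)
    # Keep identity and sensitive location on different servers by
    # removing the direct identifier from high-sensitivity records.
    if rec["sensitivity"] == "high":
        stored.pop("user")
    SERVERS[rec["sensitivity"]].append(stored)

print(SERVERS["high"])  # sensitive record stored without direct identity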
We propose an approach that allows data owners to trade their data in digital data market scenarios while keeping control over them. Our solution is based on a combination of selective encryption and smart contracts deployed on a blockchain, and it ensures that only authorized users who have paid an agreed amount can access a data item. We propose a safe interaction protocol for regulating the interplay between a data owner and subjects wishing to purchase (a subset of) her data, and an audit process for counteracting possible misbehavior by any of the interacting parties. Our solution takes a step towards the realization of data market platforms where owners can benefit from trading their data while maintaining control.
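A minimal sketch of the selective-encryption building block, assuming one symmetric key per data item so that selling an item amounts to releasing its key; the smart-contract and audit layers of the actual solution are omitted. It uses the Fernet primitive from the Python cryptography package.

from cryptography.fernet import Fernet

items = {"heart_rate": b"72 bpm", "location": b"51.5N,0.1W"}

# One key per item; the owner keeps this key table and releases keys
# selectively, item by item, as purchases complete.
keys = {name: Fernet.generate_key() for name in items}
ciphertexts = {name: Fernet(keys[name]).encrypt(data)
               for name, data in items.items()}

# A buyer who paid for "heart_rate" receives only that key and can
# decrypt only that item, leaving the rest of the data protected.
buyer_key = keys["heart_rate"]
print(Fernet(buyer_key).decrypt(ciphertexts["heart_rate"]))  # b'72 bpm'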
With the advent of the big data era, information systems have exhibited some new features, including boundary obfuscation, system virtualization, unstructured and diversified data types, and low coupling between functions and data. These features not only create a significant difference between big data technology (DT) and information technology (IT) but also drive the upgrading and evolution of network security technology. In response to these changes, in this paper we compare the characteristics of the IT era and the DT era, and then propose four DT security principles: privacy, integrity, traceability, and controllability, as well as an active and dynamic defense strategy based on "propagation prediction, audit prediction, and dynamic management and control". We further discuss the security challenges faced by DT and the corresponding assurance strategies. On this basis, big data security technologies can be divided into four levels: elimination, continuation, improvement, and innovation. These technologies are analyzed, organized, and explained according to six categories: access control, identification and authentication, data encryption, data privacy, intrusion prevention, and security audit and disaster recovery. The results will support the evolution of security technologies in the DT era, the construction of big data platforms, the design of security assurance strategies, and the choice of security technologies suitable for big data.
Searchable encryption (SE) schemes provide security and privacy for cloud data. Existing SE approaches enable multiple users to perform search operations using schemes such as Broadcast Encryption (BE) and Attribute-Based Encryption (ABE). However, these schemes do not allow multiple users to search over the encrypted data of multiple owners. Some SE schemes involve a Proxy Server (PS) that allows multiple users to perform the search operation. However, these approaches impose a huge computational burden on the PS due to the repeated encryption of user queries for transformation purposes, needed to ensure that a user's query is searchable over the encrypted data of multiple owners. Hence, to eliminate this computational burden on the PS, this paper proposes a secure proxy server approach that performs the search operation without transforming user queries. The approach also returns the top-k documents most relevant to a user's query using a Euclidean-distance similarity measure. An experimental study shows that the approach is efficient with respect to search time and accuracy.
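The sketch below isolates the top-k ranking step: documents and queries are represented as numeric vectors and ranked by Euclidean distance. The vectors here are plaintext placeholders; protecting them in an encrypted index is the substance of the scheme and is not shown.

import numpy as np

# Hypothetical document and query vectors (e.g. term-weight vectors).
doc_vectors = np.array([[0.9, 0.1, 0.0],
                        [0.2, 0.8, 0.1],
                        [0.1, 0.1, 0.9]])
query = np.array([0.8, 0.2, 0.0])

# Rank documents by Euclidean distance to the query; the k closest
# documents are returned as the top-k result set.
dists = np.linalg.norm(doc_vectors - query, axis=1)
k = 2
top_k = np.argsort(dists)[:k]
print(top_k, dists[top_k])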
Cloud computing is undoubtedly an unparalleled technology in rapidly developing industries. Protecting sensitive files stored in the cloud from being accessed by malicious attackers is essential to the success of the cloud. In proxy re-encryption schemes, users delegate their encrypted files to other users by means of re-encryption keys, which elegantly transfers the users' burden to the cloud servers. Moreover, one can adopt conditional proxy re-encryption schemes to enforce an access control policy on the files to be shared. However, we observe that the size of the re-encryption keys grows linearly with the number of condition values, which may be impractical on devices with low computational power. In this paper, we combine a key-aggregate approach and a proxy re-encryption scheme into a key-aggregate proxy re-encryption scheme. To the best of our knowledge, the proposed scheme is the first key-aggregate proxy re-encryption scheme; notably, the size of its re-encryption keys is constant.
Since the term "Fog Computing" was coined by Cisco Systems in 2012, the security and privacy issues of this promising paradigm have remained open challenges. Among these challenges, access control is a crucial concern for all cloud-computing-like systems (e.g. fog computing and mobile edge computing) in the IoT era, and assigning the precise level of access in such an inherently scalable, heterogeneous, and dynamic environment is not easy. This work defines the uncertainty challenge for the authentication phase of access control in fog computing: on the one hand, fog has a number of characteristics that amplify uncertainty in authentication, and on the other hand, applying traditional access control models does not yield a flexible and resilient solution. We therefore propose a novel prediction model based on an extension of the Attribute-Based Access Control (ABAC) model. Our data-driven model is able to handle uncertainty in authentication and to account for the mobility of mobile edge devices. In doing so, we built our model using, and comparing, four supervised classification algorithms: Decision Tree, Naïve Bayes, Logistic Regression, and Support Vector Machine. Our model achieves an authentication accuracy of 88.14% using Logistic Regression.
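As a hedged illustration of the data-driven idea, the sketch below trains a Logistic Regression classifier on synthetic attribute vectors to predict authentication decisions. The feature set and data are invented for illustration; the 88.14% figure comes from the paper's real evaluation, not from this toy.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical ABAC attributes per request: [device_trust, location_risk,
# time_of_day, past_failures]; label 1 = authenticate, 0 = reject.
X = rng.random((1000, 4))
y = (X[:, 0] - X[:, 1] - 0.3 * X[:, 3]
     + 0.1 * rng.standard_normal(1000) > 0).astype(int)

# Train on historical decisions, then predict for new requests; the
# paper compares this against Decision Tree, Naive Bayes, and SVM.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))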