Biblio
Mobile security remains a concern for multiple stakeholders. Safe user behavior is key to avoiding and mitigating mobile threats. The research used a survey design to capture key constructs of mobile users' threat avoidance behavior. Analysis revealed no significant difference between Android and iOS users in the two key drivers of secure behavior, threat appraisal and coping appraisal. However, statistically significant differences were found between users of the two operating systems in avoidance motivation and avoidance behavior. This indicates that existing threat avoidance models may be insufficient to comprehensively capture the factors that affect mobile user behavior. A newly introduced variable, perceived security, revealed differences in how users of the two operating systems perceive their level of protection, providing a new direction for research into mobile security.
Cyber attacks and their associated costs have made cybersecurity a vital part of any system, and user behavior and decisions still play a major role in coping with these risks. We developed a model of optimal investment and human decisions with security measures, given that the effectiveness of each measure depends partly on the performance of the others. In an online experiment, participants classified events as malicious or non-malicious based on the value of an observed variable. Prior to making these decisions, they had invested in three security measures: a firewall, an IDS, and cyber insurance. In three experimental conditions, maximal investment in only one of the measures was optimal, while in a fourth condition participants should not have invested in any of the measures. A previous paper presents the analysis of the investment decisions; this paper reports users' classifications of events when interacting with these systems. Using the security mechanisms helped participants gain higher scores, and participants benefited in particular from purchasing the IDS and/or cyber insurance. Participants also showed higher sensitivity and compliance with the alerting system when they could benefit from investing in the IDS. However, participants did not adjust their behavior optimally to the security settings they had chosen. The results demonstrate the complex nature of risk-related behaviors and the need to consider human abilities and biases when designing cybersecurity systems.
Before accessing Internet websites or applications, network users first ask the Domain Name System (DNS) for the corresponding IP address, and the user's browser or application then accesses the required resources through that IP address. The DNS server log keeps records of all users' queries. This paper analyzes users' network access behavior from campus DNS logs and constructs a behavior fingerprint model for each user. Comparing fingerprints across users, and the same user's fingerprints across time periods, makes it possible to determine whether a user's access is abnormal or safe and whether the host is infected with malicious code. Once abnormal access behavior is detected, the spread of viruses, Trojans, bots, and attacks can be prevented with corresponding techniques, further protecting users' network access security. Finally, an analysis of user behavior fingerprints from campus network access is presented.
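A minimal sketch of the fingerprinting idea, assuming a per-user fingerprint is simply the normalized frequency distribution of queried domains in a period and that drift is measured with cosine similarity; the domain names and threshold below are illustrative, not the paper's model:

```python
from collections import Counter
from math import sqrt

def fingerprint(dns_queries):
    """Build a normalized domain-frequency fingerprint from one user's
    DNS queries in one time period."""
    counts = Counter(dns_queries)
    total = sum(counts.values())
    return {domain: n / total for domain, n in counts.items()}

def similarity(fp_a, fp_b):
    """Cosine similarity between two fingerprints (0 = disjoint, 1 = identical)."""
    domains = set(fp_a) | set(fp_b)
    dot = sum(fp_a.get(d, 0) * fp_b.get(d, 0) for d in domains)
    norm = sqrt(sum(v * v for v in fp_a.values())) * sqrt(sum(v * v for v in fp_b.values()))
    return dot / norm if norm else 0.0

# Illustrative data: the same user's queries in two periods.
week1 = ["mail.campus.edu", "lms.campus.edu", "mail.campus.edu", "news.example.com"]
week2 = ["mail.campus.edu", "lms.campus.edu", "evil-c2.example.net", "evil-c2.example.net"]

ALERT_THRESHOLD = 0.8  # illustrative value; would be tuned on real logs
if similarity(fingerprint(week1), fingerprint(week2)) < ALERT_THRESHOLD:
    print("behavior drift: flag user for inspection (possible malicious code)")
```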
Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant's wage. We find that more than 50% of our participants made rational (i.e., utility-optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users' decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users' awareness of risks and context (R² = 0.61). We also find evidence of endowment effects, as seen in other areas of economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a "one-size-fits-all" emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.
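As a rough illustration of the utility comparison behind a "rational" choice, the sketch below weighs the expected loss avoided by enabling two-factor authentication against the time cost of using it, valued at the user's wage; all numbers and the rational_to_adopt_2fa helper are hypothetical, not the paper's estimation procedure:

```python
def rational_to_adopt_2fa(p_compromise, account_value, setup_minutes,
                          extra_minutes_per_login, logins, hourly_wage,
                          risk_reduction=1.0):
    """Return True if the expected loss avoided by enabling 2FA exceeds its
    time cost, valuing the user's time at their hourly wage.
    All parameters are illustrative placeholders."""
    expected_loss_avoided = p_compromise * risk_reduction * account_value
    time_cost = (setup_minutes + extra_minutes_per_login * logins) / 60 * hourly_wage
    return expected_loss_avoided > time_cost

# Example: 5% compromise risk on a $500 balance, 3-minute setup,
# 0.5 extra minutes per login over 20 logins, $15/hour wage.
print(rational_to_adopt_2fa(0.05, 500, 3, 0.5, 20, 15))  # True: adoption is utility-optimal
```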
Among the various challenges faced by P2P file sharing systems like BitTorrent, the most common attack on the basic foundation of such systems is free-riding. Free-riders are users in the file sharing network who consume resources from the P2P network without contributing any, while white-washers are a more specific category of free-riders who frequently leave the system and reappear under new identities to escape the penalties imposed by the network. BitTorrent, being a collaborative distributed platform, requires techniques for discouraging and punishing such user behavior. In this paper, we propose that "instead of punishing, we may focus more on rewarding the honest peers". This approach is presented as an alternative to other peer-rewarding mechanisms built for the BitTorrent platform, such as tit-for-tat [10] and reciprocity-based schemes. The prime objective of BitTrusty is to provide incentives to cooperative peers by rewarding them with blockchain-based cryptocoins. We anticipate three ways of achieving this objective. We are further investigating how to integrate these two distributed-systems technologies, P2P file sharing and blockchain; with this new paradigm, interesting research areas can be developed, both in the field of P2P cryptocurrency networks and where these networks are combined with other distributed scenarios.
Security has always been a major issue in the cloud. Data is the most valuable and vulnerable asset and the prime target of attackers. If data is lost, the privacy and security of every cloud user are compromised. Even when a cloud network is secured externally, the threat of an internal attacker remains. Internal attackers compromise a vulnerable user node and gain access to the system; connected to the cloud network internally, they launch attacks while pretending to be trusted users. Machine learning approaches are widely used for cloud security, but existing machine learning based security approaches classify a node as misbehaving based on short-term behavioral data and do not differentiate whether a misbehaving node is a malicious node or a broken node. To address this problem, this paper proposes an Improvised Long Short-Term Memory (ILSTM) model that learns a user's behavior, automatically trains itself, and stores the behavioral data. The model can easily classify user behavior as normal or abnormal. The proposed ILSTM not only identifies an anomalous node but also determines, using a calculated trust factor, whether a misbehaving node is a broken node, a new user node, or a compromised node. The proposed model detects attacks accurately while reducing false alarms in the cloud network.
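The paper's ILSTM is not specified here; as a hedged point of reference, a plain LSTM sequence classifier over per-user activity windows might look like the following sketch (synthetic data and illustrative shapes, not the paper's architecture):

```python
import numpy as np
import tensorflow as tf

TIMESTEPS, FEATURES = 20, 4   # e.g. per-interval request rate, data volume, ...
# Synthetic behavioral windows and labels (0 = normal, 1 = abnormal).
X = np.random.rand(200, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(200, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)

# Score a new behavioral window; a probability near 1 flags abnormal behavior.
print(model.predict(X[:1], verbose=0))
```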
The development of cloud computing technology and the popularization of cloud services have had a great impact on industry. On the one hand, cloud technology improves network operating efficiency and reduces cost. On the other hand, cloud resources can be accessed from any network device, which increases the chance that a user's identity is misrepresented and leads to many security problems. Login authentication that relies solely on the current user identity therefore cannot fully satisfy the actual security need of controlling malicious user access to cloud resources. Users are both requesters and providers of cloud resources, and the credibility of user behavior relates directly to cloud safety, so it is very important to evaluate whether user behavior on the cloud can be trusted. In this paper, a method based on multilevel fuzzy comprehensive evaluation is studied, and indicators of user behavior credibility are discussed thoroughly.
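A single level of a fuzzy comprehensive evaluation can be sketched as a weighted composition of a membership matrix; in a multilevel scheme, the result vector of each lower level becomes one row of the next level's matrix. The indicators, weights, and membership degrees below are invented for illustration:

```python
import numpy as np

# Rows of R: behavior indicators (illustrative); columns: membership degrees in
# the credibility grades (trusted, uncertain, untrusted).
R = np.array([
    [0.7, 0.2, 0.1],   # login regularity
    [0.5, 0.3, 0.2],   # resource-usage pattern
    [0.2, 0.3, 0.5],   # failed-authentication rate
])
W = np.array([0.4, 0.3, 0.3])   # indicator weights, summing to 1

B = W @ R                        # weighted membership in each grade
B = B / B.sum()                  # normalize
grades = ["trusted", "uncertain", "untrusted"]
print(dict(zip(grades, B.round(3))))
print("verdict:", grades[int(B.argmax())])   # maximum-membership principle
```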
Data loss is perceived as one of the major threats to cloud storage. Consequently, the security community has developed several challenge-response protocols that allow a user to remotely verify whether an outsourced file is still intact. However, two important practical problems have not yet been considered. First, clients commonly outsource multiple files of different sizes, raising the question of how to formalize such a scheme and, in particular, how to ensure that all files can be audited simultaneously. Second, if auditing of the files fails, existing schemes do not give the client any method to prove whether the original files are still recoverable. We address both problems and describe appropriate solutions. The first problem is tackled by providing a new type of "Proofs of Retrievability" scheme, enabling a client to check all files simultaneously in a compact way. The second problem is solved by defining a novel procedure called "Proofs of Recoverability", enabling a client to obtain an assurance of whether a file is recoverable or irreparably damaged. Finally, we present a combination of both schemes that allows the client to check the recoverability of all her original files.
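For intuition only, a naive spot-check audit (not the paper's compact multi-file Proofs of Retrievability scheme) can be sketched as the client keeping per-block HMAC tags and later challenging the server for randomly chosen blocks:

```python
import hmac, hashlib, os, random

BLOCK = 4096
key = os.urandom(32)
data = os.urandom(10 * BLOCK)                      # the "outsourced" file
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

# Client side: store only the key and one tag per block.
tags = [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def server_respond(index):
    """Server returns the requested block (here it still holds the data intact)."""
    return blocks[index]

# Audit: challenge a few random block indices and verify their tags.
for i in random.sample(range(len(blocks)), 3):
    ok = hmac.compare_digest(tags[i],
                             hmac.new(key, server_respond(i), hashlib.sha256).digest())
    print(f"block {i}: {'intact' if ok else 'corrupted or missing'}")
```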
Enhancing trust among service providers and end-users with respect to data protection is an urgent matter in the growing information society. In response, CREDENTIAL proposes an innovative cloud-based service for storing, managing, and sharing of digital identity information and other highly critical personal data with a demonstrably higher level of security than other current solutions. CREDENTIAL enables end-to-end confidentiality and authenticity as well as improved privacy in cloud-based identity management and data sharing scenarios. In this paper, besides clarifying the vision and use cases, we focus on the adoption of CREDENTIAL. Firstly, for adoption by providers, we elaborate on the functionality of CREDENTIAL, the services implementing these functions, and the physical architecture needed to deploy such services. Secondly, we investigate factors from related research that could be used to facilitate CREDENTIAL's adoption and list key benefits as convincing arguments.
Cloud computing is a broad architecture based on diverse models for providing different software and hardware services. The cloud computing paradigm attracts different users because of its benefits, such as high resource elasticity, expense reduction, scalability, and simplicity, which provide significant savings in terms of investment and workforce. However, the new approaches introduced by the cloud, related to computation outsourcing, distributed resources, multi-tenancy, high dynamism of the model, data warehousing, and the non-transparent style of the cloud, increase security and privacy concerns and make building and maintaining trust between cloud service providers and consumers a critical security challenge. This paper proposes a new approach to improve the security of data in cloud computing. It suggests a classification model that categorizes data before it is introduced into an encryption system suited to its category. Since data in the cloud does not all have the same sensitivity level, encrypting everything with the same algorithm can lead either to insufficient security or to wasted resources. With this method we aim to optimize resource consumption and computation cost while ensuring data confidentiality.
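A minimal sketch of the classify-then-encrypt idea, assuming a toy sensitivity classifier and an illustrative mapping of categories to AES-GCM key sizes (the paper's actual categories and algorithm choices may differ):

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

# Illustrative mapping: public data stays in the clear, internal data gets
# AES-128-GCM, confidential data gets AES-256-GCM.
KEY_BITS = {"public": None, "internal": 128, "confidential": 256}

def classify(record: dict) -> str:
    """Toy classifier; a real system would use content analysis or user labels."""
    if "ssn" in record or "card_number" in record:
        return "confidential"
    return "internal" if record.get("owner") else "public"

def protect(record: dict, payload: bytes):
    category = classify(record)
    bits = KEY_BITS[category]
    if bits is None:                       # public data is stored unencrypted
        return category, None, None, payload
    key = AESGCM.generate_key(bit_length=bits)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, payload, None)
    return category, key, nonce, ciphertext

print(protect({"owner": "alice", "ssn": "..."}, b"tax return 2023")[0])  # confidential
```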
Users today enjoy access to a wealth of services that rely on user-contributed data, such as recommendation services, prediction services, and services that help classify and interpret data. The quality of such services inescapably relies on trustworthy contributions from users. However, validating the trustworthiness of contributions may rely on privacy-sensitive contextual data about the user, such as a user's location or usage habits, creating a conflict between privacy and trust: users benefit from a higher-quality service that identifies and removes illegitimate user contributions, but, at the same time, they may be reluctant to let the service access their private information to achieve this high quality. We argue that this conflict can be resolved with a pragmatic Glimmer of Trust, which allows services to validate user contributions in a trustworthy way without forfeiting user privacy. We describe how trustworthy hardware such as Intel's SGX can be used on the client side (in contrast to much recent work exploring SGX in cloud services) to realize the Glimmer architecture, and demonstrate how this realization is able to resolve the tension between privacy and trust in a variety of cases.
The Internet of Things (IoT) connects sensors and devices. Smart devices have been upgraded from standalone devices that can handle only one specific task at a time to interactive devices that can handle multiple tasks. However, this technology is exposed to many vulnerabilities, especially malicious attacks on the devices. Given IoT resource constraints and the weak security mechanisms applied, malicious attacks can exploit sensor vulnerabilities to provide wrong data, which can lead to wrong interpretation and actuation for users. Due to these problems, this short paper presents an event-based access control framework that considers integrity, privacy, and authenticity in IoT devices.
In this paper, we provide a secure and efficient outsourcing scheme for multi-owner data sharing on the cloud. More specifically, we consider the scenario where multiple data owners outsource their data to an untrusted cloud provider and allow authorized users to query the resulting database, composed of the encrypted data contributed by the different owners. The scheme relies on a proxy re-encryption technique implemented with an ElGamal elliptic-curve (ECC) cryptosystem. We experimentally assess the efficiency of the implementation in terms of computation time, including the key translation process and the data encryption and re-encryption modules, and show that it improves over previous proposals.
The present study's primary objective is to determine whether gender, combined with the educational background of Internet users, has an effect on the way online privacy is perceived and practiced within cloud services, specifically in social networking, e-commerce, and online banking. An online questionnaire was distributed through e-mail and social media (Facebook, LinkedIn, and Google+). Our primary hypothesis is that an interrelationship may exist among a user's gender, educational background, and the way the user perceives and acts regarding online privacy. An analysis of a representative sample of Greek Internet users revealed that gender affects online users' awareness of online privacy as well as the way they act upon it. Furthermore, we found that users' educational background also correlates with online privacy awareness and behavior.
The use of cloud computing and cloud federations has been the focus of many studies in recent years. Many of these infrastructures delegate user authentication to Identity Providers. Since these services are available through the Internet, concerns about the confidentiality of user credentials and attributes are high. The main focus of this work is the security of credentials and user attributes in authentication infrastructures, exploring secret sharing techniques and using cloud federations as a base for storing this information.
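One concrete way to apply secret sharing in this setting is a textbook Shamir (t, n) split of a credential, with each share stored at a different federation member; the sketch below uses illustrative parameters and is not necessarily the paper's exact construction:

```python
import random

P = 2**127 - 1          # a Mersenne prime large enough for a short secret

def split(secret: int, n: int, t: int):
    """Create n shares of the secret; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

credential = int.from_bytes(b"alice:hunter2", "big")
shares = split(credential, n=5, t=3)             # e.g. one share per federation member
print(reconstruct(random.sample(shares, 3)) == credential)   # True
```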
The growth in cloud-based services tailored for users means more and more personal data is being exploited, and with this comes the need to better handle user privacy. Software technologies concentrating on privacy preservation typically present a one-size-fits-all solution. However, users have different viewpoints on what privacy means to them; therefore, configurable and dynamic privacy-preserving solutions have the potential to create useful and tailored services without breaching any user's privacy. In this paper, we present a model of user-centered privacy that can be used to analyse a service's behaviour against user preferences, so that a user can be informed of the privacy implications of that service and of the fine-grained actions they can take to maintain their privacy. We show through a study that the user-based privacy model can: i) provide customizable privacy aligned with user needs; and ii) identify potential privacy breaches.
Cloud computing services have gained a lot of traction in recent years, but the shift of data from user-owned desktops and laptops to cloud storage systems has serious data privacy implications for users. Even though the privacy notices supplied by cloud vendors detail data practices and the options available to protect privacy, their lengthy, free-flowing textual format is often difficult for users to comprehend. We therefore propose a simplified presentation format for privacy practices and choices, termed a "Privacy-Dashboard", based on Protection Motivation Theory (PMT), and we intend to test the effectiveness of the presentation format using cognitive-fit theory. We also indirectly model cloud privacy concerns using an Item-Response Theory (IRT) model. We contribute to the information privacy literature by addressing the gap around privacy protection artifacts designed to improve the privacy protection behaviors of individual users. The proposed Privacy-Dashboard would provide easy-to-use choice mechanisms that allow consumers to control how their data is collected and used.
Internet-of-Things devices often collect and transmit sensitive information like camera footage, health monitoring data, or whether someone is home. These devices protect data in transit with end-to-end encryption, typically using TLS connections between devices and associated cloud services. But these TLS connections also prevent device owners from observing what their own devices are saying about them. Unlike in traditional Internet applications, where the end user controls one end of a connection (e.g., their web browser) and can observe its communication, Internet-of-Things vendors typically control the software in both the device and the cloud. As a result, owners have no way to audit the behavior of their own devices, leaving them little choice but to hope that these devices are transmitting only what they should. This paper presents TLS–Rotate and Release (TLS-RaR), a system that allows device owners (e.g., consumers, security researchers, and consumer watchdogs) to authorize devices, called auditors, to decrypt and verify recent TLS traffic without compromising future traffic. Unlike prior work, TLS-RaR requires no changes to TLS's wire format or cipher suites, and it allows the device's owner to conduct a surprise inspection of recent traffic, without prior notice to the device that its communications will be audited.
We present OpenFace, our new open-source face recognition system that approaches state-of-the-art accuracy. Integrating OpenFace with inter-frame tracking, we build RTFace, a mechanism for denaturing video streams that selectively blurs faces according to specified policies at full frame rates. This enables privacy management for live video analytics while providing a secure approach for handling retrospective policy exceptions. Finally, we present a scalable, privacy-aware architecture for large camera networks using RTFace.
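RTFace itself builds on OpenFace recognition and inter-frame tracking; as a much simpler illustration of per-frame denaturing, the sketch below blurs faces detected with an OpenCV Haar cascade according to a placeholder policy function:

```python
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def denature(frame, policy=lambda face_box: True):
    """Blur every detected face for which policy(...) says it must be hidden."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        if policy((x, y, w, h)):
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(
                frame[y:y + h, x:x + w], (51, 51), 0)
    return frame

cap = cv2.VideoCapture(0)                 # or a video file / RTSP stream
ok, frame = cap.read()
if ok:
    cv2.imwrite("denatured.jpg", denature(frame))
cap.release()
```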
To address the problem of internal attackers in database systems, anomaly detection of user behaviour is used to identify such attackers. Using Discrete-Time Markov Chains (DTMC), an anomaly detection system for user behavior is proposed that can detect internal threats to a database system. First, we analyze SQL queries, which serve as user behavior features. Then, we use the DTMC model to extract the behavior features of a normal user and of the user under examination and compare them. If the deviation between the features exceeds a threshold, the examined user's behavior is judged anomalous. Experiments test the feasibility of the detection system, and the results show that it can detect normal and abnormal user behavior precisely and effectively.
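A simplified sketch of the DTMC profile comparison, assuming SQL command types as the chain's states and the elementwise absolute difference between transition matrices as the deviation measure; the sequences and threshold are illustrative:

```python
import numpy as np

STATES = ["SELECT", "INSERT", "UPDATE", "DELETE"]
IDX = {s: i for i, s in enumerate(STATES)}

def transition_matrix(query_sequence):
    """Estimate the DTMC transition-probability matrix from a command sequence."""
    counts = np.zeros((len(STATES), len(STATES)))
    for a, b in zip(query_sequence, query_sequence[1:]):
        counts[IDX[a], IDX[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1                     # avoid division by zero for unused states
    return counts / rows

normal  = ["SELECT", "SELECT", "UPDATE", "SELECT", "INSERT", "SELECT"] * 20
suspect = ["SELECT", "DELETE", "DELETE", "DELETE", "SELECT", "DELETE"] * 20

deviation = np.abs(transition_matrix(normal) - transition_matrix(suspect)).sum()
THRESHOLD = 1.0                             # illustrative; tuned on training data
print("anomalous" if deviation > THRESHOLD else "normal", round(deviation, 2))
```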