Biblio
Currently, usable security and web accessibility design principles exist separately. Although the literature at the intersection of accessibility and security is developing, it offers limited insight into how users with vision loss use the web securely. In this paper, we propose heuristics that fuse the nuances of both fields. With these heuristics, we evaluate 10 websites and uncover several issues that can impede users' ability to follow common security advice.
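As a minimal sketch of how such a heuristic evaluation could be recorded, the checklist below scores a site by the fraction of heuristics it satisfies; the heuristic names are hypothetical placeholders, not the paper's published set.

    # Hypothetical checklist for accessible-security heuristic evaluation;
    # the heuristic names are illustrative, not the paper's published set.
    HEURISTICS = [
        "security cues are exposed to screen readers",
        "CAPTCHA offers a non-visual alternative",
        "session-timeout warnings are perceivable without vision",
        "password fields announce masking and reveal controls",
    ]

    def evaluate(site, violated):
        """Score a site as the fraction of heuristics with no recorded violation."""
        passed = [h for h in HEURISTICS if h not in violated]
        return len(passed) / len(HEURISTICS)

    issues = {"CAPTCHA offers a non-visual alternative"}
    print(f"example.org satisfies {evaluate('example.org', issues):.0%} of heuristics")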
Revealing private and sensitive information on Social Network Sites (SNSs) like Facebook is a common practice that sometimes results in unwanted incidents for users. One approach to helping users avoid regrettable scenarios is through awareness mechanisms that inform them a priori about the potential privacy risks of a self-disclosure act. Privacy heuristics are instruments that describe recurrent regrettable scenarios and can support the generation of privacy awareness. One important component of a heuristic is the group of people who should not access specific private information under a certain privacy risk. However, specifying an exhaustive list of unwanted recipients for a given regrettable scenario can be a tedious task that necessarily demands the user's intervention. In this paper, we introduce an approach based on decision trees to instantiate the audience component of privacy heuristics with minor intervention from the users. We introduce Disclosure-Acceptance Trees, a data structure that represents the audience component of a heuristic, and describe a method for generating them from user-centred privacy preferences.
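A minimal sketch of the decision-tree idea, using a stock learner in place of the paper's own tree-generation method; the contact features (tie strength, coworker flag, topic) and the toy preference labels are assumptions for illustration.

    # Sketch: deriving a Disclosure-Acceptance-style tree from user-centred
    # privacy preferences with a stock decision-tree learner; the contact
    # features and toy labels below are illustrative assumptions.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Rows: (tie_strength 0..2, is_coworker, topic_is_health) per contact.
    X = [[2, 0, 0], [2, 0, 1], [1, 1, 0], [1, 1, 1], [0, 0, 1], [0, 1, 0]]
    y = [1, 1, 1, 0, 0, 0]  # 1 = user accepts disclosure to this contact

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(
        tree, feature_names=["tie_strength", "is_coworker", "health_topic"]))

    # Contacts the tree classifies as 0 form the "unwanted recipients"
    # (the audience component) for the privacy heuristic.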
In the realm of the Internet of Things (IoT), information security is a critical issue. Security standards, including their assessment items, are essential instruments in the evaluation of system security. However, a key question remains open: ``Which test cases are most effective for security assessment?'' To create security assessment designs with suitable assessment items, we need to know the security properties and assessment dimensions covered by a standard. We propose an approach for selecting and analyzing security assessment items; its foundations come from a set of assessment heuristics, and it aims to increase the coverage of assessment dimensions and security characteristics in assessment designs. The main contribution of this paper is the definition of a core set of security assessment heuristics. We systematize the security assessment process by means of a conceptual formalization of the security assessment area. Our approach can be applied to security standards to select or prioritize assessment items with respect to 11 security properties and 6 assessment dimensions. The approach is flexible, allowing the inclusion of further dimensions and properties. Our proposal was applied to a well-known security standard (ISO/IEC 27001), and its assessment items were analyzed. The proposal is meant to support: (i) the generation of high-coverage assessment designs, which include security assessment items with assured coverage of the main security characteristics, and (ii) the evaluation of security standards with respect to the coverage of security aspects.
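One plausible reading of coverage-driven item selection is a greedy set-cover pass, sketched below; the item identifiers and their covered properties/dimensions are invented for illustration and do not come from the paper's mapping of ISO/IEC 27001.

    # Sketch: greedy prioritization of assessment items by new coverage of
    # security properties (e.g., confidentiality) and assessment dimensions
    # (prefixed "dim:"); the items and their covered sets are invented.
    items = {
        "A.9.2.1":  {"confidentiality", "authenticity", "dim:organizational"},
        "A.12.4.1": {"accountability", "non-repudiation", "dim:technical"},
        "A.17.1.2": {"availability", "dim:physical", "dim:organizational"},
    }

    def greedy_design(items, budget):
        """Pick up to `budget` items, each maximizing newly covered aspects."""
        covered, chosen = set(), []
        pool = dict(items)
        for _ in range(budget):
            best = max(pool, key=lambda i: len(pool[i] - covered), default=None)
            if best is None or not pool[best] - covered:
                break
            chosen.append(best)
            covered |= pool.pop(best)
        return chosen, covered

    design, covered = greedy_design(items, budget=2)
    print(design, sorted(covered))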
Network connectivity is a primary attribute and a characteristic phenomenon of any networked system. High connectivity is often desired within networks, for instance to increase robustness to failures and resilience against attacks. A typical approach to increasing network connectivity is to strategically add links; however, adding links is not always the most suitable option. In this paper, we propose an alternative approach to improving network connectivity: making a small subset of nodes and edges “trusted,” which means that such nodes and edges remain intact at all times and are not susceptible to failures. We then show that by controlling the number of trusted nodes and edges, any desired level of network connectivity can be obtained. Along with characterizing network connectivity with trusted nodes and edges, we present heuristics to compute a small number of such nodes and edges. Finally, we illustrate our results on various networks.
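The sketch below illustrates the trusted-node idea on a toy graph: only untrusted nodes may fail, so connectivity becomes the size of the smallest untrusted vertex cut, and a simple greedy pass stands in for the paper's heuristics. The brute-force cut enumeration and the greedy rule are illustrative assumptions that only scale to small graphs.

    # Sketch: connectivity with trusted nodes on a toy graph, assuming a
    # connected input; brute force and the greedy rule are illustrative
    # stand-ins for the paper's heuristics.
    from itertools import combinations
    import networkx as nx

    def connectivity_with_trusted(G, trusted):
        """Size of the smallest disconnecting cut made of untrusted nodes."""
        candidates = [v for v in G if v not in trusted]
        for k in range(1, len(candidates) + 1):
            for cut in combinations(candidates, k):
                H = G.copy()
                H.remove_nodes_from(cut)
                if H.number_of_nodes() > 1 and not nx.is_connected(H):
                    return k
        return len(candidates)  # no untrusted cut can disconnect G

    def greedy_trusted_nodes(G, budget):
        trusted = set()
        for _ in range(budget):
            best = max((v for v in G if v not in trusted),
                       key=lambda v: connectivity_with_trusted(G, trusted | {v}))
            trusted.add(best)
        return trusted

    G = nx.star_graph(4)                       # hub 0 is a single point of failure
    T = greedy_trusted_nodes(G, budget=1)
    print(T, connectivity_with_trusted(G, T))  # trusting the hub lifts connectivity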
In this paper, we propose a technique to detect phishing attacks based on the behavior of humans when exposed to a fake website. Some online users submit fake credentials to a login page before submitting their actual credentials, then observe the login status of the resulting page to check whether the website is fake or legitimate. We automate the same behavior with our application (FeedPhish), which feeds fake values into the login page. If the login succeeds, the site is classified as phishing; otherwise, it undergoes further heuristic filtering. If the suspicious site passes all heuristic filters, it is classified as legitimate. In our experiments, the application achieved a true positive rate of 97.61%, a true negative rate of 94.37%, and an overall accuracy of 96.38%. Our application demands neither third-party services nor prior knowledge such as web history, whitelists, or blacklists of URLs. It detects not only zero-day phishing attacks but also phishing sites hosted on compromised domains.
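A simplified sketch of the fake-credential probe, assuming requests and BeautifulSoup; the form handling, the success test, and the URL are illustrative assumptions, and a real detector would add the paper's further heuristic filters.

    # Simplified sketch of the fake-credential probe: a legitimate site
    # should reject made-up credentials, so a "successful" login suggests
    # a credential harvester. Form discovery, the success test, and the
    # URL below are illustrative assumptions.
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    FAKE_USER, FAKE_PASS = "nobody@example.com", "n0t-a-real-passw0rd"

    def fake_login_succeeds(login_url):
        page = requests.get(login_url, timeout=10)
        form = BeautifulSoup(page.text, "html.parser").find("form")
        if form is None:
            return False        # nothing to probe; defer to other filters
        data = {}
        for field in form.find_all("input"):
            name = field.get("name")
            if name:
                kind = field.get("type", "text")
                data[name] = FAKE_PASS if kind == "password" else FAKE_USER
        target = urljoin(login_url, form.get("action") or login_url)
        resp = requests.post(target, data=data, timeout=10)
        # Crude success test: the reply no longer asks for a password.
        return resp.ok and "password" not in resp.text.lower()

    print(fake_login_succeeds("http://suspicious.example/login"))  # hypothetical URL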
Botnets are considered one of the most dangerous classes of network-based attack today because they involve very large groups of hosts acting in coordination. Behavioral analysis of computer networks underlies modern botnet detection methods, which aim to intercept traffic generated by malware for which signatures do not yet exist. Defining the pattern of features on which behavioral analysis rests puts the emphasis on the quantity and quality of information to be captured and used to mark data streams as normal or abnormal. The problem is even more evident in extensive computer networks or clouds. In this paper, we show how heuristics applied to large-scale proxy logs, focused on a typical phase of the botnet life cycle, namely the search for C&C servers through AGDs (Algorithmically Generated Domains), can provide effective and extremely rapid results. This work introduces some novel paradigms. The first is that some elements of the botnet supply chain can be completed without any interaction with the Internet, mostly in the presence of wide computer networks and/or clouds. The second is that behind a large number of workstations there are usually "human beings," and it is unlikely that their behavior will cause marked changes in their interaction with the Internet within a fairly narrow time frame. Finally, AGDs currently exhibit common lexical features that are detectable quickly and without using any black/white lists.
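As an illustration of quick, list-free lexical screening of domains pulled from proxy logs, the sketch below flags candidate AGDs by Shannon entropy, digit ratio, and consonant runs; the specific features and thresholds are assumptions, not the paper's tuned values.

    # Sketch: cheap lexical screen for algorithmically generated domains
    # using Shannon entropy, digit ratio, and consonant runs; features and
    # thresholds are illustrative, not the paper's tuned values.
    import math
    import re

    def shannon_entropy(s):
        return -sum(p * math.log2(p)
                    for p in (s.count(c) / len(s) for c in set(s)))

    def looks_algorithmic(domain):
        label = domain.split(".")[0]          # screen the leftmost label
        digit_ratio = sum(c.isdigit() for c in label) / len(label)
        runs = re.findall(r"[bcdfghjklmnpqrstvwxz]+", label)
        longest_run = max((len(r) for r in runs), default=0)
        return (shannon_entropy(label) > 3.5
                or digit_ratio > 0.3
                or longest_run >= 5)

    for d in ["google.com", "x7kq9v2plm0d.info", "wikipedia.org"]:
        print(d, looks_algorithmic(d))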