Biblio
Software patterns are created with the goal of capturing expert
knowledge so it can be efficiently and effectively shared with the
software development community. However, patterns in practice
may or may not achieve these goals. Empirical studies of the use
of software patterns can help in providing deeper insight into
whether these goals have been met. The objective of this paper is
to aid researchers in designing empirical studies of software
patterns by summarizing the study designs of software patterns
available in the literature. The important components of these
study designs include the evaluation criteria and how the patterns
are presented to study participants. We select and analyze 19
distinct empirical studies and identify 17 independent variables in
three different categories (participants demographics; pattern
presentation; problem presentation). We also extract 10 evaluation
criteria with 23 associated observable measures. Additionally, by
synthesizing the reported observations, we identify challenges
faced during study execution. Provision of multiple domainspecific
examples of pattern application and tool support to assist
in pattern selection are helpful for the study participants in
understanding and completing the study task. Capturing data
regarding the cognitive processes of participants can provide
insights into the findings of the study.
To help users create stronger text-based passwords, many web sites have deployed password meters that provide visual feedback on password strength. Although these meters are in wide use, their effects on the security and usability of passwords have not been well studied.
We present a 2,931-subject study of password creation in the presence of 14 password meters. We found that meters with a variety of visual appearances led users to create longer passwords. However, significant increases in resistance to a password-cracking algorithm were only achieved using meters that scored passwords stringently. These stringent meters also led participants to include more digits, symbols, and uppercase letters.
Password meters also affected the act of password creation. Participants who saw stringent meters spent longer creating their password and were more likely to change their password while entering it, yet they were also more likely to find the password meter annoying. However, the most stringent meter and those without visual bars caused participants to place less importance on satisfying the meter. Participants who saw more lenient meters tried to fill the meter and were averse to choosing passwords a meter deemed "bad" or "poor." Our findings can serve as guidelines for administrators seeking to nudge users towards stronger passwords.
Despite the abundance of information security guidelines, system developers have difficulties implementing technical solutions that are reasonably secure. Security patterns are one possible solution to help developers reuse security knowledge. The challenge is that it takes experts to develop security patterns. To address this challenge, we need a framework to identify and assess patterns and pattern application practices that are accessible to non-experts. In this paper, we narrowly define what we mean by patterns by focusing on requirements patterns and the considerations that may inform how we identify and validate patterns for knowledge reuse. We motivate this discussion using examples from the requirements pattern literature and theory in cognitive psychology.
Systematic exploration is an approach to finding race conditions by deterministically executing every possible interleaving of thread transitions and identifying which ones expose bugs. Current systematic exploration techniques are suitable for testing user-space programs, but are inadequate for testing kernels, where the testing framework’s control over concurrency is more complicated. We present Landslide, a systematic exploration tool for finding races in kernels. Landslide targets Pebbles, the kernel specification that students implement in the undergraduate Operating Systems course at Carnegie Mellon University (15- 410). We discuss the techniques Landslide uses to address the general challenges of kernel-level concurrency, and we evaluate its effectiveness and usability as a debugging aid. We show that our techniques make systematic testing in kernel-space feasible and that Landslide is a useful tool for doing so in the context of 15-410.
To help users create stronger text-based passwords, many web sites have deployed password meters that provide visual feedback on password strength. Although these meters are in wide use, their effects on the security and usability of passwords have not been well studied. We present a 2,931-subject study of password creation in the presence of 14 password meters. We found that meters with a variety of visual appearances led users to create longer passwords. However, significant increases in resistance to a password-cracking algorithm were only achieved using meters that scored passwords stringently. These stringent meters also led participants to include more digits, symbols, and uppercase letters. Password meters also affected the act of password creation. Participants who saw stringent meters spent longer creating their password and were more likely to change their password while entering it, yet they were also more likely to find the password meter annoying. However, the most stringent meter and those without visual bars caused participants to place less importance on satisfying the meter. Participants who saw more lenient meters tried to fill the meter and were averse to choosing passwords a meter deemed “bad” or “poor.” Our findings can serve as guidelines for administrators seeking to nudge users towards stronger passwords.
We show that competitive engagements within the agents of a network can result in resilience in consensus dynamics with respect to the presence of an adversary. We first show that interconnections with an adversary, with linear dynamics, can make the consensus dynamics diverge, or drive its evolution to a state different from the average.We then introduce a second network, interconnected with the original network via an engagement topology. This network has no information about the adversary and each agent in it has only access to partial information about the state of the other network. We introduce a dynamics on the coupled network which corresponds to a saddle-point dynamics of a certain zero-sum game and is distributed over each network, as well as the engagement topology. We show that, by appropriately choosing a design parameter corresponding to the competition between these two networks, the coupled dynamics can be made resilient with respect to the presence of the adversary.Our technical approach combines notions of graph theory and stable perturbations of nonsymmetric matrices.We demonstrate our results on an example of kinematic-based flocking in presence of an adversary.
Smart mobile devices such as smartphones and tablets have become an integral part of our society. However, it also becomes a prime target for attackers with malicious intents. There have been a number of efforts on developing innovative courseware to promote cybersecurity education and to improve student learning; however, hands-on labs are not well developed for smart mobile devices and for mobile security topics. In this paper, we propose to design and develop a mobile security labware with smart mobile devices to promote the cybersecurity education. The integration of mobile computing technologies and smart devices into cybersecurity education will connect the education to leading-edge information technologies, motivate and engage students in security learning, fill in the gap with IT industry need, and help faculties build expertise on mobile computing. In addition, the hands-on experience with mobile app development will promote student learning and supply them with a better understanding of security knowledge not only in classical security domains but also in the emerging mobile security areas.
Malware researchers rely on the observation of malicious code in execution to collect datasets for a wide array of experiments, including generation of detection models, study of longitudinal behavior, and validation of prior research. For such research to reflect prudent science, the work needs to address a number of concerns relating to the correct and representative use of the datasets, presentation of methodology in a fashion sufficiently transparent to enable reproducibility, and due consideration of the need not to harm others. In this paper we study the methodological rigor and prudence in 36 academic publications from 2006-2011 that rely on malware execution. 40% of these papers appeared in the 6 highest-ranked academic security conferences. We find frequent shortcomings, including problematic assumptions regarding the use of execution-driven datasets (25% of the papers), absence of description of security precautions taken during experiments (71% of the articles), and oftentimes insufficient description of the experimental setup. Deficiencies occur in top-tier venues and elsewhere alike, highlighting a need for the community to improve its handling of malware datasets. In the hope of aiding authors, reviewers, and readers, we frame guidelines regarding transparency, realism, correctness, and safety for collecting and using malware datasets.
The home computer user is often said to be the weakest link in computer security. They do not always follow security advice, and they take actions, as in phishing, that compromise themselves. In general, we do not understand why users do not always behave safely, which would seem to be in their best interest. This paper reviews the literature of surveys and studies of factors that influence security decisions for home computer users. We organize the review in four sections: understanding of threats, perceptions of risky behavior, efforts to avoid security breaches and attitudes to security interventions. We find that these studies reveal a lot of reasons why current security measures may not match the needs or abilities of home computer users and suggest future work needed to inform how security is delivered to this user group.
What does it mean to trust, or not trust, an augmented reality system? Froma computer security point of view, trust in augmented reality represents a real threat to real people. The fact that augmented reality allows the programmer to tinker with the user's senses creates many opportunities for malfeasance. It might be natural to think that if we warn users to be careful it will lower their trust in the system, greatly reducing risk.
Despite the abundance of information security guidelines, system developers have difficulties implementing technical solutions that are reasonably secure. Security patterns are one possible solution to help developers reuse security knowledge. The challenge is that it takes experts to develop security patterns. To address this challenge, we need a framework to identify and assess patterns and pattern application practices that are accessible to non-experts. In this paper, we narrowly define what we mean by patterns by focusing on requirements patterns and the considerations that may inform how we identify and validate patterns for knowledge reuse. We motivate this discussion using examples from the requirements pattern literature and theory in cognitive psychology.