Biblio

Filters: Keyword is security risk management
2020-02-17
Prajanti, Anisa Dewi, Ramli, Kalamullah.  2019.  A Proposed Framework for Ranking Critical Information Assets in Information Security Risk Assessment Using the OCTAVE Allegro Method with Decision Support System Methods. 2019 34th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC). :1–4.
The security of an organization lies not only in its physical buildings, but also in its information assets. Safeguarding information assets requires further study to establish optimal security mitigation steps. Determining the appropriate mitigation for information assets requires both an information security risk assessment and a clear, measurable rating. Most risk management methods do not focus on ranking an organization's critical information assets. This paper proposes a framework for ranking critical information assets. The proposed framework uses the OCTAVE Allegro method, which focuses on profiling information assets, combined with priority-ranking measurements from decision support system methods such as Simple Additive Weighting (SAW) and the Analytic Hierarchy Process (AHP). The combined OCTAVE Allegro-SAW and OCTAVE Allegro-AHP methods are expected to better address risk priority as an input to mitigation decisions for critical information assets. These combinations help management avoid missteps in allocating budget or time when selecting information asset mitigations based on the framework's ranking results.
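
As a rough sketch of the Simple Additive Weighting step the framework builds on (the criteria, weights, and asset scores below are hypothetical, not taken from the paper):

    # Minimal SAW (Simple Additive Weighting) sketch for ranking information
    # assets by risk-related criteria. Criteria weights and scores are
    # illustrative assumptions, not values from the paper.
    criteria_weights = {"confidentiality": 0.5, "integrity": 0.3, "availability": 0.2}

    # Raw scores per asset on each criterion (higher = more critical).
    assets = {
        "customer_db": {"confidentiality": 9, "integrity": 7, "availability": 6},
        "hr_records":  {"confidentiality": 8, "integrity": 5, "availability": 4},
        "public_site": {"confidentiality": 2, "integrity": 6, "availability": 9},
    }

    def saw_rank(assets, weights):
        # Normalize each criterion by its maximum (benefit-type criteria),
        # then compute the weighted sum per asset.
        maxima = {c: max(a[c] for a in assets.values()) for c in weights}
        scores = {
            name: sum(weights[c] * vals[c] / maxima[c] for c in weights)
            for name, vals in assets.items()
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    for name, score in saw_rank(assets, criteria_weights):
        print(f"{name}: {score:.3f}")
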
Lundgren, Martin, Bergström, Erik.  2019.  Security-Related Stress: A Perspective on Information Security Risk Management. 2019 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1–8.
In this study, the enactment of information security risk management (ISRM) by novice practitioners is examined through the analytical lens of security-related stress. Two organisations were targeted using a case study approach to obtain data about their practices. The study identifies stressors and stress inhibitors in the ISRM process and the supporting ISRM tools, and discusses the implications for practitioners. For example, a mismatch between security standards and how they are interpreted in practice was identified. This mismatch was further found to be reinforced by the design of the ISRM tools used. Such design shortcomings hamper agility, since they may enforce a specific workflow or restrict documentation. The study concludes that security-related stress can provide additional insight into the ISRM challenges facing security-novice practitioners.
Gharehbaghi, Koorosh, Myers, Matt.  2019.  Intelligent System Intricacies: Safety, Security and Risk Management Apprehensions of ITS. 2019 8th International Conference on Industrial Technology and Management (ICITM). :37–40.
The general idea of an Intelligent Transportation System (ITS) is to employ suitable, sophisticated information and communications technologies; however, such tools also introduce many system complexities. Accordingly, this paper highlights the most contemporary system complications of ITS and, in doing so, underlines the associated safety, security and risk management concerns. More importantly, effectively treating these issues will ultimately improve the reliability and efficiency of transportation systems. While such issues are among the most significant for any intelligent system, for ITS in particular they are the most dominant. For such intelligent systems, safety, security and risk management must not only be decidedly prioritized, but also methodically integrated. As part of such ITS integration, this paper examines the development and application of the Emergency Management System (EMS). An accurate EMS is not only a mandatory feature of intelligent systems, but a fundamental component of ITS that responds vigilantly to safety, security and risk management concerns. To further substantiate this scheme, the Sydney Metro's EMS is also discussed. It was determined that the Sydney Metro's EMS, although highly advanced, was also closely aligned with specific designated safety, security and risk management strategies.
Rindell, Kalle, Holvitie, Johannes.  2019.  Security Risk Assessment and Management as Technical Debt. 2019 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1–8.
Achieving software security requires a set of risk-based security engineering processes during software development. In iterative software development, the software design typically evolves as the project matures, and the technical environment may undergo considerable changes. This increases the workload of identifying, assessing and managing the security risk in each iteration and after every change. Besides security risk, the changes also accumulate technical debt, an allegory for postponed or sub-optimally performed work. To manage security risk in software development efficiently, and in terms and definitions familiar to software development organizations, the concept of technical debt is extended to contain security debt. To accommodate new technical debt with potential security implications, a security debt management approach is introduced. The selected approach is an extension to a portfolio-based technical debt management framework. This includes identifying security risk in technical debt, and also provides means to expose debt, through security engineering techniques, that would otherwise have remained hidden. The proposed approach includes risk-based extensions to the prioritization mechanisms in existing technical debt management systems. Identification, management and repayment techniques are presented to identify, assess, and mitigate the security debt.
Stoykov, Stoyko.  2019.  Risk Management as a Strategic Management Element in the Security System. 2019 International Conference on Creative Business for Smart and Sustainable Growth (CREBUS). :1–4.
Strategic management and security risk management are part of the general governance of a country; therefore, they cannot be examined separately, and even if they could, such a separate examination would not give a complete picture of how to implement the process. A modern understanding of strategic security management requires not only continuous efforts to improve the formation and implementation of security policy, but also new approaches and particular solutions to modernize the security system, making it adequate to the requirements of a dynamic security environment.
2019-10-23
Alshawish, Ali, Spielvogel, Korbinian, de Meer, Hermann.  2019.  A Model-Based Time-to-Compromise Estimator to Assess the Security Posture of Vulnerable Networks. 2019 International Conference on Networked Systems (NetSys). :1–3.

Several operational and economic factors impact the patching decisions of critical infrastructures. The constraints imposed by such factors could prevent organizations from fully remedying all of the vulnerabilities that expose their (critical) assets to risk. Therefore, an involved decision maker (e.g., a security officer) has to strategically decide how to allocate possible remediation efforts towards minimizing the inherent security risk. This, however, involves the use of comparative judgments to prioritize risks and remediation actions. Throughout this work, the security risk is quantified using the security metric Time-To-Compromise (TTC). Our main contribution is to provide a generic TTC estimator to comparatively assess the security posture of computer networks, taking into account interdependencies between the network components, different adversary skill levels, and characteristics of (known and zero-day) vulnerabilities. The presented estimator relies on a stochastic TTC model and Monte Carlo simulation (MCS) techniques to account for the input data variability and inherent prediction uncertainties.
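
A minimal Monte Carlo sketch of the kind of TTC estimation described above; the per-attempt success probabilities and timing distribution are illustrative assumptions, not the paper's stochastic model:

    import random

    # Toy Monte Carlo estimator of Time-To-Compromise (TTC) for a single
    # host. Success probability per attempt and time per attempt depend on
    # an assumed attacker skill level; all numbers are illustrative.
    SKILL = {"novice": 0.05, "intermediate": 0.15, "expert": 0.40}

    def sample_ttc(skill, hours_per_attempt=8.0, max_attempts=1000):
        p = SKILL[skill]
        t = 0.0
        for _ in range(max_attempts):
            t += random.expovariate(1.0 / hours_per_attempt)  # attempt duration
            if random.random() < p:                            # exploit succeeds
                return t
        return t  # censored: not compromised within the attempt budget

    def estimate_ttc(skill, runs=10000):
        samples = [sample_ttc(skill) for _ in range(runs)]
        return sum(samples) / len(samples)

    for skill in SKILL:
        print(skill, round(estimate_ttc(skill), 1), "hours (mean TTC)")
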

2019-06-17
Miedl, Philipp, Thiele, Lothar.  2018.  The Security Risks of Power Measurements in Multicores. Proceedings of the 33rd Annual ACM Symposium on Applied Computing. :1585–1592.

Two of the main goals of power management in modern multicore processors are reducing the average power dissipation and delivering the maximum performance up to the physical limits of the system, when demanded. To achieve these goals, hardware manufacturers and operating system providers include sophisticated power and performance management systems, which require detailed information about the current processor state. For example, Intel processors offer the possibility to measure the power dissipation of the processor. In this work, we evaluate whether such power measurements can be used to establish a covert channel between two isolated applications on the same system: the power covert channel. We present a detailed theoretical and experimental evaluation of the power covert channel on two platforms based on Intel processors. Our theoretical analysis is based on detailed modelling and allows us to derive a channel capacity bound for each platform. Moreover, we conduct an extensive experimental study under controlled, yet realistic, conditions. Our study shows that the platform-dependent channel capacities are on the order of 2000 bps, and that it is possible to achieve throughputs of up to 1000 bps with a bit error probability of less than 15% using a simple implementation. This illustrates the potential of leaking sensitive information and breaking a system's security framework using a covert channel based on power measurements.
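
As a back-of-the-envelope check of the reported numbers, one can model the link as a binary symmetric channel (a simplifying assumption on our part, not necessarily the paper's model):

    from math import log2

    # If the covert channel is modelled as a binary symmetric channel,
    # the usable information rate at a given raw bit rate and bit error
    # probability p is rate * (1 - H(p)), with H the binary entropy.
    def binary_entropy(p):
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)

    raw_bps = 1000   # achieved raw throughput reported in the abstract
    p_err = 0.15     # reported bit error probability
    effective = raw_bps * (1 - binary_entropy(p_err))
    print(f"effective information rate ~ {effective:.0f} bps")
    # ~ 390 bps of usable covert bandwidth under this simple model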

Borgolte, Kevin, Fiebig, Tobias, Hao, Shuang, Kruegel, Christopher, Vigna, Giovanni.  2018.  Cloud Strife: Mitigating the Security Risks of Domain-Validated Certificates. Proceedings of the Applied Networking Research Workshop. :4–4.

Infrastructure-as-a-Service (IaaS), or more generally the "cloud," changed the landscape of system operations on the Internet. The clouds' elasticity allows operators to rapidly allocate and use resources as needed, from virtual machines, to storage, to IP addresses, which is what made clouds popular. We show that this dynamic component, paired with developments in trust-based ecosystems (e.g., TLS certificates), creates hitherto unknown attacks. We demonstrate that it is practical to allocate IP addresses to which stale DNS records point. Considering the ubiquity of domain validation in trust ecosystems, like TLS, an attacker can then obtain a valid and trusted certificate. The attacker can then impersonate the service, exploit residual trust for phishing, or might even distribute malicious code. Even worse, an aggressive attacker could succeed in less than 70 seconds, well below the common time-to-live (TTL) for DNS records. In turn, she could exploit normal service migrations to obtain a valid certificate, and, worse, she might not be bound by DNS records being (temporarily) stale. We introduce a new authentication method for trust-based domain validation, like IETF's automated certificate management environment (ACME), that mitigates staleness issues without incurring additional certificate requester effort, by incorporating the existing trust of a name into the validation process. Based on previously published work [1]. [1] Kevin Borgolte, Tobias Fiebig, Shuang Hao, Christopher Kruegel, Giovanni Vigna. February 2018. Cloud Strife: Mitigating the Security Risks of Domain-Validated Certificates. In Proceedings of the 25th Network and Distributed Systems Security Symposium (NDSS '18). Internet Society (ISOC). DOI: 10.14722/ndss.2018.23327. URL: https://doi.org/10.14722/ndss.2018.23327.

Frey, Sylvain, Rashid, Awais, Anthonysamy, Pauline, Pinto-Albuquerque, Maria, Naqvi, Syed Asad.  2018.  The Good, the Bad and the Ugly: A Study of Security Decisions in a Cyber-Physical Systems Game. Proceedings of the 40th International Conference on Software Engineering. :496–496.

Motivation: The security of any system is a direct consequence of stakeholders' decisions regarding security requirements. Such decisions are taken with varying degrees of expertise, and little is currently understood about how various demographics (security experts, general computer scientists, managers) approach security decisions and the strategies that underpin those decisions. What are the typical decision patterns, the consequences of such patterns, and their impact on the security of the system in question? Nor is there any substantial understanding of how the strategies and decision patterns of these different groups contrast. Is security expertise necessarily an advantage when making security decisions in a given context? Answers to these questions are key to understanding the "how" and "why" behind security decision processes.

The Game: In this talk, we present a tabletop game, Decisions and Disruptions (D-D), that tasks a group of players with managing the security of a small utility company while facing a variety of threats. The game is kept short (two hours) and simple enough to be played without prior training. A cyber-physical infrastructure, depicted through a Lego® board, makes the game easy to understand and accessible to players from varying backgrounds and levels of security expertise, without being too trivial a setting for security experts.

Key insights: We played D-D with 43 players divided into homogeneous groups: 4 groups of security experts, 4 groups of non-technical managers, and 4 groups of general computer scientists.

• Strategies: Security experts had a strong interest in advanced technological solutions and tended to neglect intelligence gathering, to their own detriment. Managers, too, were technology-driven and focused on data protection while neglecting human factors more than the other groups. Computer scientists tended to balance human factors and intelligence gathering with technical solutions, and achieved the best results of the three demographics.

• Decision Processes: Technical experience significantly changes the way players think. Teams with little technical experience had shallow, intuition-driven discussions with few concrete arguments. Technical teams, and the most experienced in particular, had much richer debates, driven by concrete scenarios, anecdotes from experience, and procedural thinking. Security experts showed high confidence in their decisions, despite some of them having bad consequences, while the other groups tended to doubt their own skills, even when they were playing good games.

• Patterns: A number of characteristic plays were identified: some good (balance between priorities, open-mindedness, and adapting strategies based on inputs that challenge one's preconceptions), some bad (excessive focus on particular issues, confidence in charismatic leaders), and some ugly ("tunnel vision" syndrome by over-confident players). These patterns are documented in the full paper, which shows the virtue of the positive ones, discourages the negative ones, and invites readers to do their own introspection.

Conclusion: Beyond the analysis of the security decisions of the three demographics, there is a definite educational and awareness-raising aspect to D-D (as noted consistently by players in all our subject groups). Game boxes will be brought to the conference for demonstration purposes, and the audience will be invited to experiment with D-D themselves, make their own decisions, and reflect on their own perception of security.

Sion, Laurens, Yskout, Koen, Van Landuyt, Dimitri, Joosen, Wouter.  2018.  Risk-Based Design Security Analysis. Proceedings of the 1st International Workshop on Security Awareness from Design to Deployment. :11–18.

Implementing security by design in practice often involves the application of threat modeling to elicit security threats and to aid designers in focusing their efforts on the most stringent problems first. Existing threat modeling methodologies are capable of generating large numbers of threats, yet they lack even basic support for triaging these threats, other than relying on the expertise and manual assessment of the threat modeler. Since the essence of creating a secure design is to minimize the associated risk (and countermeasure costs), risk analysis approaches offer a compelling solution to this problem. By combining risk analysis and threat modeling, elicited threats in a design can be enriched with risk analysis information in order to support triaging and prioritizing threats and focusing security efforts on the high-risk ones. This requires the following inputs: the asset values, the strengths of countermeasures, and an attacker model. In this paper, we provide an integrated threat elicitation and risk analysis approach, implemented in a threat modeling tool prototype, and evaluate it using a real-world application, namely the SecureDrop whistleblower submission system. We show that the security measures implemented in SecureDrop indeed correspond to the high-risk threats identified by our approach. The risk-based security analysis therefore provides useful guidance for focusing security efforts on the most important problems first.
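
A minimal sketch of the risk-based triage such an approach enables; the threats, likelihoods, impacts, and countermeasure strengths below are hypothetical stand-ins for values the tool would derive from asset values, countermeasure strengths, and an attacker model:

    # Toy risk-based triage of elicited threats: risk = likelihood * impact,
    # with likelihood discounted by the strength of applicable countermeasures.
    # All threat names and values are illustrative assumptions.
    threats = [
        {"name": "tampering with submission channel", "likelihood": 0.6,
         "impact": 9, "countermeasure_strength": 0.7},
        {"name": "spoofed journalist identity", "likelihood": 0.4,
         "impact": 8, "countermeasure_strength": 0.2},
        {"name": "information disclosure via logs", "likelihood": 0.3,
         "impact": 5, "countermeasure_strength": 0.5},
    ]

    def residual_risk(t):
        return t["likelihood"] * (1 - t["countermeasure_strength"]) * t["impact"]

    # Triage: highest residual risk first.
    for t in sorted(threats, key=residual_risk, reverse=True):
        print(f"{residual_risk(t):5.2f}  {t['name']}")
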

Väisänen, Teemu, Noponen, Sami, Latvala, Outi-Marja, Kuusijärvi, Jarkko.  2018.  Combining Real-Time Risk Visualization and Anomaly Detection. Proceedings of the 12th European Conference on Software Architecture: Companion Proceedings. :55:1–55:7.

Traditional risk management produces a rather static listing of weaknesses, probabilities and mitigations. A large share of cyber security risks is realized through computer networks. These attacks, or attack attempts, produce events that are detected by various monitoring techniques, such as Intrusion Detection Systems (IDS). Often the link between detecting these potentially dangerous real-time events and the risk management process is weak or completely missing. This paper presents means for instantly transferring and visualizing network events within risk management using a tool called the Metrics Visualization System (MVS). As a case study, the tool is used to dynamically visualize network security events of a Terrestrial Trunked Radio (TETRA) network running in a Software Defined Networking (SDN) context. Visualizations are presented as a tree-like graph, which gives a quick, easily understandable overview of the cyber security situation. This paper also discusses which network security events are monitored and how they affect the more general risk levels. The major benefit of this approach is that the risk analyst is able to map the designed risk tree/security metrics onto actual real-time events and view the system's security posture with the help of a runtime visualization view.
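
A toy sketch of one way real-time events could be mapped onto a risk tree of this kind; the tree structure, risk values, and max-aggregation rule are our assumptions, not the MVS implementation:

    # Each monitored event raises the risk of a leaf metric, and parent
    # nodes aggregate (here: take the maximum of) their children.
    tree = {
        "overall": ["network", "hosts"],
        "network": ["ids_alerts", "sdn_anomalies"],
        "hosts":   ["auth_failures"],
    }
    risk = {"ids_alerts": 0.2, "sdn_anomalies": 0.1, "auth_failures": 0.3}

    def aggregate(node):
        children = tree.get(node)
        if not children:              # leaf: current measured risk
            return risk[node]
        return max(aggregate(c) for c in children)

    def on_event(leaf, severity):
        risk[leaf] = max(risk[leaf], severity)
        print(f"event on {leaf} -> overall risk {aggregate('overall'):.2f}")

    on_event("ids_alerts", 0.8)   # a detected intrusion raises overall risk
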

Marshall, Allen, Jahan, Sharmin, Gamble, Rose.  2018.  Toward Evaluating the Impact of Self-Adaptation on Security Control Certification. Proceedings of the 13th International Conference on Software Engineering for Adaptive and Self-Managing Systems. :149–160.

Certifying security controls is required for information systems that are either federally maintained or maintained by a US government contractor. As described in NIST SP 800-53, certified and accredited information systems are deployed with an acceptable security threat risk. Self-adaptive information systems that allow functional and decision-making changes to be dynamically configured at runtime may violate security controls, increasing the risk of security threats to the system. Methods are needed to formalize the certification process for security controls by expressing and verifying the functional and non-functional requirements to determine what risks are introduced through self-adaptation. We formally express the existence and behavior requirements of the mechanisms needed to guarantee the security controls' effectiveness, using audit controls on a program example. To reason over the risk of security control compliance given runtime self-adaptations, we use the KIV theorem prover on the functional requirements, extracting the verification concerns and workflow associated with the proof process. We augment the MAPE-K control loop planner with knowledge of the mechanisms that satisfy the existence criteria expressed by the security controls. We compare self-adaptive plans to assess their risk of security control violation prior to plan deployment.

Goman, Maksim.  2018.  Towards Unambiguous IT Risk Definition. Proceedings of the Central European Cybersecurity Conference 2018. :15:1–15:6.

The paper addresses the fundamental methodological problem of risk analysis and control in information technology (IT) – the definition of risk as the subject of interest. Based on an analysis of many risk concepts, we provide a consistent definition that describes the phenomenon. The proposed terminology is sound in terms of system analysis principles and applicable in practice to risk assessment and control. Implications for risk assessment methods are summarized.

Martinelli, Fabio, Michailidou, Christina, Mori, Paolo, Saracino, Andrea.  2018.  Too Long, Did Not Enforce: A Qualitative Hierarchical Risk-Aware Data Usage Control Model for Complex Policies in Distributed Environments. Proceedings of the 4th ACM Workshop on Cyber-Physical System Security. :27–37.

Distributed environments, such as the Internet of Things, have an increasing need for access and usage control mechanisms to manage the rights to perform specific operations and to regulate access to the plethora of information generated daily by their devices. Defining policies specific to these distributed environments can be a challenging and tedious task, mainly due to the large set of attributes that should be considered; hence the emergence of unforeseen conflicts or unconsidered conditions. In this paper we propose a qualitative risk-based usage control model aimed at enabling a framework where it is possible to define and enforce policies at different levels of granularity. In particular, the proposed framework exploits the Analytic Hierarchy Process (AHP) to coalesce the risk values assigned to different attributes, in relation to a specific operation, into a single risk value to be used as the unique attribute of usage control policies. Two sets of experiments, which show the benefits both in policy definition and in performance, validate the proposed model, demonstrating the equivalence of enforcement between standard policies and the derived single-attribute policies.
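
A minimal sketch of the AHP step, using the standard geometric-mean approximation of the principal eigenvector; the attributes, pairwise judgments, and per-attribute risk values are illustrative assumptions:

    from math import prod

    # Derive relative weights for risk attributes from a pairwise
    # comparison matrix, then coalesce per-attribute risks into one value.
    attributes = ["subject_trust", "object_sensitivity", "environment"]
    # pairwise[i][j]: how much more important attribute i is than attribute j
    pairwise = [
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ]

    def ahp_weights(matrix):
        n = len(matrix)
        gmeans = [prod(row) ** (1.0 / n) for row in matrix]  # row geometric means
        total = sum(gmeans)
        return [g / total for g in gmeans]                   # normalized priorities

    weights = ahp_weights(pairwise)
    attr_risk = {"subject_trust": 0.7, "object_sensitivity": 0.4, "environment": 0.2}
    single_risk = sum(w * attr_risk[a] for w, a in zip(weights, attributes))
    print(dict(zip(attributes, [round(w, 3) for w in weights])))
    print("combined risk value:", round(single_risk, 3))
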

2019-02-13
Salfer, Martin, Eckert, Claudia.  2018.  Attack Graph-Based Assessment of Exploitability Risks in Automotive On-Board Networks. Proceedings of the 13th International Conference on Availability, Reliability and Security. :21:1–21:10.

High-end vehicles incorporate about one hundred computers, both physical and virtualized; self-driving vehicles even more. This allows a plethora of attack combinations. This paper demonstrates how to assess the exploitability risks of vehicular on-board networks via automatically generated and analyzed attack graphs. Our stochastic model and algorithm combine all possible attack vectors and consider attacker resources more efficiently than Bayesian networks. We designed and implemented an algorithm that assesses a compilation of real vehicle development documents within only two CPU minutes, using an average of about 100 MB of RAM. Our proof of concept, the "Security Analyzer for Exploitability Risks" (SAlfER), is 200 to 5,000 times faster and 40 to 200 times more memory-efficient than an implementation with UnBBayes. Our approach aids vehicle development by automatically re-checking the architecture for attack combinations that may have been enabled by mistake and that are not trivial for a human developer to spot. Our approach is intended for, and relevant to, industrial application. Our research is part of a collaboration with a globally operating automotive manufacturer and is aimed at supporting the security of autonomous, connected, electrified, and shared vehicles.
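
A toy attack-graph sketch in the spirit of the above, scoring a target by its most probable attack path; the topology and edge probabilities are invented, and the paper's stochastic model, which combines all attack vectors and attacker resources, is considerably richer:

    # Nodes are ECUs/components; edges are exploit steps with assumed
    # success probabilities for a given attacker skill level.
    graph = {
        "cellular_iface": [("telematics_ecu", 0.4)],
        "bluetooth":      [("infotainment", 0.6)],
        "infotainment":   [("can_gateway", 0.3)],
        "telematics_ecu": [("can_gateway", 0.5)],
        "can_gateway":    [("brake_ecu", 0.2)],
    }

    def best_path_prob(node, target, p=1.0, seen=frozenset()):
        # Depth-first search for the most probable attack path, avoiding cycles.
        if node == target:
            return p
        best = 0.0
        for nxt, p_edge in graph.get(node, []):
            if nxt not in seen:
                best = max(best, best_path_prob(nxt, target, p * p_edge,
                                                seen | {node}))
        return best

    for entry in ("cellular_iface", "bluetooth"):
        print(entry, "->", best_path_prob(entry, "brake_ecu"))
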

2018-05-30
Mohaisen, Aziz, Al-Ibrahim, Omar, Kamhoua, Charles, Kwiat, Kevin, Njilla, Laurent.  2017.  Rethinking Information Sharing for Threat Intelligence. Proceedings of the Fifth ACM/IEEE Workshop on Hot Topics in Web Systems and Technologies. :6:1–6:7.

In the past decade, the information security and threat landscape has grown significantly, making it difficult for a single defender to defend against all attacks at the same time. This called for the introduction of information sharing, a paradigm in which threat indicators are shared in a community of trust to facilitate defenses. Standards for the representation, exchange, and consumption of indicators have been proposed in the literature, although various issues remain unaddressed. In this paper, we take the position of rethinking information sharing for actionable intelligence by highlighting various issues that deserve further exploration. We argue that information sharing can benefit from well-defined use models and threat models; risk that is well understood through measurement and robust scoring; well-understood and preserved privacy and quality of indicators; and robust mechanisms to avoid the free-riding behavior of selfish agents. We call for using the differential nature of data and community structures to optimize sharing designs and structures.

Moriano, Pablo, Pendleton, Jared, Rich, Steven, Camp, L Jean.  2017.  Insider Threat Event Detection in User-System Interactions. Proceedings of the 2017 International Workshop on Managing Insider Security Threats. :1–12.

Detection of insider threats relies on monitoring individuals and their interactions with organizational resources. Identification of anomalous insiders typically relies on supervised learning models that use labeled data. However, such labeled data is not easily obtainable. The labeled data that does exist is also limited by current insider threat detection methods, and undetected insiders would not be included. These models also inherently assume that the insider threat is not rapidly evolving between model generation and use of the model in detection. Yet there is a large body of research illustrating that the insider threat changes significantly after certain precipitating events, such as layoffs, significant restructuring, and plant or facility closure. To capture this temporal evolution of user-system interactions, we use an unsupervised learning framework to evaluate whether potential insider threat events are triggered following precipitating events. The analysis leverages a bipartite graph of user and system interactions. The approach shows a clear correlation between precipitating events and the number of apparent anomalies. The results of our empirical analysis show a clear shift in behaviors after events which have previously been shown to increase insider activity, specifically precipitating events. We argue that this metadata about the level of insider threat behaviors validates the potential of the approach. We apply our method to a dataset comprising interactions between engineers and software components in an enterprise version control system spanning more than 22 years. We use this unlabeled dataset to automatically detect statistically significant events. We show that there is statistically significant evidence that a subset of users diversify their committing behavior after precipitating events have been announced. Although these findings do not constitute detection of insider threat events per se, they do identify patterns of potentially malicious, high-risk insider behavior. They reinforce the idea that insider operations can be motivated by the insiders' environment. Our proposed framework outperforms algorithms based on naive random approaches and algorithms using volume-dependent statistics. This graph mining technique has potential for early detection of insider threat behavior in user-system interactions, independent of the volume of interactions. The proposed method also enables organizations without a corpus of identified insider threats to train their own anomaly detection systems.
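
A toy sketch of the bipartite user-component idea: compare how many distinct components each user touches before and after a precipitating event. The data, window sizes, and flagging rule are illustrative simplifications; the paper applies proper statistical significance tests:

    # Commits form a bipartite user-component interaction record.
    commits = [  # (user, component, day)
        ("alice", "core", 1), ("alice", "core", 2), ("alice", "ui", 12),
        ("alice", "build", 13), ("alice", "net", 14),
        ("bob", "ui", 3), ("bob", "ui", 11), ("bob", "ui", 15),
    ]
    EVENT_DAY = 10  # e.g., a layoff announcement

    def diversity(user, lo, hi):
        # Number of distinct components the user touched in [lo, hi).
        return len({c for u, c, d in commits if u == user and lo <= d < hi})

    for user in {u for u, _, _ in commits}:
        before = diversity(user, 0, EVENT_DAY)
        after = diversity(user, EVENT_DAY, 21)
        if after > 2 * max(before, 1):   # crude "diversification" flag
            print(user, "diversified committing behavior after the event")
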

Sadeghi, Alireza, Esfahani, Naeem, Malek, Sam.  2017.  Mining Mobile App Markets for Prioritization of Security Assessment Effort. Proceedings of the 2Nd ACM SIGSOFT International Workshop on App Market Analytics. :1–7.

Like any other software engineering activity, assessing the security of a software system entails prioritizing the resources and minimizing the risks. Techniques ranging from the manual inspection to automated static and dynamic analyses are commonly employed to identify security vulnerabilities prior to the release of the software. However, none of these techniques is perfect, as static analysis is prone to producing lots of false positives and negatives, while dynamic analysis and manual inspection are unwieldy, both in terms of required time and cost. This research aims to improve these techniques by mining relevant information from vulnerabilities found in the app markets. The approach relies on the fact that many modern software systems, in particular mobile software, are developed using rich application development frameworks (ADF), allowing us to raise the level of abstraction for detecting vulnerabilities and thereby making it possible to classify the types of vulnerabilities that are encountered in a given category of application. By coupling this type of information with severity of the vulnerabilities, we are able to improve the efficiency of static and dynamic analyses, and target the manual effort on the riskiest vulnerabilities.

Ifinedo, Princely.  2017.  Effects of Organization Insiders' Self-Control and Relevant Knowledge on Participation in Information Systems Security Deviant Behavior: [Best Paper Nominee]. Proceedings of the 2017 ACM SIGMIS Conference on Computers and People Research. :79–86.

Disastrous consequences tend to befall organizations whose employees participate in information systems security deviant behavior (ISSDB), e.g., connecting computers to the Internet through an insecure wireless network or opening emails from unverified senders. Although organizations recognize that ISSDB poses a serious problem, understanding what motivates its occurrence continues to be a key concern. While studies on information technology (IT) misuse abound, research specifically focusing on the drivers of ISSDB remains scant in the literature. Using self-control theory, augmented with knowledge of relevant factors, this study examined the effects of employees' self-control and their knowledge of computers/IT and of information systems (IS) security threats and risks on participation in ISSDB. A research model including the aforementioned factors was proposed and tested using the partial least squares technique. Data was collected from a survey of Canadian professionals. The results show that low self-control and lower levels of knowledge of computers/IT are related to employees' involvement in ISSDB. The data did not provide evidence of a meaningful relationship between employees' knowledge of IS security threats/risks and their desire to participate in ISSDB.

Joy, Joshua, Gerla, Mario.  2017.  Privacy Risks in Vehicle Grids and Autonomous Cars. Proceedings of the 2Nd ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services. :19–23.

Traditionally, the vehicle has been the extension of the manual ambulatory system, docile to the driver's commands. Recent advances in communications, controls and embedded systems have changed this model, paving the way to the Intelligent Vehicle Grid. The car is now a formidable sensor platform, absorbing information from the environment, from other cars (and from the driver) and feeding it to other cars and to infrastructure to assist in safe navigation, pollution control and traffic management. The next step in this evolution is just around the corner: the Internet of Autonomous Vehicles. Like other important instantiations of the Internet of Things (e.g., the smart building), the Internet of Vehicles will not only upload data to the Internet with V2I. It will also use V2V communications, storage, intelligence, and learning capabilities to anticipate customers' intentions and learn from other peers. V2I and V2V are essential to the autonomous vehicle, but they carry the risk of attacks. This paper addresses the privacy attacks to which vehicles are exposed when they upload private data to Internet servers, and outlines efficient methods to preserve privacy.

Duan, Ruian, Bijlani, Ashish, Xu, Meng, Kim, Taesoo, Lee, Wenke.  2017.  Identifying Open-Source License Violation and 1-Day Security Risk at Large Scale. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :2169–2185.

With millions of apps available to users, the mobile app market is rapidly becoming very crowded. Given the intense competition, time to market is a critical factor for the success and profitability of an app. In order to shorten the development cycle, developers often focus their efforts on the unique features and workflows of their apps and rely on third-party Open Source Software (OSS) for the common features. Unfortunately, despite its benefits, careless use of OSS can introduce significant legal and security risks, which, if ignored, can not only jeopardize the security and privacy of end users, but also cause app developers significant financial loss. However, tracking OSS components, their versions, and their interdependencies can be very tedious and error-prone, particularly if an OSS component is imported with little or no knowledge of its provenance. We therefore propose OSSPolice, a scalable and fully automated tool for mobile app developers to quickly analyze their apps and identify free software license violations as well as usage of known vulnerable versions of OSS. OSSPolice introduces a novel hierarchical indexing scheme to achieve both high scalability and accuracy, and is capable of efficiently comparing the similarity of app binaries against a database of hundreds of thousands of OSS sources (billions of lines of code). We populated OSSPolice with 60K C/C++ and 77K Java OSS sources and analyzed 1.6M free Google Play Store apps. Our results show that 1) over 40K apps potentially violate GPL/AGPL licensing terms, and 2) over 100K apps use known vulnerable versions of OSS. Further analysis shows that developers violate GPL/AGPL licensing terms due to a lack of alternatives, and use vulnerable versions of OSS despite efforts from companies like Google to improve app security. OSSPolice is available on GitHub.

Vlachos, Vasileios, Stamatiou, Yannis C., Madhja, Adelina, Nikoletseas, Sotiris.  2017.  Privacy Flag: A Crowdsourcing Platform for Reporting and Managing Privacy and Security Risks. Proceedings of the 21st Pan-Hellenic Conference on Informatics. :27:1–27:4.

Nowadays we are witnessing an unprecedented evolution in how we gather and process information. Technological advances in mobile devices, as well as ubiquitous wireless connectivity, have brought about new information processing paradigms and opportunities for virtually all kinds of scientific and business activity. These new paradigms rest on three pillars: i) numerous powerful portable devices operated by human intelligence, ubiquitous in space and available most of the time; ii) the devices' unlimited environment-sensing capabilities; and iii) fast networks connecting the devices to Internet information processing platforms and services. These pillars implement the concepts of crowdsourcing and collective intelligence, which describe online services based on the massive participation of users and the capabilities of their devices in order to produce results and information that are "more than the sum of the parts". The EU project Privacy Flag relies on exactly these two concepts in order to mobilize roaming citizens to contribute, through crowdsourcing, information about risky applications and dangerous web sites, whose processing may produce emergent threat patterns not evident in the contributed information alone, reflecting a collective intelligence action. Crowdsourcing and collective intelligence, in this context, have numerous advantages, such as raising privacy awareness among people. In this paper we summarize our work in this project and describe the capabilities and functionalities of the Privacy Flag Platform.

2018-05-24
Kul, Gokhan, Upadhyaya, Shambhu, Hughes, Andrew.  2017.  Complexity of Insider Attacks to Databases. Proceedings of the 2017 International Workshop on Managing Insider Security Threats. :25–32.

Insider attacks are one of the most dangerous threats to an organization. Unfortunately, they are very difficult to foresee, detect, and defend against, due to the trust and responsibilities placed on employees. In this paper, we first define the notion of user intent and construct a model for the most common threat scenario used in the literature, which poses a very high risk for sensitive data stored in the organization's database. We show that the complexity of identifying the pseudo-intents of a user is coNP-complete in this domain, and that launching a harvester insider attack within the boundaries of the defined threat model takes linear time, while the targeted threat model yields an NP-complete problem. We also discuss general defense mechanisms against the modeled threats, and show that countering the harvester insider attack model takes quadratic time, while countering the targeted insider attack model can take linear to quadratic time depending on the strategy chosen. Finally, we analyze the adversarial behavior and show that launching an attack with minimum risk is also an NP-complete problem.

2018-05-09
Snyder, Peter, Taylor, Cynthia, Kanich, Chris.  2017.  Most Websites Don't Need to Vibrate: A Cost-Benefit Approach to Improving Browser Security. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :179–194.

Modern web browsers have accrued an incredibly broad set of features since being invented for hypermedia dissemination in 1990. Many of these features benefit users by enabling new types of web applications. However, some features also bring risk to users' privacy and security, whether through implementation error, unexpected composition, or unintended use. Currently there is no general methodology for weighing these costs and benefits. Restricting access to only the features which are necessary for delivering the desired functionality on a given website would allow users to enforce the principle of least privilege on the use of the myriad APIs present in the modern web browser. However, the security benefits gained by increasing restrictions must be balanced against the risk of breaking existing websites. This work addresses this problem with a methodology for weighing the costs and benefits of giving websites default access to each browser feature. We model the benefit as the number of websites that require the feature for some user-visible benefit, and the cost as the number of CVEs, lines of code, and academic attacks related to the functionality. We then apply this methodology to 74 Web API standards implemented in modern browsers. We find that allowing websites default access to large parts of the Web API poses significant security and privacy risks, with little corresponding benefit. We also introduce a configurable browser extension that allows users to selectively restrict access to low-benefit, high-risk features on a per-site basis. We evaluated our extension with two hardened browser configurations, and found that blocking 15 of the 74 standards avoids 52.0% of code paths related to previous CVEs and 50.0% of implementation code identified by our metric, without affecting the functionality of 94.7% of measured websites.
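
A minimal sketch of this kind of cost-benefit scoring; the feature names and numbers below are made up, and the exact combination of CVE counts, code size, and site breakage used in the paper may differ:

    # Benefit = share of sites needing the feature; cost = CVEs plus a
    # code-size proxy. Higher score = better candidate for blocking by default.
    features = [
        {"name": "USB-like API", "sites_needing": 0.001, "cves": 4, "kloc": 30},
        {"name": "Vibration-like API", "sites_needing": 0.002, "cves": 1, "kloc": 3},
        {"name": "DOM core", "sites_needing": 0.99, "cves": 20, "kloc": 200},
    ]

    def block_score(f):
        # High cost proxy divided by benefit (site breakage if blocked).
        return (f["cves"] + f["kloc"] / 10) / max(f["sites_needing"], 1e-6)

    for f in sorted(features, key=block_score, reverse=True):
        print(f"{block_score(f):10.0f}  {f['name']}")
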

2018-02-06
Allodi, Luca, Massacci, Fabio.  2017.  Attack Potential in Impact and Complexity. Proceedings of the 12th International Conference on Availability, Reliability and Security. :32:1–32:6.

Vulnerability exploitation is reportedly one of the main attack vectors against computer systems. Yet most vulnerabilities remain unexploited by attackers. It is therefore of central importance to identify vulnerabilities that carry a high 'potential for attack'. In this paper we rely on Symantec data on real attacks detected in the wild to identify a trade-off between the Impact and Complexity of a vulnerability in terms of the attacks it generates; exploiting this effect, we devise a readily computable estimator of a vulnerability's Attack Potential that reliably estimates the expected volume of attacks against it. We evaluate our estimator's performance against standard patching policies by measuring foiled attacks and the demanded workload, expressed as the number of vulnerabilities that must be patched. We show that our estimator significantly improves over standard patching policies by ruling out low-risk vulnerabilities while maintaining invariant levels of coverage against attacks in the wild. Our estimator can be used as a first aid for vulnerability prioritisation, focusing assessment efforts on high-potential vulnerabilities.
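
An illustrative sketch of prioritising by a simple Impact/Complexity trade-off score; the functional form, weights, and values (including the CVE identifiers) are our assumptions, not the estimator fitted from the Symantec data:

    # Higher impact raises attack potential; higher attack complexity
    # (normalized to 0..1) lowers it.
    vulns = [
        {"cve": "CVE-A", "impact": 10.0, "complexity": 0.9},  # hypothetical IDs
        {"cve": "CVE-B", "impact": 6.4,  "complexity": 0.2},
        {"cve": "CVE-C", "impact": 2.9,  "complexity": 0.1},
    ]

    def attack_potential(v, alpha=1.0, beta=1.0):
        return (v["impact"] ** alpha) * ((1.0 - v["complexity"]) ** beta)

    # Patch in descending order of estimated attack potential; note how the
    # high-impact but high-complexity CVE-A ranks below the easier CVE-B.
    for v in sorted(vulns, key=attack_potential, reverse=True):
        print(v["cve"], round(attack_potential(v), 2))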