Biblio

Filters: Keyword is decision-making
2023-07-21
Eze, Emmanuel O., Keates, Simeon, Pedram, Kamran, Esfahani, Alireza, Odih, Uchenna.  2022.  A Context-Based Decision-Making Trust Scheme for Malicious Detection in Connected and Autonomous Vehicles. 2022 International Conference on Computing, Electronics & Communications Engineering (iCCECE). :31–36.
The fast-evolving Intelligent Transportation Systems (ITS) are crucial in the 21st century, promising answers to the congestion and accidents that affect people worldwide. ITS applications such as Connected and Autonomous Vehicles (CAVs) update and broadcast road incident event messages, which requires significant data to be transmitted between vehicles for decisions to be made in real time. However, broadcasting trusted incident messages such as accident alerts between vehicles poses a challenge for CAVs. Most existing trust solutions evaluate the trustworthiness of received messages using the vehicle's direct-interaction-based reputation and psychological approaches. This paper provides a scheme for improving trust in received incident alert messages for real-time decision-making, detecting malicious alerts between CAVs using both direct and indirect interactions. The paper applies artificial intelligence and statistical data classification for decision-making on the received messages. The model is trained on the US Department of Transportation Safety Pilot Model Deployment (SPMD) dataset. An Autonomous Decision-making Trust Scheme (ADmTS) that incorporates a machine learning algorithm and a local trust manager for decision-making has been developed. The experiments showed that the trained model makes correct predictions, achieving 98% accuracy with a 0.55% standard deviation in predicting false alerts on data containing 25% malicious entries.
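As a rough illustration of the classification step described above, the sketch below trains an off-the-shelf classifier to flag false alerts. The feature names, the synthetic data, and the 25% malicious split are illustrative assumptions standing in for the SPMD dataset; the paper's actual ADmTS pipeline is not reproduced.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(30, 10, n),   # reported speed at the event location (assumed feature)
    rng.uniform(0, 500, n),  # distance between sender and reported event (assumed)
    rng.uniform(0, 1, n),    # sender's direct-interaction trust score (assumed)
    rng.uniform(0, 1, n),    # indirect (recommended) trust score (assumed)
])
# Label roughly 25% of alerts as malicious, mirroring the evaluation split above.
# Labels here are random, so the score only reflects class balance; real SPMD
# features and ground truth would be needed for a meaningful result.
y = (rng.uniform(0, 1, n) < 0.25).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"false-alert detection accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```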
2021-06-01
Xing, Hang, Zhou, Chunjie, Ye, Xinhao, Zhu, Meipan.  2020.  An Edge-Cloud Synergy Integrated Security Decision-Making Method for Industrial Cyber-Physical Systems. 2020 IEEE 9th Data Driven Control and Learning Systems Conference (DDCLS). :989–995.
With the introduction of new technologies such as cloud computing and big data, the security issues of industrial cyber-physical systems (ICPSs) have become more complicated, while much current security research fails to adapt to industrial system upgrades. In this paper, an edge-cloud synergy framework for security decision-making is proposed, which leverages the convenience and advantages brought by cloud computing and edge computing, and can make security decisions from a global perspective. Under this framework, a combination of Bayesian network-based risk assessment and stochastic game model-based security decision-making is proposed to generate an optimal defense strategy that minimizes system losses. The method trains models in the cloud and performs inference at the edge computing nodes to achieve rapid defense strategy generation. Finally, a case study on a hardware-in-the-loop simulation platform demonstrates the feasibility of the approach.
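To make the decision step concrete, here is a minimal sketch assuming a toy single-evidence Bayesian risk estimate feeding an expected-loss choice among defense actions. The probabilities, losses, and action names are invented; the paper's full Bayesian network and stochastic game are not reproduced.
```python
# Prior probability that the ICPS node is compromised, and the likelihood
# of observing an anomalous reading under each hypothesis (all assumed).
p_compromised = 0.05
p_alert_given_compromised = 0.9
p_alert_given_clean = 0.1

# Posterior via Bayes' rule after an alert is observed at the edge node.
evidence = (p_alert_given_compromised * p_compromised
            + p_alert_given_clean * (1 - p_compromised))
posterior = p_alert_given_compromised * p_compromised / evidence

# Expected loss per defense option: (loss if compromised, loss if clean),
# e.g. the downtime cost of isolating a healthy controller.
actions = {
    "do_nothing":      (100.0, 0.0),
    "isolate_segment": (10.0, 8.0),
    "full_shutdown":   (2.0, 30.0),
}

def expected_loss(loss_bad, loss_ok):
    return posterior * loss_bad + (1 - posterior) * loss_ok

best = min(actions, key=lambda a: expected_loss(*actions[a]))
print(f"P(compromised | alert) = {posterior:.3f}, best action: {best}")
```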
2021-06-02
Bychkov, Igor, Feoktistov, Alexander, Gorsky, Sergey, Edelev, Alexei, Sidorov, Ivan, Kostromin, Roman, Fereferov, Evgeniy, Fedorov, Roman.  2020.  Supercomputer Engineering for Supporting Decision-making on Energy Systems Resilience. 2020 IEEE 14th International Conference on Application of Information and Communication Technologies (AICT). :1–6.
We propose a new approach to creating a subject-oriented distributed computing environment. Such an environment is used to support decision-making in solving problems of ensuring energy system resilience. The proposed approach is based on advancing and integrating the following important capabilities in supercomputer engineering: continuous integration, delivery, and deployment of system and applied software; high-performance computing in heterogeneous environments; multi-agent intelligent computation planning and resource allocation; big data processing and geo-information servicing for subject information, including weakly structured data; and decision-making support. This combination of capabilities, and the way they are advanced, is unique to the subject domain under consideration, which concerns the combinatorial study of critical objects in energy systems. Evaluation of decision-making alternatives is carried out by applying combinatorial modeling and multi-criteria selection rules. The Orlando Tools framework is used as the basis for an integrated software environment. It implements a flexible modular approach to the development of scientific applications (distributed applied software packages).
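A hedged sketch of the multi-criteria selection idea: alternatives are scored against weighted criteria and ranked. The criteria names, weights, and scores below are invented for illustration and do not come from the paper.
```python
# Candidate decision-making alternatives for improving resilience, with
# normalized per-criterion scores in [0, 1] (all values hypothetical).
alternatives = {
    "reinforce_line_A": {"cost": 0.3, "resilience_gain": 0.9, "time": 0.4},
    "add_backup_gen":   {"cost": 0.6, "resilience_gain": 0.7, "time": 0.8},
    "reroute_network":  {"cost": 0.8, "resilience_gain": 0.5, "time": 0.9},
}
# "cost" and "time" are penalties, so they carry negative weights.
weights = {"cost": -0.3, "resilience_gain": 0.5, "time": -0.2}

def score(criteria):
    return sum(weights[k] * v for k, v in criteria.items())

ranked = sorted(alternatives, key=lambda a: score(alternatives[a]), reverse=True)
print("ranking:", ranked)
```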
2020-05-04
Jie, Bao, Liu, Jingju, Wang, Yongjie, Zhou, Xuan.  2019.  Digital Ant Mechanism and Its Application in Network Security. 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). :710–714.
Digital ant technology is a new distributed and self-organizing cyberspace defense paradigm. This paper describes the digital ants system's development process, characteristics, system architecture, and mechanisms to illustrate its superiority, and surveys possible applications of digital ants systems. Finally, a summary and future trends of digital ants systems are presented.
2020-04-03
Kozlov, Aleksandr, Noga, Nikolai.  2019.  The Method of Assessing the Level of Compliance of Divisions of the Complex Network for the Corporate Information Security Policy Indicators. 2019 Twelfth International Conference "Management of large-scale system development" (MLSD). :1–5.

A method for assessing the degree of compliance of divisions of a complex distributed corporate information system with a number of information security indicators is offered. Applying the methodology yields a comparative assessment of each division's level of compliance with the corporate information security policy requirements. This assessment may then be used by corporate management for decision-making on measures to minimize the risks posed by potential information security threats.
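As a minimal illustration (not the authors' method), a division's compliance level can be computed as a weighted aggregate over policy indicators and the divisions ranked accordingly; the indicator names, weights, and scores below are hypothetical.
```python
# Weights over security-policy indicators (assumed; must sum to 1).
indicator_weights = {"access_control": 0.4, "patching": 0.35, "logging": 0.25}

# Per-indicator compliance of each division, in [0, 1] (illustrative values).
divisions = {
    "division_A": {"access_control": 0.9, "patching": 0.6, "logging": 0.8},
    "division_B": {"access_control": 0.7, "patching": 0.9, "logging": 0.5},
}

def compliance_level(scores):
    return sum(indicator_weights[i] * scores[i] for i in indicator_weights)

# Comparative assessment: rank divisions from most to least compliant.
for name, scores in sorted(divisions.items(), key=lambda kv: -compliance_level(kv[1])):
    print(f"{name}: {compliance_level(scores):.2f}")
```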

2020-08-24
Gao, Hongbiao, Li, Jianbin, Cheng, Jingde.  2019.  Industrial Control Network Security Analysis and Decision-Making by Reasoning Method Based on Strong Relevant Logic. 2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :289–294.
To improve production efficiency, more industrial control systems are being connected to IT networks and more IT technologies are being applied to industrial control networks; as a result, network security has become an important problem. Security analysis and decision-making for industrial control networks is an effective way to address the problem, since it can predict risks and support decisions before an actual fault of the industrial control network system occurs. This paper proposes a security analysis and decision-making method with forward reasoning based on strong relevant logic for industrial control networks, and presents a case study in security analysis and decision-making for such networks. The result of the case study shows that the proposed method is effective.
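The following sketch shows plain forward chaining over Horn-style rules, a much-simplified stand-in for reasoning in strong relevant logic, whose entailment conditions are stricter. The facts and rules are invented examples for an industrial control network.
```python
# Known facts about the network (hypothetical).
facts = {"plc_firmware_outdated", "plc_reachable_from_it_network"}

# Rules as (premises, conclusion) pairs (hypothetical).
rules = [
    ({"plc_firmware_outdated", "plc_reachable_from_it_network"}, "plc_at_risk"),
    ({"plc_at_risk"}, "recommend_network_segmentation"),
]

# Compute the forward-reasoning closure: keep firing rules whose premises
# all hold until no new conclusion can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```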
2020-11-17
Russell, S., Abdelzaher, T., Suri, N..  2019.  Multi-Domain Effects and the Internet of Battlefield Things. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :724–730.

This paper reviews the definitions and characteristics of military effects, the Internet of Battlefield Things (IoBT), and their impact on decision processes in a Multi-Domain Operating environment (MDO). Aspects of contemporary military decision processes are illustrated and an MDO Effect Loop decision process is introduced. We examine the concept of IoBT effects and their implications in MDO. These implications suggest that, when considering the concept of MDO as a doctrine, the technological advances of IoBTs empower enhancements in decision frameworks and increase the viability of novel operational approaches and options for military effects.

2019-02-25
Hassan, M. H., Mostafa, S. A., Mustapha, A., Wahab, M. H. Abd, Nor, D. Md.  2018.  A Survey of Multi-Agent System Approach in Risk Assessment. 2018 International Symposium on Agent, Multi-Agent Systems and Robotics (ISAMSR). :1–6.
Risk assessment is a foundation of decision-making about future project behaviour or actions. The resulting decisions might entail further analyses to perform risk reduction. Risk is a general phenomenon that takes different forms and types. Static risk and its circumstances do not significantly change over time, while dynamic risk arises out of changes in interrelated circumstances. A Multi-Agent System (MAS) approach has become a popular tool for tackling different problems that relate to risk. The MAS helps in decision-aid processes and in responding to the consequences of risk. This paper surveys some of the existing methods and techniques of risk assessment in different application domains, focusing on the employment of the MAS approach in risk assessment. The outcome of the survey is an illustration of the roles and contributions of the MAS in the Dynamic Risk Assessment (DRA) field.
2018-05-24
Parycek, P., Pereira, G. Viale.  2017.  Drivers of Smart Governance: Towards to Evidence-Based Policy-Making. Proceedings of the 18th Annual International Conference on Digital Government Research. :564–565.

This paper presents the authors' preliminary framework for the drivers of Smart Governance. The research question of this study is: what are the drivers for Smart Governance to achieve evidence-based policy-making? The framework suggests that data governance and collaborative governance are the main drivers for creating a smart governance model. These pillars are supported by a legal framework, normative factors, principles and values, methods, data assets or human resources, and IT infrastructure. These aspects will guide a real-time evaluation process at all levels of the policy cycle, towards the implementation of evidence-based policies.

2018-02-02
Tramèr, F., Atlidakis, V., Geambasu, R., Hsu, D., Hubaux, J. P., Humbert, M., Juels, A., Lin, H..  2017.  FairTest: Discovering Unwarranted Associations in Data-Driven Applications. 2017 IEEE European Symposium on Security and Privacy (EuroS P). :401–416.

In a world where traditional notions of privacy are increasingly challenged by the myriad companies that collect and analyze our data, it is important that decision-making entities are held accountable for unfair treatments arising from irresponsible data usage. Unfortunately, a lack of appropriate methodologies and tools means that even identifying unfair or discriminatory effects can be a challenge in practice. We introduce the unwarranted associations (UA) framework, a principled methodology for the discovery of unfair, discriminatory, or offensive user treatment in data-driven applications. The UA framework unifies and rationalizes a number of prior attempts at formalizing algorithmic fairness. It uniquely combines multiple investigative primitives and fairness metrics with broad applicability, granular exploration of unfair treatment in user subgroups, and incorporation of natural notions of utility that may account for observed disparities. We instantiate the UA framework in FairTest, the first comprehensive tool that helps developers check data-driven applications for unfair user treatment. It enables scalable and statistically rigorous investigation of associations between application outcomes (such as prices or premiums) and sensitive user attributes (such as race or gender). Furthermore, FairTest provides debugging capabilities that let programmers rule out potential confounders for observed unfair effects. We report on use of FairTest to investigate and in some cases address disparate impact, offensive labeling, and uneven rates of algorithmic error in four data-driven applications. As examples, our results reveal subtle biases against older populations in the distribution of error in a predictive health application and offensive racial labeling in an image tagger.
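FairTest's own API is not shown here; the sketch below illustrates only the generic statistical core such a tool builds on, testing whether an application outcome is associated with a sensitive attribute via a chi-squared test on a fabricated contingency table.
```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: sensitive attribute groups; columns: outcome counts
# (e.g., high vs. low price). All counts are fabricated.
table = np.array([
    [420, 80],   # group A
    [350, 150],  # group B
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")
if p_value < 0.05:
    # An association alone is not proof of unfairness; as the abstract notes,
    # confounders and subgroup effects still need to be investigated.
    print("outcome is associated with the sensitive attribute; investigate further")
```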

2018-08-23
Felmlee, D., Lupu, E., McMillan, C., Karafili, E., Bertino, E..  2017.  Decision-making in policy governed human-autonomous systems teams. 2017 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computed, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :1–6.

Policies govern choices in the behavior of systems. They are applied to human behavior as well as to the behavior of autonomous systems, but are defined differently in each case. Generally, humans have the ability to interpret the intent behind policies and to bring about their desired effects, even occasionally violating them when the need arises. In contrast, policies for automated systems fully define the prescribed behavior without ambiguity, conflicts, or omissions. The increasing use of AI techniques and machine learning in autonomous systems such as drones promises to blur these boundaries and allows us to conceive, in a similar way, more flexible policies for the spectrum of human-autonomous system collaborations. In coalition environments this spectrum extends across the boundaries of authority in pursuit of a common coalition goal and covers collaborations between humans and autonomous systems alike. In the social sciences, social exchange theory has been applied successfully to explain human behavior in a variety of contexts. It provides a framework linking expected rewards, costs, satisfaction, and commitment to explain and anticipate the choices that individuals make when confronted with various options. We discuss here how it can be used within coalition environments to explain joint decision-making and to help formulate policies, re-framing the concepts where appropriate. Social exchange theory is particularly attractive within this context as it provides a theory with “measurable” components that can be readily integrated into machine reasoning processes.
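One plausible (assumed) way to operationalize those measurable components in machine reasoning is a simple scoring rule over rewards, costs, satisfaction, and commitment, as sketched below; the formula and values are illustrative, not the authors' model.
```python
# Decision options facing a human-autonomous team, with assumed component
# values in [0, 1] for each social-exchange factor.
options = {
    "follow_policy":   {"reward": 0.8, "cost": 0.3, "satisfaction": 0.7, "commitment": 0.9},
    "override_policy": {"reward": 0.9, "cost": 0.6, "satisfaction": 0.5, "commitment": 0.4},
}

def exchange_value(o):
    # Net outcome (reward minus cost) adjusted by relational factors;
    # the 0.5/0.25/0.25 weighting is an arbitrary illustrative choice.
    return (o["reward"] - o["cost"]) * 0.5 + o["satisfaction"] * 0.25 + o["commitment"] * 0.25

choice = max(options, key=lambda k: exchange_value(options[k]))
print("chosen option:", choice)
```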

2016-07-01
Pearson, Carl J., Welk, Allaire K., Boettcher, William A., Mayer, Roger C., Streck, Sean, Simons-Rudolph, Joseph M., Mayhorn, Christopher B..  2016.  Differences in Trust Between Human and Automated Decision Aids. Proceedings of the Symposium and Bootcamp on the Science of Security. :95–98.

Humans can easily find themselves in high cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that humans often rely on automation or other humans, but not both simultaneously. Expanding on previous work conducted by Lyons and Stokes (2012), the current experiment measures how trust in automated or human decision aids differs along with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As the perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with an increased trust in both decision aids. These findings can be used to inform training programs for operators who may receive information from human and automated sources. Examples of this context include: air traffic control, aviation, and signals intelligence.

2019-09-09
Peterson, E..  2016.  Dagger: Modeling and visualization for mission impact situation awareness. MILCOM 2016 - 2016 IEEE Military Communications Conference. :25–30.

Dagger is a modeling and visualization framework that addresses the challenge of representing knowledge and information for decision-makers, enabling them to better comprehend the operational context of network security data. It allows users to answer critical questions such as “Given that I care about mission X, is there any reason I should be worried about what is going on in cyberspace?” or “If this system fails, will I still be able to accomplish my mission?”.
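This is not Dagger's implementation, but a small sketch of the underlying idea: propagating failures through a mission-to-asset dependency graph to answer the second question above. All node names are hypothetical.
```python
# Each node lists the assets it depends on (invented example).
dependencies = {
    "mission_X": ["comms_relay", "targeting_system"],
    "targeting_system": ["gps_feed", "sensor_net"],
    "comms_relay": [],
    "gps_feed": [],
    "sensor_net": [],
}

def is_operational(node, failed, deps):
    """A node is operational if it has not failed and all its dependencies are operational."""
    if node in failed:
        return False
    return all(is_operational(d, failed, deps) for d in deps.get(node, []))

print(is_operational("mission_X", {"gps_feed"}, dependencies))    # False
print(is_operational("mission_X", {"spare_unit"}, dependencies))  # True
```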

2016-12-05
Hibshi, Hanan, Breaux, Travis, Riaz, Maria, Williams, Laurie.  2015.  Discovering Decision-Making Patterns for Security Novices and Experts.

Security analysis requires some degree of knowledge to align threats to vulnerabilities in information technology. Despite the abundance of security requirements, the evidence suggests that security experts are not applying these checklists. Instead, they default to their background knowledge to identify security vulnerabilities. To better understand the different effects of security checklists, analysis, and expertise, we conducted a series of interviews to capture and encode the decision-making process of security experts and novices during three security requirements analysis exercises. Participants were asked to analyze three kinds of artifacts: source code, data flow diagrams, and network diagrams, for vulnerabilities, and then to apply a requirements checklist to demonstrate their ability to mitigate vulnerabilities. We framed our study using Situation Awareness theory to elicit responses that were analyzed using coding theory and grounded analysis. Our results include decision-making patterns that characterize how analysts perceive, comprehend, and project future threats, and how these patterns relate to selecting security mitigations. Based on this analysis, we discovered new theory to measure how security experts and novices apply attack models and how structured and unstructured analysis enables increasing security requirements coverage. We discuss how our method could be adapted and applied to improve training and education instruments for security analysts.

2017-03-07
Dehghanniri, H., Letier, E., Borrion, H..  2015.  Improving security decision under uncertainty: A multidisciplinary approach. 2015 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA). :1–7.

Security decision-making is a critical task in addressing security threats that affect a system or process. It often involves selecting a suitable resolution action to tackle an identified security risk. To support this selection process, decision-makers should be able to evaluate and compare the available decision options. This article introduces a modelling language that can be used to represent the effects of resolution actions on stakeholders' goals, the crime process, and the attacker. To this end, we develop a multidisciplinary framework that combines existing knowledge from the fields of software engineering, crime science, risk assessment, and quantitative decision analysis. The framework is illustrated through an application to a case of identity theft.
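A minimal quantitative-decision-analysis sketch in the spirit of the framework: each resolution action is scored by its assumed effects on stakeholder goals, the crime process, and the attacker. The actions, effect values, and weights below are invented for the identity-theft setting.
```python
# Candidate resolution actions and their modelled effects, in [0, 1]
# (all values hypothetical, not from the paper).
actions = {
    "two_factor_auth": {"goal_support": 0.8, "crime_disruption": 0.7, "attacker_cost": 0.6},
    "credit_monitor":  {"goal_support": 0.5, "crime_disruption": 0.2, "attacker_cost": 0.1},
}
# Relative importance of each effect dimension to the decision-maker (assumed).
weights = {"goal_support": 0.4, "crime_disruption": 0.4, "attacker_cost": 0.2}

def action_value(effects):
    return sum(weights[k] * effects[k] for k in weights)

best = max(actions, key=lambda a: action_value(actions[a]))
print("preferred resolution action:", best)
```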

2016-12-05
Hibshi, Hanan, Breaux, Travis, Riaz, Maria, Williams, Laurie.  2014.  A Framework to Measure Experts' Decision Making in Security Requirements Analysis. 2014 IEEE 1st International Workshop on Evolving Security and Privacy Requirements Engineering (ESPRE).

Research shows that commonly accepted security requirements are not generally applied in practice. Instead of relying on requirements checklists, security experts rely on their expertise and background knowledge to identify security vulnerabilities. To understand the gap between available checklists and practice, we conducted a series of interviews to encode the decision-making process of security experts and novices during security requirements analysis. Participants were asked to analyze two types of artifacts, source code and network diagrams, for vulnerabilities and to apply a requirements checklist to mitigate some of those vulnerabilities. We framed our study using Situation Awareness, a cognitive theory from psychology, to elicit responses that we later analyzed using coding theory and grounded analysis. We report our preliminary results of analyzing two interviews, which reveal possible decision-making patterns that could characterize how analysts perceive, comprehend, and project future threats, leading them to decide upon requirements and their specifications, in addition to how experts use assumptions to overcome ambiguity in specifications. Our goal is to build a model that researchers can use to evaluate their security requirements methods against how experts transition through different situation awareness levels in their decision-making process.

2015-04-30
Riveiro, M., Lebram, M., Warston, H..  2014.  On visualizing threat evaluation configuration processes: A design proposal. 2014 17th International Conference on Information Fusion (FUSION). :1–8.

Threat evaluation is concerned with estimating the intent, capability, and opportunity of detected objects in relation to our own assets in an area of interest. Inferring whether, and to what degree, a target is threatening is far from a trivial task. Expert operators normally have at their disposal various support systems that analyze the incoming data and provide recommendations for action. Since the ultimate responsibility lies with the operators, it is crucial that they trust and know how to configure and use these systems, and have a good understanding of their inner workings, strengths, and limitations. To limit the negative effects of inadequate cooperation between operators and their support systems, this paper presents a design proposal that aims at making the threat evaluation process more transparent. We focus on the initialization, configuration, and preparation phases of the threat evaluation process, supporting the user in analyzing the behavior of the system with respect to the relevant parameters involved in the threat estimations. To do so, we follow a known design process model and implement our suggestions in a proof-of-concept prototype that we evaluate with military expert system designers.
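As an assumed illustration (not the paper's model), a transparent threat-evaluation rule might expose its weighting of intent, capability, and opportunity as explicit operator-configurable parameters, which is the kind of visibility the design proposal argues for:
```python
def threat_score(intent, capability, opportunity,
                 w_intent=0.4, w_capability=0.3, w_opportunity=0.3):
    """All inputs in [0, 1]; returns a weighted threat estimate in [0, 1].
    Weights are explicit so an operator can inspect and adjust them."""
    return w_intent * intent + w_capability * capability + w_opportunity * opportunity

# The operator can trace exactly how each parameter drives the estimate:
print(threat_score(intent=0.9, capability=0.7, opportunity=0.4))  # 0.69
```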

2015-05-06
Xia, Hui, Jia, Zhiping, Sha, E. H.-M..  2014.  Research of trust model based on fuzzy theory in mobile ad hoc networks. IET Information Security. 8:88–103.

The performance of ad hoc networks depends on the cooperative and trusting nature of the distributed nodes. To enhance security in ad hoc networks, it is important to evaluate the trustworthiness of other nodes without central authorities. An information-theoretic framework is presented to quantitatively measure trust and build a novel trust model (FAPtrust) with multiple trust decision factors. These decision factors are incorporated to reflect the complexity and uncertainty of trust relationships from various angles. The weights of these factors are set using fuzzy analytic hierarchy process theory based on the entropy weight method, which gives the model better rationality. Moreover, a fuzzy-logic-rule prediction mechanism is adopted to update a node's trust for future decision-making. As an application of this model, a novel reactive trust-based multicast routing protocol is proposed. This new trusted protocol provides a flexible and feasible approach to routing decision-making, taking into account both the trust constraint and malicious node detection in multi-agent systems. Comprehensive experiments have been conducted to evaluate the efficiency of the trust model and the multicast trust enhancement in improving network interaction quality, dynamic trust adaptability, malicious node identification, attack resistance, and overall system security.
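The entropy weight method mentioned above can be sketched directly; the factor matrix below (rows are observed nodes, columns are trust decision factors) is synthetic, and FAPtrust's fuzzy-AHP layer is not reproduced.
```python
import numpy as np

# Observed values of three trust decision factors across three nodes
# (illustrative data only).
X = np.array([
    [0.8, 0.6, 0.9],
    [0.4, 0.7, 0.5],
    [0.9, 0.2, 0.6],
])

P = X / X.sum(axis=0)                 # normalize each factor column
k = 1.0 / np.log(X.shape[0])
entropy = -k * (P * np.log(P + 1e-12)).sum(axis=0)   # per-factor entropy
# Factors with lower entropy (more discriminating) receive higher weight.
weights = (1 - entropy) / (1 - entropy).sum()
print("factor weights:", np.round(weights, 3))
```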