Biblio

Filters: Keyword is psychology
2020-10-12
Jeong, Jongkil, Mihelcic, Joanne, Oliver, Gillian, Rudolph, Carsten.  2019.  Towards an Improved Understanding of Human Factors in Cybersecurity. 2019 IEEE 5th International Conference on Collaboration and Internet Computing (CIC). :338–345.
Cybersecurity cannot be addressed by technology alone; the most intractable aspects are in fact sociotechnical. As a result, the 'human factor' has been recognised as being the weakest and most obscure link in creating safe and secure digital environments. This study examines the subjective and often complex nature of human factors in the cybersecurity context through a systematic literature review of 27 articles which span technical, behavioural and social science perspectives. Results from our study suggest that there is still a predominantly technical focus, which excludes the consideration of human factors in cybersecurity. Our literature review suggests that this is due to a lack of consolidation of the attributes pertaining to human factors; the limited application of theoretical frameworks; and a lack of in-depth qualitative studies. To ensure that these gaps are addressed, we propose that future studies take into consideration (a) consolidating the human factors; (b) examining cyber security from an interdisciplinary approach; and (c) conducting additional qualitative research whilst investigating human factors in cybersecurity.
Granatyr, Jones, Gomes, Heitor Murilo, Dias, João Miguel, Paiva, Ana Maria, Nunes, Maria Augusta Silveira Netto, Scalabrin, Edson Emílio, Spak, Fábio.  2019.  Inferring Trust Using Personality Aspects Extracted from Texts. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :3840–3846.
Trust mechanisms are considered the logical protection of software systems, preventing malicious people from taking advantage of or cheating others. Although these concepts are widely used, most applications in this field do not consider affective aspects to aid in trust computation. Researchers in Psychology, Neurology, Anthropology, and Computer Science argue that affective aspects are essential to humans' decision-making processes. So far, there is a lack of understanding about how these aspects impact users' trust, particularly when they are inserted in an evaluation system. In this paper, we propose a trust model that accounts for personality using three personality models: Big Five, Needs, and Values. We tested our approach by extracting personality aspects from texts provided by two online human-fed evaluation systems and correlating them to reputation values. The empirical experiments show statistically significant better results in comparison to non-personality-wise approaches.
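
As a rough, hypothetical illustration of the final step the abstract describes (correlating text-derived personality scores with reputation values), the following Python sketch computes trait-by-trait Pearson correlations; the trait names, scores and reputation values are placeholders, not the paper's data or feature set.

# Illustrative sketch: correlate text-derived personality scores with reputation.
# The trait names and data below are hypothetical, not from the paper.
from scipy.stats import pearsonr

users = {
    # user_id: ({trait: score from text analysis}, reputation value)
    "u1": ({"openness": 0.71, "agreeableness": 0.55, "need_autonomy": 0.30}, 4.6),
    "u2": ({"openness": 0.42, "agreeableness": 0.80, "need_autonomy": 0.22}, 4.9),
    "u3": ({"openness": 0.35, "agreeableness": 0.31, "need_autonomy": 0.64}, 3.2),
    "u4": ({"openness": 0.58, "agreeableness": 0.62, "need_autonomy": 0.41}, 4.1),
}

traits = ["openness", "agreeableness", "need_autonomy"]
reputations = [rep for _, rep in users.values()]

for trait in traits:
    scores = [feats[trait] for feats, _ in users.values()]
    r, p = pearsonr(scores, reputations)          # strength of the trait/reputation link
    print(f"{trait}: r={r:.2f}, p={p:.3f}")
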
2020-10-05
Ong, Desmond, Soh, Harold, Zaki, Jamil, Goodman, Noah.  2019.  Applying Probabilistic Programming to Affective Computing. IEEE Transactions on Affective Computing. :1–1.

Affective Computing is a rapidly growing field spurred by advancements in artificial intelligence, but it is often held back by the inability to translate psychological theories of emotion into tractable computational models. To address this, we propose a probabilistic programming approach to affective computing, which models psychologically grounded theories as generative models of emotion, and implements them as stochastic, executable computer programs. We first review probabilistic approaches that integrate reasoning about emotions with reasoning about other latent mental states (e.g., beliefs, desires) in context. Recently developed probabilistic programming languages offer several key desiderata over previous approaches, such as: (i) flexibility in representing emotions and emotional processes; (ii) modularity and compositionality; (iii) integration with deep learning libraries that facilitate efficient inference and learning from large, naturalistic data; and (iv) ease of adoption. Furthermore, using a probabilistic programming framework allows a standardized platform for theory-building and experimentation: competing theories (e.g., of appraisal or other emotional processes) can be easily compared via modular substitution of code followed by model comparison. To jumpstart adoption, we illustrate our points with executable code that researchers can easily modify for their own models. We end with a discussion of applications and future directions of the probabilistic programming approach.
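
As a loose, hypothetical illustration of the "emotion theory as generative program" idea summarised above, the sketch below writes a toy appraisal model as an executable sampler in plain Python and inverts it with crude rejection sampling; the distributions and the appraisal rule are assumptions for illustration, not the authors' models.

# Toy generative model of appraisal-based emotion, written as an executable
# sampler in plain Python (a stand-in for a probabilistic program). All
# distributions and the appraisal rule are illustrative assumptions.
import random

def emotion_model(expected_reward):
    """Generative story: latent outcome -> appraisal -> noisy emotion report."""
    outcome = random.gauss(expected_reward, 1.0)        # latent outcome
    appraisal = outcome - expected_reward               # better/worse than expected
    valence = max(-1.0, min(1.0, 0.5 * appraisal))      # squashed appraisal
    report = random.gauss(valence, 0.2)                 # noisy self-report
    return outcome, report

def infer_outcome(observed_report, expected_reward, n=20000, tol=0.05):
    """Crude likelihood-free inference: keep samples whose simulated report
    falls near the observed one, then average the retained latent outcomes."""
    kept = []
    for _ in range(n):
        outcome, report = emotion_model(expected_reward)
        if abs(report - observed_report) < tol:
            kept.append(outcome)
    return sum(kept) / len(kept) if kept else None

# Someone expecting a reward of 1.0 reports strongly positive affect (0.8):
print(infer_outcome(observed_report=0.8, expected_reward=1.0))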

2020-09-21
Razin, Yosef, Feigh, Karen.  2019.  Toward Interactional Trust for Humans and Automation: Extending Interdependence. 2019 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computing, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :1348–1355.
Trust in human-automation interaction is increasingly imperative as AI and robots become ubiquitous at home, school, and work. Interdependence theory allows for the identification of one-on-one interactions that require trust by analyzing the structure of the potential outcomes. This paper synthesizes multiple, formerly disparate research approaches by extending Interdependence theory to create a unified framework for outcome-based trust in human-automation interaction. This framework quantitatively contextualizes validated empirical results from social psychology on relationship formation, stability, and betrayal. It also contributes insights into trust-related concepts, such as power and commitment, which help further our understanding of trustworthy system design. This new integrated interactional approach reveals how trust and trustworthiness can elevate machines from merely reliable tools to trusted teammates working hand-in-actuator toward an automated future.
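
A minimal sketch of how an outcome matrix can flag an interaction as trust-requiring, in the spirit of the interdependence analysis described above; the 2x2 payoffs and the specific operationalisation (reliance pays off only if the automation cooperates, and a defecting automation leaves the human worse off than acting alone) are illustrative assumptions.

# Sketch: flag whether a one-shot interaction structurally calls for trust,
# using a 2x2 outcome matrix for the human (trustor). The operationalisation
# below is an assumption, not the paper's formal framework.
human_payoff = {
    # (human action, automation action): payoff to the human
    ("rely",     "cooperate"): 5,
    ("rely",     "defect"):   -3,
    ("self_act", "cooperate"): 2,
    ("self_act", "defect"):    2,
}

def trust_required(payoff):
    gain_from_reliance = payoff[("rely", "cooperate")] > payoff[("self_act", "cooperate")]
    vulnerability      = payoff[("rely", "defect")]    < payoff[("self_act", "defect")]
    return gain_from_reliance and vulnerability

print(trust_required(human_payoff))   # True: reliance pays off only if the automation cooperates
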
2020-06-19
Keshari, Tanya, Palaniswamy, Suja.  2019.  Emotion Recognition Using Feature-level Fusion of Facial Expressions and Body Gestures. 2019 International Conference on Communication and Electronics Systems (ICCES). :1184–1189.

Automatic emotion recognition using computer vision is significant for many real-world applications such as photojournalism, virtual reality, sign language recognition, and Human Robot Interaction (HRI). Psychological research findings advocate that humans depend on the collective visual conduits of face and body to comprehend human emotional behaviour. A plethora of studies has analysed human emotions using facial expressions, EEG signals, speech, and other modalities, but most of this work was based on a single modality. Our objective is to efficiently integrate emotions recognized from facial expressions and the upper body pose of humans using images. Our work on bimodal emotion recognition provides the benefits of the accuracy of both modalities.
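
A minimal sketch of feature-level (early) fusion as described above: face and upper-body feature vectors are concatenated per sample and fed to a single classifier; the feature dimensions, data and classifier choice are synthetic placeholders, not the paper's pipeline.

# Sketch of feature-level (early) fusion: face and upper-body feature vectors
# are concatenated per image and fed to a single classifier. Dimensions and
# data are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 200
face_feats = rng.normal(size=(n_samples, 64))    # e.g. facial landmark / CNN features
body_feats = rng.normal(size=(n_samples, 32))    # e.g. upper-body pose features
labels = rng.integers(0, 6, size=n_samples)      # six emotion classes

fused = np.hstack([face_feats, body_feats])      # feature-level fusion

X_train, X_test, y_train, y_test = train_test_split(fused, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy on synthetic data:", clf.score(X_test, y_test))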

2020-02-17
Yang, Chen, Liu, Tingting, Zuo, Lulu, Hao, Zhiyong.  2019.  An Empirical Study on the Data Security and Privacy Awareness to Use Health Care Wearable Devices. 2019 16th International Conference on Service Systems and Service Management (ICSSSM). :1–6.
Recently, several health care wearable devices which can intervene in health and collect personal health data have emerged in the medical market. Although health care wearable devices promote the integration of multi-layer medical resources and bring new ways of health applications for users, some problems inevitably arise. These are mainly manifested in the safety protection of medical and health data and the protection of users' privacy. From the users' point of view, the irrational use of medical and health data may have negative psychological and physical effects on users. From the government's perspective, such data may be sold by private businesses in the international arena and threaten national security. The most direct precaution against the problem is users' own initiative. For better understanding, a research model is designed around the following five aspects: Security knowledge (SK), Security attitude (SAT), Security practice (SP), Security awareness (SAW) and Security conduct (SC). To verify the model, structural equation analysis, an empirical approach, was applied to examine its validity, and the results showed that SK, SAT, SP, SAW and SC are important factors affecting users' data security and privacy protection awareness.
2020-02-10
Hoey, Jesse, Sheikhbahaee, Zahra, MacKinnon, Neil J..  2019.  Deliberative and Affective Reasoning: a Bayesian Dual-Process Model. 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). :388–394.
The presence of artificial agents in human social networks is growing. From chatbots to robots, human experience in the developed world is moving towards a socio-technical system in which agents can be technological or biological, with increasingly blurred distinctions between the two. Given that emotion is a key element of human interaction, enabling artificial agents with the ability to reason about affect is a key stepping stone towards a future in which technological agents and humans can work together. This paper presents work on building intelligent computational agents that integrate both emotion and cognition. These agents are grounded in the well-established social-psychological Bayesian Affect Control Theory (BayesAct). The core idea of BayesAct is that humans are motivated in their social interactions by affective alignment: they strive for their social experiences to be coherent at a deep, emotional level with their sense of identity and general world views as constructed through culturally shared symbols. This affective alignment creates cohesive bonds between group members, and is instrumental for collaborations to solidify as relational group commitments. BayesAct agents are motivated in their social interactions by a combination of affective alignment and decision theoretic reasoning, trading the two off as a function of the uncertainty or unpredictability of the situation. This paper provides a high-level view of dual process theories and advances BayesAct as a plausible, computationally tractable model based in social-psychological and sociological theory.
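
As a toy illustration of the trade-off described above (affective alignment weighed against decision-theoretic reasoning as a function of unpredictability), the sketch below scores actions with an uncertainty-dependent mixture; the mixing rule and numbers are assumptions, not the BayesAct implementation.

# Toy dual-process trade-off: actions are scored by a mixture of expected
# utility and affective alignment, with the weight on alignment growing as
# the situation becomes less predictable. Illustrative assumption only.
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def choose_action(actions, situation_probs):
    """actions: list of (name, expected_utility, affective_alignment in [0,1])."""
    max_h = math.log(len(situation_probs))
    w = entropy(situation_probs) / max_h          # more uncertainty -> lean on affect
    scored = [(name, (1 - w) * eu + w * align) for name, eu, align in actions]
    return max(scored, key=lambda s: s[1])

actions = [("assist", 0.4, 0.9), ("ignore", 0.9, 0.2)]
print(choose_action(actions, [0.5, 0.5]))    # unpredictable situation: affect dominates, picks "assist"
print(choose_action(actions, [0.95, 0.05]))  # predictable situation: utility dominates, picks "ignore"
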
Schneeberger, Tanja, Scholtes, Mirella, Hilpert, Bernhard, Langer, Markus, Gebhard, Patrick.  2019.  Can Social Agents elicit Shame as Humans do? 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). :164–170.
This paper presents a study that examines whether social agents can elicit the social emotion shame as humans do. For that, we use job interviews, which are highly evaluative situations per se. We vary the interview style (shame-eliciting vs. neutral) and the job interviewer (human vs. social agent). Our dependent variables include observational data regarding the social signals of shame and shame regulation as well as self-assessment questionnaires regarding the felt uneasiness and discomfort in the situation. Our results indicate that social agents can elicit shame to the same extent as humans. This gives insight into the impact of social agents on users and the emotional connection between them.
Barnes, Chloe M., Ekárt, Anikó, Lewis, Peter R..  2019.  Social Action in Socially Situated Agents. 2019 IEEE 13th International Conference on Self-Adaptive and Self-Organizing Systems (SASO). :97–106.
Two systems pursuing their own goals in a shared world can interact in ways that are not so explicit - such that the presence of another system alone can interfere with how one is able to achieve its own goals. Drawing inspiration from human psychology and the theory of social action, we propose the notion of employing social action in socially situated agents as a means of alleviating interference in interacting systems. Here we demonstrate that these specific issues of behavioural and evolutionary instability caused by the unintended consequences of interactions can be addressed with agents capable of a fusion of goal-rationality and traditional action, resulting in a stable society capable of achieving goals during the course of evolution.
2020-01-27
Pascucci, Antonio, Masucci, Vincenzo, Monti, Johanna.  2019.  Computational Stylometry and Machine Learning for Gender and Age Detection in Cyberbullying Texts. 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). :1–6.

The aim of this paper is to show the importance of Computational Stylometry (CS) and Machine Learning (ML) support in detecting an author's gender and age in cyberbullying texts. We developed a cyberbullying detection platform and we report its performance in terms of Precision, Recall and F-Measure for gender and age detection on the cyberbullying texts we collected.
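
A minimal, hypothetical sketch of a stylometry-style pipeline of the kind described: character n-gram features feeding a linear classifier, scored with Precision, Recall and F-Measure; the texts, labels and feature choices are placeholders, not the authors' platform.

# Minimal stylometry-style sketch: character n-gram features plus a linear
# classifier for author gender, scored with precision/recall/F-measure.
# The texts and labels are placeholders, not the paper's corpus or features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

texts = ["ur so dumb lol", "nobody likes you at school", "great job on the test!",
         "stop posting pics loser", "see you at practice", "why are you even here"]
gender = ["m", "f", "f", "m", "f", "m"]          # placeholder labels

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vec.fit_transform(texts)

X_tr, X_te, y_tr, y_te = train_test_split(X, gender, test_size=0.5,
                                          random_state=1, stratify=gender)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p, r, f, _ = precision_recall_fscore_support(y_te, clf.predict(X_te),
                                             average="macro", zero_division=0)
print(f"precision={p:.2f} recall={r:.2f} f-measure={f:.2f}")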

2019-12-09
Tucker, Scot.  2018.  Engineering Trust: A Graph-Based Algorithm for Modeling, Validating, and Evaluating Trust. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :1–9.
Trust is an important topic in today's interconnected world. Breaches of trust in today's systems have had profound effects upon us all, and they are very difficult and costly to fix, especially when caused by flaws in the system's architecture. Trust modeling can expose these types of issues, but modeling trust in complex multi-tiered system architectures can be very difficult. Often experts have differing views of trust and how it applies to systems within their domain. This work presents a graph-based modeling methodology that normalizes the application of trust across disparate system domains, allowing the modeling of complex intersystem trust relationships. An algorithm is proposed that applies graph theory to model, validate and evaluate trust in system architectures. It also provides the means to apply metrics to compare and prioritize the effectiveness of trust management in system and component architectures. The results produced by the algorithm can be used in conjunction with systems engineering processes to ensure both trust and the efficient use of resources.
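
As a hypothetical illustration of graph-based trust evaluation (assumed semantics, not the paper's algorithm), the sketch below models components as nodes, trust relationships as weighted directed edges, and takes the trust of a path as the product of its edge values.

# Illustrative sketch of graph-based trust evaluation: components are nodes,
# trust relationships are weighted directed edges, and trust along a path is
# the product of its edges. Semantics and values are assumptions.
import networkx as nx

g = nx.DiGraph()
g.add_edge("client", "gateway", trust=0.9)
g.add_edge("gateway", "auth_service", trust=0.8)
g.add_edge("client", "legacy_proxy", trust=0.6)
g.add_edge("legacy_proxy", "auth_service", trust=0.5)

def path_trust(graph, path):
    t = 1.0
    for u, v in zip(path, path[1:]):
        t *= graph[u][v]["trust"]
    return t

paths = nx.all_simple_paths(g, "client", "auth_service")
best = max(paths, key=lambda p: path_trust(g, p))
print(best, path_trust(g, best))   # ['client', 'gateway', 'auth_service'] 0.72
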
2019-09-09
Kumar, M., Bhandari, R., Rupani, A., Ansari, J. H..  2018.  Trust-Based Performance Evaluation of Routing Protocol Design with Security and QoS over MANET. 2018 International Conference on Advances in Computing and Communication Engineering (ICACCE). :139-142.

Nowadays, the incorporation of different network functions, such as routing, administration, and security, is basic to the effective operation of a mobile ad hoc network (MANET). To date, researchers have treated the problems of QoS and security separately. However, security and QoS each influence the overall performance of the network negatively when considered in isolation; in fact, each can interfere with the working of the other's algorithms and with the critical and essential services required within the MANET. Our paper outlines two accomplishments: the accomplishment of security and the accomplishment of quality of service. The direction towards achieving these accomplishments is to design and implement a protocol suite for policy-based network administration, together with methodologies for key management and the deployment of IPsec in a MANET.

2019-09-05
Gryzunov, V. V., Bondarenko, I. Y..  2018.  A Social Engineer in Terms of Control Theory. 2018 Third International Conference on Human Factors in Complex Technical Systems and Environments (ERGO). :202-204.

Problem: Today, many methods of influencing personnel in the communication process are available to social engineers and information security specialists, but in practice it is difficult to say which method is appropriate to use and why. Criteria and indicators of effective communication are not formalized. Purpose: to formalize the concept of effective communication, to offer a tool for combining existing methods and means of communication, and to formalize the purpose of communication. Methods: use of the terminal model of a control system for a non-stochastic communication object. Results: two examples demonstrate the possibility of using the terminal model of the communication control system, which allows one to connect tools and methods of communication, justify the requirements for the structure and feedback of communication, and select the necessary communication algorithms depending on the observed response of the communication object. Practical significance: the results of the research can be used in planning and conducting effective communication in the process of information protection, in business, in private relationships and in other areas of human activity.

2019-05-08
Moore, A. P., Cassidy, T. M., Theis, M. C., Bauer, D., Rousseau, D. M., Moore, S. B..  2018.  Balancing Organizational Incentives to Counter Insider Threat. 2018 IEEE Security and Privacy Workshops (SPW). :237–246.

Traditional security practices focus on negative incentives that attempt to force compliance through constraints, monitoring, and punishment. This paper describes a missing dimension of most organizations' insider threat defense-one that explicitly considers positive incentives for attracting individuals to act in the interests of the organization. Positive incentives focus on properties of the organizational context of workforce management practices - including those relating to organizational supportiveness, coworker connectedness, and job engagement. Without due attention to the organizational context in which insider threats occur, insider misbehaviors may simply reoccur as a natural response to counterproductive or dysfunctional management practices. A balanced combination of positive and negative incentives can improve employees' relationships with the organization and provide a means for employees to better cope with personal and professional stressors. An insider threat program that balances organizational incentives can become an advocate for the workforce and a means for improving employee work life - a welcome message to employees who feel threatened by programs focused on discovering insider wrongdoing.

Basu, S., Chua, Y. H. Victoria, Lee, M. Wah, Lim, W. G., Maszczyk, T., Guo, Z., Dauwels, J..  2018.  Towards a data-driven behavioral approach to prediction of insider-threat. 2018 IEEE International Conference on Big Data (Big Data). :4994–5001.

Insider threats pose a challenge to all companies and organizations. Identification of the culprit after an attack is often too late and results in detrimental consequences for the organization. The majority of past research on insider threat has focused on post-hoc personality analysis of known insider threats to identify personality vulnerabilities. It has been proposed that certain personality vulnerabilities place individuals at risk of perpetrating insider threats should the environment and opportunity arise. To that end, this study utilizes a game-based approach to simulate a scenario of intellectual property theft and investigate behavioral and personality differences of individuals who exhibit insider-threat related behavior. Features were extracted from games, text collected through implicit and explicit measures, simultaneous facial expression recordings, and personality variables (HEXACO, Dark Triad and Entitlement Attitudes) calculated from questionnaires. We applied ensemble machine learning algorithms and show that they produce an acceptable balance of precision and recall. Our results showcase the possibility of harnessing personality variables, facial expressions and linguistic features in the modeling and prediction of insider-threat.
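
A minimal sketch of the ensemble step only: a soft-voting ensemble trained on synthetic rows standing in for game-behaviour, linguistic, facial-expression and personality features, reported with precision and recall; the features, data and estimator choices are placeholders, not the study's.

# Sketch of the ensemble step: a voting ensemble over synthetic rows that
# stand in for game-behaviour, linguistic, facial-expression and personality
# features. Feature columns and data are placeholders, not the study's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 12))                       # 12 placeholder features per participant
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 1).astype(int)  # "insider-like" label

ensemble = VotingClassifier([
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
], voting="soft")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))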

2019-02-22
Gaston, J., Narayanan, M., Dozier, G., Cothran, D. L., Arms-Chavez, C., Rossi, M., King, M. C., Xu, J..  2018.  Authorship Attribution vs. Adversarial Authorship from a LIWC and Sentiment Analysis Perspective. 2018 IEEE Symposium Series on Computational Intelligence (SSCI). :920-927.

Although Stylometry has been effectively used for Authorship Attribution, there is a growing number of methods being developed that allow authors to mask their identity [2, 13]. In this paper, we investigate the usage of non-traditional feature sets for Authorship Attribution. By using non-traditional feature sets, one may be able to reveal the identity of adversarial authors who are attempting to evade detection from Authorship Attribution systems that are based on more traditional feature sets. In addition, we demonstrate how GEFeS (Genetic & Evolutionary Feature Selection) can be used to evolve high-performance hybrid feature sets composed of two non-traditional feature sets for Authorship Attribution: LIWC (Linguistic Inquiry & Word Count) and Sentiment Analysis. These hybrids were able to reduce the Adversarial Effectiveness on a test set presented in [2] by approximately 33.4%.
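
As a rough, hypothetical illustration of genetic feature selection in the spirit of GEFeS, the sketch below evolves a binary mask over placeholder LIWC/sentiment-style features to maximise cross-validated accuracy; the operators, fitness function and data are simplified assumptions, not the GEFeS implementation.

# Toy genetic feature-selection loop: a binary mask over hypothetical
# LIWC + sentiment features is evolved to maximise classifier accuracy.
# Operators, fitness and data are simplified assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 20))                 # 20 placeholder LIWC/sentiment features
y = (X[:, 2] - X[:, 5] > 0).astype(int)        # synthetic authorship label

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))          # random initial masks
for generation in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]              # keep the best half
    children = parents.copy()
    cut = rng.integers(1, X.shape[1], size=len(children))
    for i, c in enumerate(cut):                          # one-point crossover with a rolled mate
        children[i, c:] = parents[(i + 1) % len(parents), c:]
    mutation = rng.random(children.shape) < 0.05         # bit-flip mutation
    children[mutation] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best), "fitness:", fitness(best))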

2018-12-10
Hu, Y., Abuzainab, N., Saad, W..  2018.  Dynamic Psychological Game for Adversarial Internet of Battlefield Things Systems. 2018 IEEE International Conference on Communications (ICC). :1–6.

In this paper, a novel game-theoretic framework is introduced to analyze and enhance the security of adversarial Internet of Battlefield Things (IoBT) systems. In particular, a dynamic, psychological network interdiction game is formulated between a soldier and an attacker. In this game, the soldier seeks to find the optimal path to minimize the time needed to reach a destination, while maintaining a desired bit error rate (BER) performance by selectively communicating with certain IoBT devices. The attacker, on the other hand, seeks to find the optimal IoBT devices to attack, so as to maximize the BER of the soldier and hinder the soldier's progress. In this game, the soldier and attacker's first-order and second-order beliefs on each other's behavior are formulated to capture their psychological behavior. Using tools from psychological game theory, the soldier and attacker's intention to harm one another is captured in their utilities, based on their beliefs. A psychological forward induction-based solution is proposed to solve the dynamic game. This approach can find a psychological sequential equilibrium of the game, upon convergence. Simulation results show that, whenever the soldier explicitly intends to frustrate the attacker, the soldier's material payoff is increased by up to 15.6% compared to a traditional dynamic Bayesian game.

2018-02-14
Kulyk, O., Reinheimer, B. M., Gerber, P., Volk, F., Volkamer, M., Mühlhäuser, M..  2017.  Advancing Trust Visualisations for Wider Applicability and User Acceptance. 2017 IEEE Trustcom/BigDataSE/ICESS. :562–569.
There are only a few visualisations targeting the communication of trust statements. Even though there are some advanced and scientifically founded visualisations, such as the opinion triangle, the human trust interface, and T-Viz, the stars interface known from e-commerce platforms is by far the most common one. In this paper, we propose two trust visualisations based on T-Viz, which was recently proposed and successfully evaluated in large user studies. Despite T-Viz being the most promising proposal, its design is not primarily based on findings from human-computer interaction or cognitive psychology. Our visualisations aim to integrate such findings and to potentially improve decision making in terms of correctness and efficiency. A large user study reveals that our proposed visualisations outperform T-Viz in these factors.
2018-02-06
Mehrpouyan, H., Azpiazu, I. M., Pera, M. S..  2017.  Measuring Personality for Automatic Elicitation of Privacy Preferences. 2017 IEEE Symposium on Privacy-Aware Computing (PAC). :84–95.

The increasing complexity and ubiquity of user connectivity, computing environments, information content, and software, mobile, and web applications transfers the responsibility of privacy management to individuals, making it extremely difficult for users to maintain the intelligent and targeted level of privacy protection that they need and desire while simultaneously maintaining their ability to function optimally. Thus, there is a critical need to develop intelligent, automated, and adaptable privacy management systems that can assist users in managing and protecting their sensitive data in the increasingly complex situations and environments that they find themselves in. This work is a first step in exploring the development of such a system, specifically how user personality traits and other characteristics can be used to help automate the determination of user sharing preferences for a variety of user data and situations. The Big-Five personality traits of openness, conscientiousness, extroversion, agreeableness, and neuroticism are examined and used as inputs into several popular machine learning algorithms in order to assess their ability to elicit and predict user privacy preferences. Our results show that the Big-Five personality traits can be used to significantly improve the prediction of user privacy preferences in a number of contexts and situations, and so using machine learning approaches to automate the setting of user privacy preferences has the potential to greatly reduce the burden on users while simultaneously improving the accuracy of their privacy preferences and security.
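
A minimal sketch of the prediction step described above: Big-Five trait scores (plus an assumed context code) fed to an off-the-shelf classifier that predicts a share / don't-share preference; the trait values, context encoding and labels are synthetic placeholders.

# Sketch of the prediction step: Big-Five trait scores plus a context code
# as inputs to a classifier predicting a share / don't-share preference.
# Trait values, context encoding and labels are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
traits = rng.uniform(1, 5, size=(n, 5))          # openness..neuroticism on a 1-5 scale
context = rng.integers(0, 3, size=(n, 1))        # 0=location, 1=health, 2=contacts (assumed)
X = np.hstack([traits, context])
# synthetic rule: agreeable, extraverted users share more, except for health data
y = ((traits[:, 2] + traits[:, 3] > 6) & (context[:, 0] != 1)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())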

2017-12-28
Noureddine, M. A., Marturano, A., Keefe, K., Bashir, M., Sanders, W. H..  2017.  Accounting for the Human User in Predictive Security Models. 2017 IEEE 22nd Pacific Rim International Symposium on Dependable Computing (PRDC). :329–338.

Given the growing sophistication of cyber attacks, designing a perfectly secure system is not generally possible. Quantitative security metrics are thus needed to measure and compare the relative security of proposed security designs and policies. Since the investigation of security breaches has shown a strong impact of human errors, ignoring the human user in computing these metrics can lead to misleading results. Despite this, and although security researchers have long observed the impact of human behavior on system security, few improvements have been made in designing systems that are resilient to the uncertainties in how humans interact with a cyber system. In this work, we develop an approach for including models of user behavior, emanating from the fields of social sciences and psychology, in the modeling of systems intended to be secure. We then illustrate how one of these models, namely general deterrence theory, can be used to study the effectiveness of the password security requirements policy and the frequency of security audits in a typical organization. Finally, we discuss the many challenges that arise when adopting such a modeling approach, and then present our recommendations for future work.
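
As a toy, hypothetical illustration of folding a deterrence-theory user model into a security simulation, the sketch below makes the probability of a password-policy violation fall as perceived sanction certainty (driven by audit frequency) and severity rise; the logistic form and parameters are assumptions, not the authors' model.

# Toy deterrence-theory user model inside a security simulation: the chance a
# user violates the password policy falls as perceived sanction certainty
# (driven by audit frequency) and severity rise. Illustrative assumptions only.
import math
import random

def violation_probability(audit_freq_per_year, sanction_severity, effort_of_compliance):
    certainty = 1 - math.exp(-0.5 * audit_freq_per_year)     # more audits -> more perceived certainty
    deterrence = certainty * sanction_severity
    return 1 / (1 + math.exp(-(effort_of_compliance - 2.0 * deterrence)))

def simulate(users=1000, audits=4, severity=1.5, seed=0):
    random.seed(seed)
    violations = sum(
        random.random() < violation_probability(audits, severity, random.uniform(0, 2))
        for _ in range(users)
    )
    return violations / users

for audits in (0, 2, 6, 12):
    print(f"audits/year={audits:2d} -> violation rate ~{simulate(audits=audits):.2f}")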

2017-12-20
Williams, N., Li, S..  2017.  Simulating Human Detection of Phishing Websites: An Investigation into the Applicability of the ACT-R Cognitive Behaviour Architecture Model. 2017 3rd IEEE International Conference on Cybernetics (CYBCONF). :1–8.

The prevalence and effectiveness of phishing attacks, despite the presence of a vast array of technical defences, are due largely to the fact that attackers are ruthlessly targeting what is often referred to as the weakest link in the system - the human. This paper reports the results of an investigation into how end users behave when faced with phishing websites and how this behaviour exposes them to attack. Specifically, the paper presents a proof of concept computer model for simulating human behaviour with respect to phishing website detection based on the ACT-R cognitive architecture, and draws conclusions as to the applicability of this architecture to human behaviour modelling within a phishing detection scenario. Following the development of a high-level conceptual model of the phishing website detection process, the study draws upon ACT-R to model and simulate the cognitive processes involved in judging the validity of a representative webpage based primarily around the characteristics of the HTTPS padlock security indicator. The study concludes that despite the low-level nature of the architecture and its very basic user interface support, ACT-R possesses strong capabilities which map well onto the phishing use case, and that further work to more fully represent the range of human security knowledge and behaviours in an ACT-R model could lead to improved insights into how best to combine technical and human defences to reduce the risk to end users from phishing attacks.
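
A highly simplified, non-ACT-R sketch of the judgement being modelled: a simulated user checks the padlock indicator and the domain, with noisy recall of security knowledge deciding whether each check fires; the probabilities are illustrative assumptions, not ACT-R parameters.

# Simplified, non-ACT-R sketch of the phishing judgement: a user checks the
# padlock indicator and the domain, with noisy recall of security knowledge
# deciding whether those checks fire at all. Probabilities are assumptions.
import random

def judge_page(has_padlock, domain_looks_right, knowledge_strength, seed=None):
    rng = random.Random(seed)
    trusts = True
    if rng.random() < knowledge_strength:        # recalls "check the padlock"
        trusts = trusts and has_padlock
    if rng.random() < knowledge_strength * 0.6:  # domain check is recalled less often
        trusts = trusts and domain_looks_right
    return trusts

phish = dict(has_padlock=True, domain_looks_right=False)   # HTTPS phishing page
rate = sum(judge_page(knowledge_strength=0.7, seed=i, **phish) for i in range(10000)) / 10000
print(f"fraction of simulated users fooled: {rate:.2f}")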

2017-12-12
Legg, P. A., Buckley, O., Goldsmith, M., Creese, S..  2017.  Automated Insider Threat Detection System Using User and Role-Based Profile Assessment. IEEE Systems Journal. 11:503–512.

Organizations are experiencing an ever-growing concern of how to identify and defend against insider threats. Those who have authorized access to sensitive organizational data are placed in a position of power that could well be abused and could cause significant damage to an organization. This could range from financial theft and intellectual property theft to the destruction of property and business reputation. Traditional intrusion detection systems are neither designed nor capable of identifying those who act maliciously within an organization. In this paper, we describe an automated system that is capable of detecting insider threats within an organization. We define a tree-structure profiling approach that incorporates the details of activities conducted by each user and each job role and then use this to obtain a consistent representation of features that provide a rich description of the user's behavior. Deviation can be assessed based on the amount of variance that each user exhibits across multiple attributes, compared against their peers. We have performed experimentation using ten synthetic data-driven scenarios and found that the system can identify anomalous behavior that may be indicative of a potential threat. We also show how our detection system can be combined with visual analytics tools to support further investigation by an analyst.
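
A minimal sketch of the peer-deviation idea described above: a user's daily activity features are compared against the mean and spread of same-role peers, with the anomaly score taken as the largest absolute z-score; the features and data are synthetic placeholders, not the paper's tree-structured profiles.

# Sketch of peer-deviation scoring: a user's daily activity features are
# compared to the mean and spread of their job-role peers; the anomaly score
# is the largest absolute z-score across attributes. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
# columns: logons, file copies, emails sent, after-hours logons (per day)
peer_activity = rng.poisson(lam=[8, 3, 25, 1], size=(50, 4))      # 50 users in the same role
suspect = np.array([9, 40, 27, 6])                                # one user's day

mu = peer_activity.mean(axis=0)
sigma = peer_activity.std(axis=0) + 1e-9
z = np.abs((suspect - mu) / sigma)
print("per-attribute z-scores:", np.round(z, 1))
print("anomaly score:", z.max())          # driven here by the unusual file-copy count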

Gamachchi, A., Boztas, S..  2017.  Insider Threat Detection Through Attributed Graph Clustering. 2017 IEEE Trustcom/BigDataSE/ICESS. :112–119.

While most organizations continue to invest in traditional network defences, a formidable security challenge has been brewing within their own boundaries. Malicious insiders with privileged access, in the guise of a trusted source, have carried out many attacks causing far-reaching damage to financial stability, national security and brand reputation for both public and private sector organizations. The growing exposure and impact of the whistleblower community and concerns about job security amid changing organizational dynamics have further aggravated this situation. The unpredictability of malicious attackers, as well as the complexity of malicious actions, necessitates the careful analysis of network, system and user parameters correlated with the insider threat problem. This creates a high-dimensional, heterogeneous data analysis problem in isolating suspicious users. This research work proposes an insider threat detection framework, which utilizes attributed graph clustering techniques and an outlier ranking mechanism for enterprise users. Empirical results also confirm the effectiveness of the method by achieving the best area under curve value of 0.7648 for the receiver operating characteristic curve.
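
As a simplified stand-in for the pipeline described (all data synthetic), the sketch below gives each user in a user-resource graph a feature vector mixing a structural property (degree) with node attributes and ranks outliers for review; it substitutes LocalOutlierFactor for the paper's attributed graph clustering and outlier ranking.

# Simplified stand-in: user nodes in a user-resource graph get a feature vector
# mixing structural properties (degree) and attributes (logon hour, data volume),
# and an off-the-shelf outlier ranker orders users for review. This substitutes
# LocalOutlierFactor for attributed graph clustering; all data are synthetic.
import numpy as np
import networkx as nx
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(11)
g = nx.Graph()
users = [f"user{i}" for i in range(30)]
for u in users:
    for r in rng.choice(20, size=rng.integers(2, 6), replace=False):
        g.add_edge(u, f"resource{r}")

features = np.array([
    [g.degree(u),                      # structural: number of resources touched
     rng.normal(9, 1),                 # attribute: typical logon hour
     rng.lognormal(1, 0.5)]            # attribute: daily data volume (GB)
    for u in users
])
features[0] = [15, 3, 40]              # plant one anomalous user for illustration

lof = LocalOutlierFactor(n_neighbors=5)
lof.fit(features)
order = np.argsort(lof.negative_outlier_factor_)   # most anomalous first
print("top users to review:", [users[i] for i in order[:3]])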

Zaytsev, A., Malyuk, A., Miloslavskaya, N..  2017.  Critical Analysis in the Research Area of Insider Threats. 2017 IEEE 5th International Conference on Future Internet of Things and Cloud (FiCloud). :288–296.

A survey of related works on insider information security (IS) threats is presented. Special attention is paid to works that consider insiders' behavioral models, as these are highly relevant to behavioral intrusion detection. Three key research directions are defined: 1) analysis of the problem in general, including the development of taxonomies for insiders, attacks and countermeasures; 2) study of a specific IS threat with forecasting model development; 3) early detection of a potential insider. The models for the second and third directions are analyzed in detail. Within the second direction, works on three IS threats are examined, namely insider espionage, cyber sabotage and unintentional internal IS violation. Discussion and a few directions for future research conclude the paper.

2017-03-07
Kilger, M..  2015.  Integrating Human Behavior Into the Development of Future Cyberterrorism Scenarios. 2015 10th International Conference on Availability, Reliability and Security. :693–700.

The development of future cyber terrorism scenarios is a key component in building a more comprehensive understanding of cyber threats that are likely to emerge in the near- to mid-term future. While developing concepts of likely new, emerging digital technologies is an important part of this process, this article suggests that understanding the psychological and social forces involved in cyber terrorism is also a key component in the analysis, and that the synergy of these two dimensions may produce more accurate and detailed future cyber threat scenarios than either analytical element alone.