Biblio
This paper proposes a context-aware, graph-based approach for identifying anomalous user activities via user profile analysis, which obtains a group of users maximally similar among themselves as well as to the query during test time. The main challenges for the anomaly detection task are: (1) the rare occurrence of anomalies, which makes exhaustive identification with a reasonable false-alarm rate difficult, and (2) continuously evolving, context-dependent anomaly types, which make it difficult to synthesize the activities a priori. Our proposed query-adaptive, graph-based optimization approach, solvable using a maximum-flow algorithm, is designed to fully utilize both the mutual similarities among the user models and their respective similarities to the query in order to shortlist the user profiles for a more reliable aggregated detection. Each user activity is represented using inputs from several multi-modal resources, which helps to localize anomalies in time-dependent data efficiently. Experiments on public datasets of insider threats and gesture recognition show impressive results.
An important topic in cybersecurity is validating Active Indicators (AI), which are stimuli that can be implemented in systems to trigger responses from individuals who might or might not be Insider Threats (ITs). The way in which a person responds to the AI is validated as a means of distinguishing a potential threat from a non-threat. In order to execute this validation process, it is important to create a paradigm that allows manipulation of AIs for measuring response. The scenarios are posed in a manner that requires participants to be situationally aware that they are being monitored and to act deceptively. In particular, manipulations in the environment should produce no differences between conditions in immersion and ease of use, but the narrative should be the driving force behind non-deceptive and IT responses. The success of the narrative and the simulation environment in inducing such behaviors is determined by immersion, usability, and stress-response questionnaires, as well as performance. Initial results on the feasibility of using a narrative reliant upon situational awareness of monitoring and evasion are discussed.
This publication presents techniques for insider threats and cryptographic protocols in secure processes dedicated to the information management of strategic data splitting. Strategic data splitting is dedicated to enterprise management processes as well as to methods of securely storing and managing this type of data. Because strategic data are usually not sufficiently secure and resistant to unauthorized leakage, we propose a new protocol that allows data to be protected in different management structures. The presented data splitting techniques concern cryptographic information splitting algorithms as well as data sharing algorithms that make use of cognitive data analysis techniques. The insider threat techniques concern data reconstruction methods and cognitive data analysis techniques. Systems for semantic analysis and secure information management are used to conceal strategic information about the condition of the enterprise. The new approach, being based on cognitive systems, guarantees these security features and makes the management processes more efficient.
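As a concrete illustration of the information-splitting idea only (the paper's specific cognitive algorithms are not detailed in the abstract), the following hedged sketch splits a strategic record into n XOR shares so that every share is required for reconstruction; the function names and example data are placeholders.

```python
import secrets

def split_secret(data: bytes, n_shares: int) -> list[bytes]:
    """Split data into n XOR shares; every share is required for reconstruction."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n_shares - 1)]
    last = bytearray(data)
    for share in shares:
        for i, b in enumerate(share):
            last[i] ^= b
    return shares + [bytes(last)]

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original data."""
    result = bytearray(len(shares[0]))
    for share in shares:
        for i, b in enumerate(share):
            result[i] ^= b
    return bytes(result)

# Example: split a strategic record among three managers.
record = b"strategic enterprise record"
parts = split_secret(record, 3)
assert reconstruct(parts) == record
```

Any single share (or any subset short of all of them) reveals nothing about the record, which is one simple way data can be distributed across a management structure.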
Insider threats can cause immense damage to organizations of different types, including government, corporate, and non-profit organizations. Being an insider, however, does not necessarily equate to being a threat. Effectively identifying valid threats, and assessing the type of threat an insider presents, remain difficult challenges. In this work, we propose a novel breakdown of eight insider threat types, identified by using three insider traits: predictability, susceptibility, and awareness. In addition to presenting this framework for insider threat types, we implement a computational model to demonstrate the viability of our framework with synthetic scenarios devised after reviewing real world insider threat case studies. The results yield useful insights into how further investigation might proceed to reveal how best to gauge predictability, susceptibility, and awareness, and precisely how they relate to the eight insider types.
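A minimal sketch of how eight types arise from three binary traits; the type labels below are illustrative placeholders, not the paper's names.

```python
from itertools import product

TRAITS = ("predictability", "susceptibility", "awareness")

# Enumerate the 2^3 = 8 high/low trait combinations that define the eight
# insider types; the labels here are placeholders.
insider_types = {
    combo: f"Type {i + 1}"
    for i, combo in enumerate(product((False, True), repeat=len(TRAITS)))
}

def classify(predictable: bool, susceptible: bool, aware: bool) -> str:
    """Map an insider's trait assessment to one of the eight types."""
    return insider_types[(predictable, susceptible, aware)]

print(classify(predictable=True, susceptible=False, aware=True))
```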
Organizations are experiencing an ever-growing concern of how to identify and defend against insider threats. Those who have authorized access to sensitive organizational data are placed in a position of power that could well be abused and could cause significant damage to an organization. This could range from financial theft and intellectual property theft to the destruction of property and business reputation. Traditional intrusion detection systems are neither designed nor capable of identifying those who act maliciously within an organization. In this paper, we describe an automated system that is capable of detecting insider threats within an organization. We define a tree-structure profiling approach that incorporates the details of activities conducted by each user and each job role and then use this to obtain a consistent representation of features that provide a rich description of the user's behavior. Deviation can be assessed based on the amount of variance that each user exhibits across multiple attributes, compared against their peers. We have performed experimentation using ten synthetic data-driven scenarios and found that the system can identify anomalous behavior that may be indicative of a potential threat. We also show how our detection system can be combined with visual analytics tools to support further investigation by an analyst.
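The abstract does not give the exact deviation measure, so the sketch below assumes a simple per-attribute z-score of a user's feature vector against peers in the same job role; the attribute choices are illustrative.

```python
import numpy as np

def peer_deviation(user_features: np.ndarray, peer_features: np.ndarray) -> float:
    """Score how far a user's attribute vector deviates from role peers.

    user_features: shape (n_attributes,)
    peer_features: shape (n_peers, n_attributes)
    Returns the mean absolute z-score across attributes.
    """
    mu = peer_features.mean(axis=0)
    sigma = peer_features.std(axis=0) + 1e-9   # avoid division by zero
    z = (user_features - mu) / sigma
    return float(np.abs(z).mean())

# Attributes could be e.g. daily logon count, USB events, after-hours emails.
peers = np.random.poisson(lam=[20, 1, 3], size=(50, 3))
suspect = np.array([22, 14, 9])                # unusually many USB events
print(peer_deviation(suspect, peers))
```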
Existing access control mechanisms are based on the concept of identity enrolment and recognition and assume that a recognized identity is a synonym for ethical actions, yet statistics over the years show that the most severe security breaches are the result of trusted, identified, and legitimate users who turned into malicious insiders. Insider threat damages vary from intellectual property loss and fraud to information technology sabotage. As insider threat incidents evolve, there is demand for a non-identity-based authentication measure that rejects access to authorized individuals who have malicious intent. In this paper, we study the possibility of using the user's intention as an access control measure, using involuntary electroencephalogram reactions toward visual stimuli. We propose intent-based access control (IBAC), which detects the intention of access based on the existence of knowledge about that intention. IBAC takes advantage of the robustness of the concealed information test to assess access risk. We use the intent and intent motivation level to compute the access risk. Based on the calculated risk and the accepted risk threshold, the system decides whether to grant or deny access requests. We assessed the model through experiments with 30 participants, which demonstrated the robustness of the proposed solution.
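A hedged sketch of the decision step only; the EEG-based concealed information test that would produce the intent score is outside its scope, and the combination formula, parameter names, and threshold are assumptions rather than the paper's model.

```python
def access_decision(intent_score: float, motivation_level: float,
                    risk_threshold: float = 0.5) -> bool:
    """Grant access unless the computed risk exceeds the accepted threshold.

    intent_score: probability-like output of the concealed information test (0..1)
    motivation_level: normalized intent motivation level (0..1)
    """
    risk = intent_score * (0.5 + 0.5 * motivation_level)   # assumed combination
    return risk <= risk_threshold                          # True => grant

print(access_decision(intent_score=0.9, motivation_level=0.8))  # False: deny
```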
As increasingly more enterprises deploy cloud file-sharing services, a new channel is added for potential insider threats to company data and intellectual property. In this paper, we introduce a two-stage machine learning system to detect anomalies. In the first stage, we project the access logs of cloud file-sharing services onto relationship graphs and use three complementary graph-based unsupervised learning methods, OddBall, PageRank, and Local Outlier Factor (LOF), to generate outlier indicators. In the second stage, we ensemble the outlier indicators and apply the discrete wavelet transform (DWT) with the Haar wavelet function, proposing a procedure that uses the wavelet coefficients to identify outliers indicative of insider threat. The proposed system has been deployed in a real business environment and has demonstrated its effectiveness through selected case studies.
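A minimal sketch of the second stage, assuming the three first-stage indicators are already available as aligned per-day score series and using PyWavelets for the Haar DWT; the simple thresholding rule is an assumption, not the paper's exact procedure.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_outliers(oddball, pagerank, lof, k: float = 3.0):
    """Ensemble three per-day outlier indicators and flag abrupt changes.

    Each argument is a 1-D array of daily outlier scores for one user.
    Returns the day indices whose Haar detail coefficients are unusually large.
    """
    ensemble = np.mean([oddball, pagerank, lof], axis=0)
    cA, cD = pywt.dwt(ensemble, "haar")            # single-level Haar DWT
    mag = np.abs(cD)
    flagged = np.where(mag > mag.mean() + k * mag.std())[0]
    # Detail coefficient i covers samples 2i and 2i+1 of the ensemble series.
    return sorted({2 * i for i in flagged} | {2 * i + 1 for i in flagged})

scores = np.random.rand(3, 64)
scores[:, 40] += 2.0                               # injected anomaly
print(dwt_outliers(*scores))
```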
While most organizations continue to invest in traditional network defences, a formidable security challenge has been brewing within their own boundaries. Malicious insiders with privileged access, in the guise of a trusted source, have carried out many attacks causing far-reaching damage to financial stability, national security, and brand reputation for both public and private sector organizations. The growing exposure and impact of the whistleblower community and concerns about job security amid changing organizational dynamics have further aggravated this situation. The unpredictability of malicious attackers, as well as the complexity of malicious actions, necessitates the careful analysis of network, system, and user parameters correlated with the insider threat problem. This creates a high-dimensional, heterogeneous data analysis problem for isolating suspicious users. This research work proposes an insider threat detection framework which utilizes attributed graph clustering techniques and an outlier ranking mechanism for enterprise users. Empirical results confirm the effectiveness of the method, achieving a best area-under-curve value of 0.7648 for the receiver operating characteristic curve.
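The paper's attributed graph clustering is not reproduced here; the sketch below only illustrates the final ranking-and-evaluation step on per-user feature vectors, using scikit-learn's LOF as a stand-in outlier ranking mechanism and the same ROC AUC metric. The data and labels are synthetic.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # per-user behavioral features (synthetic)
labels = np.zeros(500, dtype=int)
labels[:5] = 1                           # 5 known insiders (ground truth)
X[:5] += 3.0                             # make them behave anomalously

lof = LocalOutlierFactor(n_neighbors=20)
lof.fit(X)
# Higher score = more anomalous (negate sklearn's negative_outlier_factor_).
scores = -lof.negative_outlier_factor_

print("ROC AUC:", roc_auc_score(labels, scores))
```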
A survey of related work on insider information security (IS) threats is presented. Special attention is paid to works that consider insiders' behavioral models, as these are highly relevant to behavioral intrusion detection. Three key research directions are defined: 1) analysis of the problem in general, including the development of taxonomies for insiders, attacks, and countermeasures; 2) study of a specific IS threat with the development of a forecasting model; 3) early detection of a potential insider. The models for the second and third directions are analyzed in detail. Within the second group, works on three IS threats are examined, namely insider espionage, cyber sabotage, and unintentional internal IS violations. Discussion and a few directions for future research conclude the paper.
Insider threat is a significant security risk for information systems, and detection of insider threats is a major concern for information system operators. Existing work has mainly focused on single-pattern analysis of single-domain user behavior, which is not suitable for analyzing user behavior patterns in multi-domain scenarios. Moreover, fusing irrelevant multi-domain features may hide the existence of anomalies, and previous feature learning methods lose a relatively large proportion of information during feature extraction. Therefore, this paper proposes a hybrid model based on the deep belief network (DBN) to detect insider threats. First, an unsupervised DBN is used to extract hidden features from the multi-domain features derived from the audit logs. Second, a One-Class SVM (OCSVM) is trained on the features learned by the DBN. The experimental results on the CERT dataset demonstrate that the DBN can be used to identify insider threat events, and the model provides a new approach to feature processing for insider threat detection.
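A hedged sketch of this pipeline using scikit-learn, with two stacked BernoulliRBM layers standing in for the unsupervised DBN (scikit-learn has no full DBN; the paper's architecture, hyperparameters, and feature set are not given here, so all values below are placeholders).

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import BernoulliRBM
from sklearn.svm import OneClassSVM

# X: multi-domain features per user-day extracted from audit logs
# (logon, device, file, email, HTTP counts, etc.); values are placeholders.
X = np.random.rand(1000, 30)
X = MinMaxScaler().fit_transform(X)           # RBMs expect values in [0, 1]

# "DBN" as two RBMs trained layer by layer (a stand-in, see lead-in above).
rbm1 = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
rbm2 = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)
H = rbm2.fit_transform(rbm1.fit_transform(X))

# One-Class SVM trained on the learned hidden features; -1 marks anomalies.
ocsvm = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(H)
flags = ocsvm.predict(H)
print("flagged user-days:", np.where(flags == -1)[0][:10])
```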
In the presence of known and unknown vulnerabilities in the code and flow control of programs, virtual machine-like isolation and sandboxing, which confine the maliciousness of a process by monitoring and controlling the behaviour of the untrusted application, are an effective strategy. A confined malicious application cannot affect system resources or other applications running on the same operating system. However, present sandboxing techniques have drawbacks ranging from scope to methodology. Some of the proposed techniques restrict only specific aspects of execution, e.g. system calls and file system access. Similarly, techniques that truly isolate the application by providing a separate execution environment either require kernel modifications or a full-blown operating system; moreover, they do not provide isolation from top to bottom but only virtualize operating system services. In this paper, we propose a design that confines a native Linux process with virtual machine-equivalent isolation by using hardware virtualization extensions, with nominal initialization and acceptable execution overheads. We implemented a prototype, called Process Virtual Machine, that transitions a native process into a virtual machine, provides a minimal execution environment, and intercepts and virtualizes system calls to execute them on the host kernel. Experimental results show the effectiveness of the proposed technique.
The Internet of Things (IoT) is gaining research attention as one of the important fields that will vastly affect our daily life. This revolutionary technology is growing and evolving around us day by day. It offers benefits such as automatic processing, improved logistics, and device communication that help us improve our social life, health, living standards, and infrastructure. However, due to their simple architecture and presence across a wide variety of fields, IoT devices pose serious security concerns, and their low-end architecture gives rise to many security issues in IoT network devices. In this paper, we address this security issue by proposing a JavaScript sandbox as a method to execute IoT programs. Using this sandbox, we also implement a strategy to control the sandbox's execution while the program runs in it.
In a world where the number of highly skilled actors involved in cyber-attacks is constantly increasing and where the associated underground market continues to expand, organizations should adapt their defence strategy and consequently improve their security incident management. In this paper, we give an overview of the Advanced Persistent Threat (APT) attack life cycle as defined by security experts. We introduce our own compiled life cycle model, guided by attackers' objectives instead of their actions. Challenges and opportunities related to the specific camouflage actions performed at the end of each APT phase of the model are highlighted. We also give an overview of new APT protection technologies and discuss their effectiveness at each of the life cycle phases.
Phishing is a form of online identity theft that deceives unaware users into disclosing their confidential information. While significant effort has been devoted to the mitigation of phishing attacks, much less is known about the entire life-cycle of these attacks in the wild, which constitutes, however, a main step toward devising comprehensive anti-phishing techniques. In this paper, we present a novel approach to sandbox live phishing kits that completely protects the privacy of victims. By using this technique, we perform a comprehensive real-world assessment of phishing attacks, their mechanisms, and the behavior of the criminals, their victims, and the security community involved in the process – based on data collected over a period of five months. Our infrastructure allowed us to draw the first comprehensive picture of a phishing attack, from the time in which the attacker installs and tests the phishing pages on a compromised host, until the last interaction with real victims and with security researchers. Our study presents accurate measurements of the duration and effectiveness of this popular threat, and discusses many new and interesting aspects we observed by monitoring hundreds of phishing campaigns.
Separation of network control from devices in Software Defined Networking (SDN) allows for centralized implementation and management of security policies in a cloud computing environment. The ease of programmability also makes SDN a great platform for implementing various initiatives that involve application deployment, dynamic topology changes, and decentralized network management in a multi-tenant data center environment. Dynamic changes of network topology, or host reconfiguration, in such networks might require corresponding changes to the flow rules in the SDN-based cloud environment. Verifying that these new flow policies adhere to the organizational security policies and ensuring a conflict-free environment is especially challenging. In this paper, we extend the work on rule conflicts from a traditional environment to an SDN environment, introducing a new classification to describe cross-layer conflicts. Our framework ensures that in any SDN-based cloud, flow rules do not conflict at any layer, thereby ensuring that changes to the environment do not lead to unintended consequences. We demonstrate the correctness, feasibility, and scalability of our framework through a proof-of-concept prototype.
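A minimal sketch of the kind of pairwise check such a framework performs, using simplified flow rules (a destination prefix, a port wildcard, and an action) and the classic overlapping-match-with-contradictory-actions notion of conflict; this representation is an assumption, not the paper's formal model.

```python
from ipaddress import ip_network

def overlaps(rule_a: dict, rule_b: dict) -> bool:
    """Two simplified flow rules overlap if their match spaces intersect."""
    nets_overlap = ip_network(rule_a["dst"]).overlaps(ip_network(rule_b["dst"]))
    ports_overlap = (rule_a["port"] == "*" or rule_b["port"] == "*"
                     or rule_a["port"] == rule_b["port"])
    return nets_overlap and ports_overlap

def conflicts(rule_a: dict, rule_b: dict) -> bool:
    """Overlapping matches with contradictory actions indicate a conflict."""
    return overlaps(rule_a, rule_b) and rule_a["action"] != rule_b["action"]

tenant_rule = {"dst": "10.0.0.0/24", "port": 443, "action": "allow"}
org_policy  = {"dst": "10.0.0.0/16", "port": "*", "action": "deny"}
print(conflicts(tenant_rule, org_policy))   # True: a cross-layer conflict
```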
Legacy work on correcting firewall anomalies operates with the premise of creating totally disjunctive rules. Unfortunately, such solutions are impractical from an implementation point of view, as they lead to an explosion in the number of firewall rules. In related previous work, we proposed a new approach for performing assisted corrective actions which, in contrast to the state-of-the-art family of radically disjunctive approaches, does not lead to a prohibitive increase in the configuration size. In this sense, we allow relaxation in the correction process by clearly distinguishing between constructive anomalies that can be tolerated and destructive anomalies that should be systematically fixed. However, a main disadvantage of that approach was its dependency on guided input from the administrator, which in turn introduces a new risk of human error. To circumvent this disadvantage, we present in this paper a Firewall Policy Query Engine (FPQE) that renders the whole process of anomaly resolution fully automated and requires no human intervention. Instead of prompting the administrator to insert corrective actions in the proper order, FPQE executes those queries against a high-level firewall policy. We have implemented FPQE, and the first results of integrating it with our legacy anomaly resolver are promising.
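To make the tolerated-versus-fixed distinction concrete, the hedged sketch below classifies the relation between an earlier and a later firewall rule over a single destination-prefix field: full coverage with the same action is treated as a tolerable redundancy, while full coverage with a different action is a shadowing anomaly that must be fixed. Both the single-field model and this mapping onto "constructive" and "destructive" are illustrative assumptions.

```python
from ipaddress import ip_network

def classify_pair(earlier: dict, later: dict) -> str:
    """Classify the anomaly between two ordered firewall rules (dst prefix only).

    Assumed mapping: full coverage + same action  -> tolerated (constructive),
                     full coverage + other action -> shadowing (destructive).
    """
    early_net, late_net = ip_network(earlier["dst"]), ip_network(later["dst"])
    if late_net.subnet_of(early_net):
        if earlier["action"] == later["action"]:
            return "redundancy (tolerated)"
        return "shadowing (must be fixed)"
    if early_net.overlaps(late_net):
        return "correlation (review ordering)"
    return "disjoint (no anomaly)"

print(classify_pair({"dst": "10.0.0.0/16", "action": "deny"},
                    {"dst": "10.0.1.0/24", "action": "allow"}))  # shadowing
```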
Federated cloud networks are formed by federating virtual network segments from different clouds, e.g. in a hybrid cloud, into a single federated network. Such networks should be protected with a global federated cloud network security policy. The availability of network function virtualisation and service function chaining in cloud platforms offers an opportunity for implementing and enforcing global federated cloud network security policies. In this paper we describe an approach for enforcing global security policies in federated cloud networks. The approach relies on a service manifest that specifies the global network security policy. From this manifest, configurations of the security functions for the different clouds of the federation are generated. This enables automated deployment and configuration of network security functions across the different clouds. The approach is illustrated with a case study where communications between trusted and untrusted clouds, e.g. public clouds, are encrypted. The paper discusses future work on implementing this architecture for the OpenStack cloud platform with the service function chaining API.
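A hedged sketch of the manifest-driven idea: a single global policy entry is expanded into per-cloud configuration stubs. The manifest schema, cloud names, and generated keys are invented for illustration and do not correspond to the paper's format or to OpenStack's APIs.

```python
# Global network security policy for the federation (illustrative schema).
manifest = {
    "federation": ["trusted-cloud", "public-cloud"],
    "policies": [
        {"from": "trusted-cloud", "to": "public-cloud",
         "traffic": "all", "requirement": "encrypt"},
    ],
}

def generate_configs(manifest: dict) -> dict:
    """Expand each global policy into per-cloud security function configs."""
    configs = {cloud: [] for cloud in manifest["federation"]}
    for policy in manifest["policies"]:
        if policy["requirement"] == "encrypt":
            # Both endpoints receive an encryption (IPsec-like) function.
            for endpoint in (policy["from"], policy["to"]):
                peer = policy["to"] if endpoint == policy["from"] else policy["from"]
                configs[endpoint].append({
                    "function": "encryptor",
                    "peer": peer,
                    "traffic": policy["traffic"],
                })
    return configs

for cloud, funcs in generate_configs(manifest).items():
    print(cloud, funcs)
```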
In modern enterprises, incorrect or inconsistent security policies can lead to massive damage, e.g., through unintended data leakage. As policy authors have different skills and background knowledge, usable policy editors have to be tailored to the author's individual needs and to the corresponding application domain. However, the development of individual policy editors and the customization of existing ones is an effort-consuming task. In this paper, we present a framework for generating tailored policy editors. To enable user-friendly and less error-prone specification of security policies, the framework supports multiple platforms, policy languages, and specification paradigms.
Content Security Policy (CSP) is a mechanism designed to prevent the exploitation of XSS – the most common high-risk web application flaw. CSP restricts which scripts can be executed by allowing developers to define valid script sources; an attacker with a content-injection flaw should not be able to force the browser to execute arbitrary malicious scripts. Currently, CSP is commonly used in conjunction with a domain-based script whitelist, where the existence of a single unsafe endpoint in the script whitelist effectively removes the value of the policy as a protection against XSS.
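For concreteness, a hedged sketch of the kind of domain-whitelist policy the paper critiques, attached to responses by a small Flask app; the whitelisted origins are placeholders. If any whitelisted origin exposes a single unsafe endpoint (for example, one that reflects attacker-controlled script), the whole policy stops protecting against XSS.

```python
from flask import Flask, Response

app = Flask(__name__)

# Domain-based script whitelist (placeholder origins). One unsafe endpoint
# on any whitelisted origin can undermine the entire policy.
CSP = "script-src 'self' https://cdn.example.com https://apis.example.org"

@app.after_request
def add_csp(resp: Response) -> Response:
    resp.headers["Content-Security-Policy"] = CSP
    return resp

@app.route("/")
def index() -> str:
    return "<h1>hello</h1>"

if __name__ == "__main__":
    app.run()
```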
At its core, security is a highly contextual and dynamic challenge. However, current security policy approaches are usually static and slow to adapt to ever-changing requirements, let alone catch up with reality. A 2012 Sophos survey stated that a unique piece of malware is created every half second. This gives a glimpse of the unsustainable nature of a global problem; any improvement in closing the "time window to adapt" would be a significant step forward. To exacerbate the situation, a simple change in threat or attack vector, or even the adoption of the so-called "bring-your-own-device" paradigm, greatly changes how frequently security requirements shift and which solutions each new context requires. Current security policies also typically overlook the direct and indirect costs of implementing policies. As a result, technical teams are often unable to justify the budget to management from a business-risk viewpoint. This paper considers both the adaptive and cost-benefit aspects of security and introduces a novel context-aware technique for designing and implementing adaptive, optimized security policies. Our approach leverages the capabilities of stochastic programming models to optimize security policy planning, and our preliminary results demonstrate a promising step towards proactive, context-aware security policies.
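A toy sketch of the cost-benefit idea, assuming a small set of candidate controls, threat scenarios with probabilities and losses, and a budget; it enumerates feasible control subsets and minimizes implementation cost plus expected residual loss. This brute-force enumeration merely stands in for the paper's stochastic programming model, and all numbers are invented.

```python
from itertools import combinations

controls = {            # implementation cost of each candidate control
    "mdm_byod": 30, "patching": 20, "awareness": 10,
}
scenarios = [           # (probability, loss, controls that mitigate it)
    (0.30, 200, {"patching"}),
    (0.20, 150, {"mdm_byod"}),
    (0.50,  50, {"awareness", "patching"}),
]
BUDGET = 40

def expected_total_cost(chosen: set[str]) -> float:
    """Implementation cost plus expected loss of unmitigated scenarios."""
    impl = sum(controls[c] for c in chosen)
    residual = sum(p * loss for p, loss, mitigators in scenarios
                   if not (chosen & mitigators))
    return impl + residual

best = min(
    (set(c) for r in range(len(controls) + 1)
     for c in combinations(controls, r)
     if sum(controls[x] for x in c) <= BUDGET),
    key=expected_total_cost,
)
print(best, expected_total_cost(best))
```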
Institutions use the information security (InfoSec) policy document as a set of rules and guidelines to govern the use of institutional information resources. However, a common problem is that these policies are often not followed or complied with. This study explores the extent to which the problem lies with the policy documents themselves. InfoSec policies are documented in natural language, which is prone to ambiguity and misinterpretation. Consequently, such policies may be ambiguous, making it hard, if not impossible, for users to comply with them. A case study approach with content analysis was conducted. The research explores the extent of the problem using a case study of an educational institution in South Africa.
Expressing and matching each participant's security policy accurately is a precondition for constructing a secure service composition. Most existing schemes use syntactic approaches to represent and match security policies in the service composition process, which is prone to false negatives because it lacks semantics. In this paper, a novel semantics-based approach is proposed to express and match security policies in service composition. By constructing a general security ontology, a definition method and a matching algorithm for semantic security policies in service composition are presented, and the policy matching problem is translated into a subsumption reasoning problem over semantic concepts. Both the theoretical analysis and the experimental evaluation show that the proposed approach captures the necessary semantic information in the representation of policies and effectively improves the accuracy of matching results, thus overcoming the deficiency of syntactic approaches. It also simplifies the definition and management of policies, thereby providing a more effective solution for building secure service compositions based on security policy.
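A minimal sketch of the subsumption idea behind such a matching algorithm, with a toy ontology encoded as parent links: an offered capability satisfies a required concept if it is the same concept or a specialization of it. The ontology content and function names are invented for illustration and are far simpler than a description-logic reasoner.

```python
# Toy security ontology: each concept maps to its more general parent.
PARENT = {
    "AES-256": "SymmetricEncryption",
    "SymmetricEncryption": "Encryption",
    "RSA-2048": "AsymmetricEncryption",
    "AsymmetricEncryption": "Encryption",
    "Encryption": "SecurityMechanism",
}

def subsumed_by(concept: str, ancestor: str) -> bool:
    """True if `concept` equals `ancestor` or is one of its specializations."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = PARENT.get(concept)
    return False

def policy_match(required: str, offered: str) -> bool:
    """The offered capability satisfies the requirement if it is subsumed by it."""
    return subsumed_by(offered, required)

print(policy_match("Encryption", "AES-256"))   # True: semantic match
print(policy_match("AES-256", "Encryption"))   # False: offer is too general
```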
Controllers for software defined networks (SDNs) are quickly maturing to offer network operators more intuitive programming frameworks and greater abstractions for network application development. Likewise, many security solutions now exist within SDN environments for detecting and blocking clients who violate network policies. However, many of these solutions stop at triggering the security measure and give little thought to amending it. As a consequence, once the violation is addressed, no clear path exists for reinstating the flagged client beyond having the network operator reset the controller or manually implement a state change via an external command. This presents a burden for the network and its clients and administrators. Hence, we present a security policy transition framework for revoking security measures in an SDN environment once said measures are activated.
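A hedged sketch of the reinstatement path such a transition framework might manage, expressed as a simple state machine; the states, events, and transitions below are illustrative assumptions, not the paper's model.

```python
# Allowed transitions for a flagged client (illustrative, not the paper's model).
TRANSITIONS = {
    ("active", "violation_detected"): "quarantined",
    ("quarantined", "remediation_started"): "remediating",
    ("remediating", "remediation_verified"): "active",
    ("remediating", "remediation_failed"): "quarantined",
}

def transition(state: str, event: str) -> str:
    """Advance a client's security state; reject undefined transitions."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}") from None

state = "active"
for event in ("violation_detected", "remediation_started", "remediation_verified"):
    state = transition(state, event)
print(state)   # back to "active" without a controller reset
```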
Hypervisors are the main components for managing virtual machines on cloud computing systems; thus, the security of hypervisors is crucial, as the whole system could be compromised when just one vulnerability is exploited. In this paper, we assess the vulnerabilities of widely used hypervisors, including VMware ESXi, Citrix XenServer, and KVM, using the NIST 800-115 security testing framework. We perform real experiments to assess the vulnerabilities of those hypervisors using security testing tools. The results are evaluated using weakness information from CWE and vulnerability information from CVE. We also compute severity scores using CVSS information. All vulnerabilities found in the three hypervisors are compared in terms of weaknesses, severity scores, and impact. The experimental results show that ESXi and XenServer have common weaknesses and vulnerabilities, whereas KVM has fewer vulnerabilities. In addition, we discover a new vulnerability, HTTP response splitting, in the ESXi Web interface.
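Since the comparison relies on CVSS base scores, a small helper mapping scores to qualitative ratings may be useful; the band boundaries below are the published CVSS v3.x qualitative severity rating scale (the paper's exact CVSS version is not stated here).

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(7.5))   # High
```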
Increasing cyber-security presents an ongoing challenge to security professionals, and research continuously suggests that online users are a weak link in information security. This research explores the relationship between cyber-security and cultural, personality, and demographic variables. The study was conducted in four different countries and presents a multi-cultural view of cyber-security. In particular, it looks at how behavior, self-efficacy, and privacy attitude are affected by culture compared to other psychological and demographic variables (such as gender and computer expertise). It also examines what kind of data people tend to share online and how culture affects these choices. This work supports the idea of developing personality-based UI design to increase users' cyber-security. Its results show that certain personality traits affect users' cyber-security-related behavior across different cultures, which further reinforces their contribution relative to cultural effects.