Yang, Z., Sun, Q., Zhang, Y., Zhu, L., Ji, W..
2020.
Inference of Suspicious Co-Visitation and Co-Rating Behaviors and Abnormality Forensics for Recommender Systems. IEEE Transactions on Information Forensics and Security. 15:2766–2781.
The pervasiveness of personalized collaborative recommender systems has demonstrated their powerful capability in a wide range of E-commerce services such as Amazon, TripAdvisor, Yelp, etc. However, fundamental vulnerabilities of collaborative recommender systems leave space for malicious users to affect the recommendation results as the attackers desire. A vast majority of existing detection methods assume certain properties of malicious attacks are given in advance. In reality, improving the detection performance is usually constrained by several challenging issues: (a) various types of malicious attacks coexist, (b) representations of malicious attack behaviors are limited, and (c) practical evidence for exploring and spotting anomalies on real-world data is scarce. In this paper, we investigate a unified detection framework in an eye-for-an-eye manner without being bothered by the details of the attacks. Firstly, co-visitation and co-rating graphs are constructed using association rules. Then, attribute representations of nodes are empirically developed from the perspectives of linkage patterns, structure-based properties, and inherent associations of nodes. Finally, both the attribute information and the connective coherence of the graph are combined in order to infer suspicious nodes. Extensive experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed detection approach compared with competing benchmarks. Additionally, abnormality forensics metrics, including the distribution of rating intention, the time aggregation of suspicious ratings, degree distributions before and after removing suspicious nodes, and time series analysis of historical ratings, are provided so as to discover interesting findings such as suspicious nodes (items or ratings) on real-world data.
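As a rough illustration of the co-rating linkage the abstract describes, the sketch below links two users whenever the number of items they both rated reaches a minimum support threshold; the function names, threshold, and toy data are illustrative assumptions, not the paper's actual construction.

# Illustrative sketch (not the paper's exact method): link users whose
# co-rated item count meets a minimum support, a basic association-rule-style
# co-rating graph.
from collections import defaultdict
from itertools import combinations

def build_co_rating_graph(ratings, min_support=3):
    """ratings: iterable of (user_id, item_id) pairs."""
    items_per_user = defaultdict(set)
    for user, item in ratings:
        items_per_user[user].add(item)

    edges = {}
    for u, v in combinations(sorted(items_per_user), 2):
        support = len(items_per_user[u] & items_per_user[v])
        if support >= min_support:
            edges[(u, v)] = support  # edge weight = number of co-rated items
    return edges

# Toy example: users "a" and "b" co-rate three items and become linked.
demo = [("a", 1), ("a", 2), ("a", 3), ("b", 1), ("b", 2), ("b", 3), ("c", 9)]
print(build_co_rating_graph(demo))  # {('a', 'b'): 3}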
Yaseen, Q., Panda, B..
2012.
Tackling Insider Threat in Cloud Relational Databases. 2012 IEEE Fifth International Conference on Utility and Cloud Computing. :215–218.
Cloud security is one of the major issues that worry individuals and organizations about cloud computing. Therefore, defending cloud systems against attacks such as insiders' attacks has become a key demand. This paper investigates insider threat in cloud relational database systems (cloud RDMS). It discusses some vulnerabilities in cloud computing structures that may enable insiders to launch attacks, and shows how load balancing across multiple availability zones may facilitate insider threat. To prevent such a threat, the paper suggests three models: the Peer-to-Peer model, the Centralized model, and the Mobile-Knowledgebase model, and addresses the conditions under which they work well.
Igbe, O., Saadawi, T..
2018.
Insider Threat Detection using an Artificial Immune System Algorithm. 2018 9th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON). :297–302.
Insider threats result from legitimate users abusing their privileges, causing tremendous damage or losses. Malicious insiders can be the main threats to an organization. This paper presents an anomaly detection system for detecting insider threat activities in an organization using an ensemble that consists of negative selection algorithms (NSA). The proposed system classifies a selected user activity into either of two classes: "normal" or "malicious." The effectiveness of our proposed detection system is evaluated using case studies from the computer emergency response team (CERT) synthetic insider threat dataset. Our results show that the proposed method is very effective in detecting insider threats.
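The following is a minimal sketch of real-valued negative selection, the family of algorithms the ensemble builds on: random detectors that do not cover any normal ("self") sample are retained, and a new activity vector matched by any detector is flagged. The feature encoding, radius, and detector count are assumptions for illustration only, not those used in the paper.

# Minimal negative selection sketch, assuming user activity is encoded as
# fixed-length numeric feature vectors in [0, 1].
import random
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generate_detectors(self_samples, n_detectors=200, radius=0.15, dim=2, seed=0):
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        candidate = [rng.random() for _ in range(dim)]
        # Keep the candidate only if it does NOT cover any normal ("self") sample.
        if all(euclidean(candidate, s) > radius for s in self_samples):
            detectors.append(candidate)
    return detectors

def is_malicious(sample, detectors, radius=0.15):
    # A sample matched by any detector lies outside learned normal behavior.
    return any(euclidean(sample, d) <= radius for d in detectors)

normal = [[0.2, 0.2], [0.25, 0.3], [0.3, 0.25]]
detectors = generate_detectors(normal)
print("near normal cluster:", is_malicious([0.24, 0.26], detectors))
print("far from normal:", is_malicious([0.9, 0.9], detectors))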
Zhang, T., Zhao, P..
2010.
Insider Threat Identification System Model Based on Rough Set Dimensionality Reduction. 2010 Second World Congress on Software Engineering. 2:111–114.
Insider threats cause great damage to the security of information systems, and traditional security methods are extremely difficult to apply against them. Insider attack identification therefore plays an important role in insider threat detection. Monitoring users' abnormal behavior is an effective method for detecting impersonation, and this method is applied to insider threat identification: a database of users' behavior attribute information is built on a weight-adjustable, feedback tree-augmented Bayes network, and, because the data are massive, dimensionality reduction based on rough sets is used to establish a process information model of users' behavior attributes. The minimum-risk Bayes decision can then effectively identify the real identity of a user when that user's behavior departs from the characteristic model.
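For reference, the minimum-risk Bayes decision the abstract invokes is usually written as follows (textbook form; the paper's loss matrix and attribute model are its own):

R(\alpha_i \mid x) = \sum_j \lambda(\alpha_i \mid \omega_j)\, P(\omega_j \mid x),
\qquad
\alpha^{*} = \arg\min_i R(\alpha_i \mid x),

where \lambda(\alpha_i \mid \omega_j) is the loss incurred by taking action \alpha_i when the true class is \omega_j, and the action with the lowest conditional risk is chosen.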
Althebyan, Q..
2019.
A Mobile Edge Mitigation Model for Insider Threats: A Knowledgebase Approach. 2019 International Arab Conference on Information Technology (ACIT). :188–192.
Securing the cloud is a major issue that must be carefully considered and solved for both individuals and organizations. On one hand, organizations expect trust from employees as well as customers; on the other hand, cloud users expect their private data to be maintained and secured. Although this should be the case, some malicious outsiders of the cloud, as well as malicious insiders who are internal cloud users, disclose private data for their own malicious purposes. While outsiders of the cloud are a concern, the more serious problems come from insiders, whose malicious actions are more damaging and severe. Hence, insider threats in the cloud should be the topmost problem to be tackled and resolved. This paper aims to find a proper solution to the insider threat problem in the cloud. It presents a Mobile Edge Computing (MEC) mitigation model as a solution suited to the specialized nature of this problem, where the mitigation needs to be very close to the place where insiders reside. This provides real-time responses to attacks and hence reduces the overhead in the cloud.
Zhang, H., Ma, J., Wang, Y., Pei, Q..
2009.
An Active Defense Model and Framework of Insider Threats Detection and Sense. 2009 Fifth International Conference on Information Assurance and Security. 1:258–261.
Insider attacks are a well-known problem, acknowledged as a threat as early as the 1980s. The threat is attributed to legitimate users who take advantage of familiarity with the computational environment, abuse their privileges, and can easily cause significant damage or losses. In this paper, we present an active defense model and framework for insider threat detection and sensing. We first describe the hierarchical framework, which deals with insider threats from several aspects, and subsequently show a hierarchy-mapping-based insider threat model, the kernel of threat detection, sensing, and prediction. The experiments show that the model and framework can sense insider threats in real time effectively.
Spooner, D., Silowash, G., Costa, D., Albrethsen, M..
2018.
Navigating the Insider Threat Tool Landscape: Low Cost Technical Solutions to Jump Start an Insider Threat Program. 2018 IEEE Security and Privacy Workshops (SPW). :247–257.
This paper explores low cost technical solutions that can help organizations prevent, detect, and respond to insider incidents. Features and functionality associated with insider risk mitigation are presented. A taxonomy for high-level categories of insider threat tools is presented. A discussion of the relationship between the types of tools points out the nuances of insider threat control deployment, and considerations for selecting, implementing, and operating insider threat tools are provided.
Mundie, D. A., Perl, S., Huth, C. L..
2013.
Toward an Ontology for Insider Threat Research: Varieties of Insider Threat Definitions. 2013 Third Workshop on Socio-Technical Aspects in Security and Trust. :26–36.
The lack of standardization of the terms insider and insider threat has been a noted problem for researchers in the insider threat field. This paper describes the investigation of 42 different definitions of the terms insider and insider threat, with the goal of better understanding the current conceptual model of insider threat and facilitating communication in the research community.
Sarma, M. S., Srinivas, Y., Abhiram, M., Ullala, L., Prasanthi, M. S., Rao, J. R..
2017.
Insider Threat Detection with Face Recognition and KNN User Classification. 2017 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM). :39–44.
Information security in cloud storage is a key trepidation with regard to Degree of Trust and Cloud Penetration. The cloud user community needs to ascertain performance and security via QoS. Numerous models have been proposed [2] [3] [6][7] to deal with security concerns. Detection and prevention of insider threats are concerns that also need to be tackled. Since the attacker is aware of sensitive information, threats due to cloud insiders are a grave concern. In this paper, we have proposed an authentication mechanism which performs authentication based on verifying facial features of the cloud user, in addition to username and password, thereby acting as two-factor authentication. A new QoS has been proposed which is capable of monitoring and detecting insider threats using machine learning techniques. The KNN classification algorithm has been used to classify users into legitimate, possibly legitimate, possibly not legitimate, and not legitimate groups to verify image authenticity and conclude whether there is any possible insider threat. A threat detection model has also been proposed for insider threats, which utilizes facial recognition and monitoring models. The security method put forth in [6] [7] is honed to include threat detection QoS to earn a higher degree of trust from the cloud user community. As a recommendation, the threat detection module should be harnessed in private cloud deployments such as defense and pharma applications. Experimentation has been conducted using open source machine learning libraries, and results have been attached in this paper.
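A hedged sketch of the KNN user-classification step described above, using scikit-learn; the two features (a face-match score and a credential-check result), the toy training data, and the label strings are assumptions made for illustration, not the paper's feature set.

# Illustrative KNN classification of login attempts into the four groups
# named in the abstract; feature values are made up for demonstration.
from sklearn.neighbors import KNeighborsClassifier

# Each row: [face_match_score, credential_check] for a historical login attempt.
X_train = [[0.95, 1.0], [0.90, 1.0], [0.40, 1.0],
           [0.85, 0.0], [0.30, 0.0], [0.10, 0.0]]
y_train = ["legitimate", "legitimate", "possibly legitimate",
           "possibly not legitimate", "not legitimate", "not legitimate"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(clf.predict([[0.92, 1.0], [0.20, 0.0]]))  # classify two new attempts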
Claycomb, W. R., Huth, C. L., Phillips, B., Flynn, L., McIntire, D..
2013.
Identifying indicators of insider threats: Insider IT sabotage. 2013 47th International Carnahan Conference on Security Technology (ICCST). :1–5.
This paper describes results of a study seeking to identify observable events related to insider sabotage. We collected information from actual insider threat cases, created chronological timelines of the incidents, identified key points in each timeline such as when attack planning began, measured the time between key events, and looked for specific observable events or patterns that insiders held in common that may indicate insider sabotage is imminent or likely. Such indicators could be used by security experts to potentially identify malicious activity at or before the time of attack. Our process included critical steps such as identifying the point of damage to the organization as well as any malicious events prior to zero hour that enabled the attack but did not immediately cause harm. We found that nearly 71% of the cases we studied had either no observable malicious action prior to attack, or had one that occurred less than one day prior to attack. Most of the events observed prior to attack were behavioral, not technical, especially those occurring earlier in the case timelines. Of the observed technical events prior to attack, nearly one-third involved installation of software onto the victim organizations' IT systems.
Sarkar, M. Z. I., Ratnarajah, T..
2010.
Information-theoretic security in wireless multicasting. International Conference on Electrical and Computer Engineering (ICECE 2010). :53–56.
In this paper, a wireless multicast scenario is considered in which the transmitter sends a common message to a group of client receivers through a quasi-static Rayleigh fading channel in the presence of an eavesdropper. The communication between the transmitter and each client receiver is said to be secured if the eavesdropper is unable to decode any information. On the basis of an information-theoretic formulation of the confidential communications between the transmitter and a group of client receivers, we define the expected secrecy sum-mutual information in terms of secure outage probability and provide a complete characterization of the maximum transmission rate at which the eavesdropper is unable to decode any information. Moreover, we find the probability of non-zero secrecy mutual information and present an analytical expression for the ergodic secrecy multicast mutual information of the proposed model.
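For context, the single-receiver building blocks behind these multicast quantities are the standard quasi-static Rayleigh-fading secrecy expressions; the paper's multicast sum forms extend these, and the exact expressions are its own:

C_s = \left[\log_2(1+\gamma_M) - \log_2(1+\gamma_E)\right]^{+},
\qquad
\Pr(C_s > 0) = \frac{\bar{\gamma}_M}{\bar{\gamma}_M + \bar{\gamma}_E},

where \gamma_M and \gamma_E are the instantaneous SNRs of the legitimate and eavesdropper channels and \bar{\gamma}_M, \bar{\gamma}_E are their averages.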
Venkitasubramaniam, P., Yao, J., Pradhan, P..
2015.
Information-Theoretic Security in Stochastic Control Systems. Proceedings of the IEEE. 103:1914–1931.
Infrastructural systems such as the electricity grid, healthcare, and transportation networks today rely increasingly on the joint functioning of networked information systems and physical components, in short, on cyber-physical architectures. Despite tremendous advances in cryptography, physical-layer security and authentication, information attacks, both passive such as eavesdropping, and active such as unauthorized data injection, continue to thwart the reliable functioning of networked systems. In systems with joint cyber-physical functionality, the ability of an adversary to monitor transmitted information or introduce false information can lead to sensitive user data being leaked or result in critical damages to the underlying physical system. This paper investigates two broad challenges in information security in cyber-physical systems (CPSs): preventing retrieval of internal physical system information through monitored external cyber flows, and limiting the modification of physical system functioning through compromised cyber flows. A rigorous analytical framework grounded on information-theoretic security is developed to study these challenges in a general stochastic control system abstraction (a theoretical building block for CPSs), with the objectives of quantifying the fundamental tradeoffs between information security and physical system performance, and through the process, designing provably secure controller policies. Recent results are presented that establish the theoretical basis for the framework, in addition to practical applications in timing analysis of anonymous systems, and demand response systems in a smart electricity grid.
Jin, R., He, X., Dai, H..
2019.
On the Security-Privacy Tradeoff in Collaborative Security: A Quantitative Information Flow Game Perspective. IEEE Transactions on Information Forensics and Security. 14:3273–3286.
To contest the rapidly developing cyber-attacks, numerous collaborative security schemes, in which multiple security entities can exchange their observations and other relevant data to achieve more effective security decisions, are proposed and developed in the literature. However, the security-related information shared among the security entities may contain some sensitive information and such information exchange can raise privacy concerns, especially when these entities belong to different organizations. With such consideration, the interplay between the attacker and the collaborative entities is formulated as Quantitative Information Flow (QIF) games, in which the QIF theory is adapted to measure the collaboration gain and the privacy loss of the entities in the information sharing process. In particular, three games are considered, each corresponding to one possible scenario of interest in practice. Based on the game-theoretic analysis, the expected behaviors of both the attacker and the security entities are obtained. In addition, the simulation results are presented to validate the analysis.
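As background on the style of measure adapted here, quantitative information flow commonly scores what an observer learns about a secret X from an observable Y as the reduction in uncertainty; the paper's game payoffs build on QIF measures of this kind, though its exact definitions of collaboration gain and privacy loss are its own:

\mathcal{L}(X \to Y) = H(X) - H(X \mid Y) = I(X;Y).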
Chrysikos, T., Dagiuklas, T., Kotsopoulos, S..
2010.
Wireless Information-Theoretic Security for moving users in autonomic networks. 2010 IFIP Wireless Days. :1–5.
This paper studies Wireless Information-Theoretic Security for low-speed mobility in autonomic networks. More specifically, the impact of user movement on the Probability of Non-Zero Secrecy Capacity and Outage Secrecy Capacity for different channel conditions has been investigated. This is accomplished by establishing a link between different user locations and the boundaries of information-theoretic secure communication. Human mobility scenarios are considered, and their impact on physical layer security is examined, considering quasi-static Rayleigh channels for the fading phenomena. Simulation results have shown that the Secrecy Capacity depends on the relative distance of legitimate and illegitimate (eavesdropper) users in reference to the given transmitter.
Bouzar-Benlabiod, L., Rubin, S. H., Belaidi, K., Haddar, N. E..
2020.
RNN-VED for Reducing False Positive Alerts in Host-based Anomaly Detection Systems. 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI). :17–24.
Host-based Intrusion Detection Systems (HIDS) are often based on anomaly detection. Several studies deal with anomaly detection by analyzing system-call traces and achieve good detection rates but also a high rate of false positives. In this paper, we propose a new anomaly detection approach applied to system-call traces. The normal behavior learning is done using a sequence-to-sequence model based on a Variational Encoder-Decoder (VED) architecture that integrates Recurrent Neural Network (RNN) cells. We exploit the semantics behind the invoking order of system calls, which are then seen as sentences. A preprocessing phase is added to structure and optimize the model's input-data representation. After the learning step, a one-class classification is run to categorize the sequences as normal or abnormal. The architecture may be used for predicting abnormal behaviors. The tests are performed on the ADFA-LD dataset.
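A minimal sketch of the kind of preprocessing and one-class decision the abstract outlines: traces are tokenized into padded integer sequences, a sequence model (left abstract here) would assign each trace an anomaly score, and a threshold estimated from normal traces separates normal from abnormal. The helper names, parameters, and toy data are illustrative, not the paper's RNN-VED implementation.

# Illustrative preprocessing + one-class thresholding; the VED model itself
# is deliberately omitted and replaced by hypothetical per-trace scores.
def build_vocab(traces):
    vocab = {}
    for trace in traces:
        for call in trace:
            vocab.setdefault(call, len(vocab) + 1)   # 0 is reserved for padding
    return vocab

def encode(trace, vocab, max_len=8):
    ids = [vocab.get(call, 0) for call in trace][:max_len]
    return ids + [0] * (max_len - len(ids))          # pad to a fixed length

def one_class_threshold(scores, quantile=0.95):
    ordered = sorted(scores)
    return ordered[int(quantile * (len(ordered) - 1))]  # scores above => abnormal

normal_traces = [["open", "read", "close"], ["open", "write", "close"]]
vocab = build_vocab(normal_traces)
print(encode(["open", "read", "close"], vocab))
print(one_class_threshold([0.8, 0.9, 1.1, 0.7]))     # hypothetical reconstruction errors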
Wang, P., Zhang, J., Wang, S., Wu, D..
2020.
Quantitative Assessment on the Limitations of Code Randomization for Legacy Binaries. 2020 IEEE European Symposium on Security and Privacy (EuroS P). :1–16.
Software development and deployment are generally fast-paced practices, yet to date there is still a significant amount of legacy software running in various critical industries with years or even decades of lifespans. As the source code of some legacy software has become unavailable, it is difficult for maintainers to actively patch the vulnerabilities, leaving the outdated binaries appealing targets of advanced security attacks. One of the most powerful attacks today is code reuse, a technique that can circumvent most existing system-level security facilities. While there have been various countermeasures against code reuse, applying them to sourceless software appears to be exceptionally challenging. Fine-grained code randomization is considered to be an effective strategy to impede modern code-reuse attacks. To apply it to legacy software, a technique called binary rewriting is employed to directly reconstruct binaries without symbol or relocation information. However, we found that current rewriting-based randomization techniques, regardless of their designs and implementations, share a common security defect such that the randomized binaries may remain vulnerable in certain cases. Indeed, our finding does not invalidate fine-grained code randomization as a meaningful defense against code reuse attacks, for it significantly raises the bar for exploits to be successful. Nevertheless, it is critical for the maintainers of legacy software systems to be aware of this problem and obtain a quantitative assessment of the risks in adopting a potentially incomprehensive defense. In this paper, we conducted a systematic investigation into the effectiveness of randomization techniques designed for hardening outdated binaries. We studied various state-of-the-art, fine-grained randomization tools, confirming that all of them can leave a certain part of the retrofitted binary code still reusable. To quantify the risks, we proposed a set of concrete criteria to classify gadgets immune to rewriting-based randomization and investigated their availability and capability.
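As a toy illustration of the gadget-availability scans such a quantitative assessment relies on, the sketch below records the byte windows preceding ret (0xc3) bytes in an x86-64 code blob as candidate gadget sites; real tools disassemble and classify gadget semantics, and the byte string here is made up.

# Toy gadget-site scan: find `ret` opcodes and keep the preceding byte window.
def candidate_gadget_sites(code: bytes, window: int = 5):
    sites = []
    for i, b in enumerate(code):
        if b == 0xC3:  # x86 `ret`
            start = max(0, i - window)
            sites.append((start, code[start:i + 1]))
    return sites

# Hypothetical blob: mov rax, rbx; pop rbp; ret; nop; pop rax; ret
blob = bytes.fromhex("4889d8" "5d" "c3" "90" "58" "c3")
for offset, raw in candidate_gadget_sites(blob):
    print(hex(offset), raw.hex())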