Biblio
Malicious logins, especially those used for lateral movement, have been a primary and costly threat to enterprises. However, existing detection methods face two critical challenges. First, they rely heavily on a limited number of predefined rules and features, so when attack patterns change, security experts must manually design new ones. Second, they cannot capture the mutual effects among attributes that are specific to login operations. We propose MLTracer, a graph neural network (GNN) based system for detecting such attacks, with two core components that tackle these challenges. First, MLTracer adopts a novel method to differentiate crucial attributes of login operations from the rest without expert-designed features. Second, MLTracer leverages a GNN model to detect malicious logins. The model combines a convolutional neural network (CNN) that explores the attributes of login operations with a co-attention mechanism that mutually improves the representations (vectors) of login attributes by learning their login-specific relations. We implement and evaluate the approach, and the results demonstrate that MLTracer significantly outperforms state-of-the-art methods. Moreover, MLTracer effectively detects various attack scenarios with a remarkably low false positive rate (FPR).
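A minimal numpy sketch of the co-attention idea described above, assuming a bilinear affinity between two illustrative sets of login-attribute embeddings; the dimensions, the bilinear form, and the residual update are assumptions, not MLTracer's exact architecture:

    import numpy as np

    # Illustrative co-attention step: two sets of login-attribute embeddings
    # re-weight each other through a shared (hypothetical) bilinear affinity matrix.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 16))   # embeddings of 5 source-side attributes (assumed)
    B = rng.normal(size=(4, 16))   # embeddings of 4 destination-side attributes (assumed)
    W = rng.normal(size=(16, 16))  # learnable bilinear interaction weights

    def softmax(x, axis):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    C = A @ W @ B.T                   # affinity between every pair of attributes
    A_ctx = softmax(C, axis=1) @ B    # B-aware context for each attribute in A
    B_ctx = softmax(C, axis=0).T @ A  # A-aware context for each attribute in B
    A_new, B_new = A + A_ctx, B + B_ctx  # mutually improved representations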
When communication about security to end users is ineffective, people frequently misinterpret the protection offered by a system. The discrepancy between the security users perceive a system to have and the actual system state can lead to potentially risky behaviors. It is thus crucial to understand how security perceptions are shaped by interface elements such as text-based descriptions of encryption. This article addresses the question of how encryption should be described to non-experts in a way that enhances perceived security. We tested the following within-subject variables in an online experiment (N=309): a) how best to word encryption, b) whether encryption should be described with a focus on the process, the outcome, or both, c) whether the objective of encryption should be mentioned, d) when mentioning the objective of encryption, how best to describe it, and e) whether a hash should be displayed to the user. We also investigated the role of context (between subjects). The verbs "encrypt" and "secure" performed comparatively well at enhancing perceived security. Overall, participants stated that they felt more secure not knowing about the objective of encryption. When it is necessary to state the objective, positive wording of the objective of encryption worked best. We discuss the implications and why using these results to design for a perceived lack of security might also be of interest. This leads us to discuss ethical concerns, and we give guidelines for the design of user interfaces where encryption should be communicated to end users.
Throughout the life cycle of any technical project, the enterprise needs to assess the risks associated with its development, commissioning, operation, and decommissioning. This article defines the task of researching the risks related to operating a data storage subsystem in the cloud infrastructure of a geographically distributed company, and the tools required for this. Analysts point out that, compared to 2018, in 2019 there were 3.5 times more cases of confidential information leaking from storage on unprotected servers (freely accessible due to incorrect configuration) in cloud services. The total number of compromised personal data and payment information records increased 5.4 times compared to 2018 and amounted to more than 8.35 billion records. Moreover, while the share of payment information leaks has decreased, the percentage of personal data leaks has grown and accounts for almost 90% of all leaks from cloud storage. On average, each unsecured service identified resulted in 33.7 million personal data records being leaked. Leaks are mainly related to misconfiguration of services and stored resources, as well as human factors. These impacts can be minimized by improving the skills of cloud storage administrators and regularly auditing storage. Despite its seeming insecurity, the cloud is a reliable way of storing data; at the same time, leaks still occur. According to Kaspersky Lab, every tenth (11%) data leak from the cloud became possible due to the actions of the provider, while a third of all cyber incidents in the cloud (31% in Russia and 33% worldwide) were due to the gullibility of company employees who fell for social engineering techniques. Minimizing the risks associated with the storage of personal data is one of the main tasks when operating a company's cloud infrastructure.
Remote patient monitoring is a system that focuses on patient care and attention with the advent of the Internet of Things (IoT). The technology makes it easier not only to monitor patients at a distance, but also to diagnose and provide critical attention and service on demand, so that billions of people can be cared for more safely. Skin-care monitoring is one of the growing fields of medical care that requires IoT monitoring, because the number of patients keeps increasing while care is limited by the number of available dermatologists. An IoT-based skin monitoring system produces and stores large volumes of private medical data in the cloud, from which skin experts can access it at remote locations. Such large-scale data are highly vulnerable, and breaches can have catastrophic consequences for privacy and security. Medical organizations currently do not concentrate much on maintaining safety and privacy, which are of major importance in this field. This paper presents an IoT-based skin monitoring system built on a blockchain mechanism for data protection and safety. A secure data transmission mechanism for IoT devices used in a distributed architecture is proposed. Privacy is assured through a unique key that identifies each user when they register. The blockchain principle also addresses security issues by generating hashes over every transaction variable. We use a consortium blockchain that meets our criteria for controlled access in a decentralized environment. The proposed solutions allow IoT-based skin monitoring systems to privately and securely store and share medical data over the network without disturbance.
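A minimal sketch of the hashing-and-chaining idea behind the described blockchain protection, assuming SHA-256 over every transaction variable and linkage by previous-block hash; the field names (user key, skin reading) and block structure are illustrative:

    import hashlib, json, time

    # Illustrative block structure: every transaction's variables are hashed and
    # blocks are linked by the previous block's hash (field names are assumptions).
    def block_hash(block):
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    chain = [{"index": 0, "prev_hash": "0" * 64, "data": "genesis", "ts": 0}]

    def add_record(user_key, reading):
        chain.append({
            "index": len(chain),
            "prev_hash": block_hash(chain[-1]),   # link to the previous block
            "data": {"user": user_key, "skin_reading": reading},
            "ts": time.time(),
        })

    add_record("user-7f3a", {"lesion_risk": 0.12})
    # verify the chain: every block must reference the hash of its predecessor
    assert all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))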
With the advent of Industry 4.0, the Internet of Things (IoT), and Artificial Intelligence (AI), smart entities are now able to read the minds of users by extracting cognitive patterns from electroencephalogram (EEG) signals. Such brain data may include users' experiences, emotions, motivations, and other previously private mental and psychological processes. Accordingly, users' cognitive privacy may be violated, and the right to cognitive privacy should protect individuals against unconsented intrusion by third parties into their brain data as well as against the unauthorized collection of those data. This has caused growing concern among users and industry experts that laws are needed to protect the right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity. In this paper, we propose an AI-enabled EEG model, namely Cognitive Privacy, that aims to protect data and to classify users and their tasks from EEG data. We present a model that protects data from disclosure using normalized correlation analysis and classifies subjects (a multi-class problem) and their tasks (eyes open versus eyes closed, a binary classification problem) using a long short-term memory (LSTM) deep learning approach. The model has been evaluated on the PhysioNet BCI EEG data set, and the results reveal its high performance in classifying users and their tasks while achieving high data privacy.
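A minimal PyTorch sketch of an LSTM classifier for the eyes-open/eyes-closed task, assuming 64-channel windows as in the PhysioNet BCI recordings; the window length, hidden size, and overall architecture are illustrative rather than the paper's exact model:

    import torch
    import torch.nn as nn

    # Illustrative LSTM classifier over EEG windows (64 channels assumed, as in
    # PhysioNet BCI; window length and head are assumptions).
    class EEGLSTM(nn.Module):
        def __init__(self, n_channels=64, hidden=128, n_classes=2):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):            # x: (batch, time_steps, n_channels)
            _, (h, _) = self.lstm(x)     # h: last hidden state, (1, batch, hidden)
            return self.head(h[-1])      # class logits

    model = EEGLSTM()
    windows = torch.randn(8, 160, 64)    # 8 synthetic windows of 160 samples
    logits = model(windows)              # shape (8, 2): eyes-open vs eyes-closed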
Experts often design security and privacy technology with specific use cases and threat models in mind. In practice, however, end users are not aware of these threats and potential countermeasures. Furthermore, misconceptions about the benefits and limitations of security and privacy technology inhibit large-scale adoption by end users. In this paper, we address this challenge and contribute a qualitative study on end users' and security experts' perceptions of threat models and potential countermeasures. We follow an inductive research approach to explore the perceptions and mental models of both security experts and end users. We conducted semi-structured interviews with 8 security experts and 13 end users. Our results suggest that, in contrast to security experts, end users neglect acquaintances and friends as attackers in their threat models. Our findings highlight that experts value technical countermeasures, whereas end users try to implement trust-based defensive methods.
Artificial intelligence systems have enabled significant benefits for users and society, but as the data that feed them keep increasing, so does the exposure to privacy and security leaks. Severe threats to the right to privacy have obliged governments to enact specific regulations to ensure privacy preservation in any kind of transaction involving sensitive information. In the case of digital and/or physical documents containing sensitive information, the right to privacy can be preserved through data obfuscation procedures. The capability of recognizing sensitive information for obfuscation is typically entrusted to the experience of human experts, who are overwhelmed by the ever-increasing amount of documents to process. Artificial intelligence could proficiently mitigate the effort of the human officers and speed up processes. However, until enough knowledge is available in a machine-readable format, effective automatic systems cannot be developed. In this work we propose a methodology for transferring and leveraging general knowledge across domain-specific tasks. We built, from scratch, domain-specific knowledge data sets for training artificial intelligence models that support human experts in privacy-preserving tasks. We exploited a mixture of natural language processing techniques applied to unlabeled domain-specific document corpora to automatically obtain labeled documents in which sensitive information is recognized and tagged. We performed preliminary tests on just over 10,000 documents from the healthcare and justice domains, with human experts supporting us during validation. The results, estimated in terms of precision, recall, and F1-score across these two domains, were promising and encouraged further investigation.
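A minimal sketch of the token-level precision, recall, and F1 evaluation mentioned above, with illustrative gold and predicted labels for sensitive-information tags (the labels are made up for the example):

    # Token-level precision/recall/F1 against a gold annotation of sensitive tokens.
    def prf1(gold, pred):
        tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
        fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
        fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1

    gold = [1, 1, 0, 0, 1, 0]   # 1 = token carries sensitive information
    pred = [1, 0, 0, 1, 1, 0]   # labels produced by the automatic tagger
    print(prf1(gold, pred))     # -> (0.666..., 0.666..., 0.666...)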
In this research, we examine and develop an expert system with a mechanism to automate crime category classification and threat level assessment, using information collected by crawling the dark web. We constructed a bag of words from 250 posts on the dark web and developed an expert system that takes term frequencies as input and classifies sample posts into 6 criminal categories (drugs, stolen credit cards, passwords, counterfeit products, child pornography, and others) and 3 threat levels (high, middle, low). Contrary to prior expectations, our simple and explainable expert system performs competitively with other existing systems. In short, our experiment with 1500 posts on the dark web shows a recall of 76.4% for the 6-category criminal classification and 83% for the 3-level threat discrimination on 100 randomly sampled posts.
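A minimal sketch of a transparent term-frequency classifier in the spirit of the described expert system; the keyword lists per criminal category are illustrative assumptions, not the system's actual rule base:

    from collections import Counter

    # Keyword lists per category are illustrative, not the system's actual rules.
    CATEGORY_TERMS = {
        "drugs": {"cocaine", "mdma", "gram", "pills"},
        "stolen_cards": {"cvv", "dumps", "visa", "fullz"},
        "passwords": {"credentials", "login", "password", "combo"},
        "counterfeit": {"replica", "counterfeit", "fake"},
    }

    def classify(post):
        tokens = Counter(post.lower().split())       # bag of words with term frequencies
        scores = {cat: sum(tokens[t] for t in terms)
                  for cat, terms in CATEGORY_TERMS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "others"

    print(classify("selling fresh cvv dumps visa and mastercard"))  # -> stolen_cards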
The need to enhance the performance of existing transmission networks in line with economic and technical constraints is crucial in a competitive market environment. This paper models total transfer capacity (TTC) improvement using optimally placed thyristor-controlled series capacitors (TCSC). The system states were evaluated using distributed slack bus (DSB) and continuation power flow (CPF) techniques. Adaptable logic relations were modelled based on the security margin (SM), the steady-state and transient-condition collapse voltages (Uss, Uts), and the steady-state line power loss (Plss), from which the line suitability index (LSI) was obtained. The fuzzy expert system (FES) membership functions (MF), with their respective degrees of membership, are defined to obtain the best states. The LSI MF is defined as high between 0.2 and 0.8 to provide enough protection under transient disturbances. Test results on the IEEE 30-bus system show that the model is feasible for TTC enhancement under steady-state and N-1 conditions.
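A minimal sketch of a trapezoidal "high" membership function for the LSI that saturates over the 0.2-0.8 band mentioned above; the ramp breakpoints outside that band are assumptions:

    # Trapezoidal "high" membership function for the LSI; it equals 1 over 0.2-0.8
    # as in the abstract, while the ramp breakpoints (0.1 and 0.9) are assumptions.
    def lsi_high(x, a=0.1, b=0.2, c=0.8, d=0.9):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    for v in (0.05, 0.15, 0.5, 0.85, 0.95):
        print(v, round(lsi_high(v), 2))   # 0.0, 0.5, 1.0, 0.5, 0.0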
This paper describes a machine assistance approach to grading decisions for values that might be missing or need validation, using a mathematical, algebraic form of an expert system instead of the traditional textual or logic forms, and builds a neural-network-like computational graph structure. The expert system is organized into input, hidden, and output layers that provide a structured approach to knowledge-base organization; this offers a useful abstraction for reuse in data migration applications in big data, cyber, and relational databases. The approach is further enhanced with a Bayesian probability tree that grades the confidences of value probabilities, instead of the traditional grading of rule probabilities, and estimates the most probable value in light of all evidence presented. This is groundwork for a machine learning (ML) expert system approach in a form that is closer to a neural network node structure.
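A minimal sketch of grading candidate values by Bayesian updating, multiplying a prior over candidates by the likelihood of each piece of evidence; the candidate values, priors, and likelihoods are illustrative, not the paper's knowledge base:

    # Candidate values, priors, and evidence likelihoods are illustrative only.
    def posterior(prior, likelihoods):
        """prior: {value: p}; likelihoods: list of {value: P(evidence | value)}."""
        post = dict(prior)
        for like in likelihoods:
            post = {v: post[v] * like.get(v, 1e-9) for v in post}
            z = sum(post.values())
            post = {v: p / z for v, p in post.items()}   # renormalize after each update
        return post

    prior = {"USD": 0.5, "EUR": 0.3, "GBP": 0.2}           # candidates for a missing field
    evidence = [{"USD": 0.7, "EUR": 0.2, "GBP": 0.1},       # evidence 1 (e.g. country column)
                {"USD": 0.6, "EUR": 0.3, "GBP": 0.1}]       # evidence 2 (e.g. amount format)
    post = posterior(prior, evidence)
    print(post, "-> most probable value:", max(post, key=post.get))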
The new instrumentation and control (I&C) systems of nuclear power plants (NPPs) improve the ability to operate the plant and enhance the safety and performance of the NPP. However, they bring a new type of threat to the NPP industry: the cyber threat. The early fault diagnostic system (EDS) is one of the decision support systems that might be used online during the operation stage. The aim of the EDS is to prevent incident/accident evolution through a timely troubleshooting process during any plant operational mode. This means that any significant deviation of plant parameters from normal values is pointed out to plant operators, together with the cause that generated the deviation, well before any undesired threshold potentially leading to a prohibited plant state is reached. The paper lists the key benefits of using the EDS to counter the cyber threat and proposes a framework for cybersecurity assessment using the EDS during the operational stage.
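A minimal sketch of the early-deviation idea, assuming a simple linear-trend extrapolation that warns when a parameter is projected to cross an undesired threshold; the parameter, threshold, and extrapolation rule are illustrative and not the EDS algorithm itself:

    import numpy as np

    # Illustrative early warning: fit a linear trend to recent samples and estimate
    # when it would cross an undesired threshold (parameter and threshold are made up).
    def time_to_threshold(samples, threshold, dt=1.0):
        t = np.arange(len(samples)) * dt
        slope, _ = np.polyfit(t, samples, 1)        # linear trend
        if slope <= 0:
            return None                             # not rising toward the threshold
        return max((threshold - samples[-1]) / slope, 0.0)

    coolant_temp = [290.0, 290.4, 290.9, 291.5, 292.2]   # deg C, sampled every second
    eta = time_to_threshold(coolant_temp, threshold=300.0)
    if eta is not None and eta < 60:
        print(f"early warning: threshold projected to be reached in ~{eta:.0f} s")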
Smart grid cyber-security is a critical issue because of the widespread deployment of information technology. To achieve secure and reliable operation, the complexity of human-automation interaction (HAI) necessitates more sophisticated and intelligent methodologies. In this paper, an adaptive autonomy fuzzy expert system is developed using a gradient descent algorithm to determine the Level of Automation (LOA), based on changes in Performance Shaping Factors (PSFs). These PSFs indicate the effects of environmental conditions on the performance of HAI. The major advantage of this method is that the fuzzy rules or membership functions can be learnt without changing the form of the fuzzy rules used in conventional fuzzy control. Because of data shortage, the Leave-One-Out Cross-Validation (LOOCV) technique is applied to assess how the results of the proposed system generalize to new contingency situations. The expert system database is extracted from superior experts' judgments. To account for the importance of each PSF, weighted rules are also considered. In addition, some new environmental conditions are introduced that have not been seen before. Nine scenarios are discussed to reveal the performance of the proposed system. The results confirm that the presented fuzzy expert system can effectively calculate the proper LOA even in new contingency situations.
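A minimal sketch of tuning membership function parameters by gradient descent for a zero-order Sugeno-style LOA model, using synthetic PSF/LOA data, fixed Gaussian widths, and numerical gradients; all data and hyperparameters are illustrative assumptions, not the paper's setup:

    import numpy as np

    # Zero-order Sugeno-style model: LOA(x) = sum_i mu_i(x)*c_i / sum_i mu_i(x),
    # with Gaussian membership functions of fixed width (a simplification).
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 30)                     # normalized PSF values (synthetic)
    y = 1 + 4 * x + rng.normal(0, 0.1, 30)        # expert-judged LOA (synthetic)

    params = np.array([0.2, 0.5, 0.8,             # membership function centers
                       1.0, 3.0, 5.0])            # rule consequents (LOA levels)
    WIDTH = 0.2                                   # fixed Gaussian width

    def predict(x, p):
        c, q = p[:3], p[3:]
        mu = np.exp(-((x[:, None] - c) ** 2) / (2 * WIDTH ** 2))   # firing strengths
        return (mu * q).sum(axis=1) / mu.sum(axis=1)

    def loss(p):
        return np.mean((predict(x, p) - y) ** 2)

    lr, eps = 0.05, 1e-5
    for _ in range(500):                          # gradient descent (numerical gradient)
        grad = np.array([(loss(params + eps * e) - loss(params - eps * e)) / (2 * eps)
                         for e in np.eye(params.size)])
        params -= lr * grad
    print("fitted parameters:", params.round(2), "MSE:", round(loss(params), 4))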
The use of risk information can help software engineers identify software components that are likely vulnerable or require extra attention when testing. Some studies have shown that requirements risk-based approaches can improve the effectiveness of regression testing techniques. However, the risk estimation processes used in such approaches can be subjective, time-consuming, and costly. In this research, we introduce a fuzzy expert system that emulates human thinking to address the subjectivity-related issues in the risk estimation process in a systematic and efficient way, and thus further improve the effectiveness of test case prioritization. Furthermore, the data required for our approach were gathered using a semi-automated process that made the risk estimation less subjective. The empirical results indicate that the new prioritization approach can improve the rate of fault detection over several existing test case prioritization techniques, while reducing the threats posed by subjective risk estimation.
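A minimal sketch of greedy, risk-weighted test case prioritization, assuming each requirement's risk score would come from the fuzzy expert system; the scores and coverage map here are illustrative:

    # Greedy prioritization by uncovered requirement risk; the risk scores (which the
    # fuzzy expert system would supply) and the coverage map are illustrative.
    requirement_risk = {"R1": 0.9, "R2": 0.4, "R3": 0.7, "R4": 0.2}
    coverage = {"T1": {"R1"}, "T2": {"R2", "R4"}, "T3": {"R1", "R3"}, "T4": {"R3"}}

    def prioritize(coverage, risk):
        pending, remaining, order = dict(coverage), set(risk), []
        while pending:
            # pick the test that covers the most not-yet-covered risk
            best = max(pending, key=lambda t: sum(risk[r] for r in pending[t] & remaining))
            order.append(best)
            remaining -= pending.pop(best)
        return order

    print(prioritize(coverage, requirement_risk))   # -> ['T3', 'T2', 'T1', 'T4']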
Due to its costly and time-consuming nature and the wide range of passive barrier elements and tools for breaching them, testing the delay time of passive barriers is feasible only as an experimental tool to verify expert judgements of those delay times. The article focuses on the possibility of creating and using a new method for acquiring delay time values for various passive barrier elements from expert judgements, which could support the creation of charts in which each interaction between a mechanical barrier element and a potential bypassing tool is assigned a temporal value. The article begins with a basic description of expert judgement methods previously applied to prognoses of socio-economic development and other societal areas, the so-called soft systems. For the delay time problem, this method had to be modified so that the prospective output could be expressed as a specific quantitative value. To achieve this goal, each stage of the expert judgement was adjusted, using suitable scientific methods, to select appropriate experts and then to obtain and process the expert data. High emphasis was placed on evaluating the quality and reliability of the expert judgements, taking into account the specifics of expert selection such as the experts' low numbers, specialization, and practical experience.
The paper discusses the architectural, algorithmic, and computing aspects of creating and operating a class of expert system for managing the technological safety of an enterprise under a large flow of diagnostic variables. The algorithm for finding a faulty technological chain uses expert information, formed as a set of evidence on the influence of diagnostic variables on the correctness of the technological process. Using the Dempster-Shafer belief function allows the overall probability measure on subsets of faulty process chains to be determined. To combine different pieces of evidence, the orthogonal sums of the basic probabilities determined for each piece of evidence are calculated. This procedure is converted into production rules of the knowledge base. A description of the developed prototype of the expert system, its architecture, algorithms, and software is given. The functionality of the expert system and its configuration tools for a specific type of production are also discussed.
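A minimal sketch of Dempster's rule of combination (orthogonal sum) over basic probability assignments on subsets of a frame of faulty chains; the frame and masses are illustrative, not the paper's evidence base:

    from itertools import product

    # Dempster's rule (orthogonal sum) for two mass functions given as
    # {frozenset of chains: mass}; the frame and masses are illustrative.
    def combine(m1, m2):
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb              # mass falling on the empty set
        k = 1.0 - conflict                       # normalization constant
        return {s: w / k for s, w in combined.items()}

    # two pieces of evidence about which technological chain is faulty
    m1 = {frozenset({"chain_A"}): 0.6, frozenset({"chain_A", "chain_B"}): 0.4}
    m2 = {frozenset({"chain_B"}): 0.5, frozenset({"chain_A", "chain_B"}): 0.5}
    print(combine(m1, m2))   # combined belief masses over the surviving subsets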
The number of new malware samples and malware variants has been increasing continuously. Security experts analyze malware to capture its malicious properties and to generate signatures or detection rules, but the analysis overhead keeps growing with the number of malware samples. To analyze large amounts of malware, various kinds of automatic analysis methods are needed. Recently, deep learning techniques such as convolutional neural networks (CNN) and recurrent neural networks (RNN) have been applied to malware classification. The features used in previous approaches are mostly based on API (Application Programming Interface) information, and the API invocation information can be obtained through dynamic analysis. However, the invocation information may not reflect the malicious behaviors of malware because malware developers use various analysis avoidance techniques. Therefore, deep learning-based malware analysis using other features still needs to be developed to improve analysis performance. In this paper, we propose a malware classification method using a deep learning algorithm based on byte information. Our proposed method uses images generated from malware byte information that can reflect malware behavioral context, and a convolutional neural network-based sentence analysis is used to process the generated images. We performed several experiments to show the effectiveness of our proposed method; the results show that it achieves higher accuracy than a naive CNN model, with a detection accuracy of about 99%.
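A minimal sketch of converting a binary's raw bytes into a fixed-width grayscale image matrix, as is commonly done for CNN-based malware classification; the width of 256 and the zero padding are illustrative choices, not necessarily the paper's exact procedure:

    import numpy as np

    # Raw bytes of a binary reshaped into a fixed-width grayscale matrix; the width
    # of 256 and zero padding are illustrative choices.
    def bytes_to_image(path, width=256):
        data = np.fromfile(path, dtype=np.uint8)      # byte values 0-255
        pad = (-len(data)) % width                    # pad to a multiple of the width
        data = np.pad(data, (0, pad), constant_values=0)
        return data.reshape(-1, width)                # each row becomes one image row

    # img = bytes_to_image("sample.exe")   # the matrix can then be resized and fed to a CNN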
Biometric authentication offers promise for mobile security, but its adoption can be controversial from both a usability and a security perspective. We describe a preliminary study comparing recollections of biometric adoption by computer security experts and non-experts, collected in semi-structured interviews. Initial decisions and thought processes around biometric adoption were recalled, as well as changes in those views over time. These findings should serve to better inform security education across differing levels of technical experience. Preliminary findings indicate that both user groups were influenced by similar sources of information; however, expert users differed in having more professional requirements affecting their choices (e.g., BYOD). Furthermore, experts often added biometric authentication methods opportunistically during device updates, despite describing higher security concern and caution. Non-experts struggled with setting up fingerprint biometrics, leading to poor adoption. Further interviews are still being conducted.
The strength of an anonymity system depends on the number of users. Therefore, the User eXperience (UX) and usability of these systems are of critical importance for boosting adoption and use. To this end, we carried out a study with 19 non-expert participants to investigate how users experience routine Web browsing via the Tor Browser, focusing particularly on encountered problems and frustrations. Using a mixed-methods quantitative and qualitative approach to study one week of naturalistic use of the Tor Browser, we uncovered a variety of UX issues, such as broken Web sites, latency, lack of common browsing conveniences, differential treatment of Tor traffic, incorrect geolocation, and operational opacity. We applied this insight to suggest a number of UX improvements that could mitigate these issues and reduce user frustration when using the Tor Browser.