Bibliography
A biometric access-control system for information resources has been developed. The software and hardware complex is designed to protect information resources and personal data from unauthorized access using fingerprint-based user authentication. In the developed complex, the traditional entry of a login and password is replaced by placing a finger on a fingerprint scanner. The system automatically recognizes the fingerprint, grants access to the information resource, encrypts personal data, and automates the authorization process on the web resource. The web application was implemented using the Bootstrap framework, the 000webhost web server, the phpMyAdmin database server, the PHP scripting language, and the HTML hypertext markup language, along with cascading style sheets and embedded JavaScript, yielding a full-fledged website and a Google Chrome extension that can be integrated into other systems. A structural schematic diagram was produced and a device design is proposed. The operating algorithm of the program and the device's control program, written in C, were developed.
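The abstract names PHP and C as the implementation languages but includes no code. Purely as an illustration of the described authorization flow, the Python sketch below shows how a server might swap a login/password pair for a fingerprint-template match; all names (ENROLLED, authenticate) are hypothetical, and real systems use fuzzy template matching rather than exact hashes.

```python
import hashlib
import hmac
import secrets

# Hypothetical template store: user_id -> hash of the enrolled fingerprint
# template. Real matchers compare fuzzy templates, not exact digests.
ENROLLED = {"user42": hashlib.sha256(b"enrolled-template").hexdigest()}

def authenticate(user_id: str, scanned_template: bytes) -> str | None:
    """Return a session token if the scanned template matches enrollment,
    replacing the traditional login/password step."""
    digest = hashlib.sha256(scanned_template).hexdigest()
    stored = ENROLLED.get(user_id)
    if stored is not None and hmac.compare_digest(digest, stored):
        return secrets.token_urlsafe(32)  # token grants access to the resource
    return None
```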
The Zero Trust model ensures that each node is responsible for approving a transaction before it is committed. Data owners can track their data while it is shared among the various data custodians, ensuring data security. The consensus algorithm enables users to trust the network, as malicious nodes fail to get approval from all nodes, causing the transaction to be aborted. The use case chosen to demonstrate the proposed consensus algorithm is a college placement system. The algorithm has been extended to implement a diversified, decentralized, automated placement system in which the data owner, i.e., the student, maintains an immutable certificate vault and the student's data is validated by a verifier network, i.e., the academic and placement departments. Data transfers from the student to companies are recorded as transactions in the distributed ledger, or blockchain, allowing the data to be tracked by the student.
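A minimal sketch of the unanimous-approval rule the abstract describes, with hypothetical names (Transaction, commit, verifier_nodes); the paper's actual ledger and certificate-vault structures are not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    sender: str      # data owner, e.g. the student
    receiver: str    # data custodian, e.g. a recruiting company
    payload: str     # reference to the certificate being shared
    approvals: dict = field(default_factory=dict)

def commit(tx: Transaction, verifier_nodes: list[str],
           votes: dict[str, bool]) -> bool:
    """Commit only if every verifier node approves; a single missing or
    negative vote (e.g. a node flagging malicious data) aborts the commit."""
    if not all(votes.get(node, False) for node in verifier_nodes):
        return False
    tx.approvals = {node: True for node in verifier_nodes}
    return True
```

Once committed, the transaction would be appended to the distributed ledger, which is what lets the student trace each transfer.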
The increase of cyber attacks in both number and variety in recent years demands building more sophisticated network intrusion detection systems (NIDS). These NIDS perform better when they can monitor all the traffic traversing the network, as when deployed on a Software-Defined Network (SDN). Because of their inability to detect zero-day attacks, signature-based NIDS, traditionally used for detecting malicious traffic, are beginning to be replaced by anomaly-based NIDS built on neural networks. However, it has recently been shown that such NIDS have a drawback of their own: vulnerability to adversarial example attacks. Moreover, they have mostly been evaluated on old datasets that do not represent the variety of attacks network systems might face today. In this paper, we present Reconstruction from Partial Observation (RePO), a new mechanism for building an NIDS with the help of denoising autoencoders, capable of detecting different types of network attacks in a low-false-alarm setting with enhanced robustness against adversarial example attacks. Our evaluation, conducted on a dataset with a variety of network attacks, shows that denoising autoencoders can improve detection of malicious traffic by up to 29% in a normal setting and by up to 45% in an adversarial setting compared to other recently proposed anomaly detectors.
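RePO's exact architecture and training procedure are specified in the paper; the sketch below only illustrates the general idea of a denoising autoencoder as an anomaly detector, using random placeholder data, a hypothetical feature count, and TensorFlow/Keras as an assumed framework.

```python
import numpy as np
import tensorflow as tf

# Toy denoising autoencoder: learn to reconstruct clean flow features from
# partially masked inputs; flag flows whose reconstruction error is high.
n_features = 20  # hypothetical number of flow features

inputs = tf.keras.Input(shape=(n_features,))
hidden = tf.keras.layers.Dense(8, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(n_features, activation="linear")(hidden)
dae = tf.keras.Model(inputs, outputs)
dae.compile(optimizer="adam", loss="mse")

benign = np.random.rand(1000, n_features).astype("float32")  # placeholder data
noisy = benign * (np.random.rand(*benign.shape) > 0.3)       # mask ~30% of features
dae.fit(noisy, benign, epochs=5, verbose=0)

def anomaly_score(x: np.ndarray) -> np.ndarray:
    """Mean squared reconstruction error per sample; alert above a threshold."""
    recon = dae.predict(x, verbose=0)
    return ((x - recon) ** 2).mean(axis=1)
```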
Microarchitectural side-channel attacks (SCAs) have recently emerged to compromise the security of computer systems by exploiting existing hardware vulnerabilities in processors. To detect such attacks, prior studies have proposed deploying low-level features captured from the built-in Hardware Performance Counter (HPC) registers of modern microprocessors to implement accurate Machine Learning (ML)-based SCA detectors. Though effective, such detection techniques have mainly focused on binary classification models, offering limited insight into the type of attack. In addition, while existing SCA detectors require prior knowledge of attack applications to detect the pattern of side-channel attacks using a variety of microarchitectural features, detecting unknown (zero-day) SCAs at run-time using the available HPCs remains a major challenge. In response, in this work we first identify the most important HPC features for SCA detection using an effective feature reduction method. Next, we propose Phased-Guard, a two-level machine learning-based framework that accurately detects and classifies both known and unknown attacks at run-time using the most prominent low-level features. In the first level (SCA detection), Phased-Guard uses a binary classification model to detect the presence of SCAs on the target system by distinguishing the critical scenarios of a system under attack and a system under no attack. In the second level (SCA identification), to further enhance security against side-channel attacks, Phased-Guard deploys a multiclass classification model to identify the type of SCA application. The experimental results indicate that Phased-Guard, by monitoring only the victim applications' microarchitectural HPC data, achieves up to 98% attack detection accuracy and 99.5% SCA identification accuracy, significantly outperforming state-of-the-art solutions by up to 82% in zero-day attack detection at a cost of only 4% monitoring performance overhead.
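A structural sketch of the two-level pipeline, assuming scikit-learn random forests and random placeholder HPC data; the paper's actual feature set, classifiers and attack classes may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical HPC feature vectors (e.g. cache misses, branch mispredictions).
X_train = np.random.rand(600, 8)
y_attack = np.random.randint(0, 2, 600)           # 0 = no attack, 1 = attack
X_sca = X_train[y_attack == 1]
y_type = np.random.randint(0, 3, len(X_sca))      # e.g. Flush+Reload, Prime+Probe, ...

detector = RandomForestClassifier().fit(X_train, y_attack)  # level 1: binary
identifier = RandomForestClassifier().fit(X_sca, y_type)    # level 2: multiclass

def phased_guard(sample: np.ndarray) -> str:
    """Two-level pipeline: detect an SCA first, then identify its type."""
    if detector.predict(sample.reshape(1, -1))[0] == 0:
        return "no attack"
    return f"attack type {identifier.predict(sample.reshape(1, -1))[0]}"
```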
The relevance of data protection is related to the intensive informatization of various aspects of society and the need to prevent unauthorized access to data. Worldwide spending on information security (IS) currently amounts to \$81.7 billion and is forecast to reach about \$105 billion by 2020 [1]. In the public sector, information protection of military facilities is the most critical; in the private sector, financial organizations are among the leaders in spending on information protection. An example of the importance of IS research is the WannaCry ransomware Trojan, which infected hundreds of thousands of computers around the world, with attacks recorded in more than 116 countries. The WannaCry (Wana Decryptor) attack exploits a vulnerability in the Server Message Block service (a protocol for network access to file systems) of the Windows OS. A rootkit (a set of malware) was then installed on the infected system, through which the attackers launched an encryption program. Each vulnerable computer could subsequently be infected by another compromised device within the same local network. About \$70,000 was lost to these attacks (as of 18.05.2017) [2]. The presented work assumes that software-level information protection is fundamentally insufficient to ensure the stable functioning of critical objects. This is due to the possible hardware implementation of undocumented instructions, discussed later. The complexity of computing systems and the degree of integration of their components are constantly growing. Therefore, monitoring the operation of computer hardware, in particular its data processing methods, is necessary to achieve the maximum degree of protection.
File timestamps do not receive much attention from information security specialists and computer forensic scientists. It is believed that timestamps are extremely easy to fake and that the system time of a computer can simply be changed. However, the operating system needs accurate time readings to synchronize processes and work with file objects. The authors estimate that several million timestamps can be stored on a logical partition of a hard disk with the NTFS file system. The MFT stores four timestamps for each file object in the \$STANDARD\_INFORMATION and \$FILE\_NAME attributes. Furthermore, each directory contains, in its \$INDEX\_ROOT or \$INDEX\_ALLOCATION attributes, four more timestamps for each file within it. File timestamps are set and changed as a result of file operations, and different file operations affect timestamps differently. This article presents the results of tool-based observation of the creation and update of timestamps in the MFT resulting from basic file operations. Analysis of the results is of interest for computer forensic science.
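As a small companion to the kind of observation the abstract describes, this Python sketch snapshots the timestamps visible through the OS API before and after a file operation. Note that os.stat reflects the \$STANDARD\_INFORMATION values; the \$FILE\_NAME copies require parsing the MFT with a forensic tool. The file name used is hypothetical.

```python
import os
from datetime import datetime, timezone

def stat_times(path: str) -> dict:
    """Timestamps visible through the OS API (these mirror the
    $STANDARD_INFORMATION attribute, not the $FILE_NAME copies)."""
    st = os.stat(path)
    as_utc = lambda t: datetime.fromtimestamp(t, tz=timezone.utc)
    return {
        "modified": as_utc(st.st_mtime),  # content last written
        "accessed": as_utc(st.st_atime),  # last access (if enabled)
        "created":  as_utc(st.st_ctime),  # on Windows, st_ctime is creation time
    }

# Usage: capture before/after snapshots around a file operation (rename, copy,
# move, ...) and diff them to see which timestamps the operation updates.
before = stat_times("sample.txt")          # assumes sample.txt exists
os.rename("sample.txt", "renamed.txt")
after = stat_times("renamed.txt")
print({k: (before[k], after[k]) for k in before})
```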
Existing cyber security solutions have largely been developed using knowledge-based models that often cannot recognize new cyber-attack families. With the boom of Artificial Intelligence (AI), especially Deep Learning (DL) algorithms, these security solutions have been augmented with AI models to discover, trace, mitigate or respond to incidents involving new security events. Such algorithms demand a large number of heterogeneous data sources for training and validating new security systems. This paper describes a new collection of datasets, called ToN_IoT, which involves federated data sources collected from telemetry datasets of IoT services, operating system datasets of Windows and Linux, and datasets of network traffic. The paper introduces the testbed and the description of the ToN_IoT datasets for Windows operating systems. The testbed was implemented in three layers: edge, fog and cloud. The edge layer involves IoT and network devices, the fog layer contains virtual machines and gateways, and the cloud layer involves cloud services, such as data analytics, linked to the other two layers. These layers were dynamically managed using Software-Defined Networking (SDN) and Network Function Virtualization (NFV), via the VMware NSX and vCloud NFV platforms. The Windows datasets were collected from audit traces of memory, processors, networks, processes and hard disks. The datasets can be used to evaluate various AI-based cyber security solutions, including intrusion detection, threat intelligence and hunting, privacy preservation and digital forensics, because they contain a wide range of recent normal and attack features and observations, as well as authentic ground-truth events. The datasets can be publicly accessed from this link [1].
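A minimal example of the kind of evaluation the datasets are meant to support, assuming a CSV export with 'label' and 'type' columns; the actual ToN_IoT file names and schema should be checked against the published documentation.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical file and column names for the Windows telemetry CSVs.
df = pd.read_csv("ton_iot_windows.csv")
X = df.drop(columns=["label", "type"])  # 'label': normal/attack, 'type': attack family
y = df["type"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)
clf = RandomForestClassifier().fit(X_tr, y_tr)   # baseline intrusion detector
print(classification_report(y_te, clf.predict(X_te)))
```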
Advances in technology have led not only to increased security and privacy but also to new channels of information leakage. New leakage channels have increased the relevance of various types of attacks. One such attack is the side-channel attack, i.e., an attack aimed at vulnerabilities in the practical implementation of an algorithm. However, alongside the development of these attacks, methods of protection against them have also appeared. One such method is White-Box Cryptography.
In Machine Learning, white-box adversarial attacks rely on knowledge of the model's underlying attributes. This work focuses on discovering two distinct pieces of model information: the underlying architecture and the primary training dataset. With the process in this paper, a structured set of input probes and the model's outputs become the training data for a deep classifier. Two subdomains of Machine Learning are explored: image-based classifiers and text transformers based on GPT-2. For image classification, the focus is on commonly deployed architectures and datasets available in popular public libraries. For text generation, a single transformer architecture with multiple parameter scales is explored by fine-tuning on different datasets. Each dataset explored in the image and text domains is distinguishable from the others. The diversity of text transformer outputs implies that further research is needed to successfully classify architecture attribution in the text domain.
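A sketch of the probe-based data collection step, under the assumption that each candidate model can be queried as a black box returning an output vector; all helper names are hypothetical.

```python
import numpy as np

def probe_responses(model, probes: np.ndarray) -> np.ndarray:
    """One flat feature vector: the model's output vectors on a fixed probe set,
    concatenated. `model` is assumed callable, returning a 1-D numpy array."""
    return np.concatenate([model(p) for p in probes])

def build_attribution_dataset(models: dict, probes: np.ndarray):
    """Label each response vector with its architecture/dataset name, producing
    training data for the deep attribution classifier described in the paper."""
    X, y = [], []
    for name, model in models.items():
        X.append(probe_responses(model, probes))
        y.append(name)
    return np.stack(X), y

# A classifier trained on (X, y) then predicts architecture/dataset labels for
# an unseen black-box model from its probe responses alone.
```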
The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are employed to make important predictions, stakeholders increasingly demand transparency and explainability. Explanations supporting the output of ML models are crucial in cyber security, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three different areas: (a) creating and improving explainability methods that help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in the white-box setting; and (c) defining the exact properties and metrics of the explanations generated by models. However, they have not covered the security properties and threat models relevant to the cybersecurity domain, or attacks on explainable models in black-box settings. In this paper, we bridge this gap by proposing a taxonomy for Explainable Artificial Intelligence (XAI) methods, covering various security properties and threat models relevant to the cyber security domain. We design a novel black-box attack for analyzing the consistency, correctness and confidence security properties of gradient-based XAI methods. We validate our proposed system on three security-relevant datasets and models, and demonstrate that the method achieves the attacker's goal of misleading both the classifier and the explanation report, or only the explainability method without affecting the classifier's output. Our evaluation of the proposed approach shows promising results and can help in designing secure and robust XAI methods.
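As a toy probe of the 'consistency' property named in the abstract (not the paper's attack itself), the following black-box check compares a sample's explanation with explanations of slightly perturbed copies; explain is any attribution function, and eps and trials are hypothetical parameters.

```python
import numpy as np

def explanation_consistency(explain, x: np.ndarray, eps: float = 1e-2,
                            trials: int = 20) -> float:
    """Average cosine similarity between the explanation of x and explanations
    of small random perturbations of x. A consistent XAI method should score
    near 1.0; a successful attack drives this down without changing the label.
    `explain` is a black-box function returning a feature-attribution vector."""
    base = explain(x)
    sims = []
    for _ in range(trials):
        noisy = x + np.random.uniform(-eps, eps, size=x.shape)
        e = explain(noisy)
        sims.append(np.dot(base, e) /
                    (np.linalg.norm(base) * np.linalg.norm(e) + 1e-12))
    return float(np.mean(sims))
```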
A rapid rise in cyber-attacks on Cyber-Physical Systems (CPS) has been observed in the last decade. Even more concerning, several of these attacks targeted critical infrastructures, succeeded, and resulted in significant physical and financial damage. Experimental testbeds capable of providing a flexible, scalable and interoperable platform for executing various cybersecurity experiments are highly needed by all stakeholders. A container-based SCADA testbed is presented in this work as such a platform. Through this testbed, network traffic containing ARP spoofing is generated, representing a man-in-the-middle (MITM) attack. Additionally, different systems within the network are scanned, representing a reconnaissance attack. The network traffic generated by both the ARP spoofing and the network scanning is captured and used to prepare a dataset. The dataset is then used to train a network traffic classification model with a machine learning algorithm. The performance of the trained model is evaluated through a series of tests, in which promising results are obtained.
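The paper itself trains an ML classifier on the captured traffic; purely to illustrate what the generated ARP-spoofing traffic looks like on the wire, here is a simple rule-based Scapy monitor (Scapy must be installed and the script run with capture privileges).

```python
from scapy.all import sniff, ARP

ip_to_mac: dict[str, str] = {}

def check_arp(pkt):
    """Flag ARP replies whose IP->MAC binding contradicts an earlier one, a
    simple heuristic for the MITM/ARP-spoofing traffic the testbed generates."""
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:  # op 2 = is-at (reply)
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in ip_to_mac and ip_to_mac[ip] != mac:
            print(f"[!] possible ARP spoofing: {ip} moved {ip_to_mac[ip]} -> {mac}")
        ip_to_mac[ip] = mac

sniff(filter="arp", prn=check_arp, store=False)
```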
Port scans are a persistent problem on contemporary communication networks. Typically used as an attack reconnaissance tool, they can also create problems with application performance and throughput. This paper describes an architecture that deploys sequential neural networks (NNs) to classify packets, separate TCP datagrams, determine the type of TCP packet, and detect port scans. Sequential networks allow this lengthy task to be broken up into component parts and to learn from the current environment. Following classification, analysis is performed to discover scan attempts. We show that neural networks can successfully classify general packetized traffic at recognition rates above 99%, and more complex TCP classes at rates that are also above 99%. We demonstrate that this specific communications task can successfully be broken up into smaller workloads. When tested against actual NMAP scan pcap files, the model successfully discovers open ports and scan attempts with the same high accuracy and low false-positive rate.
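A structure-only sketch of the staged pipeline: in the paper each stage is a sequential neural network, which the placeholder rules below merely stand in for; the probe threshold is hypothetical.

```python
from collections import Counter

def is_tcp(pkt: dict) -> bool:
    """Stage 1 (an NN in the paper): separate TCP datagrams from other traffic."""
    return pkt.get("proto") == "tcp"

def tcp_class(pkt: dict) -> str:
    """Stage 2 (an NN in the paper): determine the type of TCP packet."""
    return "syn_probe" if pkt.get("flags") == "S" else "other"

def detect_scans(packets: list[dict], threshold: int = 100) -> set[str]:
    """Stage 3: aggregate per-source probe counts and flag likely scanners."""
    probes = Counter(p["src"] for p in packets
                     if is_tcp(p) and tcp_class(p) == "syn_probe")
    return {src for src, n in probes.items() if n >= threshold}
```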
In this work, we propose a novel approach for decentralized identifier distribution and synchronization in networks. The protocol generates network entity identifiers composed of timestamps and cryptographically secure random values, with a significant reduction in collision probability. The distribution is inspired by Universally Unique Identifiers and the timestamp-based concurrency control algorithms originating in database applications. We define fundamental requirements for the distribution, including uniqueness, accuracy of distribution, optimal timing behavior, scalability, small impact on network load across different operation modes, and overall compliance with common network security objectives. An implementation of the proposed approach is evaluated and the results are presented. Although originally designed for a domain of proactive defense strategies known as Moving Target Defense, the protocol's general architecture enables arbitrary applications in which identifier distribution in networks has to be decentralized, rapid and secure.
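A minimal sketch of the identifier construction described (a timestamp concatenated with a cryptographically secure random value); the 96-bit random length is an assumption, not the paper's parameter choice.

```python
import secrets
import time

def make_identifier(random_bits: int = 96) -> str:
    """Identifier = nanosecond timestamp || cryptographically secure random
    value. The timestamp orders identifiers; the random part keeps collision
    probability negligible even when nodes generate IDs concurrently."""
    ts = time.time_ns()
    rnd = secrets.token_hex(random_bits // 8)
    return f"{ts:016x}-{rnd}"
```

By the birthday bound, with 96 random bits roughly 2^48 identifiers would have to be generated within a single nanosecond tick before a collision becomes likely.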
We consider different models of malicious multiple-access channels, in particular the binary adder channel and the A-channel, and show how they can be used to reformulate digital fingerprinting coding problems. In particular, we propose a new model of multimedia fingerprinting coding in which not only zeros and plus/minus ones but arbitrary coefficients can be used in the linear combinations of noise-like signals that form watermarks (digital fingerprints). This modification dramatically increases the possible number of users while preserving the property that if t or fewer malicious users create a forged digital fingerprint, the dealer of the system can identify all of them with zero error probability. We show how the arising problems are related to the compressed sensing problem.
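A toy numerical illustration of the new model: each user holds a noise-like signal, a coalition forges a fingerprint as a linear combination with arbitrary coefficients, and the dealer recovers the coalition by correlation, the same sparse-recovery structure as in compressed sensing. The sizes, coefficient range and seed are arbitrary, and the paper's zero-error constructions are far more refined than this simple correlation decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, sig_len, t = 50, 512, 3              # t = coalition size bound
S = rng.standard_normal((n_users, sig_len))   # one noise-like signal per user

coalition = [3, 17, 41]
coeffs = rng.uniform(0.2, 1.0, size=t)        # arbitrary (not just 0/±1) weights
forged = coeffs @ S[coalition]                # coalition's forged fingerprint

# Dealer side: correlate against every user's signal; with near-orthogonal
# noise-like signals, only coalition members produce large correlations.
scores = S @ forged / sig_len
accused = np.argsort(scores)[-t:]
print(sorted(accused))                        # ideally recovers [3, 17, 41]
```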