Bibliography
Securely accessing data over a network is a central challenge in emerging technologies. Data must be protected from network vulnerabilities, malicious users, hackers, sniffers, and intruders. A novel framework is designed to provide strong security for data transactions over computer networks. It integrates network-level mechanisms with machine learning algorithms to enhance security efficiently, and biometric authentication plays a vital role in uniquely identifying users. A novel mathematical approach is embedded in the machine learning algorithms to address these problems and strengthen security. The results show that the proposed method consistently improves the security of data transactions in emerging technologies.
Payment channel networks have been introduced to mitigate the scalability issues inherent in permissionless decentralized cryptocurrencies such as Bitcoin. Launched in 2018, the Lightning Network (LN) has been gaining popularity and today consists of more than 5000 nodes and 35000 payment channels that jointly hold 965 bitcoins (9.2M USD as of June 2020). This adoption has motivated research from both academia and industry. Payment channels suffer from security vulnerabilities, such as the wormhole attack [39], anonymity issues [38], and scalability limitations related to the upper bound on the number of concurrent payments per channel [28], which have been pointed out by the scientific community but never quantitatively analyzed. In this work, we first analyze the proneness of the LN to the wormhole attack and to attacks against anonymity. We observe that an adversary needs to control only 2% of nodes to learn sensitive payment information (e.g., sender, receiver, and amount) or to carry out the wormhole attack. Second, we study the management of concurrent payments in the LN and quantify its negative effect on scalability. We observe that for micropayments, the forwarding capability of up to 50% of channels is restricted to a value smaller than the channel capacity. This phenomenon hinders scalability and opens the door to denial-of-service attacks: we estimate that a network-wide DoS attack costs within 1.6M USD, while isolating the biggest community costs only 238k USD. Our findings should prompt the LN community to consider the issues studied in this work when educating users about path selection algorithms, as well as to adopt multi-hop payment protocols that provide stronger security, privacy and scalability guarantees.
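For context on why the concurrent-payment bound bites hardest for micropayments, here is a back-of-the-envelope sketch in Python; the per-direction cap of 483 in-flight HTLCs comes from the BOLT #2 specification, while the channel size and payment amount are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope estimate of how the concurrent-HTLC cap limits
# micropayment forwarding in a Lightning channel (illustrative values).

MAX_CONCURRENT_HTLCS = 483      # per-direction cap from the BOLT #2 spec

def effective_capacity_sat(channel_capacity_sat: int, payment_sat: int) -> int:
    """Value the channel can forward concurrently at a given payment size,
    which for small payments may be far below the nominal capacity."""
    in_flight_limit = MAX_CONCURRENT_HTLCS * payment_sat
    return min(channel_capacity_sat, in_flight_limit)

# A 0.05 BTC (5,000,000 sat) channel forwarding 100-sat micropayments:
print(effective_capacity_sat(5_000_000, 100))
# -> 48300 sat, i.e. under 1% of the nominal channel capacity
```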
The rapid proliferation of biometrics has led to growing concerns about the security and privacy of biometric data (templates). A biometric uniquely identifies an individual and, unlike a password, cannot be revoked or replaced, since it is unique and fixed for every individual. To address this problem, many biometric template protection methods using fully homomorphic encryption have been proposed. However, most of them (i) are computationally expensive and practically infeasible, (ii) do not support operations over real-valued biometric feature vectors without quantization, (iii) do not support packing of real-valued feature vectors into a ciphertext, and (iv) require multi-shot enrollment of users for improved matching performance. To address these limitations, we propose a secure and privacy-preserving method for biometric template protection using fully homomorphic encryption. The proposed method is computationally efficient and practically feasible, supports operations over real-valued feature vectors without quantization, and packs real-valued feature vectors into a single ciphertext. In addition, it enrolls users with one-shot enrollment. To evaluate the proposed method, we use three face datasets, namely LFW, FEI and the Georgia Tech face dataset. The encrypted face template (for a 128-dimensional feature vector) requires 32.8 KB of memory, and it takes 2.83 milliseconds to match a pair of encrypted templates. The proposed method improves matching performance by 3% compared to the state of the art, while providing high template security.
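To make the packing-and-matching idea concrete, the following is a minimal sketch using the open-source TenSEAL library (a CKKS implementation), which packs a real-valued vector into a single ciphertext and supports encrypted inner products; the random 128-dimensional vectors stand in for face embeddings, and this illustrates the general technique, not the authors' implementation.

```python
# Sketch: matching two face templates under CKKS, with each real-valued
# vector packed into one ciphertext. Requires `pip install tenseal`;
# the vectors below are random placeholders for real embeddings.
import numpy as np
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # needed for the rotations behind dot()

# Two L2-normalised 128-d templates (placeholders for face embeddings)
a = np.random.randn(128); a /= np.linalg.norm(a)
b = np.random.randn(128); b /= np.linalg.norm(b)

enc_a = ts.ckks_vector(context, a.tolist())  # one ciphertext per template
enc_b = ts.ckks_vector(context, b.tolist())

# Encrypted inner product ~ cosine similarity of the normalised vectors
score = enc_a.dot(enc_b).decrypt()[0]
print(score)
```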
Biometric authentication is the preferred authentication scheme in modern computing systems. While it offers enhanced usability, it also requires careful handling of users' sensitive biometric templates. In this paper, a distributed scheme is presented that eliminates the need for a central node holding users' biometric templates. This is replaced by an Ethereum/IPFS combination on which the users' templates are stored in homomorphically encrypted form. The scheme enables the biometric authentication of users by any third-party service, while the actual biometric templates never leave the user's device in unencrypted form. Secure authentication of users is enabled, while sensitive biometric data are not exposed to anyone. Experiments show that the scheme can be applied as an authentication mechanism with minimal time overhead.
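One plausible shape for the storage flow described here (our sketch, not the paper's code): pin the already-encrypted template to IPFS and keep only its content identifier for on-chain anchoring; the registry contract referred to in the comment is hypothetical.

```python
# Sketch: store an (already homomorphically encrypted) template on IPFS
# and keep only its CID for on-chain anchoring. Requires a local IPFS
# daemon and `pip install ipfshttpclient`. The on-chain step is only
# indicated in a comment, since the contract here would be hypothetical.
import ipfshttpclient

encrypted_template = b"...ciphertext bytes from the encryption step..."

client = ipfshttpclient.connect()           # default: /ip4/127.0.0.1/tcp/5001
cid = client.add_bytes(encrypted_template)  # content identifier of the blob
print("template CID:", cid)

# A hypothetical registry contract would then map the user's Ethereum
# address to this CID, e.g. registry.functions.register(cid).transact(...)
```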
Malware is any software that causes harm to user information, computer systems or networks. Modern computing and internet systems are facing an increase in malware threats from the internet. It is observed that different malware often follows the same structural patterns with minimal alterations. Threats have evolved from file-based malware to fileless malware, also known as Advanced Volatile Threats (AVTs). Fileless malware is complex and evasive, exploiting pre-installed trusted programs to carry out its malicious intent. It is designed to run in system memory with a very small footprint, leaving no artifacts on physical hard drives. Traditional antivirus signatures and heuristic analysis are unable to detect this kind of malware due to its sophisticated and evasive nature. This paper provides information on the detection, mitigation and analysis of such threats.
Existing cyber security solutions have been basically developed using knowledge-based models that often cannot detect new cyber-attack families. With the boom of Artificial Intelligence (AI), especially Deep Learning (DL) algorithms, those security solutions have been augmented with AI models to discover, trace, mitigate or respond to incidents involving new security events. These algorithms demand a large number of heterogeneous data sources for training and validating new security systems. This paper presents the description of new datasets, the so-called ToN_IoT, which comprise federated data sources collected from telemetry datasets of IoT services, operating system datasets of Windows and Linux, and datasets of network traffic. The paper introduces the testbed and the description of the ToN_IoT datasets for Windows operating systems. The testbed was implemented in three layers: edge, fog and cloud. The edge layer involves IoT and network devices, the fog layer contains virtual machines and gateways, and the cloud layer involves cloud services, such as data analytics, linked to the other two layers. These layers were dynamically managed using Software-Defined Networking (SDN) and Network Function Virtualization (NFV), via the VMware NSX and vCloud NFV platforms. The Windows datasets were collected from audit traces of memory, processors, networks, processes and hard disks. The datasets can be used to evaluate various AI-based cyber security solutions, including intrusion detection, threat intelligence and hunting, privacy preservation and digital forensics, because they contain a wide range of recent normal and attack features and observations, as well as authentic ground-truth events. The datasets can be publicly accessed from this link [1].
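A minimal sketch of how such a dataset might be consumed for a baseline intrusion-detection experiment; the CSV file name and the 'label' column are assumptions about the published layout and should be adjusted to the actual download.

```python
# Sketch: baseline binary intrusion-detection experiment on a ToN_IoT
# Windows CSV export. File name and 'label' column are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("Train_Test_Windows10.csv")   # hypothetical file name
y = df["label"]                                # assumed: 0 = normal, 1 = attack
X = df.select_dtypes("number").drop(columns=["label"], errors="ignore")

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```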
We leverage deep learning algorithms on various user behavioral information gathered from end-user devices to classify a subject of interest. Despite the ability of these techniques to counter spoofing threats, they are vulnerable to adversarial learning attacks, in which an attacker adds adversarial noise to the input samples to fool the classifier into false acceptance. Recently, a handful of mature techniques such as the Fast Gradient Sign Method (FGSM) have been proposed for white-box attacks, where the attacker has complete knowledge of the machine learning model. In contrast, we mount a black-box attack on a behavioral biometric system based on gait patterns, by applying FGSM to a trained shadow model that mimics the target system. The attacker has limited knowledge of the target model and no knowledge of the real user being authenticated, yet induces a false acceptance in authentication. Our goal is to understand the feasibility of a black-box attack and to what extent FGSM on shadow models contributes to its success. Our results show that the performance of FGSM depends strongly on the quality of the shadow model, which is in turn affected by key factors such as the number of queries the target system allows for training the shadow model. Our experiments reveal strong relationships between shadow-model quality and FGSM performance, as well as the effect of the number of FGSM iterations used to create an attack instance. These insights also shed light on how the shareability of deep learning models can be exploited to launch a successful attack.
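For reference, the FGSM step at the heart of the shadow-model attack is a single signed-gradient perturbation; a minimal PyTorch sketch follows, with a placeholder model and a random stand-in for a gait feature vector.

```python
# Sketch: one targeted FGSM step against a (shadow) model, as used to
# craft adversarial samples. Model and input are random placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 64, requires_grad=True)   # stand-in gait feature vector
target = torch.tensor([1])                   # class the attacker wants accepted

loss = loss_fn(model(x), target)
loss.backward()

eps = 0.05
# Targeted FGSM: step *against* the gradient sign to move toward `target`
x_adv = (x - eps * x.grad.sign()).detach()
```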
Advances in technology have led not only to increased security and privacy but also to new channels of information leakage. These new leak channels have increased the relevance of various types of attacks. One such class is side-channel attacks, i.e., attacks that exploit vulnerabilities in the practical implementation of an algorithm rather than in its mathematical design. As these attacks have developed, however, methods of protection against them have also appeared. One such method is White-Box Cryptography.
The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for the successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are being employed to make important predictions, the demand from stakeholders for transparency and explainability is increasing. Explanations supporting the output of ML models are crucial in cyber security, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three different areas: (a) creating and improving explainability methods that help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in the white-box setting; and (c) defining the exact properties and metrics of the explanations generated by models. However, they have not covered the security properties and threat models relevant to the cybersecurity domain, nor attacks on explainable models in black-box settings. In this paper, we bridge this gap by proposing a taxonomy for Explainable Artificial Intelligence (XAI) methods that covers various security properties and threat models relevant to the cyber security domain. We design a novel black-box attack for analyzing the consistency, correctness and confidence security properties of gradient-based XAI methods. We validate the proposed system on three security-relevant datasets and models, and demonstrate that the method achieves the attacker's goal of misleading both the classifier and the explanation report, or only the explainability method without affecting the classifier's output. Our evaluation of the proposed approach shows promising results that can help in designing secure and robust XAI methods.
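As background, the gradient-based explanation methods targeted by such attacks reduce to input-gradient saliency; the following minimal sketch (placeholder model and input, our illustration rather than the paper's setup) shows the kind of attribution report an attacker would try to mislead.

```python
# Sketch: vanilla input-gradient saliency, the family of gradient-based
# explanations targeted by the black-box attack. Placeholders throughout.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(30, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(1, 30, requires_grad=True)   # stand-in feature vector

score = model(x)[0, 1]        # logit of the predicted/target class
score.backward()

saliency = x.grad.abs().squeeze()            # per-feature attribution
top5 = torch.topk(saliency, 5).indices
print("most influential features:", top5.tolist())
```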
We consider different models of malicious multiple access channels, in particular the binary adder channel and the A-channel, and show how they can be used to reformulate digital fingerprinting coding problems. In particular, we propose a new model of multimedia fingerprinting coding. In the new model, not only zeros and plus/minus ones but arbitrary coefficients of linear combinations of noise-like signals can be used to form watermarks (digital fingerprints). This modification dramatically increases the possible number of users with the property that if t or fewer malicious users create a forged digital fingerprint, then the dealer of the system can find all of them with zero error probability. We show how the arising problems are related to the compressed sensing problem.
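In symbols, the generalized forgery model can be written as follows (the notation is ours, not the paper's):

```latex
% Generalized forgery model (notation ours): a coalition S of at most t
% users combines their fingerprints f_i with arbitrary nonzero real
% coefficients, instead of only 0/\pm 1 as in earlier models:
\[
  y \;=\; \sum_{i \in S} c_i f_i ,
  \qquad |S| \le t ,\quad c_i \in \mathbb{R}\setminus\{0\} ,
\]
% and the dealer must recover the whole coalition S from y with zero
% error, i.e. recover the support of a t-sparse coefficient vector --
% which is where the link to compressed sensing arises.
```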
Cloud forensics investigates crimes committed over cloud infrastructures, such as SLA violations and storage privacy breaches. Cloud storage forensics is the process of recording the history of the creation of, and the operations performed on, a cloud data object, and investigating it. Secure data provenance in the cloud is crucial for data accountability, forensics, and privacy. Towards this, we present a cloud-based data provenance framework using blockchain, which traces data record operations and generates provenance data. Initially, we design a Dropbox-like application using AWS S3 storage. The application provides cloud storage for the students and faculty of the university, making the storage and sharing of work and resources efficient. We then design a data provenance mechanism for users' confidential files using the Ethereum blockchain. We also evaluate the proposed system using performance parameters such as query and transaction latency, varying the load and the number of nodes in the blockchain network.
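One plausible shape for the provenance records being generated and anchored (our sketch, not the authors' schema): canonicalize each file-operation record as JSON and store only its digest on-chain.

```python
# Sketch: build and hash a provenance record for a cloud file operation
# so that the digest can be anchored on Ethereum. The record schema is
# our assumption; the on-chain call is indicated only in a comment.
import hashlib
import json
import time

def provenance_digest(user: str, operation: str, object_key: str,
                      file_sha256: str) -> str:
    record = {
        "user": user,              # who performed the operation
        "op": operation,           # e.g. "upload", "share", "delete"
        "object": object_key,      # S3 key of the cloud data object
        "file_hash": file_sha256,  # hash of the file content itself
        "ts": int(time.time()),    # when it happened
    }
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

digest = provenance_digest("alice", "upload", "reports/q1.pdf", "ab12...")
# A provenance contract would then store it, e.g.
# contract.functions.record(digest).transact({...})  (hypothetical ABI)
print(digest)
```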
Artificial Intelligence systems have brought significant benefits to users and society, but while the data feeding them keeps growing, so does the surface for privacy and security leaks. Severe threats to the right to privacy have obliged governments to enact specific regulations ensuring privacy preservation in any kind of transaction involving sensitive information. In the case of digital and/or physical documents containing sensitive information, the right to privacy can be preserved by data obfuscation procedures. The ability to recognize sensitive information for obfuscation is typically entrusted to the experience of human experts, who are overwhelmed by the ever-increasing amount of documents to process. Artificial intelligence could substantially mitigate the effort of the human officers and speed up these processes. However, until enough knowledge is available in a machine-readable format, effective automatic systems cannot be developed. In this work we propose a methodology for transferring and leveraging general knowledge across domain-specific tasks. We built, from scratch, domain-specific knowledge datasets for training artificial intelligence models that support human experts in privacy-preserving tasks. We applied a mixture of natural language processing techniques to unlabeled domain-specific document corpora to automatically obtain labeled documents in which sensitive information is recognized and tagged. We performed preliminary tests on just over 10,000 documents from the healthcare and justice domains, with human experts supporting the validation. The results, measured in terms of precision, recall and F1-score across the two domains, were promising and encourage further investigation.
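A minimal sketch of the kind of weak auto-labeling pipeline described, using off-the-shelf named-entity recognition in spaCy; the model choice and the mapping from entity types to "sensitive" are our assumptions, not the paper's.

```python
# Sketch: weakly auto-label sensitive spans in unlabeled documents via
# off-the-shelf NER, yielding tagged data for training domain models.
# The entity-to-"sensitive" mapping here is an illustrative assumption.
import spacy

nlp = spacy.load("en_core_web_sm")   # pip install spacy; download the model
SENSITIVE = {"PERSON", "GPE", "DATE", "ORG"}

def auto_label(text: str):
    """Return (span, entity type, char offsets) for sensitive entities."""
    doc = nlp(text)
    return [(ent.text, ent.label_, ent.start_char, ent.end_char)
            for ent in doc.ents if ent.label_ in SENSITIVE]

print(auto_label("John Smith was admitted to Mercy Hospital on 3 May 2019."))
# e.g. [('John Smith', 'PERSON', 0, 10), ...] depending on the model
```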