Biblio
The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are being employed to make important predictions, the demand from stakeholders for transparency and explainability is increasing. Explanations supporting the output of ML models are crucial in cyber security, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three different areas: (a) creating and improving explainability methods that help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in white-box settings; (c) defining the exact properties and metrics of the explanations generated by models. However, they have not covered the security properties and threat models relevant to the cybersecurity domain, or attacks on explainable models in black-box settings. In this paper, we bridge this gap by proposing a taxonomy for Explainable Artificial Intelligence (XAI) methods, covering various security properties and threat models relevant to the cyber security domain. We design a novel black-box attack for analyzing the consistency, correctness, and confidence security properties of gradient-based XAI methods. We validate our proposed system on three security-relevant datasets and models, and demonstrate that the method achieves the attacker's goal of misleading both the classifier and the explanation report, or only the explainability method without affecting the classifier output. Our evaluation of the proposed approach shows promising results and can help in designing secure and robust XAI methods.
Distributed banking platforms and services forgo centralized banks to process financial transactions. For example, M-Pesa provides a distributed banking service in developing regions so that people without a bank account can deposit, withdraw, or transfer money. Current distributed banking systems lack transparency in the monitoring and tracking of distributed banking transactions and thus do not support auditing of these transactions for accountability. To address this issue, this paper proposes a blockchain-based distributed banking (BDB) scheme, which leverages the built-in properties of blockchain technology to record and track immutable transactions. BDB supports distributed financial transaction processing but differs significantly from cryptocurrencies in its design properties, simplicity, and computational efficiency. We implement a prototype of BDB using a smart contract and conduct experiments to show BDB's effectiveness and performance. We further compare our prototype with the Ethereum cryptocurrency to highlight the fundamental differences and demonstrate BDB's superior computational efficiency.
Blockchain technology is the cornerstone of digital trust and the decentralization of systems. The necessity of eliminating trust in computing systems has led researchers to investigate the applicability of blockchain to decentralizing conventional security models. Specifically, researchers continuously aim at minimizing trust in the well-known Public Key Infrastructure (PKI) model, which currently requires a trusted Certificate Authority (CA) to sign digital certificates. Recently, the Automated Certificate Management Environment (ACME) was standardized as a certificate issuance automation protocol. It minimizes human interaction by enabling certificates to be automatically requested, verified, and installed on servers. ACME solves only the automation issue; the trust concerns remain, as a trusted CA is still required. In this paper, we propose decentralizing the ACME protocol by using blockchain technology to address the trust issues of the existing PKI model and to eliminate the need for a trusted CA. The system was implemented and tested on the Ethereum blockchain, and the results showed that the system is feasible in terms of cost, speed, and applicability to a wide range of devices, including Internet of Things (IoT) devices.
Data privacy has been an important area of research in recent years. Datasets often consist of sensitive data fields, the exposure of which may jeopardize the interests of the individuals associated with the data. To resolve this issue, privacy techniques can be used to hinder the identification of a person by anonymizing the sensitive data in the dataset, while the anonymized dataset can still be used by third parties for analysis purposes without obstruction. In this research, we investigated a privacy technique, k-anonymity, for different values of k applied to different numbers of columns of the dataset. Next, the information loss due to k-anonymity is computed. The anonymized files then go through a classification process with several machine-learning algorithms, i.e., Naive Bayes, J48, and a neural network, in order to check the balance between data anonymity and data utility. Based on the classification accuracy, the optimal value of k and the optimal number of columns to anonymize are obtained, and these can then be used by the k-anonymity algorithm to anonymize the dataset.
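As a rough illustration of the anonymity-versus-utility pipeline this abstract describes (not the authors' exact implementation), the sketch below generalizes two hypothetical quasi-identifier columns with pandas, reports the achieved k as the smallest equivalence-class size, and then measures classification accuracy on the anonymized data with Naive Bayes. The file name, column names, binning scheme, and label column are assumptions.

    # Minimal sketch of a k-anonymity vs. utility check (hypothetical columns/bins).
    import pandas as pd
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("records.csv")            # assumed input file
    quasi_ids = ["age", "zipcode"]             # assumed quasi-identifier columns

    # Generalize: bin age into decades, truncate zip code to 3 digits.
    anon = df.copy()
    anon["age"] = (anon["age"] // 10) * 10
    anon["zipcode"] = anon["zipcode"].astype(str).str[:3]

    # Achieved k = size of the smallest equivalence class over the quasi-identifiers.
    k_achieved = anon.groupby(quasi_ids).size().min()
    print("k =", k_achieved)

    # Utility check: classification accuracy on the anonymized data (Naive Bayes here).
    X = pd.get_dummies(anon[quasi_ids])
    y = anon["label"]                          # assumed class column
    print("accuracy =", cross_val_score(GaussianNB(), X, y, cv=5).mean())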
An air-gapped computer is physically isolated from unsecured networks to guarantee effective protection against data exfiltration. Due to air gaps, unauthorized data transfer seems impossible over legitimate communication channels, but in reality many so-called physical covert channels can be constructed to allow data exfiltration across the air gaps. Most such covert channels are very slow and often require certain strict conditions to work (e.g., no physical obstacles between the sender and the receiver). In this paper, we introduce a new physical covert channel named BitJabber that is extremely fast and strong enough to even penetrate concrete walls. We show that this covert channel can be easily created by an unprivileged sender running on a victim’s computer. Specifically, the sender constructs the channel by using only memory accesses to modulate the electromagnetic (EM) signals generated by the DRAM clock. While possessing a very high bandwidth (up to 300,000 bps), this new covert channel is also very reliable (less than 1% error rate). More importantly, this covert channel can enable data exfiltration from an air-gapped computer enclosed in a room with concrete walls up to 15 cm thick.
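To make the mechanism concrete, here is a purely conceptual sketch (not BitJabber itself) of on-off-keyed modulation via memory traffic: a '1' bit is sent by hammering a large buffer so the DRAM bus, and hence its EM emission, stays active, while a '0' bit is sent by idling. The buffer size and bit duration are arbitrary assumptions.

    # Conceptual sketch of on-off-keyed EM modulation via memory traffic
    # (illustrative only; parameters are arbitrary assumptions, not BitJabber's).
    import time
    import numpy as np

    BUF = np.zeros(64 * 1024 * 1024, dtype=np.uint8)   # buffer larger than the LLC
    BIT_TIME = 0.01                                     # seconds per bit (assumed)

    def send_bit(bit: int) -> None:
        end = time.perf_counter() + BIT_TIME
        if bit:
            # '1': hammer memory to keep the DRAM bus active (strong EM emission).
            while time.perf_counter() < end:
                BUF[::4096] += 1                        # touch one byte per page
        else:
            # '0': stay idle so the DRAM bus is mostly quiet.
            while time.perf_counter() < end:
                pass

    def send_message(bits):
        for b in bits:
            send_bit(b)

    send_message([1, 0, 1, 1, 0, 0, 1, 0])              # example payload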
It is necessary to improve the security of underwater acoustic sensor networks (UASNs), since they are mostly used in military applications. Specific emitter identification is the process of identifying different transmitters based on the radio-frequency fingerprint extracted from the received signal. The sonar transmitter is a typical low-frequency radiation source and an important part of a UASN. A Class D power amplifier, a typical nonlinear amplifier, is usually used in sonar transmitters. The inherent nonlinearity of power amplifiers provides fingerprint features that can be used to distinguish transmitters for specific emitter recognition. First, the nonlinearity of the sonar transmitter is studied in depth: the power amplifier is modeled and its nonlinear characteristics are analyzed. After obtaining the nonlinear model of an amplifier, similar amplifiers, like those found in practical applications, are obtained as research objects by varying the model parameters. The output signals of the different models are collected for the same input, and features are then extracted from these outputs and classified. In this paper, the memory polynomial model is used to model the amplifier. The power-spectrum features of the output signals are extracted as fingerprint features. Then, the dimensionality of the high-dimensional features is reduced. Finally, a classifier is used to recognize the amplifier. The experimental results show that individual sonar transmitters can be well identified by using the nonlinear characteristics of the signal. In this way, the method can enhance the communication security of UASNs.
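For reference, the memory polynomial is a standard baseband power-amplifier model of the form y(n) = sum over k and q of a[k,q] * x(n-q) * |x(n-q)|^(k-1). The sketch below evaluates it for two "similar" amplifiers whose coefficients differ slightly; the orders and coefficient values are illustrative assumptions, not those used in the paper.

    # Sketch of a memory polynomial PA model: y(n) = sum_{k,q} a[k,q] * x(n-q) * |x(n-q)|^(k-1)
    import numpy as np

    def memory_polynomial(x, coeffs):
        """x: complex baseband input; coeffs[k, q] for nonlinearity order k+1 and delay q."""
        K, Q = coeffs.shape
        y = np.zeros_like(x, dtype=complex)
        for k in range(K):
            for q in range(Q):
                xd = np.roll(x, q)          # delayed input (first q samples zeroed below)
                xd[:q] = 0
                y += coeffs[k, q] * xd * np.abs(xd) ** k
        return y

    # Two "similar" amplifiers differ only slightly in their coefficients (assumed values).
    x = np.exp(1j * 2 * np.pi * 0.05 * np.arange(1024))
    a1 = np.array([[1.0, 0.05], [0.2, 0.01], [0.05, 0.002]])
    a2 = a1 * (1 + 0.02 * np.random.randn(*a1.shape))
    y1, y2 = memory_polynomial(x, a1), memory_polynomial(x, a2)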
Steganalysis is an interesting classification problem whose goal is to discriminate images containing hidden messages from clean ones. There are many methods, including deep CNN networks, for extracting fine-grained features for this classification task. Nevertheless, little research has been conducted on improving the final classifier. Some state-of-the-art methods try to ensemble the networks through a voting strategy to achieve more stable performance. In this paper, a selection phase is proposed to filter out improper networks before any voting. This filtering is done with a binary-relevance multi-label classification approach. Logistic Regression (LR) is chosen here as the last layer of the network for classification. A large-margin Fisher's linear discriminant (FLD) classifier is assigned to each of the networks; it learns to discriminate the training instances for which its associated network is suitable from those for which it is not. Xu-Net, one of the most famous state-of-the-art steganalysis models, is chosen as the base network. The proposed method, with different approaches, is applied on the BOSSbase dataset and compared with traditional voting and with some state-of-the-art related ensemble techniques. The results show a significant accuracy improvement of the proposed method in comparison with the others.
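The select-then-vote idea can be sketched as follows (a simplification, not the authors' code): each base network gets a per-network gate trained to predict whether the network is suitable for a given instance, and only the gated-in networks vote. Here scikit-learn's LinearDiscriminantAnalysis stands in for the large-margin FLD, and the base networks are placeholder callables; all names are assumptions.

    # Sketch of "select then vote": a discriminant gate per base network decides whether
    # that network's vote is counted for a given image (names/shapes are assumptions).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def fit_gates(nets, X, y):
        """Label each training instance 1 if the associated network classifies it
        correctly, 0 otherwise, then fit one discriminant gate per network."""
        gates = []
        for net in nets:
            suitable = np.array([int(net(x) == yi) for x, yi in zip(X, y)])
            gates.append(LinearDiscriminantAnalysis().fit(X, suitable))
        return gates

    def selective_vote(nets, gates, x_feat):
        """nets: list of predict(x)->0/1 callables; gates: fitted per-net classifiers."""
        votes = [net(x_feat) for net, gate in zip(nets, gates)
                 if gate.predict(x_feat.reshape(1, -1))[0] == 1]
        if not votes:                       # fall back to plain voting if all filtered out
            votes = [net(x_feat) for net in nets]
        return int(np.round(np.mean(votes)))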
Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, stemming from small perturbations being added to the input image. Adversarial examples are generated by a malicious adversary either by obtaining access to the model parameters, such as gradient information, to alter the input, or by attacking a substitute model and transferring those malicious examples over to attack the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations (RP2), generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the RP2 attack. First, we motivate the defense with a frequency analysis of the first-layer feature maps of the network on the LISA dataset, which shows that high-frequency noise is introduced into the input image by the RP2 algorithm. To remove this high-frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. We perform a black-box transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present various regularization schemes to incorporate this low-pass filtering behavior into the training regime of the network and perform white-box attacks. We conclude with an adaptive attack evaluation to show that the success rate of the attack drops from 90% to 20% with total variation regularization, one of the proposed defenses.
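A minimal sketch of the core mechanism, fixed depthwise blurring of first-layer feature maps, is shown below in PyTorch. The kernel size, averaging weights, and layer shapes are assumptions for illustration, not the exact BlurNet configuration.

    # Sketch of a fixed depthwise blur applied to first-layer feature maps
    # (PyTorch; kernel size and placement are assumptions, not the exact BlurNet setup).
    import torch
    import torch.nn as nn

    class DepthwiseBlur(nn.Module):
        def __init__(self, channels: int, k: int = 3):
            super().__init__()
            self.blur = nn.Conv2d(channels, channels, k, padding=k // 2,
                                  groups=channels, bias=False)
            self.blur.weight.data.fill_(1.0 / (k * k))   # standard averaging (blur) kernel
            self.blur.weight.requires_grad_(False)       # fixed, not learned

        def forward(self, x):
            return self.blur(x)

    # Usage: low-pass filter the feature maps right after the first conv layer.
    first_conv = nn.Conv2d(3, 64, 3, padding=1)
    blurred = DepthwiseBlur(64)(first_conv(torch.randn(1, 3, 32, 32)))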
Cloud computing is an Internet-based technology that has emerged rapidly in the last few years due to the popular and in-demand services required by various institutions, organizations, and individuals. Structured, unstructured, and semi-structured data are being transferred at a record pace to cloud servers. Institutions, businesses, and organizations are shifting ever-increasing workloads to cloud servers; given the high cost, space, and maintenance issues associated with big data, cloud computing has become a potential choice for data storage. In a cloud environment, data is not yet completely secure from inside and outside attacks and intrusions, because cloud servers are under the control of a third party. The security of data becomes an important concern due to the storage of sensitive data in the cloud environment. In this paper, we give an overview of the characteristics and state of the art of big data; discuss the top data security and privacy threats, open issues, and current challenges, along with their impact on business, from a future research perspective; and review and analyze previous and recent frameworks and architectures for data security that are continuously being established against threats to improve how data is kept and stored in the cloud environment.
A big data platform provides business units with data platforms, data products, and data services by integrating all data to fully analyze and exploit its intrinsic value. Data accessed by big data platforms may include many users' private and sensitive information, such as a user's hotel stay history or payment information, which is at risk of leakage. This paper first analyzes the risks of data leakage, then introduces in detail the theoretical basis and common methods of data desensitization technology, and finally puts forward an effective ASCII-based application for market-entity credit supervision. The application is committed to solving the problems of insufficient breadth and depth of data utilization for the enterprises involved, the lag in regulatory laws and standards, the separation of credit construction from market supervision business, and the credit constraints of data governance.
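One common desensitization primitive of the kind surveyed here is deterministic character substitution over the printable ASCII range. The sketch below is only an illustration of that idea, not the paper's scheme, and it is not cryptographically secure.

    # Sketch of a simple ASCII-shift desensitization primitive (illustrative only;
    # not the paper's exact scheme and not cryptographically secure).
    def desensitize(value: str, key: int = 7) -> str:
        # Shift printable ASCII characters by a fixed key, wrapping within 0x20-0x7E.
        out = []
        for ch in value:
            c = ord(ch)
            out.append(chr((c - 0x20 + key) % 95 + 0x20) if 0x20 <= c <= 0x7E else ch)
        return "".join(out)

    print(desensitize("card:4111-1111-1111-1111"))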
A Mobile Ad-hoc Network (MANET) consists of different configurations; it must deal with the dynamic nature of its creation and is a self-configuring type of network. The primary task in this type of network is to develop a routing mechanism that delivers high QoS parameters, given the nature of ad-hoc networks. The Ad-hoc On-Demand Distance Vector (AODV) protocol used here is the on-demand routing mechanism for trust computation. The proposed approach uses an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) to detect black hole attacks in the network. The results compare the black hole AODV against our security mechanism, Secure AODV (SAODV). The approach was tested with different numbers of nodes; for 100 nodes it yields an improvement in energy consumption of 54.72%, a throughput of 88.68 kbps, a packet delivery ratio of 92.91%, and an end-to-end delay of about 37.27 ms.
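A toy sketch of the detection step follows (not the paper's implementation): per-node traffic statistics are fed to an SVM and an ANN to flag black-hole behavior, which typically shows up as a low forwarding ratio combined with an unusually high route-reply rate. The feature names and values are illustrative assumptions.

    # Sketch: classify nodes as black hole vs. normal from traffic statistics
    # (feature names and data are illustrative assumptions).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    # Per-node features: [packets_forwarded / packets_received, route_reply_rate, avg_hop_count]
    X_train = np.array([[0.95, 0.10, 3.2],    # normal node
                        [0.02, 0.90, 1.1],    # black hole: drops traffic, answers every RREQ
                        [0.90, 0.12, 2.8],
                        [0.05, 0.85, 1.0]])
    y_train = np.array([0, 1, 0, 1])          # 1 = black hole

    svm = SVC(kernel="rbf").fit(X_train, y_train)
    ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X_train, y_train)

    node = np.array([[0.03, 0.88, 1.2]])
    print("SVM:", svm.predict(node)[0], "ANN:", ann.predict(node)[0])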
With the development of IoT and 5G networks, the demand for the next-generation intelligent transportation system has been growing at a rapid pace. Dynamic mapping has been considered one of the key technologies for reducing traffic accidents and congestion in intelligent transportation systems. However, as the number of vehicles keeps growing, a huge volume of mapping traffic may overload the central cloud, leading to serious performance degradation. In this paper, we propose and prototype a CUPS (control and user plane separation)-based edge computing architecture for dynamic mapping and quantify its benefits through the prototype. Our proposal has two merits: (i) we can mitigate the overhead on the networks and the central cloud, because we only need to abstract and send global dynamic-mapping information from the edge servers to the central cloud; (ii) we can reduce the response latency, since the dynamic-mapping traffic can be isolated from other data traffic by being generated and distributed from a local edge server deployed closer to the vehicles than the central server in the cloud. The capabilities of our system have been quantified: the experimental results show that our system achieves a throughput improvement of more than four times and a response-latency reduction of 67.8% compared to the conventional central-cloud-based approach. Although these results are still preliminary evaluations using our prototype system, we believe that our proposed architecture gives insight into how CUPS and edge computing can be utilized to enable efficient dynamic-mapping applications.