Biblio
Quick Response (QR) codes are rapidly becoming pervasive in our daily life because of their fast readability and the popularity of smartphones with built-in cameras. However, recent research raises security concerns because QR codes can easily be sniffed and decoded, which can lead to private information leakage or financial loss. To address the issue, we present mQRCode, which exploits patterns with specific spatial frequencies to camouflage QR codes. When the targeted receiver puts a camera at the designated position (e.g., 30 cm and 0° above the camouflaged QR code), the original QR code is revealed due to the Moiré phenomenon. Malicious adversaries will only see the camouflaged QR code from any other position. Our experiments show that the decoding rate of mQR codes is 95% or above within 0.83 seconds. When the camera is 10 cm or 15° away from the designated location, the decoding rate drops to 0, so the scheme is secure against attackers.
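Purely as an illustration of the underlying aliasing effect (not the authors' construction; the module count, tile size, and carrier frequencies are made up), the sketch below hides a toy bit matrix by rendering ones and zeros as stripe patterns of two nearby spatial frequencies; subsampling the image, roughly what a camera at the designated distance does, separates the two carriers and recovers the bits.

```python
# Illustrative sketch only: not the mQRCode construction from the paper.
# It shows how two nearby spatial frequencies can hide a binary pattern
# that reappears once the image is sampled at a specific rate (aliasing,
# the same family of effects as the Moire phenomenon exploited in the paper).
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(21, 21))           # toy "QR" module matrix
T = 16                                              # pixels per module (assumed)

x = np.arange(T)
carrier_one = 0.5 + 0.5 * np.cos(np.pi * x)         # period 2 px (for bit = 1)
carrier_zero = 0.5 + 0.5 * np.cos(np.pi * x / 2)    # period 4 px (for bit = 0)

# Build the camouflaged image: every module is a stripe pattern with the
# same average brightness, so the code is not readable at full resolution.
rows = []
for r in range(bits.shape[0]):
    row_tiles = []
    for c in range(bits.shape[1]):
        carrier = carrier_one if bits[r, c] else carrier_zero
        row_tiles.append(np.tile(carrier, (T, 1)))
    rows.append(np.hstack(row_tiles))
camouflaged = np.vstack(rows)

# "Camera at the designated position" ~ subsampling every 2nd column:
# the period-2 carrier aliases to a bright constant, the period-4 carrier
# keeps alternating, so per-module means separate cleanly.
sampled = camouflaged[:, ::2]
recovered = np.zeros_like(bits)
for r in range(bits.shape[0]):
    for c in range(bits.shape[1]):
        tile = sampled[r * T:(r + 1) * T, c * T // 2:(c + 1) * T // 2]
        recovered[r, c] = int(tile.mean() > 0.75)

print("modules recovered correctly:", np.array_equal(recovered, bits))
```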
Observing semantic dependencies in large and heterogeneous networks is a critical task, since it is quite difficult to find the actual source of a malfunction when an error occurs. Dependencies may exist between many network nodes and span multiple hops along paths. If those dependency structures are unknown, debugging errors becomes quite difficult. Since CPS and other large networks change at runtime and consist of custom software and hardware as well as off-the-shelf components, approaches for detecting dependencies between nodes cannot be limited to components under one's own control. In this paper we present an extension to the Information Flow Monitor (IFM) approach. Our goal is for this approach to handle unalterable blackbox nodes. This is quite challenging, since the IFM originally requires each network node to be compliant with the IFM protocol.
Mobile ad hoc networks (MANETs) are self-configuring, dynamic networks in which nodes are free to move. These nodes are susceptible to various malicious attacks. In this paper, we propose a distributed trust-based security scheme to prevent multiple attacks, such as Probe, Denial-of-Service (DoS), Vampire, and User-to-Root (U2R), occurring simultaneously. We report above 95% accuracy in data transmission and reception by applying the proposed scheme. The simulation has been carried out using the network simulator ns-2 in an AODV routing protocol environment. To the best of the authors' knowledge, this is the first work reporting a distributed trust-based scheme for preventing multiple attacks. We also check the scalability of the technique using variable node densities in the network.
Monitoring systems are essential to understand and control the behaviour of systems and networks. Cyber-physical systems (CPS) are particularly delicate from that perspective, since they involve real-time constraints and physical phenomena that are not usually considered in common IT solutions. Therefore, there is a need for publicly available monitoring tools that take these aspects into account. In this poster/demo, we present our initiative, called CPS-MT, towards a versatile, real-time CPS monitoring tool, with a particular focus on security research. We first present its architecture and main components, followed by a MiniCPS-based case study. We also describe a performance analysis and preliminary results. During the demo, we will discuss CPS-MT's capabilities and limitations for security applications.
Recently, IoT, 5G mobile, big data, and artificial intelligence have been increasingly used in the real world. These technologies converge in Cyber-Physical Systems (CPS). CPS technology requires core technologies that ensure reliability, real-time operation, safety, autonomy, and security. CPS are systems that connect cyberspace and physical space. Attacks launched in cyberspace spill over into the real world and cause a great deal of damage. The personal information handled in CPS is highly confidential, so policies and techniques are needed to protect against attacks in advance. If there is an attack on a CPS, not only personal information but also national confidential data can be leaked. In order to prevent this, risk is measured using the Factor Analysis of Information Risk (FAIR) model, which can measure risk factor by factor for situational awareness in the CPS environment. To reduce risk by preventing attacks in CPS, this paper measures risk after applying the concept of Crime Prevention Through Environmental Design (CPTED).
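For context, FAIR estimates risk as loss event frequency times loss magnitude; the toy calculation below uses made-up numbers (not taken from the paper) just to show the shape of the computation.

```python
# Toy FAIR-style risk calculation with assumed numbers (not from the paper).
# FAIR estimates risk as Loss Event Frequency (LEF) x Loss Magnitude (LM),
# where LEF itself is Threat Event Frequency (TEF) x Vulnerability.
threat_event_frequency = 12      # expected threat events per year (assumed)
vulnerability = 0.25             # probability a threat event becomes a loss event
loss_magnitude = 50_000          # expected loss per event, in currency units

loss_event_frequency = threat_event_frequency * vulnerability
annual_risk_exposure = loss_event_frequency * loss_magnitude
print(f"LEF = {loss_event_frequency} events/year, "
      f"risk exposure = {annual_risk_exposure:,.0f} per year")
```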
Cyber physical systems are the key innovation driver for many domains such as automotive, avionics, industrial process control, and factory automation. However, their interconnection potentially provides adversaries easy access to sensitive data, code, and configurations. If attackers gain control, material damage or even harm to people must be expected. To counteract data theft, system manipulation and cyber-attacks, security mechanisms must be embedded in the cyber physical system. Adding hardware security in the form of the standardized Trusted Platform Module (TPM) is a promising approach. At the same time, traditional dependability features such as safety, availability, and reliability have to be maintained. To determine the right balance between security and dependability it is essential to understand their interferences. This paper supports developers in identifying the implications of using TPMs on the dependability of their system. We highlight potential consequences of adding TPMs to cyber-physical systems by considering the resulting safety, reliability, and availability. Furthermore, we discuss the potential of enhancing the dependability of TPM services by applying traditional redundancy techniques.
Transitioning to more open architectures has been making Cyber-Physical Systems (CPS) vulnerable to malicious attacks that are beyond conventional cyber attacks. This paper studies attack-resilience enhancement for a system under emerging attacks in the environment of the controller. An effective way to address this problem is to make system state estimation accurate enough for control regardless of the compromised components. This work follows this approach and develops a procedure named CPS checkpointing and recovery, which leverages historical data to recover failed system states. Specifically, we first propose a new concept of physical-state recovery, whose essential operation is rolling the system forward starting from a consistent historical system state. Second, we design a checkpointing protocol that defines how to record system states for the recovery. The protocol introduces a sliding window that accommodates the attack-detection delay to improve the correctness of stored states. Third, we present a use case of CPS checkpointing and recovery that deals with compromised sensor measurements. Finally, we evaluate our design by conducting simulator-based experiments and illustrating the use of our design with an unmanned-vehicle case study.
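A minimal sketch of the sliding-window checkpointing idea (my own simplification with assumed names and delays, not the paper's protocol): states are only promoted to trusted checkpoints once they are older than the worst-case attack-detection delay, so recovery can roll back to a state recorded before a detected attack could have started.

```python
# Simplified checkpointing-with-sliding-window sketch (not the paper's protocol).
# States younger than the detection delay stay in a pending window; only states
# that have aged out without an alarm are trusted as recovery checkpoints.
from collections import deque

DETECTION_DELAY = 3   # assumed worst-case attack-detection delay, in steps

pending = deque()     # (time_step, state) pairs not yet old enough to trust
checkpoints = []      # states old enough to be considered clean

def record_state(t, state):
    """Add the current state and promote states older than the delay window."""
    pending.append((t, state))
    while pending and t - pending[0][0] >= DETECTION_DELAY:
        checkpoints.append(pending.popleft())

def recover(attack_detected_at):
    """Return the newest checkpoint taken before the (delayed) attack window."""
    clean = [(t, s) for t, s in checkpoints
             if t <= attack_detected_at - DETECTION_DELAY]
    return clean[-1] if clean else None

# Example: a scalar "state" that an attack starts corrupting at t = 7,
# with the detector only raising an alarm at t = 9.
for t in range(12):
    record_state(t, state=float(t) + (100.0 if t >= 7 else 0.0))
print("roll back to:", recover(attack_detected_at=9))
```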
Decision making in utilities, municipal, and energy companies depends on accurate and trustworthy weather information and predictions. Recently, crowdsourced personal weather stations (PWS) have been increasingly used to provide higher spatial and temporal resolution of weather measurements. However, tools and methods to ensure the trustworthiness of the crowdsourced data in real time are lacking. In this paper, we present a Reputation System for Crowdsourced Rainfall Networks (RSCRN) to assign trust scores to personal weather stations in a region. Using real PWS data from the Weather Underground service in the high-flood-risk region of Norfolk, Virginia, we evaluate the performance of the proposed RSCRN. The proposed method is able to converge to a confident trust score for a PWS within 10–20 observations after installation. Collectively, the results indicate that the trust score derived from the RSCRN can reflect the collective trustworthiness of a PWS, ensuring both useful and trustworthy data for modeling and decision-making in the future.
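The following is not the RSCRN formulation, but a minimal sketch of how a per-station trust score can converge over successive observations: each reading is scored by its agreement with a reference value (e.g., neighbouring stations) and folded into an exponentially weighted running score; all parameter values are assumptions for illustration.

```python
# Minimal trust-score sketch (illustrative only, not the RSCRN method):
# each observation is scored by its agreement with a reference value and
# folded into an exponentially weighted running trust score.
def update_trust(trust, station_value, reference_value, tolerance=2.0, alpha=0.1):
    """Blend the current trust with this observation's agreement score."""
    error = abs(station_value - reference_value)
    agreement = max(0.0, 1.0 - error / tolerance)   # 1 = perfect agreement
    return (1 - alpha) * trust + alpha * agreement

# Example: a station that tracks nearby rainfall readings closely
# converges toward a high trust score within a couple of dozen observations.
trust = 0.5                                          # neutral prior for a new station
for station_mm, reference_mm in [(3.1, 3.0)] * 20:
    trust = update_trust(trust, station_mm, reference_mm)
print(f"trust after 20 observations: {trust:.2f}")
```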
In this paper, we propose a framework to evaluate information retrieval systems in the presence of multidimensional relevance. This is an important problem in tasks such as consumer health search, where the understandability and trustworthiness of information greatly influence people's decisions based on the search engine results, but common topicality-only evaluation measures ignore these aspects. We used synthetic and real data to compare our proposed framework, named MM, to the understandability-biased information retrieval evaluation (UBIRE) framework, an existing framework used in the context of consumer health search. We showed how the proposed approach diverges from the UBIRE framework, and how MM can be used to better understand the trade-offs between topical relevance and the other relevance dimensions.
Prior work notes dispositional, learned, and situational aspects of trust in automation. However, no work has investigated the relative roles of these factors in initial trust of an automated system. Moreover, trust-in-automation researchers often consider trust unidimensionally, whereas ability, integrity, and benevolence perceptions (i.e., trusting beliefs) may provide a more thorough understanding of trust dynamics. To investigate this, we recruited 163 participants on Amazon's Mechanical Turk (MTurk) and randomly assigned each to one of four videos describing a hypothetical drone system: one control video, and the others adding system performance information, process information, or both types of information. Participants reported on trusting beliefs in the system, propensity to trust other people, risk-taking tendencies, and trust in the government law enforcement agency behind the system. We found that financial risk-taking tendencies influenced trusting beliefs. Also, those who received process information were likely to have higher integrity and ability beliefs than those not receiving process information, while those who received performance information were likely to have higher ability beliefs. Lastly, perceptions of structural assurance positively influenced all three trusting beliefs. Our findings suggest that a) users' risk-taking tendencies influence trustworthiness perceptions of systems, b) different types of information about a system have varied effects on the trustworthiness dimensions, and c) institutions play an important role in users' calibration of trust. Insights gained from this study can help design training materials and interfaces that improve user trust calibration in automated systems.
In this paper, we explore the use of the Stellar Consensus Protocol (SCP) and its Federated Byzantine Agreement (FBA) algorithm for ensuring trust and reputation between federated, cloud-based platform instances (nodes) and their participants. Our approach is grounded in federated consensus mechanisms, which promise data quality managed through computational trust and data replication, without a centralized authority. We perform our experimentation on the NIMBLE cloud manufacturing platform, which is designed to support the growth of B2B digital manufacturing communities and their businesses through federated platform services managed by peer-to-peer networks. We discuss the message exchange flow between the NIMBLE application logic and the Stellar consensus logic.
With the vision of building "A Smart World", the Internet of Things (IoT) plays a crucial role in which users, computing systems, and objects with sensing and actuating capabilities cooperate with unparalleled convenience. Among the many applications of IoT, healthcare is one of the fastest emerging today, as new technological advancements create opportunities for early detection of illnesses, quick decision making, and even aftercare monitoring. Nowadays, it has become a reality for many patients to be monitored remotely, overcoming traditional logistical obstacles. However, these e-health applications increase concerns about the security, privacy, and integrity of medical data. For secure transmission in IoT healthcare, data gathered from sensors in a patient's body area network needs to be sent to the end user and might need to be aggregated, visualized, and/or evaluated before being presented. Here, trust is critical. Therefore, an end-to-end trustworthy system architecture can guarantee the reliable transmission of a patient's data and confirm the success of IoT healthcare applications.
In blockchain-based systems, malicious behaviour can be detected using auditable information in transactions managed by distributed ledgers. Besides cryptocurrency, blockchain technology has recently been used for other applications, such as file storage. However, most existing blockchain-based file storage systems cannot revoke a user efficiently when multiple users have access to the same encrypted file. Instead, they need to update file encryption keys and distribute new keys to the remaining users, which significantly increases computation and bandwidth overheads. In this work, we propose a blockchain and proxy re-encryption based design for encrypted file sharing that provides distributed access control and data management. By combining blockchain with proxy re-encryption, our approach not only ensures the confidentiality and integrity of files, but also provides a scalable key management mechanism for file sharing among multiple users. Moreover, by storing encrypted files and related keys in a distributed way, our method can resist collusion attacks between revoked users and distributed proxies.
eAssessment uses technology to support the online evaluation of students' knowledge and skills. However, challenging problems must be addressed, such as trustworthiness among students and teachers in blended and online settings. The TeSLA system proposes an innovative solution to guarantee the correct authentication of students and to prove the authorship of their assessment tasks. Technologically, the system is based on the integration of five instruments: face recognition, voice recognition, keystroke dynamics, forensic analysis, and plagiarism detection. The paper aims to analyze and compare the results achieved after the second pilot, performed at an online and a blended university, revealing how trust-driven solutions for eAssessment can be realized.
This paper considers a pilot spoofing attack scenario in a massive MIMO system. A malicious user tries to disturb the channel estimation process by sending interference symbols to the base station (BS) via the uplink. Another legitimate user counters by sending random symbols. The BS possesses neither partial channel state information (CSI) nor the distribution of the symbols sent by the malicious user a priori. For this scenario, this paper aims to separate the channel directions from the legitimate and malicious users to the BS, respectively. A blind channel separation algorithm based on estimating the characteristic function of the distribution of the signal space vector is proposed. Simulation results show that the proposed algorithm provides good channel separation performance in a typical massive MIMO system.
The need to process the variety, volume, and velocity of data generated by today's Internet of Things (IoT) devices has pushed both academia and industry to investigate new architectural alternatives to support the new challenges. As a result, Edge Computing (EC) has emerged to address these issues by placing part of the cloud resources (e.g., computation, storage, logic) closer to the edge of the network, which allows faster and context-dependent data analysis and storage. However, as EC infrastructures grow, different providers who do not necessarily trust each other need to collaborate in order to serve different IoT devices. In this context, EC infrastructures, IoT devices, and the data transiting the network all need to be subject to identity and provenance checks in order to increase trust and accountability. Each device and data item in the network needs to be identified, and the provenance of its actions needs to be tracked. In this paper, we propose a blockchain container based architecture that implements the W3C-PROV Data Model to track the identities and provenance of all orchestration decisions of a business network. This architecture provides new forms of interaction between the different stakeholders, which supports trustworthy transactions and leads to a new decentralized interaction model for IoT-based applications.
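To make the W3C-PROV aspect concrete, here is a minimal provenance record built with the Python `prov` package; the namespace, identifiers, and the edge/orchestration scenario are hypothetical, and this illustrates the generic PROV data model rather than the paper's architecture.

```python
# Minimal W3C PROV example using the Python `prov` package (illustrative of
# the PROV data model in general, not the architecture proposed in the paper).
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/edge/')   # hypothetical namespace

# An orchestration decision (activity) performed by an edge node (agent)
# produces a placement record (entity) derived from a device reading (entity).
reading = doc.entity('ex:sensor-reading-42')
decision = doc.activity('ex:orchestration-decision-7')
placement = doc.entity('ex:placement-record-7')
node = doc.agent('ex:edge-node-A')

doc.used(decision, reading)
doc.wasGeneratedBy(placement, decision)
doc.wasAssociatedWith(decision, node)
doc.wasAttributedTo(placement, node)

print(doc.get_provn())   # PROV-N view of the provenance graph
```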
The rapidly growing body of research in adversarial machine learning has demonstrated that deep neural networks (DNNs) are highly vulnerable to adversarially generated images. This underscores the urgent need for practical defense techniques that can be readily deployed to combat attacks in real-time. Observing that many attack strategies aim to perturb image pixels in ways that are visually imperceptible, we place JPEG compression at the core of our proposed SHIELD defense framework, utilizing its capability to effectively "compress away" such pixel manipulation. To immunize a DNN model from artifacts introduced by compression, SHIELD "vaccinates" the model by retraining it with compressed images, where different compression levels are applied to generate multiple vaccinated models that are ultimately used together in an ensemble defense. On top of that, SHIELD adds an additional layer of protection by employing randomization at test time that compresses different regions of an image using random compression levels, making it harder for an adversary to estimate the transformation performed. This novel combination of vaccination, ensembling, and randomization makes SHIELD a fortified multi-pronged defense. We conducted extensive, large-scale experiments using the ImageNet dataset, and show that our approaches eliminate up to 98% of gray-box attacks delivered by strong adversarial techniques such as Carlini-Wagner's L2 attack and DeepFool. Our approaches are fast and work without requiring knowledge about the model.
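The randomized-compression idea can be pictured with a short Pillow/NumPy sketch (a simplification, not the SHIELD implementation; block size and quality levels are arbitrary choices): the image is split into blocks and each block is re-encoded as JPEG at a quality level drawn at random.

```python
# Simplified sketch of randomized per-region JPEG compression (not the exact
# SHIELD implementation): each block of the image is re-encoded with a JPEG
# quality level drawn at random, so an attacker cannot predict the transform.
import io
import random
import numpy as np
from PIL import Image

def jpeg_roundtrip(block: Image.Image, quality: int) -> Image.Image:
    """Encode and decode one block as JPEG at the given quality."""
    buf = io.BytesIO()
    block.save(buf, format='JPEG', quality=quality)
    buf.seek(0)
    return Image.open(buf).convert('RGB')

def randomized_compress(img: Image.Image, block=56, qualities=(20, 40, 60, 80)):
    out = np.array(img.convert('RGB'))
    h, w = out.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = Image.fromarray(np.ascontiguousarray(out[y:y + block, x:x + block]))
            q = random.choice(qualities)
            out[y:y + block, x:x + block] = np.array(jpeg_roundtrip(region, q))
    return Image.fromarray(out)

# Example with a random "image"; in practice this would run on the model input.
demo = Image.fromarray(np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8))
compressed = randomized_compress(demo)
print(compressed.size)
```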
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. That is, adversarial examples, obtained by adding delicately crafted distortions to original legal inputs, can mislead a DNN to classify them as any target label. In a successful adversarial attack, the targeted misclassification should be achieved with the minimal distortion added. In the literature, the added distortions are usually measured by the $L_0$, $L_1$, $L_2$, and $L_\infty$ norms, namely, $L_0$, $L_1$, $L_2$, and $L_\infty$ attacks, respectively. However, the literature lacks a versatile framework for all types of adversarial attacks. This work for the first time unifies the methods of generating adversarial examples by leveraging ADMM (Alternating Direction Method of Multipliers), an operator splitting optimization approach, such that $L_0$, $L_1$, $L_2$, and $L_\infty$ attacks can be effectively implemented by this general framework with few modifications. Compared with the state-of-the-art attacks in each category, our ADMM-based attacks are so far the strongest, achieving both a 100% attack success rate and the minimal distortion.
Nowadays, the network is one of the essential parts of life, and many primary activities are performed using the network. Network security therefore plays an important role in helping administrators monitor the operation of the system. The intrusion detection system (IDS) is a crucial module to detect and defend against malicious traffic before the system is affected. This system can extract information from the network and quickly indicate a reaction, which provides real-time protection for the protected system. However, detecting malicious traffic is very complicated because of its large quantity and many variants. Also, the accuracy of detection and the execution time are challenges for some detection methods. In this paper, we propose an IDS platform based on a convolutional neural network (CNN), called IDS-CNN, to detect DoS attacks. Experimental results show that our CNN-based DoS detection obtains a high accuracy of up to 99.87%. Moreover, comparisons with other machine learning techniques, including KNN, SVM, and Naïve Bayes, demonstrate that our proposed method outperforms traditional ones.
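As a rough illustration only (not the paper's IDS-CNN architecture; the layer sizes and the 8x8 feature layout are assumptions), a toy version of the idea arranges per-flow features into a small grid and feeds them to a CNN with a binary DoS/benign output.

```python
# Toy CNN for DoS-vs-benign flow classification (an illustrative sketch only,
# not the IDS-CNN architecture from the paper). Flow features are reshaped
# into an 8x8 "image" so standard convolutional layers can be applied.
import torch
import torch.nn as nn

class TinyIDSCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 8x8 -> 4x4
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 4x4 -> 2x2
        )
        self.classifier = nn.Linear(32 * 2 * 2, n_classes)

    def forward(self, x):                         # x: (batch, 64) flow features
        x = x.view(-1, 1, 8, 8)                   # treat features as an 8x8 grid
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example forward pass on a batch of 4 fake flows with 64 features each.
model = TinyIDSCNN()
logits = model(torch.randn(4, 64))
print(logits.shape)   # torch.Size([4, 2])
```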
This paper proposes a deep learning based method for efficient malware classification. Specifically, we convert the malware classification problem into an image classification problem, which can be addressed by leveraging convolutional neural networks (CNNs). For many malware families, the images belonging to the same family have similar contours and textures, so we convert the binary files of malware samples to uncompressed gray-scale images, which preserve the complete information of the original malware without artificial feature extraction. We then design a classifier based on Google's TensorFlow framework, combining deep learning (DL) with malware detection technology. Experimental results show that the uncompressed gray-scale images of the malware are relatively easy to distinguish and the CNN-based classifier can achieve a high success rate of 98.2%.
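The binary-to-image step is easy to sketch; the recipe below is the common generic conversion (the file path and image width are placeholders, not necessarily the parameters used in the paper): raw bytes are read as unsigned integers and reshaped into a fixed-width gray-scale image.

```python
# Sketch of the common malware-to-image conversion (generic recipe, not
# necessarily the exact parameters used in the paper): raw bytes become
# pixel intensities of a fixed-width gray-scale image.
import numpy as np
from PIL import Image

def binary_to_grayscale(path: str, width: int = 256) -> Image.Image:
    data = np.fromfile(path, dtype=np.uint8)              # every byte is one pixel
    height = len(data) // width
    data = data[:height * width].reshape(height, width)   # drop the tail bytes
    return Image.fromarray(data, mode='L')

# Hypothetical usage; 'sample.bin' stands in for a real malware binary.
# binary_to_grayscale('sample.bin', width=256).save('sample.png')
```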
In this study, we have used an image similarity technique to detect unknown or new types of malware using a CNN approach. The CNN was investigated and tested with three types of datasets: one from the Vision Research Lab, which contains 9,458 gray-scale images extracted from the same number of malware samples coming from 25 different malware families, and a second, benign dataset, which contained 3,000 different kinds of benign software. The benign dataset and the Vision Research Lab dataset were initially executable files, which were converted into binary code and then into image files. We obtained a testing accuracy of 98% on the Vision Research dataset.
Deep neural network based steganalysis has developed rapidly in recent years, which poses a challenge to the security of steganography. However, no steganography method at present can effectively resist the neural networks used for steganalysis. In this paper, we propose a new strategy that constructs enhanced covers against neural networks using the technique of adversarial examples. The enhanced covers and their corresponding stegos are most likely to be judged as covers by the networks. Besides, we use both deep neural network based steganalysis and high-dimensional feature classifiers to evaluate the performance of steganography and propose a new comprehensive security criterion. We also make a tradeoff between the two analysis systems and improve the comprehensive security. The effectiveness of the proposed scheme is verified with evidence obtained from experiments on BOSSbase using the WOW steganography algorithm, popular rich-model steganalyzers, and three state-of-the-art neural networks.
Deep learning technologies, which are the key components of state-of-the-art Artificial Intelligence (AI) services, have shown great success in providing human-level capabilities for a variety of tasks, such as visual analysis, speech recognition, and natural language processing. Building a production-level deep learning model is a non-trivial task, which requires a large amount of training data, powerful computing resources, and human expertise. Therefore, illegitimate reproduction, distribution, and derivation of proprietary deep learning models can lead to copyright infringement and economic harm to model creators. It is therefore essential to devise a technique to protect the intellectual property of deep learning models and enable external verification of model ownership. In this paper, we generalize the "digital watermarking" concept from multimedia ownership verification to deep neural network (DNN) models. We investigate three DNN-applicable watermark generation algorithms, propose a watermark implanting approach to infuse the watermark into deep learning models, and design a remote verification mechanism to determine model ownership. By extending the intrinsic generalization and memorization capabilities of deep neural networks, we enable the models to learn specially crafted watermarks at training time and to activate with pre-specified predictions when observing the watermark patterns at inference. We evaluate our approach with two image recognition benchmark datasets. Our framework accurately (100%) and quickly verifies the ownership of all the remotely deployed deep learning models without affecting the model accuracy for normal input data. In addition, the embedded watermarks in DNN models are robust and resilient to different counter-watermark mechanisms, such as fine-tuning, parameter pruning, and model inversion attacks.
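As a rough sketch of trigger-based watermarking in general (not the authors' specific generation algorithms; the trigger patch, label, and verification threshold are arbitrary assumptions), the code below stamps a small pattern onto key images that are trained to map to a pre-chosen label, and later checks a deployed model by querying it with stamped inputs.

```python
# Illustrative watermark-trigger sketch (not the paper's exact algorithm):
# a fixed corner patch is stamped onto "key" images that are trained to map
# to a pre-chosen label; at verification time the owner queries the deployed
# model with stamped inputs and checks for that label.
import torch

WATERMARK_LABEL = 7   # arbitrary pre-specified prediction for key inputs

def stamp_trigger(images: torch.Tensor) -> torch.Tensor:
    """Overwrite a 4x4 corner patch with a bright square as the trigger."""
    stamped = images.clone()
    stamped[:, :, :4, :4] = 1.0
    return stamped

def make_watermark_batch(images: torch.Tensor):
    """Return (inputs, labels) for a batch of watermark key samples."""
    keys = stamp_trigger(images)
    labels = torch.full((images.size(0),), WATERMARK_LABEL, dtype=torch.long)
    return keys, labels

def verify_ownership(model, key_images: torch.Tensor, threshold=0.9) -> bool:
    """Remote check: do stamped inputs trigger the pre-specified label?"""
    with torch.no_grad():
        preds = model(stamp_trigger(key_images)).argmax(dim=1)
    return (preds == WATERMARK_LABEL).float().mean().item() >= threshold

# During training, (keys, labels) from make_watermark_batch would simply be
# appended to ordinary batches so the model memorizes the trigger mapping.
```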