Biblio
Modern botnets can persist in networked systems for extended periods by operating stealthily. Despite the progress made in botnet prevention, detection, and mitigation, stealthy botnets continue to pose a significant risk to enterprises. Furthermore, existing enterprise-scale solutions require significant resources to operate effectively, making them impractical. To address this important problem in a resource-constrained environment, we propose a reinforcement learning based approach to optimally and dynamically deploy a limited number of defensive mechanisms, namely honeypots and network-based detectors, within the target network. The ultimate goal of the proposed approach is to reduce the lifetime of stealthy botnets by maximizing the number of bots identified and taken down through a sequential decision-making process. We provide a proof of concept of the proposed approach and study its performance in a simulated environment. The results show that the proposed approach is promising for protecting against stealthy botnets.
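As a rough illustration of the sequential decision-making involved, the sketch below runs tabular Q-learning in a toy simulator where each action places one of a limited budget of detectors on a network node. The environment, reward, and state encoding here are assumptions for illustration, not the paper's actual model.

```python
import random
from collections import defaultdict

N_NODES, BUDGET, EPISODES = 10, 3, 2000   # toy network, detector budget
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1         # learning rate, discount, exploration
BOT_NODES = {2, 5, 7}                     # hidden ground truth of the simulator

Q = defaultdict(float)                    # Q[(state, action)] -> value

def step(placed, action):
    # Reward 1 for each detector that lands on a bot-infected node.
    reward = 1.0 if action in BOT_NODES else 0.0
    return placed | {action}, reward

for _ in range(EPISODES):
    placed = frozenset()                  # state: nodes already instrumented
    while len(placed) < BUDGET:
        candidates = [a for a in range(N_NODES) if a not in placed]
        if random.random() < EPS:         # epsilon-greedy exploration
            action = random.choice(candidates)
        else:
            action = max(candidates, key=lambda a: Q[(placed, a)])
        nxt, r = step(placed, action)
        best_next = max((Q[(nxt, a)] for a in range(N_NODES) if a not in nxt),
                        default=0.0)
        Q[(placed, action)] += ALPHA * (r + GAMMA * best_next - Q[(placed, action)])
        placed = nxt
```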
For sketch-based image retrieval (SBIR), we propose a generative adversarial network trained on a large number of sketches and their corresponding real images. To imitate the human search process, we attempt to match candidate images with the imaginary image in the user's mind rather than with the sketch query itself; i.e., not only the shape information of sketches but also their possible content information is considered in SBIR. Specifically, a conditional generative adversarial network (cGAN) is employed to enrich the content information of sketches and recover the imaginary images, and two VGG-based encoders, which work on real and imaginary images respectively, are used to constrain their perceptual consistency from the view of feature representations. During SBIR, we first generate an imaginary image from a given sketch via the cGAN, and then take the output of the learned encoder for imaginary images as the feature of the query sketch. Finally, we build an interactive SBIR system that shows encouraging performance.
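A minimal sketch of the retrieval step just described, assuming `generator` is a trained cGAN mapping sketches to imaginary images and `encoder` is the learned VGG-based encoder for imaginary images; both names are placeholders, not the authors' released code.

```python
import numpy as np

def retrieve(sketch, gallery_feats, gallery_ids, generator, encoder, k=5):
    """Rank gallery images against the 'imaginary image' for a sketch query."""
    imaginary = generator(sketch)            # sketch -> imaginary image (cGAN)
    q = encoder(imaginary)                   # feature of the query
    q = q / np.linalg.norm(q)
    sims = gallery_feats @ q                 # cosine similarity; rows pre-normalized
    top = np.argsort(-sims)[:k]
    return [gallery_ids[i] for i in top]
```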
Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
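The core loop can be sketched as below, assuming `oracle` wraps the remote API's label endpoint and `substitute` is any local differentiable model; the Jacobian-based dataset augmentation is simplified here to a loss-gradient perturbation, and FGSM is used for crafting.

```python
import torch
import torch.nn.functional as F

def train_substitute(oracle, substitute, seeds, rounds=5, lam=0.1, epochs=10):
    X = seeds.clone()
    opt = torch.optim.Adam(substitute.parameters())
    for _ in range(rounds):
        y = oracle(X)                          # only capability: observed labels
        for _ in range(epochs):                # fit substitute to oracle labels
            opt.zero_grad()
            F.cross_entropy(substitute(X), y).backward()
            opt.step()
        X = X.detach().requires_grad_(True)    # augment: perturb along gradient
        grad = torch.autograd.grad(F.cross_entropy(substitute(X), y), X)[0]
        X = torch.cat([X.detach(), (X + lam * grad.sign()).detach()])
    return substitute

def fgsm(substitute, x, y, eps=0.1):
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(substitute(x), y), x)[0]
    return (x + eps * grad.sign()).detach()    # these transfer to the target DNN
```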
The paper presents a fully automatic end-to-end trainable system to colorize grayscale images. Colorization is a highly under-constrained problem. In order to produce realistic outputs, the proposed approach takes advantage of recent advances in deep learning and generative networks. To achieve plausible colorization, the paper investigates conditional Wasserstein Generative Adversarial Networks (WGAN) [3] as a solution to this problem. Additionally, a loss function is proposed that combines the adversarial loss learned by the WGAN with two classification loss components. The first classification loss measures how much the predicted colored images differ from the ground truth. The second classification loss component makes use of ground-truth semantic classification labels in order to learn meaningful intermediate features. Finally, the WGAN training procedure pushes the predictions toward the manifold of natural images. The system is validated using a user study and a semantic interpretability test, and achieves results comparable to [1] on the ImageNet dataset [10].
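A hedged reconstruction of the generator objective this describes, where x is the grayscale input, y the ground-truth color image, c the semantic class label, D the WGAN critic, and phi denotes intermediate features; the trade-off weights lambda_1 and lambda_2 are assumed hyperparameters, not values from the paper.

```latex
\min_G \; -\,\mathbb{E}\big[D\big(x, G(x)\big)\big]
  \;+\; \lambda_1\, \mathcal{L}_{\mathrm{cls}}\big(G(x),\, y\big)
  \;+\; \lambda_2\, \mathcal{L}_{\mathrm{sem}}\big(\phi(G(x)),\, c\big)
```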
Distractor generation is a crucial step for fill-in-the-blank question generation. We propose a generative model, trained with generative adversarial nets (GANs), to create useful distractors. Our method uses only context information and does not use the correct answer, which is completely different from previous ontology-based or similarity-based approaches. Trained on the Wikipedia corpus, the proposed model is able to predict Wiki entities as distractors. Our method is evaluated on two biology question datasets collected from Wikipedia and actual college-level exams. Experimental results show that our context-based method achieves performance comparable to a frequently used word2vec-based method on the Wiki dataset. In addition, we propose a second-stage learner to combine the strengths of the two methods, which further improves the performance on both datasets, with 51.7% and 48.4% of generated distractors being acceptable.
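One plausible form of such a second-stage combiner is simple rank fusion over the two first-stage scorers. Here `gan_score` and `w2v_score` are placeholders for the context-based and word2vec-based methods; the trained learner in the paper may well differ.

```python
def combine(candidates, gan_score, w2v_score, w=0.5):
    """Rerank candidate distractors by fusing normalized ranks of two scorers."""
    def rank(x, scores):
        order = sorted(scores, key=scores.get, reverse=True)
        return order.index(x) / max(len(order) - 1, 1)   # 0 = best
    g = {c: gan_score(c) for c in candidates}
    v = {c: w2v_score(c) for c in candidates}
    # Lower combined rank = better distractor.
    return sorted(candidates, key=lambda c: w * rank(c, g) + (1 - w) * rank(c, v))
```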
Automatically describing the content of a video is a fundamental problem in artificial intelligence that joins computer vision and natural language processing. In this paper, we propose a single system that can carry out video analysis (object detection and captioning) at reduced time and memory complexity. This single system uses YOLO (You Only Look Once) as its base model. Moreover, to highlight the importance of transfer learning in the development of the proposed system, two more approaches are discussed. The first uses two discrete models: one to extract a continuous bag of words from the frames, and the other, a language model, to generate captions from those words. VGG-16 (Visual Geometry Group) is used as the base image encoder model to compare the two approaches, while an LSTM is the base language model. The dataset used is the Microsoft Research Video Description Corpus, manually modified to serve the purpose of training the proposed system. The second approach, which uses transfer learning, proves to be the better approach for developing the proposed system.
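A rough sketch of the two-model baseline: pretrained VGG-16 features (transfer learning) feed an LSTM language model that emits caption tokens. Dimensions and wiring are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed=256, hidden=512):
        super().__init__()
        self.cnn = vgg16(weights="IMAGENET1K_V1").features  # pretrained encoder
        self.proj = nn.Linear(512 * 7 * 7, embed)
        self.embed = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, frames, tokens):
        feats = self.cnn(frames).flatten(1)      # (B, 512*7*7) for 224x224 input
        ctx = self.proj(feats).unsqueeze(1)      # image feature as first "word"
        seq = torch.cat([ctx, self.embed(tokens)], dim=1)
        h, _ = self.lstm(seq)
        return self.out(h)                       # next-token logits per step
```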
The evolution of cloud gaming systems has substantially changed the security requirements for computer games. Although online game development often utilizes artificial intelligence and human-computer interaction, game developers and providers often do not pay much attention to security techniques. In cloud gaming, location-based games are augmented reality games that take the original principles of a game and apply them to the real world; in other words, they use the real world to shape the game experience. Because the execution of such games is distributed in cloud computing, users cannot be certain where their input and output data are managed. This introduces the possibility of injecting incorrect data into the exchange between the gamer's terminal and the gaming platform. In this context, we propose a new gaming concept for augmented reality and location-based games in order to solve the aforementioned cheating problem. The merit of our approach is to establish an accurate and verifiable proof that the gamer reached the goal or found the target. The major novelty of our method is that it allows the gamer to submit an authenticated proof related to the game result without compromising the privacy of positioning data.
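The abstract does not give the construction, so purely as an illustration of the idea: the sketch below commits to the raw position trace with a salted hash and authenticates only the game result, letting the platform verify integrity without seeing positioning data. All names and the key handling are hypothetical.

```python
import hashlib, hmac, json, os

def make_proof(secret_key: bytes, position_trace: list, result: dict):
    """Illustrative only, not the paper's protocol: commit to the trace,
    authenticate the result; the salt is revealed only if an audit is needed."""
    salt = os.urandom(16)
    blob = json.dumps(position_trace).encode()
    commitment = hashlib.sha256(salt + blob).hexdigest()   # hides the trace
    payload = json.dumps({"result": result, "commitment": commitment}).encode()
    tag = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return payload, tag, salt

def verify(secret_key: bytes, payload: bytes, tag: str) -> bool:
    expected = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```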
Consent is a key measure for privacy protection and needs to be 'meaningful' to give people informational power. It is increasingly important that individuals are provided with real choices and are empowered to negotiate for meaningful consent. Meaningful consent is an important consideration in IoT systems, since privacy is a significant factor affecting the adoption of IoT, and obtaining meaningful consent is becoming increasingly challenging in IoT environments. It has been proposed that an "apparency, pragmatic/semantic transparency model" adopted for data management could make consent more meaningful, that is, visible, controllable and understandable. The model has illustrated the 'why' and 'what' issues of data management for potentially meaningful consent [1]. In this paper, we focus on the 'how' issue, i.e. how to implement the model in IoT systems. We discuss apparency by focusing on the interactions and data actions in the IoT system; pragmatic transparency by centring on the privacy risks and threats of data actions; and semantic transparency by focusing on the terms and language used by individuals and experts. We believe that our discussion will elicit more research on the 'apparency model' in IoT for meaningful consent.
With the extensive application of cloud computing technology, security has become of paramount importance. Several surveys have covered intrusion detection techniques for the cloud computing environment. We summarize literature surveys of attack taxonomies covering the various threats to cloud environments, such as attacks in virtual machines, attacks on the virtual machine monitor, and attacks in the tenant network. We also review the many existing solutions proposed in the literature, such as misuse detection techniques, behavior analysis of network traffic, behavior analysis of programs, and virtual machine introspection (VMI) techniques. In addition, we summarize innovations in the field of cloud security, such as CloudVMI, data mining techniques, artificial intelligence, and blockchain technology. Finally, our team designed and implemented a prototype system called CloudI (Cloud Introspection), which is characterized by high security, high performance, high expandability and multiple functions.
In the last decade we have witnessed a prolific growth of online social media content in sites designed for online social interactions. These systems have traditionally been designed as centralized silos, which unfortunately suffer from abusive behavior ranging from spam and cyberbullying to outright censorship. This paper investigates the utility of supervised learning techniques for abuse detection in future decentralized settings, where less metadata remains available for use in learning algorithms. We present a method that uses a privacy-preserving protocol to exchange a fingerprint of the neighborhood of a pair of nodes, namely sender and receiver. Our method extracts social graph metadata to form a subset of key features, namely neighborhood knowledge, some of which we approximate to reduce the communication and computational requirements of such a protocol. In our benchmarking we show that a data minimization approach can obtain features 13% faster while providing similar or, as with the SVM classifier, even better abuse detection rates with just approximated Private Set Intersection.
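A minimal sketch of an approximated neighborhood-overlap feature of the kind described: each party hashes its neighbor IDs into a Bloom filter, and the intersection size is estimated from the filters instead of running an exact private set intersection. Filter size and hash count are illustrative, and the authors' actual protocol may differ.

```python
import hashlib

M, K = 1024, 3   # filter bits, hash functions (illustrative parameters)

def bloom(ids):
    bits = [0] * M
    for x in ids:
        for i in range(K):
            h = int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16)
            bits[h % M] = 1
    return bits

def approx_shared_neighbors(a_ids, b_ids):
    a, b = bloom(a_ids), bloom(b_ids)
    both = sum(x & y for x, y in zip(a, b))
    return both / K   # crude, upward-biased estimate of the overlap
```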
This paper presents a new technique for analyzing and comparing wiretap codes in the small-blocklength regime over the binary erasure wiretap channel. A major result is the development of Monte Carlo strategies for quantifying a code's equivocation, which mirrors techniques used to analyze forward error correcting codes. We limit our analysis to coset-based wiretap codes, and present strategies for calculating and/or estimating the equivocation, in order of preference. We also compare several code families. Our results indicate that there are security advantages to using algebraic codes for applications that require small to medium blocklengths.
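A minimal sketch of such a Monte Carlo estimate, assuming coset coding in which the k-bit message is the syndrome of a k x n binary matrix H (rows passed as integers, bit j = column j): for a given erasure pattern the eavesdropper's equivocation equals the GF(2) rank of H restricted to the erased columns, so averaging that rank over sampled BEC(eps) patterns estimates H(M|Z). The paper's preferred strategies may differ in detail.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of a matrix given as a list of row integers."""
    rank = 0
    for col in range(max((r.bit_length() for r in rows), default=0)):
        pivot = next((i for i, r in enumerate(rows[rank:], rank)
                      if (r >> col) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        rows = [r ^ rows[rank] if i != rank and (r >> col) & 1 else r
                for i, r in enumerate(rows)]
        rank += 1
    return rank

def equivocation(H_rows, n, eps, trials=10000):
    total = 0
    for _ in range(trials):
        erased = [j for j in range(n) if random.random() < eps]
        sub = [sum(((r >> j) & 1) << i for i, j in enumerate(erased))
               for r in H_rows]          # H restricted to erased columns
        total += gf2_rank(sub)
    return total / trials                # estimated H(M | Z) in bits
```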
Network operators face a challenge of ensuring correctness as networks grow more complex, both in scale and, increasingly, in the diversity of software components. Network-wide verification approaches can spot errors, but they assume a simplified abstraction of the functionality of individual network devices, which may deviate from the real implementation. In this paper, we propose a technique for high-coverage testing of end-to-end network correctness using the real software that is deployed in these networks. Our design is effectively a hybrid, using an explicit-state model checker to explore all network-wide execution paths and event orderings, but executing real software as subroutines for each device. We show that this approach can detect correctness issues that would be missed by both existing verification and testing approaches, and a prototype implementation suggests the technique can scale to larger networks with reasonable performance.
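A toy illustration of the hybrid idea: an explicit-state explorer that enumerates all event orderings while invoking real handler code per device. Here `handlers` maps a device name to a callable that mutates a state dict and may emit new events; this is a simplification of the described design, not its implementation.

```python
def explore(initial_state, initial_events, handlers, invariant):
    """Check `invariant` over all reachable states and event interleavings."""
    frontier = [(dict(initial_state), tuple(initial_events))]
    seen, bugs = set(), []
    while frontier:
        state, events = frontier.pop()
        key = (frozenset(state.items()), events)   # assumes hashable values
        if key in seen:
            continue
        seen.add(key)
        if not invariant(state):
            bugs.append(dict(state))
        for i, (device, payload) in enumerate(events):  # try every next event
            new_state = dict(state)
            emitted = handlers[device](new_state, payload) or []
            frontier.append((new_state,
                             events[:i] + events[i + 1:] + tuple(emitted)))
    return bugs
```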
This paper suggests a conceptual mechanism for increasing the security level of the global information community, national information technology infrastructures (e-governments) and private cloud structures that exploits the logical characteristics of the IPv6 protocol. The mechanism is based on the properties of the IPv6 header and, in particular, the rules for encoding IPv6 addresses.
This study investigates how the development of e-government services impacts cybersecurity. The study uses correlation and multiple regression to analyse two sets of global data: the e-government development index of the 2015 United Nations e-government survey, and the 2015 International Telecommunication Union global cybersecurity development index (GCI 2015). After analysing the various contextual factors affecting e-government development, the study found that various composite measures of e-government development are significantly correlated with cybersecurity development. The study therefore contributes to the understanding of the relationship between e-government and cybersecurity development. The authors developed a model to highlight this relationship and validated it using empirical data. This is expected to provide guidance on the specific dimensions of e-government services that will stimulate the development of cybersecurity. The study provides a basis for understanding patterns in cybersecurity development and has implications for policy makers seeking to develop trust and confidence in the adoption of e-government services.
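A sketch of this style of analysis with assumed column names (EGDI and GCI for the two indices; OSI, TII, HCI for the standard component measures of the UN index); the file name and the authors' exact variable set are hypothetical.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("egov_gci_2015.csv")   # hypothetical merged country-level file
r = df["EGDI"].corr(df["GCI"])          # Pearson correlation
print(f"corr(EGDI, GCI) = {r:.3f}")

# Multiple regression: GCI ~ online services + telecom infra + human capital
X = np.column_stack([np.ones(len(df)),
                     df[["OSI", "TII", "HCI"]].to_numpy()])
beta, *_ = np.linalg.lstsq(X, df["GCI"].to_numpy(), rcond=None)
print(dict(zip(["intercept", "OSI", "TII", "HCI"], beta.round(3))))
```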
The paper presents the results of an analysis of prospective digital signature algorithms based on hash functions, along with several aspects of their implementation. The comparative analysis was carried out using the analytic hierarchy method. Some problems of implementation within the existing infrastructure are described. An implementation of the XMSS algorithm with the Ukrainian national standard hash function is presented.
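For intuition, the simplest hash-based signature is the Lamport one-time scheme sketched below; XMSS builds on this family of ideas with WOTS+ chains and Merkle trees. This is a didactic sketch, not the XMSS construction itself.

```python
import hashlib, os

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # Two random preimages per message bit; public key is their hashes.
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    pk = [[H(a), H(b)] for a, b in sk]
    return sk, pk

def sign(sk, msg):
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][bit] for i, bit in enumerate(bits)]  # one preimage per bit

def verify(pk, msg, sig):
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits))
```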
Compared with the traditional grid, the energy internet collects data more widely and is more broadly connected. Analysis of electrical data using Non-Intrusive Load Monitoring (NILM) can infer private user behavior, so balancing data security and availability is a problem that must be addressed. Owing to its rigorous and provable privacy guarantee, differential privacy has been widely applied to privacy-preserving data release and data mining. However, because of the high sensitivity of such data, adding noise directly renders the data unusable. In this paper, we propose a differentially private mechanism to protect energy internet privacy. Our focus is on aggregated data released by the data owner after noise has been added to the disaggregated data. Theoretical proofs and experiments show that our scheme achieves both privacy preservation and data availability.
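A minimal sketch of the general noise-then-aggregate idea: perturb disaggregated readings with Laplace noise before releasing the aggregate. The epsilon and sensitivity values are illustrative, not the paper's calibration.

```python
import numpy as np

def private_aggregate(readings, epsilon=1.0, sensitivity=1.0):
    """Add Laplace(sensitivity/epsilon) noise to each disaggregated reading,
    then release only the aggregate."""
    noisy = readings + np.random.laplace(0.0, sensitivity / epsilon,
                                         size=len(readings))
    return noisy.sum()

loads = np.array([0.3, 1.2, 0.0, 2.4])   # appliance-level readings (kWh)
print(private_aggregate(loads, epsilon=0.5))
```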
HB+ is a lightweight authentication scheme which is secure against passive attacks if the Learning Parity with Noise (LPN) problem is hard. However, HB+ is vulnerable to a key-recovery man-in-the-middle (MiM) attack dubbed GRS. The HB+DB protocol added a distance-bounding dimension to HB+ and was experimentally shown to resist the GRS attack. We exhibit several security flaws in HB+DB. First, we refine the GRS strategy to induce a different key-recovery MiM attack that is not deterred by HB+DB's distance-bounding. Second, we prove HB+DB impractical as a secure distance-bounding (DB) protocol, as its DB security levels scale poorly compared to other DB protocols. Third, we refute the claim that HB+DB's security against passive attackers relies on the hardness of LPN; moreover, (erroneously) requiring such hardness lowers HB+DB's efficiency and security. We also propose a new distance-bounding protocol called BLOG. It retains parts of HB+DB, yet BLOG is provably secure and enjoys better (asymptotic) security.
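For context, one HB+ authentication round can be sketched as follows (this is the underlying HB+ scheme, not the proposed BLOG protocol); the key length and noise rate are illustrative parameters.

```python
import random

K, ETA = 128, 0.125                        # key length, Bernoulli noise rate

def dot(a, b):
    # Inner product over GF(2).
    return sum(ai & bi for ai, bi in zip(a, b)) & 1

def hbplus_round(x, y):
    b = [random.getrandbits(1) for _ in range(K)]   # tag's blinding vector
    a = [random.getrandbits(1) for _ in range(K)]   # reader's challenge
    noise = 1 if random.random() < ETA else 0       # tag flips its reply w.p. ETA
    z = dot(a, x) ^ dot(b, y) ^ noise               # tag's response
    # Over n rounds, the reader accepts if mismatches stay close to ETA * n.
    return z == dot(a, x) ^ dot(b, y)
```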