Bibliography
An IoT device usually has an associated application that facilitates customers' interactions with the device, and customers need to register an account to use this application. Due to the popularity of mobile phones, a customer is encouraged to register an account with his own mobile phone number. After binding the device to his account, the customer can control the device remotely with his smartphone. When a customer forgets his password, he can use his mobile phone to receive a verification code sent via the Short Message Service (SMS) to authenticate and reset his password. If an attacker obtains this code, he can steal the victim's account (by resetting the password or logging in directly) to control the IoT device. Although IoT device vendors have already deployed a set of security countermeasures to protect accounts, such as setting an expiration time for the SMS authentication code, HTTP encryption, and application packing, this paper shows that existing IoT account password reset mechanisms based on SMS authentication codes are still vulnerable to brute-force attacks. In particular, we present an automatic brute-force attack that bypasses current protections and cracks IoT device user accounts. Our preliminary study on popular IoT devices such as smart locks, smart watches, smart routers, and shared cars has discovered six account login zero-day vulnerabilities.
This paper investigates the use of deep reinforcement learning (DRL) in the design of a "universal" MAC protocol referred to as Deep-reinforcement Learning Multiple Access (DLMA). The design framework is partially inspired by the vision of DARPA SC2, a 3-year competition whereby competitors are to come up with a clean-slate design that "best share spectrum with any network(s), in any environment, without prior knowledge, leveraging on machine-learning technique". While the scope of DARPA SC2 is broad and involves the redesign of the PHY, MAC, and Network layers, this paper's focus is narrower and only involves the MAC design. In particular, we consider the problem of sharing time slots among multiple time-slotted networks that adopt different MAC protocols. One of the MAC protocols is DLMA. The other two are TDMA and ALOHA. The DRL agents of DLMA do not know that the other two MAC protocols are TDMA and ALOHA. Yet, through a series of observations of the environment, its own actions, and the rewards - in accordance with the DRL algorithmic framework - a DRL agent can learn the optimal MAC strategy for harmonious co-existence with TDMA and ALOHA nodes. In particular, the use of neural networks in DRL (as opposed to traditional reinforcement learning) allows for fast convergence to optimal solutions and robustness against perturbation in hyper-parameter settings, two essential properties for practical deployment of DLMA in real wireless networks.
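For readers unfamiliar with the mechanics, the toy sketch below (Python/PyTorch) shows the kind of per-slot DQN agent the DLMA idea implies: the state is a window of recent (action, observation) pairs, the actions are WAIT and TRANSMIT, and a reward of 1 is earned for a successful transmission. The history length, network size, hyper-parameters, and the omission of a target network are simplifying assumptions, not the design used in the paper.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

M = 8                              # history length (assumption)
STATE_DIM, N_ACTIONS = 2 * M, 2    # actions: 0 = WAIT, 1 = TRANSMIT

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
GAMMA, EPS = 0.9, 0.1

def act(state):
    """Epsilon-greedy slot-access decision from the Q-network."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def learn(batch):
    """One DQN update; batch is a list of (state, action, reward, next_state)
    tensors drawn from an experience-replay buffer (no target network here)."""
    s, a, r, s2 = (torch.stack(x) for x in zip(*batch))
    q = q_net(s).gather(1, a.long().view(-1, 1)).squeeze(1)
    target = r + GAMMA * q_net(s2).max(1).values.detach()
    loss = F.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```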
The increasing number of malware variants seen in the wild is causing problems for antivirus software vendors, who are unable to keep up with creating signatures for each one. The methods used to develop a signature, static and dynamic analysis, have various limitations. Machine learning has been used by antivirus vendors to detect malware based on the information gathered from the analysis process. However, adversarial examples can cause machine learning algorithms to misclassify new data. In this paper, we describe a method for malware analysis by converting malware binaries to images and then preparing those images for training within a Generative Adversarial Network. These unsupervised deep neural networks are not susceptible to adversarial examples. The conversion from malware binaries to images should be faster than dynamic analysis, and it would still be possible to link malware families together. Using the Generative Adversarial Network, malware detection could be much more effective and reliable.
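A minimal sketch of the binary-to-image conversion step, assuming a grayscale image with a fixed width of 256 bytes per row (a common choice; the paper may use a different layout):

```python
import math
import numpy as np
from PIL import Image

def malware_to_image(path, width=256):
    """Interpret the raw bytes of a binary as a 2-D grayscale image."""
    data = np.fromfile(path, dtype=np.uint8)
    height = math.ceil(len(data) / width)
    padded = np.zeros(height * width, dtype=np.uint8)   # zero-pad the last row
    padded[:len(data)] = data
    return Image.fromarray(padded.reshape(height, width), mode="L")

# img = malware_to_image("sample.bin"); img.save("sample.png")
```

The resulting images can then be fed to the GAN training pipeline like any other image dataset.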
An Intrusion Prevention System (IPS) is a tool for securing networks against malicious packets sent by a specific host. An IPS can be installed on an SDN network, which has a centralized logic architecture, so the IPS does not need to be installed on many nodes; instead it is installed alongside the controller as the center of network logic. A plain IPS still has a flaw: the block duration remains the same no matter how often a specific host attacks. For this reason, we build a system that not only integrates an IPS into the SDN, but also designs an adaptive IPS using fuzzy logic that decides how long to block a host based on the attack frequency and the type of attack. The tests that have been performed show that an SDN network equipped with the adaptive IPS can detect attacks and block the attacking host for a duration based on the frequency and type of attack. The final result is a safer SDN network at the cost of an additional 0.228 milliseconds of execution time for one run of the fuzzy algorithm.
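A minimal sketch of the adaptive-blocking idea, assuming triangular membership functions over attack frequency, a numeric severity weight per attack type, and weighted-average defuzzification; the thresholds and durations are illustrative, not the rule base used in the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def block_duration(freq, severity):
    """freq: attacks per minute from this host; severity: 0..1 by attack type."""
    low = tri(freq, -1, 0, 10)
    med = tri(freq, 5, 15, 25)
    high = tri(freq, 20, 40, 1e9)
    # Rule outputs (seconds): low -> 60, medium -> 300, high -> 1800,
    # then scaled up by the attack-type severity.
    num = low * 60 + med * 300 + high * 1800
    den = (low + med + high) or 1.0
    return (num / den) * (1.0 + severity)

# e.g. block_duration(freq=18, severity=0.8) -> roughly 540 seconds
```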
Wireless cameras are widely deployed in surveillance systems for security purposes. However, the privacy concerns associated with unauthorized videotaping have drawn increasing attention recently. Existing detection methods for unauthorized wireless cameras are either limited in detection accuracy or require dedicated devices. In this paper, we propose DeWiCam, a lightweight and effective detection mechanism using smartphones. The basic idea of DeWiCam is to utilize the intrinsic traffic patterns of flows from wireless cameras. Compared with traditional traffic pattern analysis, DeWiCam faces a harder problem because it cannot access the encrypted information in the data packets. Yet, DeWiCam overcomes this difficulty and can detect nearby wireless cameras reliably. To further identify whether a camera is in a room of interest, we propose a human-assisted identification model. We implement DeWiCam on the Android platform and evaluate it with extensive experiments on 20 cameras. The evaluation results show that DeWiCam can detect cameras with an accuracy of 99% within 2.7 s.
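A rough illustration of the traffic-pattern intuition: even with encrypted payloads, a camera's uplink flow tends to be steady and high-rate with large frames. The features and thresholds below are assumptions for illustration, not DeWiCam's actual classifier.

```python
def looks_like_camera(packets, window_s=10.0):
    """Heuristic check on one uplink flow.

    packets: list of (timestamp_s, size_bytes) tuples captured over the air.
    Returns True when the flow is long-lived, high-throughput, and made of
    large frames, which is typical of a video stream.
    """
    if len(packets) < 50:
        return False
    start, end = packets[0][0], packets[-1][0]
    duration = max(end - start, 1e-6)
    sizes = [s for _, s in packets]
    throughput_bps = 8 * sum(sizes) / duration
    mean_size = sum(sizes) / len(sizes)
    steady = duration >= window_s
    return steady and throughput_bps > 200_000 and mean_size > 500
```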
In this paper, the rotor unbalanced magnetic pull (UMP) characteristics of different field winding inter-turn short-circuit (FWISC) positions in a turbo-generator are studied. First, a qualitative analysis of the air gap magnetic flux density (MFD), as well as the rotor UMPs in the X-direction and Y-direction, is carried out. Then, finite element numerical simulations are performed to calculate quantitative rotor UMP data under the normal condition and three different short-circuit positions. Finally, the variation rules of the rotor UMP characteristics are obtained through experimental analysis. It is shown that the occurrence of FWISC generally induces a fundamental-frequency UMP acting on the rotor in the X-direction. Moreover, the rotor UMP amplitudes are found to be sensitive to the FWISC position: the closer the short-circuit position is to the big teeth, the larger the rotor UMP amplitudes in the X-direction will be.
Program authorship attribution has implications for the privacy of programmers who wish to contribute code anonymously. While previous work has shown that individually authored complete files can be attributed, these efforts have focused on idealized data sets such as the Google Code Jam data. We explore the problem of attribution "in the wild," examining source code obtained from open source version control systems, and investigate if and how such contributions can be attributed to their authors, either individually or on a per-account basis. In this work we show that contributions from open source accounts, consisting of short, incomplete, and typically uncompilable fragments, can be effectively attributed.
The advent of HTML5 revives the threat of cross-site scripting (XSS) attacks on the web. Cross-document messaging, local storage, attribute abuse, input validation, inline multimedia, and SVG emerge as likely targets for serious threats. The various new tags and attributes can potentially be manipulated to exploit the data on a dynamic website. The XSS attack has retained a spot in every OWASP Top 10 list of security risks released over the past decade and is placed seventh in the OWASP Top 10 of 2017. XSS executes scripts built from untrusted data, without proper validation, in the victim's browser, where they can hijack user sessions, deface websites, or redirect the user to a malicious site. This paper focuses on the development of a browser extension for the popular Google Chromium browser that keeps track of various attack vectors. These vectors primarily include HTML5 tags and attributes that may be used maliciously. The developed plugin alerts the user whenever a possible XSS attack is discovered while accessing a particular website.
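The plugin itself is a browser extension, but the underlying vector-matching idea can be sketched offline; the Python fragment below flags HTML5 tags and event-handler attributes that are commonly abused as XSS vectors. The vector list is illustrative, not the extension's actual rule set.

```python
from html.parser import HTMLParser

# Illustrative list of (tag, attribute) pairs often abused for XSS.
SUSPICIOUS = {
    ("video", "onerror"), ("audio", "onerror"), ("svg", "onload"),
    ("input", "onfocus"), ("body", "onload"), ("iframe", "srcdoc"),
}

class VectorScanner(HTMLParser):
    """Collects occurrences of suspicious HTML5 tags/attributes in a page."""
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if (tag, name) in SUSPICIOUS or name.startswith("on"):
                self.findings.append((tag, name, value))

scanner = VectorScanner()
scanner.feed('<svg onload="alert(1)"></svg>')
print(scanner.findings)   # [('svg', 'onload', 'alert(1)')]
```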
The silicon Physical Unclonable Function (PUF) is arguably the most promising hardware security primitive. In particular, PUFs that are capable of generating a large number of challenge-response pairs (CRPs) can be used in many security applications. However, these CRPs can also be exploited by machine learning attacks to model the PUF and predict its responses. In this paper, we first show that, based on data in the public domain, two popular PUFs that can generate CRPs (i.e., the arbiter PUF and the reconfigurable ring oscillator (RO) PUF) can be broken by a simple logistic regression (LR) attack with about 99% accuracy. We then propose a feedback structure that XORs the PUF response with the challenge and challenges the PUF again to generate the final response. Results show that this successfully reduces LR's learning accuracy to the low 50% range, but an artificial neural network (ANN) learning attack still has an 80% success rate. Therefore, we propose a configurable ring oscillator based dual-mode PUF which works with both an odd number of inverters (like the reconfigurable RO PUF) and an even number of inverters (like a bistable ring (BR) PUF). Since there are currently no known attacks that can model both the RO PUF and the BR PUF, the dual-mode PUF will be resistant to modeling attacks as long as we can hide its working mode from attackers, which we achieve with two practical methods. Finally, we implement the proposed dual-mode PUF on Nexys 4 FPGA boards and collect real measurements to show that it reduces the learning accuracy of LR and ANN to the mid-50% and low-60% ranges, respectively. In addition, it meets the PUF requirements of uniqueness, randomness, and robustness.
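To make the LR attack concrete, the sketch below models a simulated 64-stage arbiter PUF with the standard additive delay model and the usual parity feature transform, then fits scikit-learn's logistic regression on a subset of the CRPs. The sizes are illustrative and the simulation is noise-free.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

n_stages, n_crps = 64, 20000
rng = np.random.default_rng(0)
w = rng.normal(size=n_stages + 1)                 # hidden delay parameters

def parity_features(challenges):
    """Map 0/1 challenges to the +/-1 parity feature vector of the delay model."""
    phi = 1 - 2 * challenges                       # 0/1 -> +1/-1
    phi = np.cumprod(phi[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

challenges = rng.integers(0, 2, size=(n_crps, n_stages))
X = parity_features(challenges)
y = (X @ w > 0).astype(int)                        # simulated PUF responses

model = LogisticRegression(max_iter=2000).fit(X[:15000], y[:15000])
print("prediction accuracy:", model.score(X[15000:], y[15000:]))
```

On clean simulated data of this kind, the learned model typically predicts held-out responses with near-perfect accuracy, in line with the roughly 99% figure quoted above.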
Lately, we have been facing a malware crisis due to the various types of malware, malicious programs, and scripts available in the huge virtual world that is the Internet. But what is malware? Malware is malicious software, a program, or a script that can be harmful to the user's computer. These malicious programs can perform a variety of functions, including stealing, encrypting, or deleting sensitive data, altering or hijacking core computing functions, and monitoring users' computer activity without their permission. There are various entry points for these programs and scripts into the user environment, but the only way to remove them is to find them and remove them from the system, which is not an easy job, as these small pieces of script or code can be anywhere in the user's system. This paper covers the different types of malware and how machine learning can be used to detect them.
The steady decline of IP transit prices over the past two decades has helped fuel the growth of traffic demands in the Internet ecosystem. Despite the declining unit pricing, bandwidth costs remain significant due to the ever-increasing scale and reach of the Internet, combined with the price disparity between the Internet's core hubs and remote regions. In the meantime, cloud providers have been auctioning underutilized computing resources in their marketplaces as spot instances for a much lower price than their on-demand instances. This state of affairs has led the networking community to devote extensive efforts to cloud-assisted networks - the idea of offloading network functionality to cloud platforms, ultimately leading to more flexible and highly composable network service chains. We initiate a critical discussion on the economic and technological aspects of leveraging cloud-assisted networks for Internet-scale interconnections and data transfers. Namely, we investigate the prospect of constructing a large-scale virtualized network provider that does not own any fixed or dedicated resources and runs atop several spot instances. We construct a cloud-assisted overlay as a virtual network provider by leveraging third-party cloud spot instances. We identify three use-case scenarios where such an approach is not only economically and technologically viable but also provides performance benefits compared to current commercial offerings of connectivity and transit providers.
This paper describes a neural network technology applied to the problem of forecasting information security incidents. We describe the general problem of analyzing and predicting time series in graphical and mathematical terms. To solve this problem, we propose to use a neural network model. To forecast a time series of information security incidents, we generate and describe the data on which the neural network is trained. We propose a neural network structure, train the neural network, and estimate its adequacy and forecasting ability. We show the possibility of effectively using a neural network model as part of an intelligent forecasting system.
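A minimal sliding-window forecasting sketch, assuming a univariate series of daily incident counts and a window of seven past values; the toy data and network structure are illustrative, not the ones used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, window):
    """Turn a 1-D series into (past-window, next-value) training pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

window = 7
series = np.abs(np.sin(np.arange(200) / 7.0)) * 10 + np.random.rand(200)   # toy incident counts
X, y = make_windows(series, window)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("next-day forecast:", model.predict(series[-window:].reshape(1, -1)))
```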
In this paper we address the development of a neural network technology for e-mail message classification. We analyze basic spam filtering methods such as sender IP-address analysis, detection of repeated spam messages, and word-based Bayesian filtering. We propose a neural network technology for solving this problem because neural networks are universal approximators and are effective at classification problems. We also present a scheme of this technology for classifying e-mail messages as "spam" or "not spam". The creation of an effective neural network model for spam filtering is performed within the knowledge discovery in databases methodology: a training set is formed, the neural network model is trained, and its adequacy and classification ability are estimated. The experimental studies show that the developed artificial neural network model is adequate and can be used effectively for e-mail message classification. Thus, in this paper we have shown that an effective neural network model can be used for e-mail message filtering and have presented a scheme for using an artificial neural network model as part of an intelligent e-mail spam filtering system.
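A minimal bag-of-words plus multilayer-perceptron sketch of the spam/not-spam classifier; the toy messages, vocabulary handling, and network size are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; a real system would use thousands of messages.
messages = ["win a free prize now", "meeting moved to 3pm",
            "cheap loans click here", "please review the attached report"]
labels = [1, 0, 1, 0]                      # 1 = spam, 0 = not spam

clf = make_pipeline(
    CountVectorizer(),                     # bag-of-words features
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
clf.fit(messages, labels)
print(clf.predict(["free prize meeting"]))
```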
Deep neural networks (DNNs) exhibit excellent performance in machine learning tasks such as image recognition, pattern recognition, speech recognition, and intrusion detection. However, adversarial examples, which are intentionally corrupted by noise, can lead to misclassification. As adversarial examples are serious threats to DNNs, both adversarial attacks and methods of defending against them have been studied continuously. Zero-day adversarial examples are created with new test data and are unknown to the classifier; hence, they represent a more significant threat to DNNs. To the best of our knowledge, there are no analytical studies of zero-day adversarial examples in the literature that focus on attack and defense methods through experiments over several scenarios. Therefore, in this study, zero-day adversarial examples are analyzed in practice, with an emphasis on attack and defense methods, through experiments over various scenarios composed of a fixed target model and an adaptive target model. The Carlini attack was used as a state-of-the-art attack, while adversarial training was used as a typical defense method. We used the MNIST dataset and analyzed the success rates of zero-day adversarial examples, average distortions, and recognition of original samples across several scenarios of fixed and adaptive target models. Experimental results demonstrate that changing the parameters of the target model in real time leads to resistance to adversarial examples in both the fixed and adaptive target models.
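As a sketch of the defense side, the PyTorch fragment below performs one adversarial-training step; for brevity it crafts perturbations with FGSM rather than the Carlini attack used in the study, and the epsilon value and loss weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """Craft FGSM adversarial examples (illustrative stand-in for the
    stronger Carlini attack referenced in the abstract)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.1):
    """One training step mixing clean and adversarial MNIST batches."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```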
Deep learning technologies, which are key components of state-of-the-art Artificial Intelligence (AI) services, have shown great success in providing human-level capabilities for a variety of tasks, such as visual analysis, speech recognition, and natural language processing. Building a production-level deep learning model is a non-trivial task, which requires a large amount of training data, powerful computing resources, and human expertise. Therefore, illegitimate reproduction, distribution, and derivation of proprietary deep learning models can lead to copyright infringement and economic harm to model creators. It is thus essential to devise a technique to protect the intellectual property of deep learning models and enable external verification of model ownership. In this paper, we generalize the "digital watermarking" concept from multimedia ownership verification to deep neural network (DNN) models. We investigate three DNN-applicable watermark generation algorithms, propose a watermark implanting approach to infuse the watermark into deep learning models, and design a remote verification mechanism to determine model ownership. By extending the intrinsic generalization and memorization capabilities of deep neural networks, we enable the models to learn specially crafted watermarks during training and to activate pre-specified predictions when observing the watermark patterns at inference. We evaluate our approach on two image recognition benchmark datasets. Our framework accurately (100%) and quickly verifies the ownership of all the remotely deployed deep learning models without affecting the model accuracy for normal input data. In addition, the embedded watermarks in DNN models are robust and resilient to different counter-watermark mechanisms, such as fine-tuning, parameter pruning, and model inversion attacks.
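One way such a watermark can be implanted is by training on a small trigger set in addition to the clean data. The sketch below builds such a set by stamping a fixed pattern onto a few images and relabeling them with a pre-specified class; the stamp location, trigger-set size, and label are assumptions, not the paper's exact generation algorithms.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset

def make_watermark_set(images, trigger, wm_label, n=100):
    """Build a trigger set: stamp a fixed pattern onto n training images
    and relabel them with a pre-specified watermark class.

    images:  tensor of shape [N, C, H, W]
    trigger: tensor broadcastable to the stamped region (hypothetical 8x8 corner)
    """
    idx = torch.randperm(len(images))[:n]
    wm_images = images[idx].clone()
    wm_images[:, :, :8, :8] = trigger
    wm_labels = torch.full((n,), wm_label, dtype=torch.long)
    return TensorDataset(wm_images, wm_labels)

# Training proceeds on the union of clean data and the trigger set, e.g.
#   train_set = ConcatDataset([clean_dataset, make_watermark_set(imgs, trig, 7)])
# so the model memorizes the watermark mapping without hurting normal accuracy.
# Ownership is later verified by querying the remote model with the trigger
# images and checking for the pre-specified predictions.
```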
Along with the rapid development of hardware security techniques, the revolutionary growth of countermeasures and attack methods developed by intelligent and adaptive adversaries has significantly complicated the creation of secure hardware systems. Thus, there is a critical need to (re)evaluate existing and new hardware security techniques against these state-of-the-art attack methods. With this in mind, this paper presents a novel framework for incorporating active learning techniques into the hardware security field. We demonstrate that active learning can significantly improve the learning efficiency of physical unclonable function (PUF) modeling attacks, by sampling the least confident and most informative challenge-response pair (CRP) for training in each iteration. For example, our experimental results show that in order to obtain a prediction error below 4%, 2790 CRPs are required in passive learning, while only 811 CRPs are required in active learning. The sampling strategies and detailed applications of the PUF modeling attack under various environmental conditions are also discussed. When the environment is very noisy, active learning may sample a large number of mislabeled CRPs and hence result in high prediction error. We present two methods to mitigate the contradiction between informative and noisy CRPs.
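The uncertainty-sampling loop at the core of such an active-learning attack can be sketched as follows, assuming a pool of candidate challenges already mapped to model features, an oracle response vector standing in for the physical PUF, and logistic regression as the attack model; the seed-set size and query budget are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_attack(X_pool, y_pool, n_init=50, n_queries=500):
    """Uncertainty sampling: in each round, query the CRP whose predicted
    response probability is closest to 0.5 (the least confident one)."""
    rng = np.random.default_rng(1)
    labeled = list(rng.choice(len(X_pool), n_init, replace=False))
    for _ in range(n_queries):
        model = LogisticRegression(max_iter=2000).fit(X_pool[labeled], y_pool[labeled])
        proba = model.predict_proba(X_pool)[:, 1]
        uncertainty = np.abs(proba - 0.5)
        uncertainty[labeled] = np.inf            # never re-query a known CRP
        labeled.append(int(np.argmin(uncertainty)))
    # Final refit on everything queried so far.
    return LogisticRegression(max_iter=2000).fit(X_pool[labeled], y_pool[labeled])
```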
Recent research suggests that 88% of Android applications that use Java cryptographic APIs make at least one mistake that results in an insecure implementation. It is unclear, however, whether these mistakes originate from code written by application developers or by third-party library developers. Understanding the party responsible for a misuse case is important for vulnerability disclosure. In this paper, we bridge this knowledge gap and introduce source attribution to the analysis of cryptographic API misuse. We developed BinSight, a static program analyzer that supports source attribution, and we analyzed 132K Android applications collected in the years 2012, 2015, and 2016. Our results suggest that third-party libraries are the main source of cryptographic API misuse. In particular, for 90% of the violating applications that contain at least one call-site to a Java cryptographic API, the violations originate from libraries. Compared to 2012, we found that the use of ECB mode for symmetric ciphers significantly decreased by 2016, for both application and third-party library code. Unlike application code, however, third-party libraries have significantly increased their reliance on static encryption keys for symmetric ciphers and static IVs for CBC-mode ciphers. Finally, we found that the insecure RC4 and DES ciphers were the second and third most used ciphers in 2016.
Zero-day Use-After-Free (UAF) vulnerabilities are increasingly popular and highly dangerous, but few mitigations exist. We introduce a new pointer-analysis-based static analysis, CRed, for finding UAF bugs in multi-MLOC C source code efficiently and effectively. CRed achieves this by making three advances: (i) a spatio-temporal context reduction technique for soundly and precisely scaling down the exponential number of contexts that would otherwise be considered at a pair of free and use sites, (ii) a multi-stage analysis for filtering out false alarms efficiently, and (iii) a path-sensitive demand-driven approach for finding the required points-to information. We have implemented CRed in LLVM-3.8.0 and compared it with four state-of-the-art static tools: CBMC (model checking), Clang (abstract interpretation), Coccinelle (pattern matching), and Supa (pointer analysis), using all the C test cases in the Juliet Test Suite (JTS) and 10 open-source C applications. For the ground truth validated with JTS, CRed detects all 138 known UAF bugs, as CBMC and Supa do, while Clang and Coccinelle miss some bugs, with no false alarms from any tool. For practicality, validated with the 10 applications (totaling 3+ MLOC), CRed reports 132 warnings, including 85 bugs, in 7.6 hours, while the existing tools are either unscalable, terminating within 3 days for only one application (CBMC), or impractical, finding virtually no bugs (Clang and Coccinelle) or issuing an excessive number of false alarms (Supa).
Crowd sensing is one of the core features of the internet of vehicles, and using the internet of vehicles for crowd sensing is conducive to the rational allocation of sensing tasks. This paper studies the problem of task allocation for crowd sensing in the internet of vehicles and proposes a trajectory-based task allocation scheme. Under a limited budget constraint, participants' trajectories are taken as an indicator of their spatiotemporal availability. Based on the idea of the minimal-cover problem, the scheme selects the minimum number of participating vehicles needed to cover the target area.
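A greedy sketch of the minimal-cover selection, assuming each candidate vehicle's trajectory has been mapped to the set of target-area cells it covers and each vehicle has a recruitment cost; the cost-effectiveness ratio and budget handling are illustrative choices, not necessarily the paper's exact algorithm.

```python
def select_vehicles(coverage, target_cells, costs, budget):
    """Greedy budgeted set cover.

    coverage:     {vehicle_id: set of covered cell ids}
    target_cells: iterable of cell ids that must be covered
    costs:        {vehicle_id: recruitment cost}
    Returns the list of chosen vehicle ids.
    """
    uncovered, chosen, spent = set(target_cells), [], 0.0
    while uncovered:
        best = max(
            (v for v in coverage if v not in chosen and spent + costs[v] <= budget),
            key=lambda v: len(coverage[v] & uncovered) / costs[v],
            default=None,
        )
        if best is None or not coverage[best] & uncovered:
            break                    # nothing affordable adds new coverage
        chosen.append(best)
        spent += costs[best]
        uncovered -= coverage[best]
    return chosen

# e.g. select_vehicles({"v1": {1, 2}, "v2": {2, 3, 4}}, {1, 2, 3, 4},
#                      {"v1": 1.0, "v2": 1.5}, budget=3.0)  ->  ["v2", "v1"]
```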
Use-After-Free (UAF) vulnerabilities are caused by a program operating on a dangling pointer and can be exploited to compromise critical software systems. While there have been many tools to mitigate UAF vulnerabilities, UAF remains one of the most common attack vectors. UAF is particularly difficult to detect in concurrent programs, in which a UAF may only occur under rare thread schedules. In this paper, we present a novel technique, UFO, that can precisely predict UAFs based on a single observed execution trace, with a provably higher detection capability than existing techniques and no false positives. The key technical advancement of UFO is an extended maximal thread causality model that captures the largest possible set of feasible traces that can be inferred from a given multithreaded execution trace. By formulating UAF detection as a constraint solving problem atop this model, we can explore a much larger thread scheduling space than classical happens-before based techniques. We have evaluated UFO on several large, complex, real-world C/C++ programs, including Chromium and Firefox. UFO scales to real-world systems with hundreds of millions of events in their executions and has detected a large number of real concurrency UAFs.
The smart grid utilizes cloud services to realize reliable, efficient, secure, and cost-effective power management, but there are a number of security risks in the cloud services of the smart grid. These security risks are particularly problematic for operators of power information infrastructure who want to leverage the benefits of the cloud. In this paper, the security risks of cloud services in the smart grid are categorized and their characteristics analyzed, and a multi-layered index system of general technical risks is established that applies to different patterns of cloud service. The cloud service risk of the smart grid can then be evaluated according to these indexes.