Bibliography
We report on our implementation of a new Gaussian sampling algorithm for lattice trapdoors. Lattice trapdoors are used in a wide array of lattice-based cryptographic schemes, including digital signatures, attribute-based encryption, program obfuscation, and others. Our implementation provides Gaussian sampling for trapdoor lattices with prime moduli, and supports both single- and multi-threaded execution. We experimentally evaluate our implementation through its use in the GPV hash-and-sign digital signature scheme as a benchmark. We compare our design and implementation with prior work reported in the literature. The evaluation shows that our implementation 1) has smaller space requirements and faster runtime, 2) does not require multi-precision floating-point arithmetic, and 3) can be used for a broader range of cryptographic primitives than previous implementations.
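The abstract does not spell out the sampler itself, but the core subroutine of any such trapdoor sampler is drawing integers from a discrete Gaussian. A minimal rejection-sampling sketch follows; it is illustrative only, since production samplers (e.g., in the GPV line) use faster, constant-time techniques and, as this paper emphasizes, avoid multi-precision floating-point arithmetic.

```python
import math
import random

def sample_discrete_gaussian(sigma, center=0.0, tail_cut=12):
    """Rejection-sample an integer from the discrete Gaussian D_{Z,sigma,c}.

    Illustrative sketch only: real trapdoor samplers use more efficient,
    constant-time techniques.
    """
    lo = int(math.floor(center - tail_cut * sigma))
    hi = int(math.ceil(center + tail_cut * sigma))
    while True:
        z = random.randint(lo, hi)                       # uniform candidate
        rho = math.exp(-((z - center) ** 2) / (2 * sigma ** 2))
        if random.random() < rho:                        # accept w.p. rho(z)
            return z
```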
The dynamically changing landscape of DDoS threats increases the demand for advanced security solutions. The rise of massive IoT botnets enables attackers to mount high-intensity, short-duration "volatile ephemeral" attack waves in quick succession. Therefore, the standard human-in-the-loop security center paradigm is becoming obsolete. To battle the new breed of volatile DDoS threats, the intrusion detection system (IDS) needs to improve markedly, at least in reaction time and in automated response (mitigation). Designing such an IDS is a daunting task, as network operators are traditionally reluctant to act, at any speed, on potentially false alarms. The primary challenge of a low-reaction-time detection system is maintaining a consistently low false alarm rate. This paper aims to show how a practical FPGA-based DDoS detection and mitigation system can successfully address this. Besides verifying the model and algorithms with real traffic "in the wild", we validate the low false alarm ratio. Accordingly, we describe a methodology for determining the false alarm ratio for each involved threat type, categorize the causes of false detection, and provide our measurement results. As shown here, our methods can effectively mitigate volatile ephemeral DDoS attacks, and accordingly are usable in both human out-of-loop and on-the-loop next-generation security solutions.
The objective of a honeypot security system is to identify unauthorized users and intruders in the network. Enterprise-level security becomes feasible through high scalability. The theme behind this research is to realize intrusion detection system (IDS) and intrusion prevention system (IPS) functions through honeypot and honey-trap methodology. Dynamic configuration of the honeypot is the milestone of this mechanism. Eight different methodologies were deployed to catch intruders who utilize the unsecured network through unused IP addresses. The method adopted here identifies and traps intruders through honeypot activity. The result obtained is that intruders find it difficult to gain information from the network, which benefits many industries. A honeypot can utilize a real OS fully or partially, through high interaction and low interaction respectively. The research concludes that network activity and traffic can also be tracked through the honeypot, which provides added security to the secured network. Detection, prevention, and response are the categories available; moreover, the system detects and confuses attackers.
Decision making in utilities, municipal, and energy companies depends on accurate and trustworthy weather information and predictions. Recently, crowdsourced personal weather stations (PWS) are being increasingly used to provide a higher spatial and temporal resolution of weather measurements. However, tools and methods to ensure the trustworthiness of the crowdsourced data in real-time are lacking. In this paper, we present a Reputation System for Crowdsourced Rainfall Networks (RSCRN) to assign trust scores to personal weather stations in a region. Using real PWS data from the Weather Underground service in the high flood risk region of Norfolk, Virginia, we evaluate the performance of the proposed RSCRN. The proposed method is able to converge to a confident trust score for a PWS within 10–20 observations after installation. Collectively, the results indicate that the trust score derived from the RSCRN can reflect the collective measure of trustworthiness to the PWS, ensuring both useful and trustworthy data for modeling and decision-making in the future.
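The abstract does not give the RSCRN update rule, but the convergence behavior it describes can be pictured with a generic beta-reputation style update (a common pattern in reputation systems, not the paper's formula), in which a station gains trust whenever its reading agrees with a neighborhood consensus:

```python
def update_trust(alpha, beta, reading, neighbor_readings, tol=1.0):
    """Beta-reputation style update (illustrative; not the RSCRN formula).

    alpha/beta count agreeing/disagreeing observations; the trust score
    is the posterior mean alpha / (alpha + beta).
    """
    neighbors = sorted(neighbor_readings)
    consensus = neighbors[len(neighbors) // 2]       # median of nearby stations
    if abs(reading - consensus) <= tol:
        alpha += 1                                   # agrees with consensus
    else:
        beta += 1                                    # deviates beyond tolerance
    return alpha, beta, alpha / (alpha + beta)
```

Starting from an uninformative prior (alpha = beta = 1), a score of this kind stabilizes after a few dozen observations, the same order of magnitude as the 10–20 observation convergence reported above.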
This paper considers a pilot spoofing attack scenario in a massive MIMO system. A malicious user tries to disturb the channel estimation process by sending interference symbols to the base station (BS) via the uplink, while the legitimate user counters by sending random symbols. The BS possesses neither partial channel state information (CSI) nor the distribution of the symbols sent by the malicious user a priori. For this scenario, the paper aims to separate the channel directions from the legitimate and malicious users to the BS, respectively. A blind channel separation algorithm based on estimating the characteristic function of the distribution of the signal-space vector is proposed. Simulation results show that the proposed algorithm provides good channel separation performance in a typical massive MIMO system.
In the distributed Internet of Things (IoT) architecture, sensors collect data from vehicles, home appliances, office equipment, and other environments. Various objects contain sensors that process data, cooperate, and exchange information with other embedded devices and end users in a distributed network. It is important to provide end-to-end communication security and an authentication system to guarantee the security and reliability of the data in such a distributed system. Two-factor authentication is a solution that improves the security level of password-based authentication processes and immunizes the system against many attacks. At the same time, the computational and storage overhead of an authentication method also needs to be considered in IoT scenarios. For this reason, many cryptographic schemes are designed especially for the IoT; however, we observe a lack of laboratory hardware test beds and of universal authentication hardware modules. This paper proposes the design and analysis of a hardware module for the IoT which allows the use of two-factor authentication based on smart cards, while taking into consideration the limited processing power and energy reserves of nodes, as well as designing the system with scalability in mind.
User authentication on smartphones must satisfy both security and convenience, an inherently difficult balancing act. Apple's FaceID is arguably the latest of such efforts, at the cost of additional hardware (e.g., dot projector, flood illuminator, and infrared camera). We propose a novel user authentication system, EchoPrint, which leverages acoustics and vision for secure and convenient user authentication without requiring any special hardware. EchoPrint actively emits almost inaudible acoustic signals from the earpiece speaker to "illuminate" the user's face and authenticates the user by the unique features extracted from the echoes bouncing off the 3D facial contour. To combat changes in phone-holding poses, and thus in the echoes, a Convolutional Neural Network (CNN) is trained to extract reliable acoustic features, which are further combined with visual facial landmark locations to feed a binary Support Vector Machine (SVM) classifier for final authentication. Because the echo features depend on 3D facial geometry, EchoPrint is not easily spoofed by images or videos like 2D visual face recognition systems. It needs only commodity hardware, thus avoiding the extra costs of special sensors in solutions like FaceID. Experiments with 62 volunteers and non-human objects such as images, photos, and sculptures show that EchoPrint achieves 93.75% balanced accuracy and 93.50% F-score, while the average precision is 98.05%, and no image/video-based attack is observed to succeed in spoofing.
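The final fusion-and-classification stage can be sketched as follows, assuming a hypothetical 128-dimensional acoustic embedding from the CNN and 68 two-dimensional facial landmarks (synthetic data stands in for real features; the dimensions are illustrative, not taken from the paper):

```python
import numpy as np
from sklearn.svm import SVC

def fuse(cnn_embedding, landmarks):
    """Concatenate the acoustic CNN embedding with flattened landmarks."""
    return np.concatenate([cnn_embedding, landmarks.ravel()])

rng = np.random.default_rng(0)
# Synthetic stand-ins: 100 samples of a 128-D embedding + 68 (x, y) landmarks.
X = np.array([fuse(rng.normal(m, 0.1, 128), rng.normal(m, 0.1, (68, 2)))
              for m in [0.5] * 50 + [0.0] * 50])   # enrolled user vs. impostors
y = np.array([1] * 50 + [0] * 50)                  # 1 = legitimate user

clf = SVC(kernel="rbf").fit(X, y)                  # binary accept/reject model
print(clf.predict(X[:1]))                          # authenticate a new sample
```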
Recent advances in pervasive computing have caused a rapid growth of the Smart Home market, where a number of otherwise mundane pieces of technology are capable of connecting to the Internet and interacting with other similar devices. However, with the lack of a commonly adopted set of guidelines, several IT companies are producing smart devices with their own proprietary standards, leading to highly heterogeneous Smart Home systems in which the interoperability of the present elements is not always implemented in the most straightforward manner. As such, understanding the cyber risk of these cyber-physical systems beyond the individual devices has become an almost intractable problem. This paper tackles this issue by introducing a Smart Home reference architecture which facilitates security analysis. Composed of three viewpoints, it gives a high-level description of the various functions and components needed in a domestic IoT device and network. Furthermore, the paper demonstrates how the architecture can be used to determine the various attack surfaces of a home automation system from which its key vulnerabilities can be determined.
Compile-time specialization and feature pruning through static binary rewriting have been proposed repeatedly as techniques for reducing the attack surface of large programs, and for minimizing the trusted computing base. We propose a new approach to attack surface reduction: dynamic binary lifting and recompilation. We present BinRec, a binary recompilation framework that lifts binaries to a compiler-level intermediate representation (IR) to allow complex transformations on the captured code. After transformation, BinRec lowers the IR back to a "recovered" binary, which is semantically equivalent to the input binary but has its unnecessary features removed. Unlike existing approaches, which are mostly based on static analysis and rewriting, our framework analyzes and lifts binaries dynamically. The crucial advantage is that we can not only observe the full program, including all of its dependencies, but also determine which program features the end user actually uses. We evaluate the correctness and performance of BinRec, and show that our approach enables aggressive pruning of unwanted features in COTS binaries.
Recommender systems are an important component of many web services that help users locate items matching their interests. Several studies have shown that recommender systems are vulnerable to poisoning attacks, in which an attacker injects fake data into a recommender system such that the system makes recommendations as the attacker desires. However, these poisoning attacks are either agnostic to recommendation algorithms or optimized for recommender systems (e.g., association-rule-based or matrix-factorization-based recommender systems) that are not graph-based. Like association-rule-based and matrix-factorization-based recommender systems, graph-based recommender systems are also deployed in practice, e.g., by eBay and the Huawei App Store (a big app store in China). However, how to design optimized poisoning attacks for graph-based recommender systems is still an open problem. In this work, we perform a systematic study on poisoning attacks to graph-based recommender systems. We consider that an attacker's goal is to promote a target item to be recommended to as many users as possible. To achieve this goal, our attacks inject fake users with carefully crafted rating scores into the recommender system. Due to limited resources and to avoid detection, we assume the number of fake users that can be injected into the system is bounded. The key challenge is how to assign rating scores to the fake users such that the target item is recommended to as many normal users as possible. To address the challenge, we formulate the poisoning attacks as an optimization problem, solving which determines the rating scores for the fake users, and we propose techniques to solve the optimization problem. We evaluate our attacks and compare them with existing attacks under white-box (recommendation algorithm and its parameters are known), gray-box (recommendation algorithm is known but its parameters are unknown), and black-box (recommendation algorithm is unknown) settings using two real-world datasets. Our results show that our attack is effective and outperforms existing attacks for graph-based recommender systems. For instance, when injected fake users account for 1% of all users, our attack can make a target item be recommended to 580 times more normal users in certain scenarios.
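To make the attack structure concrete: a fake user's profile pairs a maximal rating on the target item with ratings on a small set of filler items. In the paper those filler items and scores come out of the optimization problem; the sketch below merely fixes them by hand, and all item indices are hypothetical:

```python
import numpy as np

def make_fake_user(n_items, target_item, filler_items, r_max=5):
    """One fake user's rating vector (structure only; the paper chooses
    filler items and scores by solving an optimization problem)."""
    rng = np.random.default_rng()
    ratings = np.zeros(n_items)
    ratings[target_item] = r_max                  # always promote the target
    for item in filler_items:                     # camouflage/influence items
        ratings[item] = rng.integers(1, r_max + 1)
    return ratings

fake = make_fake_user(n_items=1000, target_item=42, filler_items=[7, 19, 301])
```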
In this work, the NTRU-like cryptosystem NTRU Prime IIT Ukraine, which is created on the basis of existing cryptographic transformations of the end-to-end encryption type, is considered. A description of this cryptosystem is given and its analysis is carried out. The features of its implementation, a comparison of its main characteristics and indicators, and the differences from existing NTRU-like cryptographic algorithms are also presented. Conclusions are drawn and recommendations are given.
This paper proposes a deep learning based method for efficient malware classification. Specifically, we convert the malware classification problem into an image classification problem, which can be addressed by leveraging convolutional neural networks (CNNs). For many malware families, the images belonging to the same family have similar contours and textures, so we convert the binary files of malware samples to uncompressed grayscale images, which preserve the complete information of the original malware without artificial feature extraction. We then design a classifier based on Google's TensorFlow framework, combining deep learning (DL) and malware detection technology. Experimental results show that the uncompressed grayscale images of malware are relatively easy to distinguish and the CNN-based classifier can achieve a high success rate of 98.2%.
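The byte-to-pixel conversion the paper relies on is straightforward: each byte of the binary becomes one grayscale pixel, laid out in rows of a fixed width. A minimal sketch (the file name and row width are illustrative):

```python
import numpy as np
from PIL import Image

def malware_to_grayscale(path, width=256):
    """Render a binary file as an uncompressed 8-bit grayscale image:
    one byte per pixel, rows of a fixed chosen width."""
    data = np.fromfile(path, dtype=np.uint8)
    rows = len(data) // width
    pixels = data[: rows * width].reshape(rows, width)  # drop the ragged tail
    return Image.fromarray(pixels, mode="L")

# malware_to_grayscale("sample.bin").save("sample.png")  # hypothetical input
```

Images produced this way can then be fed to a standard CNN image classifier.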
Deep learning technologies, which are the key components of state-of-the-art Artificial Intelligence (AI) services, have shown great success in providing human-level capabilities for a variety of tasks, such as visual analysis, speech recognition, and natural language processing. Building a production-level deep learning model is a non-trivial task which requires a large amount of training data, powerful computing resources, and human expertise. Therefore, illegitimate reproduction, distribution, and derivation of proprietary deep learning models can lead to copyright infringement and economic harm to model creators, and it is essential to devise a technique to protect the intellectual property of deep learning models and enable external verification of model ownership. In this paper, we generalize the "digital watermarking" concept from multimedia ownership verification to deep neural network (DNN) models. We investigate three DNN-applicable watermark generation algorithms, propose a watermark implanting approach to infuse watermarks into deep learning models, and design a remote verification mechanism to determine model ownership. By extending the intrinsic generalization and memorization capabilities of deep neural networks, we enable the models to learn specially crafted watermarks at training time and to activate pre-specified predictions when observing the watermark patterns at inference time. We evaluate our approach with two image recognition benchmark datasets. Our framework accurately (100%) and quickly verifies the ownership of all remotely deployed deep learning models without affecting model accuracy for normal input data. In addition, the embedded watermarks in DNN models are robust and resilient to different counter-watermark mechanisms, such as fine-tuning, parameter pruning, and model inversion attacks.
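One way to realize the "learn a crafted watermark at training time" idea is a trigger set: copies of training images stamped with a fixed pattern and relabeled with a pre-specified class. The sketch below shows only that data-preparation step, under assumed array shapes; the pattern and target label are hypothetical stand-ins for the paper's generation algorithms:

```python
import numpy as np

def make_watermark_set(images, pattern, target_label):
    """Stamp a fixed pattern onto copies of training images and relabel
    them with a pre-specified class. Mixing the result into the training
    set teaches the model to emit target_label whenever the pattern
    appears, enabling remote ownership checks via ordinary queries."""
    wm = images.copy()
    ph, pw = pattern.shape
    wm[:, :ph, :pw] = pattern                    # overlay pattern in a corner
    labels = np.full(len(wm), target_label)
    return wm, labels

# images: (N, H, W) grayscale batch; pattern: small (ph, pw) stamp.
```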
Automated network control and management has been a long-standing target of network protocols. In this paper we address the question of automated protocol design, where distributed networked nodes have to cooperate to achieve a common goal without a priori knowledge of which information to exchange or of the network topology. While reinforcement learning has often been proposed for this task, we propose here to apply recent methods from semi-supervised deep neural networks focused on graphs. Our main contribution is an approach for applying graph-based deep learning to distributed routing protocols via a novel neural network architecture named Graph-Query Neural Network. We apply our approach to the tasks of shortest-path and max-min routing. We evaluate the learned protocols in cold start and also in case of topology changes. Numerical results show that our approach is able to automatically develop efficient routing protocols for those two use cases with accuracies larger than 95%. We also show that specific properties of network protocols, such as resilience to packet loss, can be explicitly included in the learned protocol.
Silicon Physical Unclonable Function (PUF) is arguably the most promising hardware security primitive. In particular, PUFs that are capable of generating a large amount of challenge response pairs (CRPs) can be used in many security applications. However, these CRPs can also be exploited by machine learning attacks to model the PUF and predict its response. In this paper, we first show that, based on data in the public domain, two popular PUFs that can generate CRPs (i.e., the arbiter PUF and the reconfigurable ring oscillator (RO) PUF) can be broken by a simple logistic regression (LR) attack with about 99% accuracy. We then propose a feedback structure that XORs the PUF response with the challenge and challenges the PUF again to generate the final response. Results show that this successfully reduces LR's learning accuracy to the low 50% range, but an artificial neural network (ANN) learning attack still has an 80% success rate. Therefore, we propose a configurable ring oscillator based dual-mode PUF which works with both an odd number of inverters (like the reconfigurable RO PUF) and an even number of inverters (like a bistable ring (BR) PUF). Since there are currently no known attacks that can model both the RO PUF and the BR PUF, the dual-mode PUF will be resistant to modeling attacks as long as we can hide its working mode from attackers, which we achieve with two practical methods. Finally, we implement the proposed dual-mode PUF on Nexys 4 FPGA boards and collect real measurements to show that it reduces the learning accuracy of LR and ANN to the mid-50% and low-60% ranges, respectively. In addition, it meets the PUF requirements of uniqueness, randomness, and robustness.
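A toy model makes the feedback idea concrete: query the PUF once, XOR the one-bit response into the challenge, and query again. The arbiter PUF below is the standard additive-delay model, used purely for illustration; the weights and sizes are arbitrary:

```python
import numpy as np

def arbiter_response(challenge, weights):
    """Standard additive-delay arbiter PUF model (toy, for illustration)."""
    phi = np.cumprod(1 - 2 * challenge[::-1])[::-1]   # parity feature vector
    return int(weights @ np.append(phi, 1.0) > 0)

def feedback_response(challenge, weights):
    """The feedback structure: XOR the first response into the challenge,
    then challenge the PUF again and return the second response."""
    r1 = arbiter_response(challenge, weights)
    return arbiter_response(challenge ^ r1, weights)  # flips all bits if r1 == 1

rng = np.random.default_rng(1)
weights = rng.normal(size=65)                  # 64 stages + arbiter bias term
challenge = rng.integers(0, 2, size=64)
print(feedback_response(challenge, weights))
```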
The power of artificial neural networks to form predictive models for phenomena that exhibit non-linear relationships is a given fact. Despite this advantage, artificial neural networks are known to suffer drawbacks such as long training times and computational intensity. The researchers propose a two-tiered approach to enhance the learning performance of artificial neural networks for time-series phenomena where the data exhibits predictable changes that occur every calendar year. This paper focuses on the initial results of the first phase of the proposed algorithm, which incorporates clustering and classification prior to application of the backpropagation algorithm. The 2016–2017 zonal load data of France is used as the data set. K-means is chosen as the clustering algorithm, and a comparison is made between Naïve Bayes and k-Nearest Neighbors to determine the better classifier for this data set. The initial results show that electrical load behavior is not necessarily reflective of calendar clustering, even without using the min-max temperature recorded during the inclusive months. Simulating the day-type classification process using one cluster, the initial results show that k-Nearest Neighbors outperforms the Naïve Bayes classifier for this data set and that the best feature for classification into day type is the daily min-max load. This classified load data is expected to reduce training time and improve the overall performance of short-term load demand predictive models in a future paper.
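The first phase is a plain cluster-then-classify pipeline; a scikit-learn sketch on synthetic stand-ins for the daily load features follows (the feature choice of daily min-max load mirrors the finding above, while the cluster and neighbor counts are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in: one [daily min load, daily max load] vector per day.
rng = np.random.default_rng(0)
X = rng.normal(size=(365, 2))

# Phase 1a: K-means groups days into candidate "day types".
day_type = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Phase 1b: k-NN assigns new days to a day type before the per-type
# backpropagation models are trained on the partitioned data.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, day_type)
print(knn.predict(X[:3]))
```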
The field of robotics has matured through artificial intelligence and machine learning such that intelligent robots are being developed in the form of autonomous vehicles. The anticipated widespread use of intelligent robots and their potential to do harm have raised interest in their security. This research evaluates a cyberattack on the machine learning policy of an autonomous vehicle by designing and attacking a robotic vehicle operating in a dynamic environment. The primary contribution of this research is an initial assessment of effective manipulation through an indirect attack on a robotic vehicle using the Q-learning algorithm for real-time routing control. Secondly, the research highlights the effectiveness of this attack along with relevant artifact issues.
A Cyber Reasoning System (CRS) is designed to automatically find and exploit software vulnerabilities in complex software. To be effective, CRSs integrate multiple vulnerability detection tools (VDTs), such as symbolic executors and fuzzers. Determining which VDTs can best find bugs in a large set of target programs, and how to optimally configure those VDTs, remains an open and challenging problem. Current solutions are based on heuristics created by security analysts that rely on experience, intuition, and luck. In this paper, we present Central Exploit Organizer (CEO), a proof-of-concept tool to optimize VDT selection. CEO uses machine learning to optimize the selection and configuration of the most suitable vulnerability detection tool. We show that CEO can predict the relative effectiveness of a given vulnerability detection tool, configuration, and initial input, with an estimation accuracy that improves on random selection by between 11% and 21%. We are releasing CEO and our dataset as open source to encourage further research.
In this paper, we apply IoT (Internet of Things) technology and SMS (Short Message Service) technology to a vehicle security system, and design a vehicle remote control system to ensure vehicle security. We also discuss a method that converts displacement increments to latitude and longitude increments, in order to solve the problem of accurately obtaining the current location when GPS (Global Positioning System) fails. The hardware system allows owners, by sending an SMS or by sending a password through the web side of the IoT platform, to remotely open or close the car alarm system and to query the vehicle's position, among other functions. Through this method, it is easy to achieve secure vehicle positioning and tracking.
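The displacement-to-coordinates conversion is a small piece of spherical geometry; a sketch under a spherical-Earth assumption follows (illustrative, not necessarily the paper's exact formula):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius; spherical approximation

def displacement_to_latlon(lat_deg, lon_deg, dx_east_m, dy_north_m):
    """Convert a small local displacement (metres east/north) into new
    latitude/longitude, the dead-reckoning update used when GPS fails."""
    dlat = math.degrees(dy_north_m / EARTH_RADIUS_M)
    dlon = math.degrees(dx_east_m /
                        (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lat_deg + dlat, lon_deg + dlon

print(displacement_to_latlon(39.90, 116.40, dx_east_m=100, dy_north_m=50))
```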
In public video surveillance, there is an inherent conflict between public safety goals and the privacy needs of citizens. Generally, societies tend to decide on middle-ground solutions that sacrifice neither safety nor privacy goals completely. In this paper, we propose an alternative to existing approaches that rely on cloud-based video analysis. Our approach leverages the inherent geo-distribution of fog computing to preserve the privacy of citizens while still supporting camera-based digital manhunts by law enforcement agencies.
With the extensive application of cloud computing technology, governments, enterprises, and individuals have migrated their IT applications and sensitive data to the cloud, and cloud security issues are receiving more and more attention from academia and industry. At present, cloud security solutions are mainly implemented in the user cloud platform, such as inside the guest virtual machine, in a high-privileged domain, or in the virtual machine monitor (VMM) or hardware layer. By monitoring the tenant virtual machine to detect malicious attacks and abnormal states, these solutions ensure the security of the user cloud to a certain extent. However, this kind of method has the following shortcomings: firstly, it increases the cloud platform overhead and interferes with normal cloud services; secondly, it can only obtain a limited type of security state information, so its functionality is narrow and difficult to expand; thirdly, it can produce false information if the user cloud platform has been compromised, which affects the effectiveness of cloud security monitoring. This paper proposes a cloud security model based on cloud introspection technology. In the user cloud platform, we deploy cloud probes to obtain user cloud state information, such as system memory, network communication, and disk storage. We then synchronize the cloud state information to the introspection cloud, which is deployed independently. Finally, by bridging the semantic gap and analyzing the data in the introspection cloud, we can assess the security state of the user cloud. We also design and implement CloudI (Cloud Introspection), a prototype system. Through comparison with original cloud security technology in a series of experiments, we show that CloudI offers high security, high performance, high expandability, and multiple functions.
Connected cars have received massive attention in Intelligent Transportation Systems. Many potential services, especially safety-related ones, rely on spatial-temporal messages periodically broadcast by cars. Without a secure authentication algorithm, malicious cars may send out invalid spatial-temporal messages and then deny creating them; meanwhile, a lot of private information may be disclosed from these spatial-temporal messages. Since cars move on expressways at high speed, any authentication must be performed in real time to prevent crashes. In this paper, we propose a Fast and Anonymous Spatial-Temporal Trust (FastTrust) mechanism to ensure these properties. In contrast to most authentication protocols, which rely on fixed infrastructure, FastTrust is distributed, is built mostly on symmetric-key cryptography and an entropy-based commitment, and is able to quickly authenticate spatial-temporal messages. FastTrust also ensures the anonymity and unlinkability of spatial-temporal messages by developing a pseudonym-varying scheduling scheme on cars. We provide both analytical and simulation evaluations to show that FastTrust achieves the security and privacy properties. FastTrust is low-cost in terms of communication and computational resources, authenticating 20 times faster than the existing Elliptic Curve Digital Signature Algorithm.
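The symmetric-key side of such a design can be pictured with an ordinary MAC over the spatial-temporal beacon. HMAC-SHA256 below is a stand-in, since FastTrust's actual construction also involves an entropy-based commitment and pseudonym scheduling:

```python
import hashlib
import hmac
import json
import time

def tag_beacon(shared_key: bytes, lat: float, lon: float) -> dict:
    """Authenticate a spatial-temporal beacon with a symmetric-key MAC
    (HMAC-SHA256 as a stand-in for the paper's construction)."""
    msg = json.dumps({"lat": lat, "lon": lon, "ts": time.time()}).encode()
    tag = hmac.new(shared_key, msg, hashlib.sha256).digest()
    return {"msg": msg, "tag": tag}

def verify_beacon(shared_key: bytes, beacon: dict) -> bool:
    expected = hmac.new(shared_key, beacon["msg"], hashlib.sha256).digest()
    return hmac.compare_digest(expected, beacon["tag"])  # constant-time compare

beacon = tag_beacon(b"demo-key", 40.44, -79.94)
print(verify_beacon(b"demo-key", beacon))
```

Symmetric MACs like this are far cheaper to verify than ECDSA signatures, which is consistent with the speedup claimed above.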
Tor provides low-latency anonymous and uncensored network access against a local or network adversary. Due to the design choice to minimize traffic overhead (and increase the pool of potential users), Tor allows some information about the client's connections to leak. Attacks using (features extracted from) this information to infer the website a user visits are called Website Fingerprinting (WF) attacks. We develop a methodology and tools to measure the amount of information leaked about a website. We apply this tool to a comprehensive set of features extracted from a large set of websites and WF defense mechanisms, allowing us to make more fine-grained observations about WF attacks and defenses.
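The leakage of a single feature can be estimated with a plug-in mutual-information computation: discretize the feature, then measure I(feature; website) in bits. A toy sketch on synthetic traces follows (the paper's tooling uses more careful estimators over many features; the feature and trace data here are fabricated stand-ins):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

# Synthetic traces: a website label and a total-bytes feature per visit.
rng = np.random.default_rng(0)
sites = rng.integers(0, 10, size=2000)
total_bytes = sites * 5000 + rng.normal(0, 1000, size=2000)

# Discretize the continuous feature, then take a plug-in MI estimate,
# converted from nats to bits.
edges = np.histogram_bin_edges(total_bytes, bins=30)
binned = np.digitize(total_bytes, edges)
leak_bits = mutual_info_score(sites, binned) / np.log(2)
print(f"estimated leakage: {leak_bits:.2f} bits")
```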