Biblio
With the development of Internet technology, software vulnerabilities have become a major threat to computer security. In this work, we propose vulnerability detection for source code using a Contextual LSTM (CLSTM). We evaluated the CLSTM against CNN and LSTM baselines on 23,185 programs collected from SARD. We extracted features through program slicing and, based on these features, used natural language processing to analyze programs at the source-code level. The experimental results demonstrate that the CLSTM achieves the best performance for vulnerability detection, reaching an accuracy of 96.711% and an F1 score of 0.96984.
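As a rough illustration of the kind of model this abstract describes, the sketch below builds a plain (non-contextual) LSTM classifier over token sequences extracted from program slices; the vocabulary size, dimensions, and architecture are placeholder assumptions, not the paper's CLSTM.

```python
# Illustrative stand-in: a plain LSTM classifier over token sequences from
# program slices. Vocabulary size and layer dimensions are arbitrary choices.
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)     # vulnerable vs. not vulnerable

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)               # final hidden state summarizes the slice
        return torch.sigmoid(self.head(h_n[-1]))

model = SliceClassifier()
dummy_slices = torch.randint(0, 5000, (2, 60))     # two tokenized program slices
print(model(dummy_slices).shape)                    # torch.Size([2, 1])
```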
Although the importance of mobile applications grows every day, recent vulnerability reports suggest that many applications fail to meet modern security standards. Testing strategies alleviate the problem by identifying security violations in software implementations. This paper proposes a novel testing methodology that applies state machine learning to mobile Android applications in combination with algorithms that discover attack paths in the learned state machine. The presence of an attack path indicates the existence of a vulnerability in the mobile application. We apply our methods to real-life apps and show that the methodology is capable of identifying vulnerabilities.
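To make the attack-path idea concrete, here is a minimal sketch: given a learned state machine as a transition map and a set of property-violating states, a breadth-first search returns an input sequence that reaches a violating state, if one exists. The states and inputs are hypothetical examples, not taken from the paper.

```python
# Minimal sketch of attack-path discovery over a learned state machine.
from collections import deque

def find_attack_path(transitions, start, violating):
    """transitions: dict mapping (state, input) -> next_state."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state in violating:
            return path                      # the input sequence is the attack path
        for (s, inp), nxt in transitions.items():
            if s == state and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [inp]))
    return None                              # no attack path: the property holds

# Hypothetical app model: a debug command in recovery mode bypasses the PIN check.
transitions = {("locked", "reset"): "recovery",
               ("recovery", "debug_cmd"): "unlocked"}
print(find_attack_path(transitions, "locked", {"unlocked"}))   # ['reset', 'debug_cmd']
```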
To build secure communications software, Vulnerability Prediction Models (VPMs) are used to predict vulnerable software modules in a software system before software security testing. Many software security metrics have been proposed for designing VPMs. In this paper, we predict vulnerable classes in a software system by establishing the system's weighted software network; the metrics are obtained from the attributes of the nodes in this network. We design and implement a crawler tool to collect all public security vulnerabilities in Mozilla Firefox, and the prediction model is trained and tested on these data. The results show that the VPM based on the weighted software network performs well in accuracy, precision, and recall. Compared to other studies, prediction performance is greatly improved in precision and recall.
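A hedged sketch of how per-class features might be read off a weighted software network, assuming a dependency graph between classes; the class names, edge weights, and metric choices below are invented for illustration and do not reproduce the paper's metric set.

```python
# Build a weighted dependency graph between classes and derive node-level
# metrics that could serve as predictor variables for a VPM.
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([("nsHttpChannel", "nsSocketTransport", 5),
                           ("nsHttpChannel", "nsCookieService", 2),
                           ("nsDocShell", "nsHttpChannel", 7)])

features = {n: {"in_degree": g.in_degree(n, weight="weight"),
                "out_degree": g.out_degree(n, weight="weight"),
                "betweenness": b}
            for n, b in nx.betweenness_centrality(g, weight="weight").items()}
print(features["nsHttpChannel"])
```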
In this paper, we present initial work towards creating an intelligent interface that can act as an open access laboratory for visual stylometry called WAIVS, Workflows for Analysis of Images and Visual Stylometry. WAIVS allows scholars, students, and other interested parties to explore the nature of artistic style using cutting-edge research methods in visual stylometry. We create semantic workflows for this interface using various computer vision algorithms that not only facilitate artistically significant analyses but also impose intelligent semantic constraints on complex analyses. In the interface, we combine these workflows with a manually-curated dataset for analysis of artistic style based on either the school of art or the medium.
A Mobile Ad Hoc Network (MANET) is highly vulnerable to attacks because of its broad distribution and open nodes. Hence, an effective Intrusion Detection System (IDS) is vital in a MANET to deter unwanted malicious attacks. This paper proposes an IDS based on the watchdog and pathrater method and evaluates its performance using the Dynamic Source Routing (DSR) and Ad-hoc On-demand Distance Vector (AODV) routing protocols, with and without the effect of a sinkhole attack. The results show that the proposed IDS is capable of detecting suspicious activities and identifying malicious nodes. Moreover, it replaces the fake route with a real one in the routing table in order to mitigate the security risks. The performance appraisal also suggests that the AODV protocol can send more packets than DSR and yields higher throughput.
The increasing adoption of 3D printing in many safety- and mission-critical applications exposes 3D printers to a variety of cyber attacks that may result in catastrophic consequences if the printing process is compromised. For example, the mechanical properties (e.g., physical strength, thermal resistance, dimensional stability) of 3D printed objects can be significantly degraded if a simple printing setting is maliciously changed. To address this challenge, this study proposes a model-free, real-time, online process monitoring approach that is capable of detecting and defending against cyber-physical attacks on the firmware of 3D printers. Specifically, we explore the potential attacks on, and consequences for, four key printing attributes (infill path, printing speed, layer thickness, and fan speed) and then formulate the attack models. Based on the intrinsic relation between the printing attributes and the physical observations, our defense model is established by systematically analyzing multi-faceted, real-time measurements collected from an accelerometer, a magnetometer, and a camera. A Kalman filter and a Canny filter are used to map and estimate the three aforementioned toolpath attributes that might affect printing quality, and Mel-frequency cepstrum coefficients are used to extract features for fan speed estimation. Experimental results show that, for a complex 3D printed design, our method achieves a Hausdorff distance of 4% of the model dimension for infill path estimation, a 6.07% mean absolute percentage error (MAPE) for speed estimation, a 9.57% MAPE for layer thickness estimation, and 96.8% accuracy for fan speed identification. Our study demonstrates that this new approach can effectively defend against cyber-physical attacks on 3D printers and the 3D printing process.
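As an illustrative sketch of the fan-speed feature extraction mentioned above, the snippet below computes Mel-frequency cepstral coefficients from a stand-in recording and averages them into one feature vector per window; the sampling rate and window length are assumptions, not parameters from the paper.

```python
# Extract MFCC features that a classifier could map to a fan-speed setting.
import numpy as np
import librosa

sr = 22050
signal = np.random.randn(sr * 2).astype(np.float32)    # stand-in 2 s recording
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
feature_vector = mfcc.mean(axis=1)                       # 13-dim feature per window
print(feature_vector.shape)
```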
As Internet technology develops rapidly, attacks against the Tor network become increasingly frequent, making it harder for Tor to meet users' demand for protecting their private information. A method to improve the anonymity of Tor is therefore urgently needed. In this paper, we describe the principles of Tor, the largest anonymous communication system in the world, analyze the reasons for its limited efficiency, and discuss the vulnerabilities of link fingerprinting and node selection. We then establish a node recognition model based on SVM, which verifies that traffic characteristics expose node attributes, thereby revealing the link and breaking anonymity. Based on these findings, we propose measures to improve the Tor protocol and make it more anonymous.
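A hedged sketch of the SVM node-recognition step: classify relay nodes from flow-level traffic features. The feature names and toy data are invented for illustration; the paper's actual feature set is not reproduced here.

```python
# Train an SVM to label nodes from hypothetical traffic features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical features per node: [mean cell inter-arrival, bytes/s, burst count]
X = np.array([[0.8, 1.2e4, 3], [0.7, 1.1e4, 4],      # guard-like traffic
              [0.2, 5.0e4, 12], [0.3, 4.8e4, 11]])   # exit-like traffic
y = np.array([0, 0, 1, 1])                            # 0 = guard, 1 = exit

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[0.25, 4.9e4, 10]]))               # expected: [1]
```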
Query authentication has been extensively studied to ensure the integrity of query results for outsourced databases, which are often not fully trusted. However, access control, another important security concern, is largely ignored by existing works. Notably, recent breakthroughs in cryptography have enabled fine-grained access control over outsourced data. In this paper, we take the first step toward studying the problem of authenticating relational queries with fine-grained access control. The key challenge is how to protect information confidentiality during query authentication, which is essential to many critical applications. To address this challenge, we propose a novel access-policy-preserving (APP) signature as the primitive authenticated data structure. A useful property of the APP signature is that it can be used to derive customized signatures for unauthorized users to prove inaccessibility while achieving zero-knowledge confidentiality. We also propose a grid-index-based tree structure that can aggregate APP signatures for efficient range and join query authentication. In addition, a number of optimization techniques are proposed to further improve authentication performance. Security analysis and performance evaluation show that the proposed solutions and techniques are robust and efficient under various system settings.
This is especially true for the Windows operating system (OS) used by government and private organizations. With Windows, the closed-source nature of the operating system has unfortunately meant that hidden security issues are discovered very late and fixes are not available in real time. Current static methods of malware detection therefore need to be reexamined. This paper presents an integrated system for automated, real-time monitoring and prediction of rootkit and malware threats for the Windows OS. We propose to host the target Windows machines on the widely used Xen hypervisor and to collect process behavior using virtual machine introspection (VMI). The collected data is analyzed using state-of-the-art machine learning techniques to quickly isolate malicious process behavior and alert system administrators about potential cyber breaches. This research has two focus areas: identifying memory data structures and developing prediction tools to detect malware. The first part focuses on identifying memory data structures affected by malware, including extracting, with VMI, the kernel data structures that are frequently targeted by rootkits and malware. The second part involves developing a prediction tool using machine learning techniques.
The Internet of Things (IoT) provides transparent and seamless incorporation of heterogeneous and diverse end systems. It has been widely used in many applications, such as smart homes. However, people may resist the IoT as long as there is no public confidence that it will not cause serious threats to their privacy. Effective and secure key management for thing authentication is the prerequisite for security operations. In this paper, we present an interactive key management protocol and a non-interactive key management protocol to minimize the communication cost of the things. The security analysis shows that the proposed schemes are resilient to various types of attacks.
The Structured Query Language Injection Attack (SQLIA) is one of the most serious and common threats to web applications. The consequences of SQLIA include data loss or complete host takeover. Detecting SQLIA remains a difficult challenge because of the heterogeneity of attack payloads. In this paper, a novel method to detect SQLIA based on word vectors of SQL tokens and LSTM neural networks is described. In the proposed method, SQL query strings are first syntactically analyzed into tokens, a likelihood ratio test is then used to build word vectors for the SQL tokens, and finally an LSTM model is trained on sequences of token word vectors. We developed a tool named WOVSQLI that implements the proposed technique and evaluated it on a dataset drawn from several sources. The experimental results demonstrate that WOVSQLI can effectively identify SQLIA.
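The following sketch illustrates the tokenize-then-embed step under simplified assumptions: SQL queries are split into tokens and token vectors are learned with word2vec, producing the vector sequences an LSTM would consume. The tokenizer and corpus are toy placeholders, and the paper's likelihood ratio test for vocabulary construction is omitted.

```python
# Turn SQL queries into token-vector sequences for a downstream LSTM.
import re
from gensim.models import Word2Vec

def tokenize(sql):
    # Crude stand-in tokenizer: words, numbers, and single punctuation marks.
    return re.findall(r"[A-Za-z_]+|\d+|[^\sA-Za-z_\d]", sql.lower())

corpus = [tokenize("SELECT name FROM users WHERE id = 1"),
          tokenize("SELECT name FROM users WHERE id = 1 OR 1 = 1 -- ")]

w2v = Word2Vec(sentences=corpus, vector_size=32, window=3, min_count=1, epochs=50)
sequence = [w2v.wv[tok] for tok in corpus[1]]   # vector sequence for the LSTM
print(len(sequence), sequence[0].shape)          # number of tokens x 32-dim vectors
```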
Darknet markets are online services behind Tor where cybercriminals trade illegal goods and stolen datasets. In recent years, security analysts and law enforcement have started to investigate darknet markets to study cybercriminal networks and predict future incidents. However, vendors in these markets often create multiple accounts (i.e., Sybils), making it challenging to infer the relationships between cybercriminals and identify coordinated crimes. In this paper, we present a novel approach to link the multiple accounts of the same darknet vendors through photo analytics. The core idea is that darknet vendors often have to take their own product photos to prove possession of the illegal goods, which can reveal their distinct photography styles. To fingerprint vendors, we construct a series of deep neural networks to model photography styles. We apply transfer learning to the model training, which allows us to accurately fingerprint vendors with a limited number of photos. We evaluate the system using real-world datasets from 3 large darknet markets (7,641 vendors and 197,682 product photos). A ground-truth evaluation shows that the system achieves an accuracy of 97.5%, outperforming existing stylometry-based methods in both accuracy and coverage. In addition, our system identifies previously unknown Sybil accounts within the same markets (23) and across different markets (715 pairs). Further case studies reveal new insights into coordinated Sybil activities such as price manipulation, buyer scams, and product stocking and reselling.
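A rough sketch of the transfer-learning idea, assuming a ResNet-18 backbone and a placeholder vendor count (neither is a detail from the paper): freeze the pretrained convolutional layers and retrain only a new head that maps photos to vendor identities.

```python
# Reuse generic visual features; fine-tune only a new classification head.
import torch.nn as nn
from torchvision import models

num_vendors = 500                                   # hypothetical number of classes
backbone = models.resnet18(weights=None)            # in practice: load ImageNet-pretrained weights
for p in backbone.parameters():
    p.requires_grad = False                         # freeze the convolutional layers
backbone.fc = nn.Linear(backbone.fc.in_features, num_vendors)  # trainable head

# Only the new head is updated during fine-tuning on the (limited) vendor photos.
trainable = [n for n, p in backbone.named_parameters() if p.requires_grad]
print(trainable)                                    # ['fc.weight', 'fc.bias']
```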
This paper presents an authentication protocol specifically tailored for IoT devices that inherently limits the number of times an entity can authenticate itself with a given key pair. The protocol we propose is based on a stateful hash-based digital signature system called the eXtended Merkle Signature Scheme (XMSS), which has grown in popularity of late due to its resistance to quantum-computer-aided attacks. We propose a 1-pass authentication protocol that can be customized according to the server's capabilities for keeping track of the key pair state. In addition, we present results for the protocol ported to ARM Cortex-M3 and M0 processors.
The cloud has become the backbone of IT infrastructure. As whole infrastructures are shifted to the cloud, which encompasses networking schemes and OS images, the cloud inherits their vulnerabilities as well, so securing it is a primary concern. Malware is one of the many problems that keep growing and must be eradicated from the system. Malware has existed since the advent of computers, and many techniques have been devised to tackle the problem in one way or another, but most fall short in some respect or are too heavy to execute on an ordinary user machine. Our approach devises a three-phase exhaustive technique that confirms the detection of malware on the host. It also works for zero-day attacks, which are difficult to cover in most cases and can be high-risk. We designed the solution to remain lightweight for the user.
Advancements in the semiconductor domain have made it possible to realize numerous video surveillance applications using computer vision and deep learning; video surveillance in industrial automation, security, ADAS, live traffic analysis, and similar domains improves efficiency through image understanding. Image understanding requires input data with high precision, which depends on image resolution and camera placement. The data of interest can be a thermal image or a live feed from various sensors. Composite video (CVBS) is a popular video interface capable of streaming up to HD (1920x1080) quality. Unlike high-speed serial interfaces such as HDMI or MIPI CSI, the analog composite video interface is a single-wire standard that supports longer distances. Image understanding requires edge detection and classification for further processing. The Sobel filter is one of the most widely used edge detection filters and can be embedded into a live stream. This paper proposes a Zynq FPGA based system design for video surveillance with Sobel edge detection, in which the input composite video is decoded (analog CVBS input to YCbCr digital output), processed in hardware, and streamed to an HDMI display while simultaneously being stored on SD memory for later processing. The hardware design is scalable for resolutions from VGA to Full HD at 60 fps and 4K at 24 fps. The system is built on the Xilinx ZC702 platform with a TVP5146 decoder to showcase the functional path.
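As a software reference for the Sobel stage that the FPGA pipeline implements in hardware, the sketch below convolves a stand-in luma (Y) plane with the horizontal and vertical Sobel kernels and combines the gradient magnitudes; the frame size is an illustrative choice.

```python
# Software model of Sobel edge detection on the Y (luma) channel.
import numpy as np
from scipy.signal import convolve2d

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
sobel_y = sobel_x.T

frame_y = np.random.randint(0, 256, (1080, 1920)).astype(float)   # stand-in Y plane
gx = convolve2d(frame_y, sobel_x, mode="same", boundary="symm")
gy = convolve2d(frame_y, sobel_y, mode="same", boundary="symm")
edges = np.clip(np.hypot(gx, gy), 0, 255).astype(np.uint8)         # edge magnitude
print(edges.shape)
```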
To manage cybersecurity risks in practice, a simple yet effective method to assess such risks for individual systems is needed. With time-to-compromise (TTC), McQueen et al. (2005) introduced such a metric, which measures the expected time that a system remains uncompromised given a specific threat landscape. Unlike other approaches that require complex system modeling, TTC combines simplicity with expressiveness and has therefore evolved into one of the most successful cybersecurity metrics in practice. We revisit TTC and identify several mathematical and methodological shortcomings, which we address by embedding all aspects of the metric in the continuous domain and by making it possible to incorporate information about vulnerability characteristics and other cyber threat intelligence into the model. We propose β-TTC, a formal extension of TTC that includes information from CVSS vectors as well as a continuous attacker skill based on a β-distribution. We show that our new metric (1) remains simple enough for practical use and (2) gives more realistic predictions than the original TTC when using data from a modern, productively used vulnerability database of a national CERT.
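Purely as an illustration of a beta-distributed attacker skill, the Monte Carlo sketch below samples skill values from a Beta distribution and averages a hypothetical skill-to-time mapping over them; the mapping is invented for illustration and is not the β-TTC formula from the paper.

```python
# Sample attacker skill from Beta(a, b) and average a placeholder TTC function.
import numpy as np

rng = np.random.default_rng(0)
skill = rng.beta(a=2.0, b=5.0, size=100_000)        # attacker skill in [0, 1]

def ttc_days(s):
    """Hypothetical placeholder: more skilled attackers compromise faster."""
    return 30.0 * (1.0 - s) + 1.0

print(f"expected TTC over the skill distribution: {ttc_days(skill).mean():.1f} days")
```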
Context: Software security is an imperative aspect of software quality. Early detection of vulnerable code during development can better ensure the security of the codebase and minimize testing efforts. Although traditional software metrics are used for early detection of vulnerabilities, they do not clearly address the granularity level of the issue to precisely pinpoint vulnerabilities. The goal of this study is to employ method-level traceable patterns (nano-patterns) in vulnerability prediction and empirically compare their performance with traditional software metrics. The concept of nano-patterns is similar to design patterns, but these constructs can be automatically recognized and extracted from source code. If nano-patterns can better predict vulnerable methods compared to software metrics, they can be used in developing vulnerability prediction models with better accuracy. Aims: This study explores the performance of method-level patterns in vulnerability prediction. We also compare them with method-level software metrics. Method: We studied vulnerabilities reported for two major releases of Apache Tomcat (6 and 7), Apache CXF, and two stand-alone Java web applications. We used three machine learning techniques to predict vulnerabilities using nano-patterns as features. We applied the same techniques using method-level software metrics as features and compared their performance with nano-patterns. Results: We found that nano-patterns show lower false negative rates for classifying vulnerable methods (for Tomcat 6, 21% vs 34.7%) and therefore, have higher recall in predicting vulnerable code than the software metrics used. On the other hand, software metrics show higher precision than nano-patterns (79.4% vs 76.6%). Conclusion: In summary, we suggest developers use nano-patterns as features for vulnerability prediction to augment existing approaches as these code constructs outperform standard metrics in terms of prediction recall.
Air gaps are important for the security of computer systems. Injecting a computer virus into an air-gapped system is limited but possible; however, a data communication channel is still necessary for the transmission of stolen data. This paper considers BFSK digital modulation applied to screen brightness changes for the unidirectional transmission of valuable data. Experimental validation and the limitations of the proposed technique are provided.
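A toy illustration of BFSK over screen brightness: each bit selects one of two toggle frequencies, and the brightness is switched at that rate for a fixed symbol duration. The frequencies, frame rate, and brightness levels below are illustrative assumptions, not the parameters evaluated in the paper.

```python
# Map a bit string to a per-frame brightness sequence using two toggle frequencies.
import numpy as np

f0, f1 = 2.0, 4.0          # Hz for bit 0 / bit 1
symbol_s, fps = 1.0, 30    # one second per bit, 30 screen updates per second

def modulate(bits):
    frames = []
    t = np.arange(int(symbol_s * fps)) / fps
    for b in bits:
        f = f1 if b else f0
        # Square wave between two brightness levels (e.g., 40% and 60%).
        frames.append(np.where(np.sin(2 * np.pi * f * t) >= 0, 0.6, 0.4))
    return np.concatenate(frames)

brightness = modulate([1, 0, 1, 1])
print(brightness.shape)     # (120,) brightness values to apply frame by frame
```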
A mobile ad hoc network (MANET) is an infrastructure-less, self-organizing, on-demand wireless network. The nodes communicate among themselves within their radio range, and nodes within that range are known as neighbor nodes. DSR (Dynamic Source Routing), a reactive MANET routing protocol, identifies the destination by transmitting route request (RREQ) control messages into the network and establishes a path after receiving route reply (RREP) control messages. An intermediate node between the source and the destination may also send an RREP control message if path information about that destination is present in its route cache from previous communication. A malicious node may enter the network and send an RREP control message to the source before the original RREP is received. After receiving this RREP, the source starts sending data without knowing the true destination, and the data may reach a different location. In this paper, we propose a novel algorithm by which, even if a malicious node stays in the network and sends an RREP control message, the source can authenticate the destination before data transmission by applying the PGP (Pretty Good Privacy) encryption scheme. To design the algorithm, we add an extra field to the RREQ control message carrying a unique index value (UIV), and two extra fields to the RREP that are applied over the UIV to form a random key (Rk), so that our proposal maintains a two-way authorization scheme. Even if a malicious node exists in the network, before data transmission the source can identify whether the RREP was sent by the requested destination or by a malicious node.
Early detection of new kinds of malware always plays an important role in defending network systems. In particular, if intelligent protection systems could themselves detect the existence of new malware types in their system, even with a very small number of malware samples, it would be a huge benefit for the organization as well as for society, since it helps prevent that malware from spreading. To deal with learning from few samples, the terms ``one-shot learning'' and ``few-shot learning'' were introduced, and are mostly used in computer vision to recognize images, handwriting, etc. The approach introduced in this paper takes advantage of one-shot learning algorithms to solve the malware classification problem, using a Memory Augmented Neural Network in combination with malware API call sequences, which are a very valuable source of information for identifying malware behavior. In addition, it uses advances from the natural language processing field, such as word2vec, to convert those API sequences into numeric vectors before feeding them to the one-shot learning network. The results confirm very good accuracy compared with traditional methods.
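As a much-simplified stand-in for the one-shot classification step (the paper uses a Memory Augmented Neural Network, which is not reproduced here), the sketch below embeds API call sequences by averaging per-call vectors and assigns a query sample to the nearest single support example per family. The API names, embedding table, and families are hypothetical.

```python
# Nearest-prototype one-shot classification over embedded API-call sequences.
import numpy as np

def embed(api_calls, table):
    return np.mean([table[c] for c in api_calls], axis=0)

rng = np.random.default_rng(1)
table = {c: rng.normal(size=16) for c in ["CreateFile", "WriteFile",
                                          "RegSetValue", "InternetOpen"]}
support = {"ransomware": embed(["CreateFile", "WriteFile"], table),      # one sample per class
           "spyware":    embed(["InternetOpen", "RegSetValue"], table)}
query = embed(["CreateFile", "WriteFile", "WriteFile"], table)

pred = max(support, key=lambda k: np.dot(query, support[k]) /
           (np.linalg.norm(query) * np.linalg.norm(support[k])))
print(pred)   # in this toy example: 'ransomware', which shares API calls with the query
```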
Using multi-objective probabilistic planning to synthesize the behavior of CPSs can play an important role in engineering systems that must self-optimize for multiple quality objectives and operate under uncertainty. However, the reasoning behind automated planning is opaque to end users. They may not understand why a particular behavior is generated and therefore may not be able to calibrate their confidence in the system working properly. To address this problem, we propose a method to automatically generate verbal explanations of multi-objective probabilistic planning that describe why a particular behavior is generated on the basis of the optimization objectives. Our explanation method involves describing the objective values of a generated behavior and explaining any tradeoffs made to reconcile competing objectives. We contribute: (i) an explainable planning representation that facilitates explanation generation, and (ii) an algorithm for generating contrastive justifications as explanations for why a generated behavior is best with respect to the planning objectives. We demonstrate our approach on a mobile robot case study.
We present a scalable dynamic analysis framework that allows for the automatic evaluation of the privacy behaviors of Android apps. We use our system to analyze mobile apps’ compliance with the Children’s Online Privacy Protection Act (COPPA), one of the few stringent privacy laws in the U.S. Based on our automated analysis of 5,855 of the most popular free children’s apps, we found that a majority are potentially in violation of COPPA, mainly due to their use of third-party SDKs. While many of these SDKs offer configuration options to respect COPPA by disabling tracking and behavioral advertising, our data suggest that a majority of apps either do not make use of these options or incorrectly propagate them across mediation SDKs. Worse, we observed that 19% of children’s apps collect identifiers or other personally identifiable information (PII) via SDKs whose terms of service outright prohibit their use in child-directed apps. Finally, we show that efforts by Google to limit tracking through the use of a resettable advertising ID have had little success: of the 3,454 apps that share the resettable ID with advertisers, 66% transmit other, non-resettable, persistent identifiers as well, negating any intended privacy-preserving properties of the advertising ID.
Much research has been devoted to better understanding adversarial examples, which are specially crafted inputs to machine-learning models that are perceptually similar to benign inputs, but are classified differently (i.e., misclassified). Both algorithms that create adversarial examples and strategies for defending against adversarial examples typically use Lp-norms to measure the perceptual similarity between an adversarial input and its benign original. Prior work has already shown, however, that two images need not be close to each other as measured by an Lp-norm to be perceptually similar. In this work, we show that nearness according to an Lp-norm is not just unnecessary for perceptual similarity, but is also insufficient. Specifically, focusing on datasets (CIFAR10 and MNIST), Lp-norms, and thresholds used in prior work, we show through online user studies that “adversarial examples” that are closer to their benign counterparts than required by commonly used Lp-norm thresholds can nevertheless be perceptually distinct to humans from the corresponding benign examples. Namely, the perceptual distance between two images that are “near” each other according to an Lp-norm can be high enough that participants frequently classify the two images as representing different objects or digits. Combined with prior work, we thus demonstrate that nearness of inputs as measured by Lp-norms is neither necessary nor sufficient for perceptual similarity, which has implications for both creating and defending against adversarial examples. We propose and discuss alternative similarity metrics to stimulate future research in the area.
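For concreteness, the helper below computes the L2 and L-infinity distances between an image and a perturbed copy, the notion of "nearness" that the thresholds above refer to; the random images are stand-ins with a CIFAR10-like shape.

```python
# Compute Lp distances between a benign image and a perturbed copy.
import numpy as np

def lp_distance(a, b, p):
    diff = (a - b).ravel()
    return np.max(np.abs(diff)) if p == np.inf else np.linalg.norm(diff, ord=p)

rng = np.random.default_rng(0)
benign = rng.random((32, 32, 3))                                       # CIFAR10-like shape
adversarial = np.clip(benign + rng.normal(scale=0.01, size=benign.shape), 0, 1)

print(lp_distance(benign, adversarial, 2))        # L2 distance
print(lp_distance(benign, adversarial, np.inf))   # L-infinity distance
```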
Due to the evolution of programming languages, interpreted languages have gained widespread use in scientific and research computing. Interpreted languages are more portable, easier to use, and faster for prototyping than their ahead-of-time (AOT) counterparts, including C, C++, and Fortran. While traditionally considered slow to execute, advancements in just-in-time (JIT) compilation techniques have significantly improved the execution speed of interpreted languages, in some cases allowing them to outperform AOT languages. In this paper, we explore challenges and design strategies in developing a high-performance parallel discrete event simulation engine, called Simian, written in interpreted languages with JIT capabilities, including Python, Lua, and JavaScript. Our results show that Simian with JIT performs similarly to AOT simulators such as MiniSSF and ROSS. We expect that, with features such as good performance, user-friendliness, and portability, just-in-time parallel simulation will become a common choice for modeling and simulation in the near future.
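As background for the terminology, here is a minimal sequential discrete event loop: a time-ordered event queue and handlers. It omits everything that makes Simian interesting (parallelism, JIT compilation, process-oriented entities) and is only meant to ground the vocabulary.

```python
# Minimal sequential discrete event loop: pop events in timestamp order.
import heapq

events = []                                    # (timestamp, sequence, handler, data)
heapq.heappush(events, (1.0, 0, print, "packet arrives"))
heapq.heappush(events, (0.5, 1, print, "link goes up"))

now = 0.0
while events:
    now, _, handler, data = heapq.heappop(events)
    handler(f"t={now}: {data}")                # events execute in simulated-time order
```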
Adaptive systems are expected to adapt to unanticipated run-time events using imperfect information about themselves, their environment, and their goals. This entails handling the effects of uncertainty in decision-making, which is not always treated as a first-class concern. This paper contributes a formal analysis technique that explicitly considers uncertainty in sensing when reasoning about the best way to adapt, together with uncertainty reduction mechanisms to improve system utility. We illustrate our approach on a Denial of Service (DoS) attack scenario and present results that demonstrate the benefits of uncertainty-aware decision-making in comparison to an uncertainty-ignorant approach, both in the presence and in the absence of uncertainty reduction mechanisms.