Bibliography
This paper describes a novel distributed mobility management (DMM) scheme for the "named-object" information-centric network (ICN) architecture, in which routers forward data based on unique identifiers that are dynamically mapped to the current network addresses of a device. The work proposes and evaluates two specific handover schemes, namely hard handoff with rebinding and soft handoff with multihoming, intended to provide seamless data transfer with improved throughput during handovers. The proposed handover schemes are evaluated using system simulation along with a proof-of-concept implementation on the ORBIT testbed. The proposed handoff and scheduling schemes achieve throughput gains of 12.5% and 44%, respectively, over multiple interfaces when compared to a traditional IP network with an equal-share split scheme. The handover performance with respect to RTT and throughput demonstrates the benefits of a clean-slate network architecture for beyond-5G networks.
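As a rough illustration of the two handover modes (the class and method names below are illustrative assumptions, not the architecture's actual name-resolution API), the sketch keeps a mapping from a device's unique identifier to its current network addresses, replacing the binding on a hard handoff and accumulating addresses on a soft, multihomed handoff:

```python
# Toy sketch (not the architecture's actual name-resolution service): a table
# mapping a device's unique identifier (GUID) to its current network addresses.

class NameResolutionTable:
    def __init__(self):
        self.bindings = {}  # GUID -> set of network addresses

    def hard_handoff(self, guid, new_addr):
        # Rebinding: the old address is dropped and all traffic follows the new one.
        self.bindings[guid] = {new_addr}

    def soft_handoff(self, guid, extra_addr):
        # Multihoming: the new address is added, so data can be scheduled over
        # both interfaces while the handover completes.
        self.bindings.setdefault(guid, set()).add(extra_addr)

    def resolve(self, guid):
        return self.bindings.get(guid, set())


table = NameResolutionTable()
table.hard_handoff("GUID-42", "wifi:10.0.0.5")
table.soft_handoff("GUID-42", "lte:172.16.0.9")   # both interfaces now active
print(table.resolve("GUID-42"))
```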
Modern JavaScript applications depend extensively on third-party libraries. Especially on the Node.js platform, vulnerabilities can have severe consequences for the security of applications, resulting in, e.g., cross-site scripting and command injection attacks. Existing static analysis tools developed to automatically detect such issues are either too coarse-grained, looking only at package dependency structure while ignoring dataflow, or rely on manually written taint specifications for the most popular libraries to ensure analysis scalability. In this work, we propose a technique for automatically extracting taint specifications for JavaScript libraries, based on a dynamic analysis that leverages the existing test suites of the libraries and their available clients in the npm repository. Due to the dynamic nature of JavaScript, mapping observations from dynamic analysis to taint specifications that fit into a static analysis is non-trivial. Our main insight is that this challenge can be addressed by a combination of an access path mechanism that identifies entry and exit points, and the use of membranes around the libraries of interest. We show that our approach is effective at inferring useful taint specifications at scale. Our prototype tool automatically extracts 146 additional taint sinks and 7,840 propagation summaries spanning 1,393 npm modules. By integrating the extracted specifications into a commercial, state-of-the-art static analysis, 136 new alerts are produced, many of which correspond to likely security vulnerabilities. Moreover, many important specifications that were originally written manually are among those our tool can now extract automatically.
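As a rough, hypothetical illustration of what an extracted specification might look like (the access-path notation and module name below are assumptions, not the tool's actual output format), a propagation summary links an entry access path into a library to the exit path where taint re-emerges, while a sink specification marks a path whose argument must never be tainted:

```python
# Hypothetical sketch of extracted taint specifications. The access-path notation
# and module names here are illustrative assumptions, not the tool's real output.
from dataclasses import dataclass

@dataclass(frozen=True)
class PropagationSummary:
    source: str   # entry access path: where tainted data flows into the library
    target: str   # exit access path: where that taint re-emerges for clients

@dataclass(frozen=True)
class SinkSpec:
    sink: str     # access path whose argument must never receive tainted data

specs = [
    # "if the first argument of lib.encode() is tainted, so is its return value"
    PropagationSummary(source="(member encode (module 'some-lib')).arg0",
                       target="(member encode (module 'some-lib')).ret"),
    # "the first argument of lib.run() reaches a command execution"
    SinkSpec(sink="(member run (module 'some-lib')).arg0"),
]

for spec in specs:
    print(spec)
```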
Preserving medical data is of utmost importance to stakeholders. There are few laws in India governing the preservation and usability of patient records. When data is transmitted across the globe, it may be tampered with, intentionally or accidentally. Tampered data loses its authenticity for diagnostic, research, and other purposes. This paper proposes an ECDSA-based scheme that uses signature verification to detect tampering of medical image files and raises alerts according to authenticity rules. The scheme can be used by researchers, doctors, or other trained personnel to maintain the authenticity of the record. It is presently applied to medical image files such as DICOM; however, it can support other medical image formats while still preserving authenticity.
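A minimal sketch of the signing and verification step using Python's cryptography package is shown below; key distribution, DICOM parsing, and the alerting rules are simplified assumptions rather than the paper's exact procedure.

```python
# Minimal sketch of ECDSA signing and verification for a medical image file.
# Key distribution, DICOM parsing, and the alerting rules are out of scope here.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

image_bytes = b"...DICOM file contents..."       # in practice: open("scan.dcm", "rb").read()
signature = private_key.sign(image_bytes, ec.ECDSA(hashes.SHA256()))

# Later, anyone holding the public key can check whether the file was tampered with.
received = image_bytes                           # replace with the received file's bytes
try:
    public_key.verify(signature, received, ec.ECDSA(hashes.SHA256()))
    print("Signature valid: image is authentic.")
except InvalidSignature:
    print("ALERT: signature check failed, the image may have been tampered with.")
```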
The mechanism of peers randomly choosing logical neighbors without any knowledge of the underlying physical topology can cause a delay overhead in information propagation, which makes the system vulnerable to double-spend attacks. This paper introduces a proximity-aware extension to the current Bitcoin protocol, named Master Node Based Clustering (MNBC). The ultimate purpose of the proposed protocol is to improve the information propagation delay in the Bitcoin network.
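A toy sketch of the proximity-aware idea follows (the measurement and assignment details are assumptions, not the MNBC specification): each joining peer measures its round-trip time to a set of master nodes and attaches to the closest one, so that logical neighbours tend to be physically close.

```python
# Illustrative sketch (protocol details are assumptions, not the MNBC spec):
# each joining peer attaches to the master node with the lowest measured RTT.

def assign_to_cluster(master_rtts):
    """master_rtts: dict mapping master-node id -> measured RTT in milliseconds."""
    return min(master_rtts, key=master_rtts.get)

measurements = {
    "peer-A": {"master-EU": 18.0, "master-US": 95.0, "master-ASIA": 210.0},
    "peer-B": {"master-EU": 140.0, "master-US": 22.0, "master-ASIA": 170.0},
}

clusters = {}
for peer, rtts in measurements.items():
    clusters.setdefault(assign_to_cluster(rtts), []).append(peer)
print(clusters)   # peers grouped under their nearest master node
```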
The increase in both the number and variety of cyber attacks in recent years demands more sophisticated network intrusion detection systems (NIDS). These NIDS perform better when they can monitor all the traffic traversing the network, such as when deployed on a Software-Defined Network (SDN). Because of their inability to detect zero-day attacks, signature-based NIDS, which were traditionally used for detecting malicious traffic, are being replaced by anomaly-based NIDS built on neural networks. However, it has recently been shown that such NIDS have their own drawback, namely vulnerability to adversarial example attacks. Moreover, they have mostly been evaluated on old datasets that do not represent the variety of attacks network systems face today. In this paper, we present Reconstruction from Partial Observation (RePO), a new mechanism for building an NIDS with the help of denoising autoencoders, capable of detecting different types of network attacks with a low false alert rate and enhanced robustness against adversarial example attacks. Our evaluation, conducted on a dataset with a variety of network attacks, shows that denoising autoencoders can improve detection of malicious traffic by up to 29% in a normal setting and by up to 45% in an adversarial setting compared to other recently proposed anomaly detectors.
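A minimal sketch of the underlying idea is shown below, with the architecture and hyper-parameters as assumptions rather than the RePO design: a denoising autoencoder is trained to reconstruct benign traffic features from partial observations, and the reconstruction error is used as the anomaly score.

```python
# Minimal sketch of a denoising autoencoder used as an anomaly detector.
# Architecture, masking rate, and thresholding are illustrative assumptions.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_features)

    def forward(self, x):
        # "Partial observation": randomly mask ~20% of the input features.
        mask = (torch.rand_like(x) > 0.2).float()
        return self.decoder(self.encoder(x * mask))

def train(model, benign, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(benign), benign)
        loss.backward()
        opt.step()

def anomaly_score(model, x):
    # Higher reconstruction error -> more likely malicious; flows scoring above
    # a threshold tuned on benign traffic would raise an alert.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

benign = torch.rand(1024, 20)              # stand-in for benign flow features
model = DenoisingAE(n_features=20)
train(model, benign)
print(anomaly_score(model, torch.rand(4, 20)))
```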
The detection of malicious HTTP(S) requests is a pressing concern in cyber security, in particular given the proliferation of HTTP-based (micro-)service architectures. In addition to rule-based systems for known attacks, anomaly detection has been shown to be a promising approach for unknown (zero-day) attacks. This article extends existing work by integrating outlier explanations for individual requests into an end-to-end pipeline; these explanations reflect the internal workings of the pipeline. Empirically, we show that the generated explanations coincide with manually labelled explanations for identified outliers, allowing security professionals to quickly identify and understand malicious requests.
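As a hedged illustration of what an outlier explanation can look like (the feature set and scoring below are assumptions, not the paper's pipeline), one can rank which features of a flagged request deviate most from the benign baseline:

```python
# Hedged sketch: explain an outlying HTTP request by ranking the features that
# deviate most from the benign baseline (per-feature z-scores). The feature set
# and thresholds are illustrative assumptions.
import numpy as np

FEATURES = ["url_length", "num_params", "body_entropy", "pct_special_chars"]

def explain(request_vec, benign_matrix, top_k=2):
    mu = benign_matrix.mean(axis=0)
    sigma = benign_matrix.std(axis=0) + 1e-9
    z = np.abs((request_vec - mu) / sigma)
    order = np.argsort(z)[::-1][:top_k]
    return [(FEATURES[i], float(z[i])) for i in order]

benign = np.random.normal(loc=[60, 3, 4.0, 0.05],
                          scale=[15, 1, 0.5, 0.02], size=(500, 4))
suspicious = np.array([420, 9, 7.5, 0.40])   # e.g. a long, high-entropy injection payload
print(explain(suspicious, benign))           # features driving the outlier score
```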
It is now possible to synthesize highly realistic images of people who do not exist. Such content has, for example, been implicated in the creation of fraudulent social media profiles responsible for disinformation campaigns. Significant efforts are, therefore, being deployed to detect synthetically-generated content. One popular forensic approach trains a neural network to distinguish real from synthetic content. We show that such forensic classifiers are vulnerable to a range of attacks that reduce the classifier to near-0% accuracy. We develop five attack case studies on a state-of-the-art classifier that achieves an area under the ROC curve (AUC) of 0.95 on almost all existing image generators, when only trained on one generator. With full access to the classifier, we can flip the lowest bit of each pixel in an image to reduce the classifier's AUC to 0.0005; perturb 1% of the image area to reduce the classifier's AUC to 0.08; or add a single noise pattern in the synthesizer's latent space to reduce the classifier's AUC to 0.17. We also develop a black-box attack that, with no access to the target classifier, reduces the AUC to 0.22. These attacks reveal significant vulnerabilities of certain image-forensic classifiers.
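A toy sketch of the first white-box attack's perturbation budget is given below: each pixel is moved by at most one intensity level in the direction indicated by a gradient. The gradient here is random stand-in data; the actual attack derives it from the target classifier.

```python
# Toy sketch: a perturbation confined to the least-significant bit of each pixel,
# moved in the direction suggested by a (white-box) gradient. The gradient below
# is random stand-in data; the real attack computes it from the target classifier.
import numpy as np

def lsb_perturb(image_uint8, gradient):
    step = np.sign(gradient).astype(np.int16)                  # -1, 0 or +1 per pixel
    adv = np.clip(image_uint8.astype(np.int16) - step, 0, 255)
    return adv.astype(np.uint8)

image = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
gradient = np.random.randn(*image.shape)                        # stand-in for d(score)/d(pixel)
adversarial = lsb_perturb(image, gradient)
print(np.abs(adversarial.astype(int) - image.astype(int)).max())  # at most 1 per pixel
```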
We consider the problem of protecting cloud services from simultaneous white-box and black-box attacks. Recent research in cryptographic program obfuscation considers the problem of protecting the confidentiality of programs and any secrets in them. In this model, a provable program obfuscation solution makes white-box attacks on the program no more useful than black-box attacks. Motivated by very recent results showing successful black-box attacks on machine learning programs run by cloud servers, we propose and study the approach of augmenting the program obfuscation model so as to achieve, in at least some class of application scenarios, program confidentiality in the presence of both white-box and black-box attacks. We propose and formally define encrypted-input program obfuscation, where a key is shared between the entity obfuscating the program and the entity encrypting the program's inputs. We believe this model may be of interest in practical scenarios where cloud programs operate over encrypted data received from associated sensors (e.g., Internet of Things, Smart Grid). Under standard intractability assumptions, we show various results that are not known in the traditional cryptographic program obfuscation model; most notably: Yao's garbled circuit technique implies encrypted-input program obfuscation hiding all gates of an arbitrary polynomial circuit, and very efficient encrypted-input program obfuscation exists for range membership programs and a class of machine learning programs (i.e., decision trees). The performance of the latter solutions incurs only a small constant overhead over the equivalent unobfuscated program.
Artificial intelligence systems have enabled significant benefits for users and society, but as the data that feed them keep growing, so do privacy and security risks. Severe threats to the right to privacy have obliged governments to enact specific regulations to ensure privacy preservation in any kind of transaction involving sensitive information. In the case of digital and/or physical documents containing sensitive information, the right to privacy can be preserved by data obfuscation procedures. The capability of recognizing sensitive information for obfuscation is typically entrusted to the experience of human experts, who are overwhelmed by the ever-increasing number of documents to process. Artificial intelligence could substantially reduce the effort of human officers and speed up these processes. However, until enough knowledge is available in a machine-readable format, effective automatic systems cannot be developed. In this work we propose a methodology for transferring and leveraging general knowledge across domain-specific tasks. We built, from scratch, domain-specific knowledge data sets for training artificial intelligence models that support human experts in privacy-preserving tasks. We applied a mixture of natural language processing techniques to unlabeled domain-specific document corpora to automatically obtain labeled documents in which sensitive information is recognized and tagged. We performed preliminary tests on just over 10,000 documents from the healthcare and justice domains, with human experts supporting the validation. The results, estimated in terms of precision, recall, and F1-score across these two domains, were promising and encourage further investigation.
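A minimal sketch of the kind of automatic labeling targeted by the methodology is shown below; the rule-based patterns and tags are simplified assumptions standing in for the mixture of NLP techniques described above.

```python
# Hedged sketch: rule-based weak labeling of sensitive spans in raw text, a
# simplified stand-in for the mixture of NLP techniques described above.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def tag_sensitive(text):
    labels = []
    for tag, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            labels.append((match.start(), match.end(), tag))
    return sorted(labels)   # character spans with their sensitivity tag

doc = "Patient admitted on 12/03/2021, contact mario.rossi@example.org or +39 055 1234567."
print(tag_sensitive(doc))
```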
Automatic image analysis, image classification, and automatic object recognition are active research areas in various fields of engineering. Many industrial and biological applications demand image analysis and classification. Sample images available for classification may be complex, image data may be inadequate, or component regions in the image may have poor visibility. With the available information, each digital image processing application has to analyze, classify, and recognize objects appropriately. Pre-processing, image segmentation, feature extraction, and classification are the most common steps in image classification. In this study we applied existing edge detection methods such as Roberts, Sobel, Prewitt, Canny, Otsu, and Laplacian of Gaussian to crab images. From this analysis of the edge detection operators, it is observed that the Sobel, Prewitt, and Roberts operators are the most suitable for enhancement. The paper proposes an Enhanced Sobel operator, an Enhanced Prewitt operator, and an Enhanced Roberts operator using morphological operations and masking. The novelty of the proposed approach is that it produces thick edges for the crab images and removes spurious edges with the help of m-connectivity. Parameters measuring the accuracy of the results are used to compare the existing edge detection operators with the proposed operators. The proposed approach shows better results than the existing edge detection operators.
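A rough sketch of the enhancement idea using OpenCV is shown below; the threshold and structuring element are assumptions, not the paper's exact Enhanced Sobel operator: Sobel gradients are thresholded, then morphological operations thicken the edges and suppress spurious responses.

```python
# Rough sketch of edge enhancement via Sobel + morphology (threshold and kernel
# are illustrative assumptions, not the paper's exact enhanced operator).
import cv2
import numpy as np

img = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(img, (64, 64), 30, 255, -1)                  # synthetic stand-in for a crab image

gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
magnitude = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
_, edges = cv2.threshold(magnitude, 50, 255, cv2.THRESH_BINARY)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
thick = cv2.dilate(edges, kernel)                        # thicken the detected edges
clean = cv2.morphologyEx(thick, cv2.MORPH_OPEN, kernel)  # drop small spurious responses
print(int(clean.sum() // 255), "edge pixels retained")
```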
Experimentation aimed at assessing the value of complex visualisation approaches compared with alternative methods for data analysis is challenging. The interplay between participants' prior knowledge and experience, a diverse range of experimental or real-world data sets, and dynamic interaction with the display system presents challenges when seeking timely, affordable, and statistically relevant experimental results. This paper outlines a hybrid approach for experimentation with complex interactive data analysis tools, specifically for computer network traffic analysis. The approach involves a structured survey completed after free engagement with the software platform by expert participants. The survey captures objective and subjective data points relating to the experience, with the goal of assessing software performance in a way that is supported by statistically significant experimental results. This work is particularly applicable to the field of network analysis for cyber security, as well as to military cyber operations and intelligence data analysis.
Analyzing multi-dimensional geospatial data is difficult, and immersive analytics systems are used to visualize geospatial data and models. There is little previous work evaluating when immersive and non-immersive visualizations are most suitable for data analysis, and more research is needed.