Biblio
In this paper, we present a semi-supervised remote sensing change detection method based on a graph model with Generative Adversarial Networks (GANs). First, the multi-temporal remote sensing change detection problem is converted into a semi-supervised learning problem on a graph that contains a majority of unlabeled nodes and a few labeled nodes. Then, GANs are adopted to generate samples in a competitive manner and help improve classification accuracy. Finally, a binary change map is produced by assigning each unlabeled node to a class with the help of both the labeled and unlabeled nodes on the graph. Experiments carried out on several very high resolution remote sensing image data sets demonstrate the effectiveness of our method.
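The abstract gives no implementation details; as a rough, minimal sketch of the graph-based semi-supervised step alone (omitting the GAN-generated samples), a standard label-propagation routine over a node similarity graph might look like the following. The feature construction, Gaussian affinity, and propagation parameters are assumptions for illustration, not the authors' method.

```python
import numpy as np

def label_propagation(features, labels, sigma=1.0, alpha=0.9, n_iter=50):
    """Semi-supervised label propagation on a dense similarity graph.

    features : (n, d) array of per-node feature vectors (e.g. difference
               features from two co-registered images) -- assumed input.
    labels   : (n,) array with 1 (changed), 0 (unchanged), -1 (unlabeled).
    Returns an (n,) array of predicted binary labels for all nodes.
    """
    n = len(labels)
    # Gaussian affinity between every pair of nodes.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetrically normalized propagation matrix.
    D = W.sum(1)
    S = W / np.sqrt(np.outer(D, D))
    # One-hot seed matrix; unlabeled rows start at zero.
    Y = np.zeros((n, 2))
    Y[labels == 0, 0] = 1.0
    Y[labels == 1, 1] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(1)
```

Thresholding the propagated scores with `argmax` yields the binary change map described in the abstract; the GAN component would add synthetic training samples on top of this step.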
Over the past few years, virtual and mixed reality systems have evolved significantly, yielding highly immersive experiences. Most of the metaphors used for interaction with the virtual environment do not provide the meaningful feedback that users are accustomed to in the real world. This paper proposes a cyber-glove to improve the immersive sensation and the degree of embodiment in virtual and mixed reality interaction tasks. In particular, we propose a cyber-glove system that tracks wrist movements, hand orientation and finger movements. It provides decoupled positions of the wrist and hand, which can contribute to better embodiment in interaction and manipulation tasks. Additionally, detecting the curvature of the fingers aims to make the proprioceptive perception of grasping/releasing gestures more consistent with the visual feedback. The cyber-glove system is being developed for VR applications related to real estate promotion, where users walk through the rooms of a house and interact with objects and furniture. This work aims to assess whether glove-based systems can contribute to a higher sense of immersion, embodiment and usability when compared to standard VR hand controller devices (typically button-based). Twenty-two participants tested the cyber-glove system against the HTC Vive controller in a 3D manipulation task, specifically the opening of a virtual door. Metric results showed that 83% of the users performed faster door pushes and traced shorter hand paths while wearing the cyber-glove. Subjective results showed that all participants rated the cyber-glove based interactions as equally or more natural, and 90% of users experienced an equal or significantly greater sense of embodiment.
Mixed reality (MR) technologies are widely used in distributed collaborative learning scenarios and have made learning and training more flexible and intuitive. However, there are many challenges in the use of MR due to the difficulty of creating a physical presence, particularly when a physical task is performed collaboratively. We therefore developed a novel MR system to overcome these limitations and enhance the distributed collaboration user experience. The primary objective of this paper is to explore the potential of an MR-based hand-gesture system to enhance the conceptual architecture of MR in terms of both visualization and interaction in distributed collaboration. We propose a synchronous prototype named MRCollab as an immersive collaborative approach that allows two or more users to communicate with a peer based on the integration of several technologies such as video, audio, and hand gestures.
This paper discusses two pieces of software designed for intelligence analysis: the brainstorming tool and the Scenario Planning Advisor. These tools were developed in the Cognitive Immersive Systems Lab (CISL) in conjunction with IBM. We discuss the immersive environment in which the tools are situated and their proposed benefits for intelligence analysis.
Social media has become one of the most effective and accurate indicators of public opinion. This paper discusses a strategy that uses Twitter data to infer public opinion. The public expresses sentiments about specific entities with varying strengths and intensities, and these sentiments are strongly related to their personal moods and emotions. Many methods and lexical resources have been proposed for analyzing the sentiments expressed in natural language texts covering a variety of opinions. This article proposes an approach for boosting Twitter sentiment classification using various sentiment proportions as meta-level features. The analysis was performed on tweets about the iPhone 6.
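As an illustration of the general idea of sentiment proportions as meta-level features (not the article's exact feature set or classifier), the sketch below combines lexicon-based polarity scores from NLTK's VADER analyzer with TF-IDF n-grams and feeds both to a logistic regression model. The helper name `build_features` and all parameter choices are hypothetical.

```python
# Requires scikit-learn and nltk; run nltk.download('vader_lexicon') once.
import numpy as np
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack, csr_matrix

def build_features(tweets, vectorizer=None):
    """Concatenate TF-IDF n-grams with lexicon sentiment proportions."""
    sia = SentimentIntensityAnalyzer()
    # Meta-level features: negative/neutral/positive proportions and
    # the compound score for each tweet.
    meta = np.array([[s["neg"], s["neu"], s["pos"], s["compound"]]
                     for s in (sia.polarity_scores(t) for t in tweets)])
    if vectorizer is None:
        vectorizer = TfidfVectorizer(min_df=2, ngram_range=(1, 2))
        bow = vectorizer.fit_transform(tweets)
    else:
        bow = vectorizer.transform(tweets)
    return hstack([bow, csr_matrix(meta)]), vectorizer

# Hypothetical usage with labeled tweets about a product:
# X_train, vec = build_features(train_tweets)
# clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
# X_test, _ = build_features(test_tweets, vec)
# predictions = clf.predict(X_test)
```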
Reconnaissance can be the longest phase of an attack, sometimes taking weeks or months. The black hat makes use of passive information gathering techniques. Once the attacker has gathered sufficient information, they begin scanning perimeter and internal network devices, seeking out open ports and the services behind them. In this paper, we measure the traffic generated and the time required to complete specific tasks during active scanning with the nmap tool in the reconnaissance phase, and we propose strategies for dealing with large volumes of hosts while conserving both network traffic and task time.
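The paper's exact measurement setup is not given; as a minimal sketch of one way to account for scan time per task, the snippet below splits a large address space into blocks and times an nmap run for each block. The flags are ordinary nmap options chosen for illustration, and the `timed_scan` helper and target blocks are hypothetical.

```python
import subprocess
import time

def timed_scan(target, top_ports=100):
    """Run an nmap scan against one target block and report elapsed time.

    -T4 is a standard timing template, --top-ports limits the probed ports,
    and -n skips DNS resolution; these are illustrative choices, not the
    settings used in the paper.
    """
    cmd = ["nmap", "-T4", "-n", "--top-ports", str(top_ports), target]
    start = time.perf_counter()
    result = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.perf_counter() - start
    return elapsed, result.stdout

# Hypothetical usage: split a large address space into /24 blocks and scan
# them one at a time to keep per-task traffic and completion time bounded.
for block in ["192.168.0.0/24", "192.168.1.0/24"]:
    seconds, report = timed_scan(block)
    print(f"{block}: scanned in {seconds:.1f}s")
```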
To be prepared against cyberattacks, most organizations resort to security information and event management systems to monitor their infrastructures. These systems depend on the timeliness and relevance of the latest updates, patches and threats provided by cyberthreat intelligence feeds. Open source intelligence platforms, namely social media networks such as Twitter, are capable of aggregating a vast number of cybersecurity-related sources. To process such information streams, we require scalable and efficient tools capable of identifying and summarizing relevant information for specified assets. This paper presents the processing pipeline of a novel tool that uses deep neural networks to process cybersecurity information received from Twitter. A convolutional neural network identifies tweets containing security-related information relevant to assets in an IT infrastructure. Then, a bidirectional long short-term memory network extracts named entities from these tweets to form a security alert or to fill an indicator of compromise. The proposed pipeline achieves an average 94% true positive rate and 91% true negative rate for the classification task and an average F1-score of 92% for the named entity recognition task, across three case study infrastructures.
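As a rough sketch of the two pipeline stages described above, a Keras version might look like the following; the layer sizes, vocabulary size, and tag set are placeholders, not the paper's architecture or hyperparameters.

```python
# Minimal sketch of the two stages: a CNN tweet classifier followed by a
# bidirectional LSTM tagger for named entity extraction.
from tensorflow.keras import layers, models

VOCAB_SIZE, NUM_NER_TAGS = 20000, 9  # assumed values, not from the paper

# Stage 1: CNN that flags tweets as security-relevant (binary output).
classifier = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Conv1D(128, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
classifier.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])

# Stage 2: bidirectional LSTM that tags each token of a relevant tweet
# (e.g. product, version, vulnerability identifier) for entity extraction.
ner_model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128, mask_zero=True),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(NUM_NER_TAGS, activation="softmax")),
])
ner_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```

In this sketch, only tweets scored as relevant by the first model would be passed to the second model, whose token-level tags would then populate a security alert or an indicator of compromise.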
Often, analysts face a challenging situation when formally verifying the implementation of a security protocol: they need to build a model of the protocol from poorly documented or undocumented code, with little or no help from the developers to better understand it. Security protocol implementations frequently use services provided by libraries coded in the C programming language; automatic tools for code-level reverse engineering offer good support for comprehending the behavior of code in object-oriented languages but are ineffective for dealing with libraries in C. Here we propose a systematic, yet human-dependent, approach that combines the capabilities of state-of-the-art tools to help the analyst retrieve, step by step, the security protocol specifications from a library in C. Those specifications can then be used to create the formal model needed to carry out the analysis.
We consider the possibility of detecting malicious behaviors of an advanced persistent threat (APT) at endpoints during incident response or forensic investigations. Specifically, we study the case where third-party sensors are not available; our observables are obtained solely from inherent digital artifacts of Windows operating systems. Of particular interest is an artifact called the Application Compatibility Cache (Shimcache). As it is not apparent from the Shimcache when a file was executed, we propose an algorithm for estimating the time of file execution up to an interval. We also provide guarantees on the proposed algorithm's performance and describe various possible extensions that can improve the estimation. Finally, combining this approach with machine learning methods, as well as information from other digital artifacts, we design a prototype system called XTEC and demonstrate that it can help hunt for the APT in a real-world case study.
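The abstract does not spell out the estimation algorithm; one plausible reading of "up to an interval" is that, because Shimcache entries are kept in insertion order, an entry's execution time can be bounded by the timestamps of neighboring entries whose times are known from other artifacts. The sketch below implements only that generic bounding idea under that assumption and is not the paper's algorithm; the entry structure and helper names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class ShimcacheEntry:
    path: str
    # Insertion/execution time corroborated by another artifact
    # (e.g. Prefetch or event logs); None when unknown.
    known_time: Optional[datetime] = None

def estimate_intervals(entries: List[ShimcacheEntry]
                       ) -> List[Tuple[Optional[datetime], Optional[datetime]]]:
    """Bound each entry's insertion time by the nearest anchored neighbors.

    Assumes `entries` is ordered from most recently inserted to oldest, so an
    entry's time lies between the known time of the closest older entry
    (lower bound) and that of the closest newer entry (upper bound).
    """
    n = len(entries)
    lower: List[Optional[datetime]] = [None] * n
    upper: List[Optional[datetime]] = [None] * n
    last_known = None
    for i in range(n - 1, -1, -1):        # walk oldest -> newest
        lower[i] = entries[i].known_time or last_known
        if entries[i].known_time:
            last_known = entries[i].known_time
    last_known = None
    for i in range(n):                     # walk newest -> oldest
        upper[i] = entries[i].known_time or last_known
        if entries[i].known_time:
            last_known = entries[i].known_time
    return list(zip(lower, upper))
```

An open bound (None) simply means no anchored neighbor exists on that side; additional artifacts tighten the intervals by supplying more anchors.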