Biblio
Exfiltrating sensitive information from smartphones has become one of the most significant security threats. We have built a system to identify HTTP-based information exfiltration by malicious Android applications. In this paper, we discuss a method to track the propagation of sensitive information in Android applications using static taint analysis. We have studied the leaked information, the destinations to which it is exfiltrated, and their correlations with the types of sensitive information. The analysis results based on 578 malicious Android applications have revealed that a significant portion of these applications are interested in identity-related sensitive information. The vast majority of malicious applications leak multiple types of sensitive information. We have also identified that servers associated with three country codes, CN, US, and SG, are the most active in collecting sensitive information. The analysis results have also demonstrated that a wide range of non-default ports are used by suspicious URLs.
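To make the taint-tracking idea above concrete, the following minimal sketch propagates taint forward through a toy three-address program from sensitive sources to network sinks. The statement format, the source/sink API names, and the example statements are hypothetical illustrations, not the system's actual implementation.

    # Minimal forward taint propagation over a toy three-address program.
    # Source/sink API names and the example statements are hypothetical.
    SOURCES = {"getDeviceId", "getLastKnownLocation"}        # taint-introducing APIs (assumed)
    SINKS = {"HttpURLConnection.write", "WebView.loadUrl"}   # network sinks (assumed)

    # Each statement: (target_variable, operation, argument_variables)
    program = [
        ("imei", "getDeviceId", []),
        ("body", "concat", ["imei"]),
        ("resp", "HttpURLConnection.write", ["body"]),
    ]

    def taint_analysis(stmts):
        tainted, leaks = set(), []
        for target, op, args in stmts:
            if op in SOURCES:
                tainted.add(target)                  # a source taints its result
            elif any(a in tainted for a in args):
                if op in SINKS:
                    leaks.append((op, args))         # tainted data reaches a network sink
                else:
                    tainted.add(target)              # taint propagates through the assignment
        return leaks

    print(taint_analysis(program))                   # [('HttpURLConnection.write', ['body'])]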
The recent emergence of smartphones, cloud computing, and the Internet of Things has brought about an explosion of data creation. By collating and merging these enormous volumes of data with other information, data-driven services become more sophisticated and advanced. At the same time, however, consideration of the privacy violations caused by such merging is indispensable. Various anonymization methods have been proposed to preserve privacy. The conventional perturbation-based anonymization of location data adds comparatively large noise, which makes it difficult to use the data effectively for secondary purposes. In this research, to solve these problems, we first clarify the definition of privacy preservation and then propose TMk-anonymity according to that definition.
Distinguishing and classifying different types of malware is important to better understand how they can infect computers and devices, the threat level they pose, and how to protect against them. In this paper, a system for classifying malware programs is presented. The paper describes the architecture of the system and assesses its performance on a publicly available database (provided by Microsoft for the Microsoft Malware Classification Challenge BIG2015) to serve as a benchmark for future research efforts. First, the malicious programs are preprocessed so that they are visualized as grayscale images. We then make use of an architecture comprised of multiple layers (multiple levels of encoding) to classify those images/programs. We compare the performance of this approach against traditional machine learning and pattern recognition algorithms. Our experimental results show that the deep learning architecture yields a boost in performance over those conventional/standard algorithms. A hold-out validation analysis using the superior architecture shows an accuracy of 99.15%.
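As a rough illustration of the preprocessing step described above, the sketch below converts a binary's raw bytes into a grayscale image; the file path, image width, and downstream classifier are assumptions for illustration rather than the paper's exact pipeline.

    # Convert a malware binary's raw bytes into a 2-D grayscale image (one byte = one pixel).
    import numpy as np

    def binary_to_grayscale(path, width=256):        # width is an arbitrary choice
        data = np.fromfile(path, dtype=np.uint8)
        rows = len(data) // width
        return data[: rows * width].reshape(rows, width)

    # img = binary_to_grayscale("sample.bytes")      # hypothetical input file
    # The image (resized to a fixed shape) can then be fed to a multi-layer
    # convolutional network, or to a conventional classifier for comparison.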
Social networking sites such as Flickr, YouTube, and Facebook contain huge amounts of user-contributed data for a variety of real-world events. We describe an unsupervised approach to the problem of automatically detecting subgroups of people holding similar tastes or opinions. Item and taste tags play an important role in detecting groups or subgroups: if two or more persons share the same opinion on an item or taste, they tend to use similar content. We consider the latter to be an implicit attitude. In this paper, we investigate the impact of implicit and explicit attitudes in two genres of social media discussion data: the more formal Wikipedia discussions and a much more informal debate discussion forum. Experimental results strongly suggest that implicit attitude is an important complement to explicit attitudes (expressed via sentiment) and that it can improve subgroup detection performance independent of genre. Here, we propose taste-based groups, which can enhance the quality of service.
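The implicit-attitude idea above can be sketched as grouping users by the overlap of the tags or content they use; the user-tag data, similarity threshold, and greedy single-link grouping below are illustrative assumptions, not the paper's algorithm.

    # Group users whose tag usage overlaps (Jaccard similarity above a threshold).
    from itertools import combinations

    users = {                                        # hypothetical tag usage per user
        "u1": {"camera", "travel", "hdr"},
        "u2": {"travel", "hdr", "landscape"},
        "u3": {"politics", "debate"},
        "u4": {"debate", "politics", "news"},
    }

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    groups = {u: {u} for u in users}
    for a, b in combinations(users, 2):
        if jaccard(users[a], users[b]) > 0.3:        # threshold chosen arbitrarily
            merged = groups[a] | groups[b]
            for u in merged:
                groups[u] = merged

    print({frozenset(g) for g in groups.values()})   # two subgroups: {u1, u2} and {u3, u4}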
Vehicular ad hoc networks (VANETs) are designed to provide traffic safety by exploiting inter-vehicular communications. Vehicles build awareness of traffic in their surroundings using information broadcast by other vehicles, such as speed, location, and heading, to proactively avoid collisions. The effectiveness of these VANET traffic safety applications is particularly dependent on the accuracy of the location information advertised by each vehicle. Therefore, traffic safety can be compromised when Sybil attackers maliciously advertise false locations or when otherwise inaccurate GPS readings are sent. The most effective way to detect a Sybil attack or correct the noise in GPS readings is to localize vehicles based on the physical features of their transmission signals. Current localization techniques either are designed for networks whose nodes are immobile or suffer from inaccuracy in high-interference environments. In this paper, we present an RSSI-based localization technique that uses mobile nodes to localize another mobile node and adjusts itself based on the heterogeneous interference levels in the environment. We show via simulation that our localization mechanism is more accurate than other mechanisms and more resistant to environments with high interference and mobility.
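For intuition, the sketch below shows one common form of RSSI-based localization: ranges are estimated with a log-distance path-loss model and the position is recovered by linear least squares over several reference nodes. The transmit power, path-loss exponent, anchor positions, and measurements are assumed values, and the sketch omits the paper's interference- and mobility-aware adjustments.

    # Estimate ranges from RSSI with a log-distance path-loss model, then solve
    # the position by linear least squares over three reference nodes.
    import numpy as np

    TX_POWER = -40.0       # RSSI at the 1 m reference distance (dBm), assumed
    PATH_LOSS_EXP = 2.7    # path-loss exponent, assumed

    def rssi_to_distance(rssi):
        return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_EXP))

    def trilaterate(anchors, distances):
        # Linearize (x - xi)^2 + (y - yi)^2 = di^2 against the last anchor and solve.
        (xn, yn), dn = anchors[-1], distances[-1]
        A, b = [], []
        for (xi, yi), di in zip(anchors[:-1], distances[:-1]):
            A.append([2 * (xn - xi), 2 * (yn - yi)])
            b.append(di**2 - dn**2 - xi**2 + xn**2 - yi**2 + yn**2)
        pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return pos

    anchors = [(0, 0), (50, 0), (0, 50)]             # positions of the locating nodes (assumed)
    true_dists = [np.hypot(30 - x, 20 - y) for x, y in anchors]
    rssi = [TX_POWER - 10 * PATH_LOSS_EXP * np.log10(d) for d in true_dists]
    print(trilaterate(anchors, [rssi_to_distance(r) for r in rssi]))   # ~[30, 20]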
Real-time localization of mobile targets has attracted much attention in recent years. Because GPS signals are unavailable in complex environments, this paper applies wireless sensor networks to locate and track mobile targets in real time. Multiple wireless signals are used to weaken the effect of abnormal wireless signals in some areas. To verify the real-time localization performance for mobile targets, experiments and analyses are carried out. The experimental results show that the proposed localization method can provide an experimental basis for applications such as garages, shopping centers, and underwater environments.
Active authentication is the problem of continuously verifying the identity of a person based on behavioral aspects of their interaction with a computing device. In this paper, we collect and analyze behavioral biometrics data from 200 subjects, each using their personal Android mobile device for a period of at least 30 days. This data set is novel in the context of active authentication due to its size, duration, number of modalities, and absence of restrictions on tracked activity. The geographical colocation of the subjects in the study is representative of a large closed-world environment such as an organization where the unauthorized user of a device is likely to be an insider threat: coming from within the organization. We consider four biometric modalities: 1) text entered via soft keyboard, 2) applications used, 3) websites visited, and 4) physical location of the device as determined from GPS (when outdoors) or WiFi (when indoors). We implement and test a classifier for each modality and organize the classifiers as a parallel binary decision fusion architecture. We are able to characterize the performance of the system with respect to intruder detection time and to quantify the contribution of each modality to the overall performance.
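The parallel binary decision fusion architecture can be illustrated with a small sketch in which each modality classifier emits an accept/reject decision and the fused decision is a weighted vote; the decisions and weights below are hypothetical placeholders rather than the trained classifiers from the study.

    # Fuse per-modality accept/reject decisions with a weighted vote.
    def fuse_decisions(decisions, weights=None):
        # decisions: dict modality -> 1 (owner) or 0 (intruder)
        weights = weights or {m: 1.0 for m in decisions}
        score = sum(weights[m] * d for m, d in decisions.items())
        return 1 if score >= sum(weights.values()) / 2 else 0

    window = {"keyboard": 1, "apps": 0, "web": 1, "location": 1}   # one decision window
    print(fuse_decisions(window))    # 1 -> treat the current user as the device owner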
Authentication is one of the key aspects of securing applications and systems alike. While in most existing systems this is achieved using usernames and passwords, it has been repeatedly shown that this authentication method is not secure. Studies have shown that these systems have vulnerabilities that lead to cases of impersonation and identity theft, so there is a need to improve such systems to protect sensitive data. In this research, we explore combining the user's location with traditional usernames and passwords as a multi-factor authentication system to make authentication more secure. The idea involves comparing the user's mobile device location with that of the browser and comparing the device's Bluetooth key with the key used during registration. We believe that by leveraging existing technologies such as Bluetooth and GPS we can reduce implementation costs while improving security.
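A minimal sketch of the proposed multi-factor check might look as follows: a login is accepted only if the password matches, the mobile device's GPS position is close to the browser's reported location, and the presented Bluetooth key equals the one stored at registration. The distance threshold, haversine helper, and plaintext credential record are simplifying assumptions for illustration.

    import hmac
    import math

    def haversine_km(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = math.sin((lat2 - lat1) / 2) ** 2 + \
            math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * math.asin(math.sqrt(h))

    def authenticate(record, password, device_gps, browser_loc, bt_key, max_km=1.0):
        return (hmac.compare_digest(record["password"], password)      # factor 1: password
                and haversine_km(device_gps, browser_loc) <= max_km    # factor 2: location match
                and hmac.compare_digest(record["bt_key"], bt_key))     # factor 3: Bluetooth key

    # Hypothetical record; a real system would store a password hash, not plaintext.
    record = {"password": "s3cret", "bt_key": "AA:BB:CC:DD"}
    print(authenticate(record, "s3cret", (1.290, 36.820), (1.293, 36.822), "AA:BB:CC:DD"))  # True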
Given the complexities involved in the sensing, navigational, and positioning environment on board automated vehicles, we conduct an exploratory survey and identify factors capable of influencing users' trust in such systems. After analyzing the survey data, the Situational Awareness of the Vehicle (SAV) emerges as an important factor capable of influencing users' trust. We follow up by conducting semi-structured interviews with 12 experts in the CAV field, focusing on the importance of the SAV, the factors that matter most when discussing it, and the need to keep users informed of its status. We conclude that in the context of Connected and Automated Vehicles (CAVs), the importance of the SAV can now be expanded beyond its technical necessity of making vehicles function to a human factors area: calibrating users' trust.
Cyber-Physical Systems (CPS) such as Unmanned Aerial Systems (UAS) sense and actuate their environment in pursuit of a mission. The attack surface of these remotely located, sensing and communicating devices is both large, and exposed to adversarial actors, making mission assurance a challenging problem. While best-practice security policies should be followed, they are rarely enough to guarantee mission success as not all components in the system may be trusted and the properties of the environment (e.g., the RF environment) may be under the control of the attacker. CPS must thus be built with a high degree of resilience to mitigate threats that security cannot alleviate. In this paper, we describe the Agile and Resilient Embedded Systems (ARES) methodology and metric set. The ARES methodology pursues cyber security and resilience (CSR) as high level system properties to be developed in the context of the mission. An analytic process guides system developers in defining mission objectives, examining principal issues, applying CSR technologies, and understanding their interactions.
Authentication of smartphone users is important because a lot of sensitive data is stored in the smartphone and the smartphone is also used to access various cloud data and services. However, smartphones are easily stolen or co-opted by an attacker. Beyond the initial login, it is highly desirable to re-authenticate end-users who are continuing to access security-critical services and data. Hence, this paper proposes a novel authentication system for implicit, continuous authentication of the smartphone user based on behavioral characteristics, by leveraging the sensors already ubiquitously built into smartphones. We propose novel context-based authentication models to differentiate the legitimate smartphone owner versus other users. We systematically show how to achieve high authentication accuracy with different design alternatives in sensor and feature selection, machine learning techniques, context detection and multiple devices. Our system can achieve excellent authentication performance with 98.1% accuracy with negligible system overhead and less than 2.4% battery consumption.
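As a rough sketch of sensor-based continuous authentication of this kind, the example below extracts simple statistical features from accelerometer windows and trains a classifier to separate the owner from other users; the synthetic data, feature set, and choice of a random forest are assumptions for illustration, not the paper's models.

    # Classify windows of accelerometer features as "owner" vs "other user".
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def window_features(samples):
        # Simple statistical features over one sensor window: mean and std per axis.
        return np.concatenate([samples.mean(axis=0), samples.std(axis=0)])

    # Synthetic stand-in data: 200 owner windows and 200 other-user windows.
    owner  = [window_features(rng.normal(0.0, 1.0, (128, 3))) for _ in range(200)]
    others = [window_features(rng.normal(0.5, 1.5, (128, 3))) for _ in range(200)]
    X = np.vstack(owner + others)
    y = np.array([1] * 200 + [0] * 200)

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    new_window = window_features(rng.normal(0.0, 1.0, (128, 3)))
    print("owner" if clf.predict([new_window])[0] == 1 else "re-authenticate")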
Users rely on smartphones for web surfing and browsing data. Many smartphones are equipped with a built-in location-aware system, GPS (Global Positioning System). To use GPS-based services, the user must register with and share private information with an LBS (Location Based Service) server: the user sends a query to the LBS server, and the server returns information tied to that particular user's location. Because this information can be misused, we use a mobile crowd method to hide the user's location from the LBS server and avoid sharing private information with the server. Our solution does not require any change to the LBS server architecture.
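The mobile crowd idea can be sketched as follows: rather than querying the LBS server directly, the user hands the query to a randomly chosen nearby peer, which submits it under its own identity, so the server never links the query to the real user's location. The peer discovery, query format, and stand-in LBS function below are hypothetical.

    import random

    class Peer:
        def __init__(self, name):
            self.name = name
        def forward(self, query, lbs):
            # The peer submits the query under its own identity.
            return lbs(query, requester=self.name)

    def lbs_server(query, requester):
        # Stand-in LBS: it only learns which peer asked, not the real user.
        return f"results for '{query}' (seen as coming from {requester})"

    nearby_peers = [Peer("peer-A"), Peer("peer-B"), Peer("peer-C")]
    relay = random.choice(nearby_peers)              # pick a random nearby peer as the relay
    print(relay.forward("coffee shops near cell 42", lbs_server))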
Estimating the misalignment angles of a strapdown inertial navigation system (INS) using global positioning system (GPS) data is highly affected by measurement noise, especially noise with time-varying statistical properties. Hence, an adaptive filtering approach is recommended to improve the accuracy of in-motion alignment. In this paper, a simplified form of Celso's adaptive stochastic filtering is derived and applied to estimate both the INS error states and the measurement noise statistics. To detect and bound the influence of outliers in the INS/GPS integration, outlier detection based on a jerk tracking model is also proposed. The accuracy and validity of the proposed algorithm are tested through ground-based navigation experiments.
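Although the paper's filter is specific (a simplified form of Celso's adaptive stochastic filtering with a jerk tracking model), the generic sketch below illustrates the two underlying ideas in one dimension: adapting the measurement-noise covariance from the innovation sequence, and gating outliers with a chi-square test on the normalized innovation. The constant-velocity model, window length, and gate threshold are assumptions, not the paper's algorithm.

    import numpy as np

    np.random.seed(0)
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state: [position, velocity]
    H = np.array([[1.0, 0.0]])                       # GPS-like position measurement
    Q = np.diag([0.01, 0.01])
    x, P, R = np.zeros(2), np.eye(2), np.array([[4.0]])
    innovations = []

    def step(z):
        global x, P, R
        x, P = F @ x, F @ P @ F.T + Q                          # predict
        y = z - H @ x                                          # innovation
        S = H @ P @ H.T + R
        if float(y @ np.linalg.inv(S) @ y) > 9.0:              # chi-square gate (1 dof, ~3 sigma)
            return x                                           # reject outlier, skip the update
        innovations.append(y)
        if len(innovations) >= 10:                             # innovation-based estimate of R
            win = np.hstack(innovations[-10:])
            R = np.maximum(np.array([[np.mean(win ** 2)]]) - H @ P @ H.T, 1e-3)
        K = P @ H.T @ np.linalg.inv(S)                         # update
        x, P = x + K @ y, (np.eye(2) - K @ H) @ P
        return x

    for t in range(50):
        z = np.array([2.0 * t + np.random.normal(0, 2)])       # noisy position of a 2 m/s target
        if t == 25:
            z = z + 100.0                                      # injected GPS outlier
        step(z)
    print(x)                                                   # roughly [98, 2] despite the outlier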
The Department of Energy seeks to modernize the U.S. electric grid through the SmartGrid initiative, which includes the use of Global Positioning System (GPS)-timing dependent electric phasor measurement units (PMUs) for continual monitoring and automated controls. The U.S. Department of Homeland Security is concerned with the associated risks of increased utilization of GPS timing in the electricity subsector, which could in turn affect a large number of electricity-dependent Critical Infrastructure (CI) sectors. Exploiting the vulnerabilities of GPS systems in the electricity subsector can result in large-scale and costly blackouts. This paper analyzes the risks of the electric grid's increased dependence on GPS through the introduction of PMUs and provides a systems engineering perspective on the GPS-dependent System of Systems (S-o-S) created by the SmartGrid initiative. The team started by defining and modeling the S-o-S, followed by the use of a risk analysis methodology to identify and measure risks and evaluate solutions for mitigating their effects. The team expects that the designs and models resulting from the study will prove useful for determining both current and future risks to GPS-dependent CI sectors along with the appropriate countermeasures as the United States moves towards a SmartGrid system.