Biblio
In this paper, we present a security and privacy enhancement (SPE) framework for unmodified mobile operating systems. SPE introduces a new layer between the application and the operating system and does not require that a device be jailbroken or run a custom operating system. We utilize an existing ontology designed for enforcing security and privacy policies on mobile devices to build a customizable policy. Based on this policy, SPE provides enhancements to the native controls that currently exist on the platform for privacy- and security-sensitive components. SPE mediates access to these components in a way that allows the framework to verify that the application is truthful in its declared intent and to ensure that the user's policy is enforced. In our evaluation we verify the correctness of the framework and its computational impact on the device. Additionally, we discovered security and privacy issues in several open source applications by utilizing the SPE framework. Our findings suggest that, if SPE is adopted by mobile operating system producers, it would provide consumers and businesses the additional privacy and security controls they demand and allow users to be more aware of security and privacy issues with applications on their devices.
In this paper, an innovative approach to keyboard user monitoring (authentication), using keyboard dynamics and founded on the concept of time series analysis, is presented. The work is motivated by the need for robust authentication mechanisms in the context of on-line assessment such as those featured in many online learning platforms. Four analysis mechanisms are considered: analysis of keystroke time series in their raw form (without any translation), analysis consequent to translating the time series into a more compact form using either the Discrete Fourier Transform or the Discrete Wavelet Transform, and a "benchmark" feature vector representation of the form typically used in previous related work. All four mechanisms are fully described and evaluated. A best authentication accuracy of 99% was obtained using the wavelet transform.
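A minimal sketch of the kind of time-series compression the abstract above describes: a keystroke timing series is reduced with the Discrete Fourier Transform and a one-level Haar wavelet transform, and compared against an enrolled profile. The choice of key-hold times as the series, the number of retained coefficients, and the Euclidean comparison are illustrative assumptions, not the paper's exact pipeline.

```python
# Compress a keystroke timing series with the DFT and a one-level Haar DWT,
# then compare it against a stored profile (illustrative, simplified).
import numpy as np

def dft_features(series, k=8):
    """Keep the first k Fourier coefficients as a compact representation."""
    return np.fft.rfft(np.asarray(series, dtype=float))[:k]

def haar_dwt(series):
    """One level of the Haar wavelet transform: approximation + detail."""
    x = np.asarray(series, dtype=float)
    if len(x) % 2:                      # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def distance(profile, sample):
    """Euclidean distance between two compressed representations."""
    return float(np.linalg.norm(profile - sample))

# Example: key-hold times (ms) from an enrolled profile and a new session.
enrolled = [95, 110, 102, 98, 120, 105, 99, 111]
session  = [97, 108, 104, 99, 118, 107, 101, 110]
print(distance(dft_features(enrolled), dft_features(session)))
print(distance(haar_dwt(enrolled)[0], haar_dwt(session)[0]))
```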
Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed in an analysis environment. Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts. We observe that as the fidelity and transparency of dynamic malware analysis systems improve, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the "wear and tear" that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks to how realistic its past use looks, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system's age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.
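A minimal sketch of the decision-tree idea in the abstract above: a shallow tree trained on coarse usage artifacts separates well-used devices from freshly provisioned sandbox images. The feature names and toy data points are assumptions for illustration only, not the artifacts measured in the paper.

```python
# Train a shallow decision tree on "wear-and-tear" style features to label a
# host as a real device (1) or an analysis environment (0).
from sklearn.tree import DecisionTreeClassifier

# Each row: [browser_history_entries, installed_apps,
#            dns_cache_entries, days_since_os_install]
X = [
    [5200, 84, 310, 730],   # real, well-used machine
    [4100, 63, 250, 365],   # real
    [12,   18,  9,   3],    # freshly provisioned sandbox image
    [0,    15,  4,   1],    # sandbox
]
y = [1, 1, 0, 0]

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(clf.predict([[8, 20, 6, 2]]))   # likely classified as a sandbox
```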
Compromised smart meters reporting false power consumption data in Advanced Metering Infrastructure (AMI) may have drastic consequences on a smart grid's operations. Most existing works only deal with electricity theft from customers. However, several other types of data falsification attacks are possible when meters are compromised by organized rivals. In this paper, we first propose a taxonomy of possible data falsification strategies, such as additive, deductive, camouflage and conflict attacks, in AMI micro-grids. Then, we devise a statistical anomaly detection technique to identify the incidence of the proposed attack types by studying their impact on the observed data. Subsequently, a trust model based on Kullback-Leibler divergence is proposed to identify compromised smart meters under additive and deductive attacks. The resultant detection rates and false alarms are minimized through a robust aggregate measure that is calculated based on the detected attack type and successfully discriminates legitimate changes from malicious ones. For conflict and camouflage attacks, a generalized linear model and a Weibull-function-based kernel trick applied over the trust score facilitate more accurate classification. Using real data sets collected from AMI, we investigate several trade-offs that occur between the attacker's revenue and costs, as well as the margin of false data and the fraction of compromised nodes. Experimental results show that our model has a high true positive detection rate, while the average false alarm rate is just 8% for most practical attack strategies, without depending on expensive hardware-based monitoring.
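A minimal sketch of a Kullback-Leibler based trust score of the kind mentioned above: a meter's reported consumption distribution is compared against a historical baseline, and a larger divergence lowers the trust value. The binning, smoothing, and the mapping from divergence to a trust score are illustrative assumptions, not the paper's model.

```python
# Compare reported readings against a baseline distribution via KL divergence
# and map the divergence into a (0, 1] trust score (illustrative sketch).
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def trust_score(reported, baseline, bins=10):
    hist_b, edges = np.histogram(baseline, bins=bins)
    hist_r, _ = np.histogram(reported, bins=edges)
    return 1.0 / (1.0 + kl_divergence(hist_r, hist_b))

rng = np.random.default_rng(0)
baseline  = rng.normal(3.0, 0.5, 500)   # kWh readings, honest history
honest    = rng.normal(3.0, 0.5, 100)
deductive = honest - 1.0                # under-reporting attack
print(round(trust_score(honest, baseline), 3))     # close to 1
print(round(trust_score(deductive, baseline), 3))  # noticeably lower
```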
In this paper we conduct an empirical study with the purpose of identifying common software weaknesses of embedded devices used as part of industrial control systems in power grids. Data is gathered about the devices and software of six companies: ABB, General Electric, Schneider Electric, Schweitzer Engineering Laboratories, Siemens and Wind River. The study uses data from the manufacturers' online databases, NVD, CWE and ICS-CERT. We identified that the most commonly reported problems are related to improper input validation, cryptographic issues, and programming errors.
Transport Layer Security (TLS) has become the de facto standard for secure Internet communication. When used correctly, it provides secure data transfer, but used incorrectly, it can leave users vulnerable to attacks while giving them a false sense of security. Numerous efforts have studied the adoption of TLS (and its predecessor, SSL) and its use in the desktop ecosystem, along with attacks and vulnerabilities in both desktop clients and servers. However, there is a dearth of knowledge of how TLS is used on mobile platforms. In this paper we use data collected by Lumen, a mobile measurement platform, to analyze how 7,258 Android apps use TLS in the wild. We analyze and fingerprint handshake messages to characterize the TLS APIs and libraries that apps use, and also evaluate their weaknesses. We find that about 84% of apps use the default OS APIs for TLS. Many apps use third-party TLS libraries; in some cases they are forced to do so because of restricted Android capabilities. Our analysis shows that both approaches have limitations, and that improving TLS security on mobile is not straightforward. Apps that use their own TLS configurations may have vulnerabilities due to developer inexperience, but apps that use OS defaults are vulnerable to certain attacks if the OS is out of date, even if the apps themselves are up to date. We also study certificate verification and see low prevalence of security measures such as certificate pinning, even among high-risk apps such as those providing financial services, though we did observe major third-party tracking and advertisement services deploying certificate pinning.
Quantum Key Distribution (QKD) is a revolutionary technology which leverages the laws of quantum mechanics to distribute cryptographic keying material between two parties with theoretically unconditional security. Terrestrial QKD systems are limited to distances of less than 200 km in both optical fiber and line-of-sight free-space configurations due to severe losses during single photon propagation and the curvature of the Earth. Thus, the feasibility of fielding a low Earth orbit (LEO) QKD satellite to overcome this limitation is being explored. Moreover, in August 2016, the Chinese Academy of Sciences successfully launched the world's first QKD satellite. However, many of the practical engineering performance and security tradeoffs associated with space-based QKD are not well understood for global secure key distribution. This paper presents several system-level considerations for modeling and studying space-based QKD architectures and systems. More specifically, this paper explores the behaviors and requirements that researchers must examine to develop a model for studying the effectiveness of QKD between LEO satellites and ground stations.
Cyber-physical-social systems (CPSS), an emerging computing paradigm, have attracted intensive attention from the research community and industry. We face various challenges in designing CPSS that are secure, reliable, and satisfying to users. In this article, we consider these design issues as a whole and propose a system-level design optimization framework for CPSS in which energy consumption, security level, and user satisfaction requirements can be fulfilled while satisfying constraints on system reliability. Specifically, we model the constraints (energy efficiency, security, and reliability) as penalty functions that are incorporated into the corresponding objective functions of the optimization problem. A smart office application is presented to demonstrate the feasibility and effectiveness of our proposed design optimization approach.
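A minimal sketch of the penalty-function formulation described above: constraint violations are added to the objective as weighted penalty terms, and the design variable is chosen by minimizing the combined cost. The toy cost models, thresholds, and weights are made-up placeholders, not the paper's formulation.

```python
# Fold security and reliability constraints into the objective as penalties,
# then search over a single design variable (CPU frequency) for illustration.
def objective(freq_ghz, w_sec=50.0, w_rel=50.0):
    energy = 2.0 * freq_ghz ** 2                 # toy energy model
    security_level = 0.2 * freq_ghz              # toy: faster CPU -> stronger crypto
    reliability = 1.0 - 0.05 * freq_ghz          # toy: higher freq -> more faults
    penalty = 0.0
    if security_level < 0.5:                     # required security level
        penalty += w_sec * (0.5 - security_level)
    if reliability < 0.9:                        # required reliability
        penalty += w_rel * (0.9 - reliability)
    return energy + penalty

best = min((objective(f / 10), f / 10) for f in range(5, 41))
print(best)   # (combined cost, chosen frequency in GHz)
```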
The applications of 3D Virtual Environments are taking giant leaps with more sophisticated 3D user interfaces and immersive technologies. Interactive 3D and Virtual Reality platforms present a great opportunity for data analytics and can represent large amounts of data to aid human decision making and insight. For any of the above to be effective, it is essential to understand the characteristics of these interfaces in displaying different types of content. Text is an essential and widespread type of content, and legibility acts as an important criterion in determining the style, size and quantity of the text to be displayed. This study evaluates the maximum amount of text per visual angle, that is, the maximum density of text that will be legible in a virtual environment displayed on different platforms. We used Extensible 3D (X3D) to provide the portable (cross-platform) stimuli. The results presented here are based on a user study conducted in DeepSix (a tiled LCD display with 5750×2400 resolution) and the Hypercube (an immersive CAVE-style active stereo projection system with three walls and a floor at 2560×2560 pixels active stereo per wall). We found that more legible text can be displayed on an immersive projection due to its larger Field of Regard; in the immersive case, stereo versus monoscopic rendering did not have a significant effect on legibility.
In our era, most communication between people takes the form of electronic messages, especially through smart mobile devices. As such, the written text exchanged suffers from poor punctuation, misspelled words, continuous chunks of several words without spaces, tables, internet addresses, etc., which make traditional text analytics methods difficult or impossible to apply without serious effort to clean the dataset. Our proposed method can work on massive, noisy and scrambled texts with minimal preprocessing, removing special characters and spaces to create a continuous string and detecting all repeated patterns very efficiently using the Longest Expected Repeated Pattern Reduced Suffix Array (LERP-RSA) data structure and a variant of the All Repeated Patterns Detection (ARPaD) algorithm. Meta-analyses of the results can further assist a digital forensics investigator in detecting important information in the chunk of text analyzed.
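A much-simplified illustration of the idea above: after stripping spaces and special characters into a continuous string, repeated patterns can be found by sorting suffixes and inspecting common prefixes of adjacent suffixes. This is only in the spirit of LERP-RSA and ARPaD; the actual structures and algorithm are far more scalable.

```python
# Detect repeated substrings in a cleaned, continuous string via a naive
# suffix array and adjacent longest-common-prefix computation (sketch).
def repeated_patterns(text, min_len=3):
    suffixes = sorted(range(len(text)), key=lambda i: text[i:])
    found = {}
    for a, b in zip(suffixes, suffixes[1:]):
        lcp = 0   # longest common prefix of two adjacent sorted suffixes
        while (a + lcp < len(text) and b + lcp < len(text)
               and text[a + lcp] == text[b + lcp]):
            lcp += 1
        if lcp >= min_len:
            found.setdefault(text[a:a + lcp], set()).update({a, b})
    return {pat: sorted(pos) for pat, pos in found.items()}

# Noisy message text after removing spaces and special characters.
msg = "meetmeatnoonmeetmeatthedockatnoon"
for pattern, positions in repeated_patterns(msg, min_len=4).items():
    print(pattern, positions)
```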
We define a number of threat models to describe the goals, the available information and the actions characterising the behaviour of a possible attacker in multimedia forensic scenarios. We distinguish between an investigative scenario, wherein the forensic analysis is used to guide the investigative action, and a use-in-court scenario, wherein forensic evidence must be defended during a lawsuit. We argue that the goals and actions of the attacker in these two cases are very different, thus exposing the forensic analyst to different challenges. A distinction is also made between model-based techniques and techniques based on machine learning, showing how in the latter case the necessity of defining a proper training set enriches the set of actions available to the attacker. Building on this analysis, we then introduce some game-theoretic models to describe the interaction between the forensic analyst and the attacker in the investigative and use-in-court scenarios.
Small, local groups who share protected resources (e.g., families, work teams, student organizations) have unmet authentication needs. For these groups, existing authentication strategies either create unnecessary social divisions (e.g., biometrics), do not identify individuals (e.g., shared passwords), do not equitably distribute security responsibility (e.g., individual passwords), or make it difficult to share or revoke access (e.g., physical keys). To explore an alternative, we designed Thumprint: inclusive group authentication with a shared secret knock. All group members share one secret knock, but individual expressions of the secret are discernible. We evaluated the usability and security of our concept through two user studies with 30 participants. Our results suggest that (1) individuals who enter the same shared thumprint are distinguishable from one another, (2) that people can enter thumprints consistently over time, and (3) that thumprints are resilient to casual adversaries.
Circular statistics present a new technique to analyse the time patterns of events in the field of cyber security. We apply this technique to analyse incidents of malware infections detected by network monitoring; in particular, we are interested in the daily and weekly variations of these events. Based on "live" data provided by Spamhaus, we examine the hypothesis that attacks on four countries are distributed uniformly over 24 hours, using the Rayleigh and Watson tests. While our results are mainly exploratory, we are able to demonstrate that the attacks are not uniformly distributed, nor do they follow a Poisson distribution as reported in other research. Our objective is to identify a distribution that can be used to establish risk metrics. Moreover, our approach provides a visual overview of the variation in time patterns, indicating when attacks are most likely. This will assist decision makers in cyber security to allocate resources or estimate the cost of system monitoring during high-risk periods. Our results also reveal that the time patterns are influenced by the total number of attacks: networks subject to a large volume of attacks exhibit bimodality, while one case, where attacks arrived at a relatively lower rate, showed a multi-modal daily variation.
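A minimal sketch of the Rayleigh test named above: event times-of-day are mapped to angles on the circle, and the mean resultant length is tested against the uniform hypothesis. The timestamps below are made up for illustration; the p-value uses the standard first-order approximation.

```python
# Rayleigh test for uniformity of event times over the 24-hour day (sketch).
import math

def rayleigh_test(hours):
    """hours: event times-of-day in [0, 24). Returns (z statistic, approx p)."""
    n = len(hours)
    angles = [2 * math.pi * h / 24.0 for h in hours]
    c = sum(math.cos(a) for a in angles)
    s = sum(math.sin(a) for a in angles)
    r_bar = math.sqrt(c * c + s * s) / n       # mean resultant length
    z = n * r_bar * r_bar
    p = math.exp(-z)                           # first-order approximation
    return z, p

# Infections clustered in the evening -> uniformity should be rejected.
events = [19.5, 20.1, 20.4, 21.0, 21.3, 22.0, 22.2, 23.1, 19.8, 20.7]
z, p = rayleigh_test(events)
print(f"z = {z:.2f}, p ≈ {p:.4f}")
```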
With the fast development of autonomous driving and vehicular communication technologies, intelligent transportation systems that are based on VANET (Vehicular Ad-Hoc Network) have shown great promise. For instance, through V2V (Vehicle-to-Vehicle) and V2I (Vehicle-to-Infrastructure) communication, intelligent intersections allow more fine-grained control of vehicle crossings and significantly enhance traffic efficiency. However, the performance and safety of these VANET-based systems could be seriously impaired by communication delays and packet losses, which may be caused by network congestion or by malicious attacks that target communication timing behavior. In this paper, we quantitatively model and analyze some of the timing and security issues in transportation networks with VANET-based intelligent intersections. In particular, we demonstrate how communication delays may affect the performance and safety of a single intersection and of multiple interconnected intersections, and present our delay-tolerant intersection management protocols. We also discuss the issues of such protocols when the vehicles are non-cooperative and how they may be addressed with game theory.
In the cloud computing era, in order to avoid computational burdens, many organizations tend to outsource their computations to third-party cloud servers. In order to protect service quality, the integrity of computation results needs to be guaranteed. In this paper, we develop a game theoretic framework which helps the outsourcer to maximize its payoff while ensuring the desired level of integrity for the outsourced computation. We define two Stackelberg games and analyze the sensitivity of the optimal setting to the parameters of the model.
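A minimal sketch of a Stackelberg interaction of the general kind described above: the outsourcer (leader) commits to an audit probability, the server (follower) best-responds by cheating only when it pays off, and the leader picks the cheapest probability that keeps the server honest. The payoff values are illustrative assumptions, not the paper's model.

```python
# Leader-follower (Stackelberg) audit game for outsourced computation (sketch).
def follower_cheats(audit_prob, cheat_gain=5.0, fine=40.0):
    """Server cheats iff its expected gain exceeds the expected fine."""
    return (1 - audit_prob) * cheat_gain > audit_prob * fine

def leader_best_audit(audit_cost=2.0, loss_if_cheated=100.0, step=0.01):
    """Leader picks the audit probability minimising its expected cost,
    anticipating the follower's best response."""
    best = None
    q = 0.0
    while q <= 1.0:
        cost = q * audit_cost + (loss_if_cheated if follower_cheats(q) else 0.0)
        if best is None or cost < best[0]:
            best = (cost, q)
        q = round(q + step, 2)
    return best

print(leader_best_audit())   # (expected cost, chosen audit probability)
```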
Establishing and operating an Information Security Management System (ISMS) to protect information values and information systems is in itself a challenge for larger enterprises and small and medium-sized businesses alike. A high level of automation is required to reduce operational efforts to an acceptable level when implementing an ISMS. In this paper we present the ADAMANT framework to increase automation in information security management as a whole by establishing a continuous, risk-driven and context-aware ISMS that not only automates security controls but considers all highly interconnected information security management tasks. We further illustrate how ADAMANT is suited to establish an ISO 27001 compliant ISMS for small and medium-sized enterprises and how not only the monitoring of security controls but a majority of ISMS-related activities can be supported through automated process execution and workflow enactment.
Current technologies, including cloud computing, social networking, mobile applications, and crowd and synthetic intelligence, coupled with the explosion in storage and processing power, are giving rise to massive-scale marketplaces for a wide variety of resources and services. They are also enabling unprecedented forms and levels of collaboration among human and machine entities. In this new era, trust remains the keystone of success in any relationship between two or more parties. A primary challenge is to establish and manage trust in environments where massive numbers of consumers, providers and brokers are largely autonomous with vastly diverse requirements, capabilities, and trust profiles. Most contemporary trust management solutions are oblivious to diversities in trustors' requirements and contexts, utilize direct or indirect experiences as the only form of trust computation, employ hardcoded trust computations, and marginally consider collaboration in trust management. We surmise the need for a reference architecture for trust management to guide the development of a wide spectrum of trust management systems. In our previous work, we presented a preliminary reference architecture for trust management which provides customizable and reconfigurable trust management operations to accommodate varying levels of diversity and trust personalization. In this paper, we present a comprehensive taxonomy for trust management and extend our reference architecture to feature collaboration as a first-class object. Our goal is to promote the development of new collaborative trust management systems, where various trust management operations would involve collaborating entities. Using the proposed architecture, we implemented a collaborative personalized trust management system. Simulation results demonstrate the effectiveness and efficiency of our system.
Intrusion Detection Systems (IDSs) are crucial security mechanisms widely deployed for critical network protection. However, conventional IDSs become incompetent due to the rapid growth in network size and the sophistication of large scale attacks. To mitigate this problem, Collaborative IDSs (CIDSs) have been proposed in literature. In CIDSs, a number of IDSs exchange their intrusion alerts and other relevant data so as to achieve better intrusion detection performance. Nevertheless, the required information exchange may result in privacy leakage, especially when these IDSs belong to different self-interested organizations. In order to obtain a quantitative understanding of the fundamental tradeoff between the intrusion detection accuracy and the organizations' privacy, a repeated two-layer single-leader multi-follower game is proposed in this work. Based on our game-theoretic analysis, we are able to derive the expected behaviors of both the attacker and the IDSs and obtain the utility-privacy tradeoff curve. In addition, the existence of Nash equilibrium (NE) is proved and an asynchronous dynamic update algorithm is proposed to compute the optimal collaboration strategies of IDSs. Finally, simulation results are shown to validate the analysis.
Wireless sensor networks have attracted substantial research interest in recent years because of their unique features such as fault tolerance and autonomous operation. Maximizing coverage while accounting for resource scarcity is a crucial problem in wireless sensor networks, and approaches that address this problem while maximizing the network lifetime are considered prominent. Node scheduling is one such mechanism to address this issue. A scheduling strategy that addresses the target coverage problem based on coverage probability and trust values was proposed in the Energy Efficient Coverage Protocol (EECP). In this paper, optimized decision rules are obtained using rough set theory to determine the number of active nodes. The results show that the proposed extension yields fewer decision rules to consider when determining node states in the network, and hence improves network efficiency by reducing the number of packets transmitted and the associated overhead.
Trust management in the cloud domain has been a persistent research topic among scholars, and a similar issue is bound to occur in the emerging fog domain. Although fog and cloud are relatively similar, evaluating trust in the fog domain is more challenging than in the cloud. Fog's high mobility support, distributed nature, and closer proximity to end users mean that fog nodes are likely to operate in vulnerable environments. Unlike cloud, fog involves little to no human intervention and lacks redundancy; hence, it could experience downtime at any given time, making fogs harder to trust given their unpredictable status. These distinguishing factors, combined with the factors already used for trust evaluation in the cloud, can serve as metrics to evaluate trust in fog. This paper discusses a use case of a campus scenario with several fog servers and the metrics used in evaluating the trustworthiness of the fog servers. While a fuzzy logic method is used to evaluate trust, the contribution of this study is the identification of fuzzy logic configurations that could alter the trust value of a fog.
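A minimal sketch of a fuzzy-logic trust evaluation of the kind mentioned above, using two illustrative metrics (uptime ratio and average latency). The membership functions, rule base, and output levels are assumptions chosen for illustration; the study's actual configuration differs.

```python
# Fuzzy trust evaluation for a fog server with triangular memberships,
# a tiny rule base, and weighted-average defuzzification (sketch).
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust(uptime, latency_ms):
    up_high  = tri(uptime, 0.7, 1.0, 1.3)       # "high uptime"
    up_low   = tri(uptime, -0.3, 0.0, 0.8)      # "low uptime"
    lat_low  = tri(latency_ms, -50, 0, 120)     # "low latency"
    lat_high = tri(latency_ms, 80, 250, 400)    # "high latency"

    # Rules: (firing strength, crisp trust level of the rule consequent)
    rules = [
        (min(up_high, lat_high), 0.5),   # high uptime & high latency -> medium
        (min(up_high, lat_low),  0.9),   # high uptime & low latency -> trusted
        (up_low,                 0.1),   # low uptime -> untrusted
    ]
    num = sum(w * level for w, level in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0     # weighted-average defuzzification

print(round(trust(uptime=0.98, latency_ms=40), 2))   # close to 0.9
print(round(trust(uptime=0.60, latency_ms=300), 2))  # much lower
```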
Mobile devices offer a convenient way of accessing our digital lives, and many of those devices hold sensitive data that needs protecting. Mobile and wireless communications networks, combined with cloud computing as Mobile Cloud Computing (MCC), have emerged as a new way to provide a rich computational environment for mobile users and business opportunities for cloud providers and network operators. It is the convenience of the cloud service and the ability to sync across multiple platforms/devices that has become the attraction of cloud computing. However, privacy, security and trust issues may still be a barrier that impedes the adoption of MCC by some undecided potential users, who still need to be convinced of the security of mobile devices, wireless networks and cloud computing. This paper is the result of a comprehensive review of research on one typical security measure, authentication, spanning the five-year period 2012–2017. MCC capabilities for sharing distributed resources are discussed. Authentication in MCC is divided into two categories, and the advantages of one category over its counterpart are presented in the process of attempting to identify the most secure authentication scheme.
Remote user authentication is an essential process for providing services securely during access to on-line applications, where its aim is to establish the legitimacy of a user. Traditional password-based remote user authentication is quite popular and widely used, but such schemes are susceptible to dictionary attacks. To enhance system security, numerous password-based remote user authentication schemes using smart cards have been proposed. However, most of the proposed schemes are either computationally expensive or vulnerable to several kinds of known attacks. In this paper, the authors develop a two-factor remote user authentication scheme using the ElGamal cryptosystem. The validity of the proposed scheme is confirmed through BAN logic. In addition, the authors present a security analysis and a comparison with related schemes, which shows that the proposed scheme is able to resist several kinds of known attacks effectively. The proposed scheme is also simulated with the AVISPA tool, and the expected outcome is achieved, confirming that the scheme is secure against some known attacks. Overall, the presented scheme is suitable, secure and applicable to real-time applications.
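A toy sketch of the ElGamal cryptosystem that the scheme above builds on: key generation, encryption, and decryption over a small prime-order group. The parameters are deliberately tiny for readability; real deployments use large primes or elliptic curves, and the paper's two-factor, smart-card protocol itself is not reproduced here.

```python
# Toy ElGamal over a small prime (illustration only, not secure parameters).
import random

p = 467            # small prime, illustration only
g = 2              # generator of a subgroup of Z_p*

def keygen():
    x = random.randrange(2, p - 1)       # private key
    return x, pow(g, x, p)               # (private, public)

def encrypt(y_pub, m):
    k = random.randrange(2, p - 1)       # ephemeral randomness
    return pow(g, k, p), (m * pow(y_pub, k, p)) % p

def decrypt(x_priv, c1, c2):
    s = pow(c1, x_priv, p)               # shared secret g^(kx)
    return (c2 * pow(s, p - 2, p)) % p   # divide out s via Fermat inverse

x, y = keygen()
c1, c2 = encrypt(y, 123)
print(decrypt(x, c1, c2))                # -> 123
```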
In this paper, we propose a novel method, based on keystroke dynamics, to distinguish between fake and truthful personal information written via a computer keyboard. Our method does not need any prior knowledge about the user who is providing the data. To our knowledge, this is the first work that associates typing behavior with the production of lies regarding personal information. Via an experimental analysis involving 190 subjects, we assess that this method is able to distinguish between truth and lies on specific types of autobiographical information, with an accuracy higher than 75%. Specifically, for information usually required in online registration forms (e.g., name, surname and email), the typing behavior diverged significantly between truthful and untruthful answers. According to our results, keystroke analysis could have great potential in detecting the veracity of self-declared information, and it could be applied to a large number of practical scenarios requiring users to input personal data remotely via keyboard.
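A minimal sketch of extracting the kind of keystroke features (dwell and flight times) that such an analysis relies on, from a list of key press/release timestamps. The event format and the simple summary statistics are illustrative assumptions, not the study's full feature set or classifier.

```python
# Extract dwell times, flight times, and total typing time from raw key events.
def keystroke_features(events):
    """events: list of (key, press_time_ms, release_time_ms), in typing order."""
    dwell = [rel - prs for _, prs, rel in events]                 # hold time per key
    flight = [events[i + 1][1] - events[i][2]                     # gap between keys
              for i in range(len(events) - 1)]
    mean = lambda xs: sum(xs) / len(xs)
    return {
        "mean_dwell": mean(dwell),
        "mean_flight": mean(flight),
        "total_time": events[-1][2] - events[0][1],
    }

# Timestamps (ms) recorded while a subject types an answer.
answer = [("j", 0, 95), ("o", 160, 250), ("h", 330, 430), ("n", 540, 650)]
print(keystroke_features(answer))
```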
Over the years a lot of effort has been put into solving extensibility problems while retaining important software engineering properties such as modular type-safety and separate compilation. Most previous work focused on operations that traverse and process extensible Abstract Syntax Tree (AST) structures. However, there is almost no work on operations that build such extensible ASTs, including parsing. This paper investigates solutions to the problem of modular parsing. We focus on semantic modularity and not just syntactic modularity: that is, the solutions should not only allow complete parsers to be built out of modular parsing components, but also enable the parsing components to be modularly type-checked and separately compiled. We present a technique based on parser combinators that enables modular parsing. Interestingly, the modularity requirements for modular parsing rule out several existing parser combinator approaches, which rely on some non-modular techniques. We show that Packrat parsing techniques provide solutions to these modularity problems and enable reasonable performance in a modular setting. Extensibility is achieved using multiple inheritance and Object Algebras. To evaluate the approach we conduct a case study based on the "Types and Programming Languages" interpreters. The case study shows the approach's effectiveness at reusing parsing code from existing interpreters, and the total parsing code is 69% shorter than an existing code base using a non-modular parsing approach.
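A much-simplified illustration of packrat-style memoization in a parser-combinator setting, in the spirit of the abstract above. The paper's actual solution relies on statically typed multiple inheritance and Object Algebras, which this dynamically typed sketch does not attempt to capture; the tiny grammar is made up.

```python
# Parser combinators with packrat-style memoization on (text, position).
from functools import lru_cache

def literal(s):
    def parse(text, pos):
        return (s, pos + len(s)) if text.startswith(s, pos) else None
    return parse

def alt(*parsers):
    def parse(text, pos):
        for p in parsers:
            r = p(text, pos)
            if r is not None:
                return r
        return None
    return parse

def seq(*parsers):
    def parse(text, pos):
        values = []
        for p in parsers:
            r = p(text, pos)
            if r is None:
                return None
            value, pos = r
            values.append(value)
        return values, pos
    return parse

def packrat(parser):
    """Memoize on (text, pos) so each position is parsed at most once."""
    return lru_cache(maxsize=None)(parser)

# Tiny grammar: expr -> "1" "+" expr | "1"
def expr(text, pos):
    return alt(seq(literal("1"), literal("+"), expr_m), literal("1"))(text, pos)
expr_m = packrat(expr)

print(expr_m("1+1+1", 0))   # (['1', '+', ['1', '+', '1']], 5)
```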
Recent years have witnessed a rapid growth in the domain of the Internet of Things (IoT). This network of billions of devices generates and exchanges huge amounts of data. The limited cache capacity and memory bandwidth make transferring and processing such data on traditional CPUs and GPUs highly inefficient, both in terms of energy consumption and delay. However, many IoT applications are statistical at heart and can accept some inaccuracy in their computation. This enables designers to reduce the complexity of processing by approximating the results for a desired accuracy. In this paper, we propose an ultra-efficient approximate processing in-memory architecture, called APIM, which exploits the analog characteristics of non-volatile memories to support addition and multiplication inside the crossbar memory while storing the data. The proposed design eliminates the overhead involved in transferring data to the processor by virtually bringing the processor inside memory. APIM dynamically configures the precision of computation for each application in order to tune the level of accuracy during runtime. Our experimental evaluation running six general OpenCL applications shows that the proposed design achieves up to 20x performance improvement and provides a 480x improvement in energy-delay product, while ensuring acceptable quality of service. In exact mode, it achieves 28x energy savings and a 4.8x speedup compared to state-of-the-art GPU cores.
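A software-level sketch of the accuracy/precision trade-off that runtime-configurable precision exposes: operands are truncated to a chosen number of significant bits before multiplying, and the resulting relative error is measured. This only illustrates the idea; the paper's design performs the arithmetic in the analog domain inside a non-volatile crossbar memory, and the operand values below are arbitrary.

```python
# Truncate operands to a configurable number of significant bits and observe
# how the relative error of multiplication grows as precision is reduced.
def truncate_bits(x, keep_bits):
    """Keep only the top `keep_bits` significant bits of a positive integer."""
    drop = max(x.bit_length() - keep_bits, 0)
    return (x >> drop) << drop

def approx_multiply(a, b, keep_bits):
    return truncate_bits(a, keep_bits) * truncate_bits(b, keep_bits)

a, b = 48213, 90157
exact = a * b
for bits in (16, 12, 8, 4):
    approx = approx_multiply(a, b, bits)
    err = abs(exact - approx) / exact * 100
    print(f"{bits:2d} bits -> relative error {err:.3f}%")
```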