Biblio
Billions of Internet of Things (IoT) devices are connected to the Internet, and the number keeps growing. As a still-evolving technology, IoT is applied in many fields, such as agriculture, healthcare, manufacturing, energy, retail and logistics, and it has been changing our world and the way we live and think. However, IoT has no uniform architecture, and different kinds of attacks target its different layers, such as unauthorized access to tags, tag cloning, sybil attacks, sinkhole attacks, denial-of-service attacks, malicious code injection, and man-in-the-middle attacks. IoT devices are especially vulnerable because they are simple and some security measures cannot be implemented on them. We analyze the privacy and security challenges in the IoT and survey the corresponding solutions for enhancing the security of IoT architectures and protocols. Greater attention to IoT security and privacy will help promote the development of IoT.
In recent years, many companies have been shifting to the cloud to expand their business profit at minimal additional cost. Cloud computing is a growing technology that has emerged from the development of grid computing, virtualization and utility computing. It is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources, such as networks, servers, storage, applications and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction. A huge amount of data was lost during the Chennai floods of December 2015; had these data been stored in distributed data centers, much of the loss could have been prevented. Although such natural calamities tempt many users to shift to cloud storage, security threats inhibit them from doing so. Many solutions have been proposed for these security issues, but none gives guaranteed security, by which we mean confidentiality, integrity and availability. Existing techniques for providing security include cryptographic protocols, data sanitization, predicate logic, access control mechanisms, honeypots, sandboxing, erasure coding, RAID (Redundant Arrays of Independent Disks), homomorphic encryption and split-key encryption. All of these techniques either cannot work alone or add computational and time complexity. We propose an alternative scheme that combines encryption and channel coding in one go to increase the level of security. A hybrid encryption scheme is proposed for the interleaver block of a Turbo coder to avoid burst errors; hybrid encryption avoids sharing the secret key over the unsecured channel. This provides both security and reliability by reducing the error propagation effect with a small additional cost and computational overhead, and time complexity can be reduced when encryption and encoding are performed as a single process.
Internet of Things (IoT) devices are increasingly popular and are becoming a core element of the next generation of informational architectures: the smart city, smart factory, smart home, smart healthcare and many others. IoT systems mainly comprise embedded devices with limited computing capabilities, together with a cloud component that processes the data and delivers it to end users. IoT devices access users' private data and thus require a robust security solution that also addresses usability and scalability. In this paper we describe an IoT authentication service for smart-home devices that uses a smartphone as a security anchor, QR codes and attribute-based cryptography (ABC). Since in an IoT ecosystem some of the IoT devices and the cloud components may be considered untrusted, we propose a privacy-preserving attribute-based access control protocol to handle device authentication to the cloud service. For the smartphone-centric authentication to the cloud component, we employ the FIDO UAF protocol and extend it with an attribute-based privacy-preserving component.
In this paper, we present a security and privacy enhancement (SPE) framework for unmodified mobile operating systems. SPE introduces a new layer between the application and the operating system and requires neither a jailbroken device nor a custom operating system. We utilize an existing ontology designed for enforcing security and privacy policies on mobile devices to build a customizable policy. Based on this policy, SPE enhances the native controls that currently exist on the platform for privacy- and security-sensitive components. SPE allows access to these components in a way that lets the framework ensure the application is truthful in its declared intent and that the user's policy is enforced. In our evaluation we verify the correctness of the framework and its computational impact on the device. Additionally, we discovered security and privacy issues in several open source applications by utilizing the SPE framework. From our findings, if SPE is adopted by mobile operating system producers, it would provide consumers and businesses the additional privacy and security controls they demand and make users more aware of security and privacy issues with applications on their devices.
Though controversial, surveillance activities are increasingly performed for security reasons. However, such activities are extremely privacy-intrusive, which is seen as a necessary side effect of ensuring the success of such operations. In this paper, we propose an accountability-aware protocol designed for surveillance purposes. It relies on a strong incentive for a surveillance organisation to register its activity with a data protection authority. We first elicit a list of accountability requirements, then provide an architecture showing the interaction of the different parties involved, and finally propose an accountability-aware protocol that is formally specified in the applied pi calculus. We use the ProVerif tool to automatically verify that the protocol respects confidentiality, integrity and authentication properties.
Mobile applications have grown from knowing basic personal information to knowing intimate details of consumers' lives. The explosion of knowledge that applications contain and share can be attributed to many factors. Mobile devices are equipped with advanced sensors, including GPS and cameras, while storing large amounts of personal information such as photos and contacts. With millions of applications available to install, personal data is at constant risk of being misused. While mobile operating systems provide basic security and privacy controls, they are insufficient, leaving consumers unaware of how applications use the permissions they were granted. In this paper, we propose a solution that aims to give consumers awareness of applications misusing data and of policies that can protect their data. From this investigation we present SPEProxy. SPEProxy utilizes a knowledge-based approach to give consumers the ability to understand how applications use permissions beyond their stated intent. Additionally, SPEProxy provides awareness of fine-grained policies that allow users to protect their data. SPEProxy is device and mobile operating system agnostic: it requires neither a specific device or operating system nor any modification to the operating system or applications, so consumers can use the solution without a high degree of technical expertise. We evaluated SPEProxy across 817 of the most popular applications in the iOS App Store and Google Play. In our evaluation, SPEProxy was highly effective for 86.55% of applications, and several well-known applications were found to misuse granted permissions.
SEAndroid is a mandatory access control (MAC) framework that can confine faulty applications on Android. Nevertheless, the effectiveness of SEAndroid enforcement depends on the employed policy. The growing complexity of Android makes it difficult for policy engineers to have complete domain knowledge of every system functionality. As a result, policy engineers sometimes craft over-permissive and ineffective policy rules, which unfortunately increase the attack surface of the Android system and have enabled multiple real-world privilege escalation attacks. We propose SPOKE, an SEAndroid Policy Knowledge Engine, that systematically extracts domain knowledge from rich-semantic functional tests and further uses the knowledge to characterize the attack surface of SEAndroid policy rules. Our attack surface analysis proceeds in two steps: 1) it reveals policy rules that cannot be justified by the collected domain knowledge; 2) it identifies potentially over-permissive access patterns allowed by those unjustified rules as the attack surface. We evaluate SPOKE using 665 functional tests targeting 28 different categories of functionalities developed by the Samsung Android Team. SPOKE successfully collected 12,491 access patterns for the 28 categories as domain knowledge, and used this knowledge to reveal 320 unjustified policy rules and 210 over-permissive access patterns defined by those rules, including one related to the notorious libstagefright vulnerability. These findings have been confirmed by policy engineers.
Web-based applications are becoming increasingly complex and technically sophisticated. The very nature of their feature-rich design and their ability to collate, process, and disseminate information over the Internet or within an intranet makes them a popular target for attack. According to the Open Web Application Security Project (OWASP) Top Ten Cheat Sheet 2017, SQL injection ranks at the top of online attacks, which can be attributed primarily to a lack of awareness of software security. Developing effective SQL injection detection approaches has remained a challenge despite extensive research in this area. In this paper, we propose a signature-based SQL injection attack detection framework that integrates a fingerprinting method and pattern matching to distinguish genuine SQL queries from malicious ones. Our framework monitors SQL queries to the database and compares them against a dataset of signatures from known SQL injection attacks. If the fingerprinting method cannot determine the legitimacy of a query alone, the Aho-Corasick algorithm is invoked to ascertain whether attack signatures appear in the query. Initial experimental results indicate that the approach can identify a wide variety of SQL injection attacks with negligible impact on performance.
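The two-stage check described in this abstract can be illustrated with a short sketch. This is not the authors' implementation: the signature strings, the fingerprint whitelist and the normalization rule are illustrative assumptions; only the overall flow (fingerprint lookup first, Aho-Corasick pattern matching on a miss) follows the abstract.

import hashlib
from collections import deque

class AhoCorasick:
    """Tiny Aho-Corasick automaton for multi-pattern substring search."""
    def __init__(self, patterns):
        self.goto, self.fail, self.out = [{}], [0], [set()]
        for p in patterns:                      # build the trie
            state = 0
            for ch in p:
                nxt = self.goto[state].get(ch)
                if nxt is None:
                    nxt = len(self.goto)
                    self.goto[state][ch] = nxt
                    self.goto.append({})
                    self.fail.append(0)
                    self.out.append(set())
                state = nxt
            self.out[state].add(p)
        queue = deque(self.goto[0].values())    # depth-1 states keep fail = 0
        while queue:                            # BFS to compute failure links
            r = queue.popleft()
            for ch, s in self.goto[r].items():
                queue.append(s)
                f = self.fail[r]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[s] = self.goto[f].get(ch, 0)
                self.out[s] |= self.out[self.fail[s]]

    def matches(self, text):
        state, hits = 0, []
        for ch in text:
            while state and ch not in self.goto[state]:
                state = self.fail[state]
            state = self.goto[state].get(ch, 0)
            hits.extend(self.out[state])
        return hits

# Illustrative signatures and whitelist; a real deployment would load both from data.
SIGNATURES = ["' or '1'='1", "union select", "; drop table", "xp_cmdshell"]
AUTOMATON = AhoCorasick(SIGNATURES)
KNOWN_GOOD = {hashlib.sha256(b"select name from users where id = ?").hexdigest()}

def is_malicious(query: str) -> bool:
    q = " ".join(query.lower().split())                        # crude normalization
    if hashlib.sha256(q.encode()).hexdigest() in KNOWN_GOOD:   # fingerprint step
        return False
    return bool(AUTOMATON.matches(q))                          # signature-matching step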
Compromised smart meters reporting false power consumption data in Advanced Metering Infrastructure (AMI) may have drastic consequences for a smart grid's operations. Most existing works deal only with electricity theft by customers; however, several other types of data falsification attacks are possible when meters are compromised by organized rivals. In this paper, we first propose a taxonomy of possible data falsification strategies, such as additive, deductive, camouflage and conflict, in AMI micro-grids. Then, we devise a statistical anomaly detection technique to identify the incidence of the proposed attack types by studying their impact on the observed data. Subsequently, a trust model based on Kullback-Leibler divergence is proposed to identify compromised smart meters for additive and deductive attacks. The resultant detection rates and false alarms are minimized through a robust aggregate measure that is calculated based on the detected attack type and that successfully discriminates legitimate changes from malicious ones. For conflict and camouflage attacks, a generalized linear model and a Weibull-function-based kernel trick are used over the trust score to facilitate more accurate classification. Using real data sets collected from AMI, we investigate several trade-offs between the attacker's revenue and costs, as well as the margin of false data and the fraction of compromised nodes. Experimental results show that our model has a high true positive detection rate, while the average false alarm rate is just 8% for most practical attack strategies, without depending on expensive hardware-based monitoring.
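As a rough illustration of the Kullback-Leibler-based trust idea (a sketch under simplifying assumptions, not the paper's model), each meter's reported readings can be binned into a histogram and compared against a baseline built from the aggregate of all meters; meters whose divergence exceeds a threshold lose trust. Bin count and threshold below are arbitrary placeholders.

import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """KL(P || Q) between two histograms given as count vectors."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def trust_scores(meter_readings, bins=20, threshold=0.5):
    """meter_readings: dict meter_id -> 1-D array of consumption readings (kWh)."""
    all_readings = np.concatenate(list(meter_readings.values()))
    edges = np.histogram_bin_edges(all_readings, bins=bins)
    baseline, _ = np.histogram(all_readings, bins=edges)
    scores = {}
    for mid, readings in meter_readings.items():
        hist, _ = np.histogram(readings, bins=edges)
        d = kl_divergence(hist, baseline)
        scores[mid] = {"kl": d, "suspect": d > threshold}   # threshold is illustrative
    return scores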
Android privacy control is an important but difficult problem to solve. Previous research efforts focused either on extending the Android permission model with better policies or on modifying the Android framework for fine-grained access control. In this work, we take an integral approach by designing and implementing SweetDroid, a calling-context-sensitive privacy policy enforcement framework. SweetDroid combines automated policy generation with automated policy enforcement. The automatically generated policies in SweetDroid are based on the calling contexts of privacy-sensitive APIs; hence, SweetDroid is able to tell whether a particular API (e.g., getLastKnownLocation) under a certain execution path is leaking private information. Policy enforcement in SweetDroid is also fine-grained: it operates at the individual API level, not at the permission level. We implement and evaluate the system on thousands of Android apps, including apps from a third-party market and malicious apps from VirusTotal. Our experimental results show that SweetDroid can successfully distinguish and enforce different privacy policies based on calling contexts, and the current design is both developer hassle-free and user transparent. SweetDroid is also efficient, introducing only small storage and computational overhead.
Existing probabilistic privacy enforcement approaches permit the execution of a program that processes sensitive data only if the information it leaks is within the bounds specified by a given policy. Thus, to extract any information, users must manually design a program that satisfies the policy. In this work, we present a novel synthesis approach that automatically transforms a program into one that complies with a given policy. Our approach consists of two ingredients. First, we phrase the problem of determining the amount of leaked information as Bayesian inference, which enables us to leverage existing probabilistic programming engines. Second, we present two synthesis procedures that add uncertainty to the program's outputs as a way of reducing the amount of leaked information: an optimal one based on SMT solving and a greedy one with quadratic running time. We implemented and evaluated our approach on 10 representative programs from multiple application domains. We show that our system can successfully synthesize a permissive enforcement mechanism for all examples.
Redundant capacity in filesystem timestamps has recently been proposed in the literature as an effective means of information hiding and data leakage. Here, we evaluate the steganographic capabilities of such channels and propose techniques that aid digital forensics investigations in identifying and detecting manipulated filesystem timestamps. Our findings indicate that different storage media and interfaces exhibit different timestamp creation patterns. Such differences can be utilized to characterize a file's source medium and increase the analysis capabilities of the incident response process.
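A hedged illustration of the kind of pattern check involved (a heuristic sketch, not the paper's technique): the sub-second granularity of modification timestamps can be profiled per directory, since whole-second values on a medium and interface that normally record finer precision may hint at copied or manually set timestamps.

import os
from collections import Counter

def subsecond_trailing_zeros(path: str) -> int:
    """Trailing zero digits in the nanosecond part of mtime (9 = whole seconds only)."""
    ns = os.stat(path).st_mtime_ns % 1_000_000_000
    digits = f"{ns:09d}"
    return len(digits) - len(digits.rstrip("0"))

def granularity_profile(directory: str) -> Counter:
    """Histogram of timestamp granularities over all files below `directory`."""
    counts = Counter()
    for root, _, files in os.walk(directory):
        for name in files:
            counts[subsecond_trailing_zeros(os.path.join(root, name))] += 1
    return counts   # e.g. a spike at 9 suggests second-only resolution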
We present a binary static analysis approach to detect intelligent electronic device (IED) malware based on the time requirements of electrical substations. We explore graph theory techniques to model the timing performance of an IED executable, and subsequently use timing performance as a metric for IED malware detection. More specifically, we perform a series of steps to reduce part of the IED malware detection problem to a classical problem of graph theory, namely finding single-source shortest paths on a weighted directed acyclic graph (DAG). Shortest paths represent execution flows that take the longest time to compute; their clock cycles are examined to determine whether they violate the real-time nature of substation monitoring and control, in which case IED malware is detected. We carried out this work with particular reference to implementations of protection and control algorithms that use the IEC 61850 standard for substation data representation and network communication. We tested our approach against IED exploits and malware, network scanning code, and numerous malware samples involved in recent ICS malware campaigns.
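The graph reduction can be sketched as follows (an illustrative reconstruction, not the authors' tool): edge weights are taken to be estimated clock cycles on a control-flow DAG, the maximum-cycle path from the entry node is computed over a topological order (equivalent to single-source shortest paths on negated weights), and any path exceeding the substation's real-time budget is flagged.

from collections import defaultdict, deque

def longest_path_cycles(edges, entry):
    """edges: iterable of (u, v, cycles); returns dict node -> max cycles from entry."""
    adj, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
        nodes |= {u, v}
    # Kahn's algorithm for a topological order of the DAG
    order, queue = [], deque(n for n in nodes if indeg[n] == 0)
    while queue:
        n = queue.popleft()
        order.append(n)
        for v, _ in adj[n]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    dist = {n: float("-inf") for n in nodes}
    dist[entry] = 0
    for u in order:                      # relax edges in topological order
        if dist[u] == float("-inf"):
            continue
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)
    return dist

def violates_deadline(edges, entry, cycle_budget):
    """True if some execution flow exceeds the real-time cycle budget."""
    return any(d > cycle_budget for d in longest_path_cycles(edges, entry).values())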
The growth in cloud-based services tailored to users means more and more personal data is being exploited, and with this comes the need to better handle user privacy. Software technologies concentrating on privacy preservation typically present a one-size-fits-all solution. However, users have different views of what privacy means to them; therefore, configurable and dynamic privacy-preserving solutions have the potential to create useful, tailored services without breaching any user's privacy. In this paper, we present a model of user-centered privacy that can be used to analyse a service's behaviour against user preferences, so that a user can be informed of the privacy implications of that service and of the fine-grained actions they can take to maintain their privacy. We show through a study that the user-based privacy model can: i) provide customizable privacy aligned with user needs; and ii) identify potential privacy breaches.
There has been increasing interest in adopting the blockchain (BC) that underpins the cryptocurrency Bitcoin in the Internet of Things (IoT) for security and privacy. However, BCs are computationally expensive and involve high bandwidth overhead and delays, which makes them unsuitable for most IoT devices. This paper proposes a lightweight BC-based architecture for IoT that virtually eliminates the overheads of classic BC while maintaining most of its security and privacy benefits. IoT devices benefit from a private immutable ledger that acts similarly to a BC but is managed centrally to optimize energy consumption. High-resource devices create an overlay network to implement a publicly accessible distributed BC that ensures end-to-end security and privacy. The proposed architecture uses distributed trust to reduce block validation processing time. We explore our approach in a smart home setting as a representative case study for broader IoT applications. Qualitative evaluation of the architecture under common threat models highlights its effectiveness in providing security and privacy for IoT applications, and simulations demonstrate that our method decreases packet and processing overhead significantly compared to the BC implementation used in Bitcoin.
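A minimal sketch of the centrally managed, immutable local ledger idea (field names and structure are illustrative assumptions, not the paper's implementation): each smart-home transaction is appended as a block whose header includes the hash of the previous block, so tampering is detectable without the mining overhead of a classic BC.

import hashlib
import json
import time

def _block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class LocalLedger:
    """Hash-chained, centrally managed ledger for local smart-home transactions."""
    def __init__(self):
        genesis = {"index": 0, "time": time.time(), "data": "genesis", "prev": "0" * 64}
        self.chain = [genesis]

    def append(self, data):
        block = {"index": len(self.chain), "time": time.time(),
                 "data": data, "prev": _block_hash(self.chain[-1])}
        self.chain.append(block)

    def verify(self) -> bool:
        """True if every block still hashes-chains to its predecessor."""
        return all(self.chain[i]["prev"] == _block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))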
Traditional Intrusion Detection Systems (IDSes) are generally implemented on vendor-proprietary appliances or middleboxes, which usually lack a general programming interface and offer poor versatility and flexibility. Emerging Network Function Virtualization (NFV) technology can virtualize IDSes and elastically scale them to cope with variations in attack traffic. However, existing NFV solutions treat a virtualized IDS as a monolithic piece of software, which can lead to inflexibility and a significant waste of resources. In this paper, we propose a novel approach that virtualizes IDSes as microservices, where the virtualized IDSes can be customized on demand and the underlying microservices can be shared and scaled independently. Our experiments demonstrate that virtualizing IDSes as microservices yields greater flexibility and resource efficiency.
Population protocols are a well established model of computation by anonymous, identical finite state agents. A protocol is well-specified if from every initial configuration, all fair executions of the protocol reach a common consensus. The central verification question for population protocols is the well-specification problem: deciding if a given protocol is well-specified. Esparza et al. have recently shown that this problem is decidable, but with very high complexity: it is at least as hard as the Petri net reachability problem, which is EXPSPACE-hard, and for which only algorithms of non-primitive recursive complexity are currently known. In this paper we introduce the class WS3 of well-specified strongly-silent protocols and we prove that it is suitable for automatic verification. More precisely, we show that WS3 has the same computational power as general well-specified protocols, and captures standard protocols from the literature. Moreover, we show that the membership problem for WS3 reduces to solving boolean combinations of linear constraints over N. This allowed us to develop the first software able to automatically prove well-specification for all of the infinitely many possible inputs.
With the rapid development of sophisticated attack techniques, individual security systems that base all of their attack prevention and response decisions and actions on their own observations and knowledge are becoming inadequate. To cope with this problem, collaborative security, in which a set of security entities is coordinated to perform specific security actions, has been proposed in the literature. In collaborative security schemes, multiple entities collaborate by sharing threat evidence or analytics to make more effective decisions. Nevertheless, the anticipated information exchange raises privacy concerns, especially for privacy-sensitive entities. To obtain a quantitative understanding of the fundamental tradeoff between the effectiveness of collaboration and the entities' privacy, a repeated two-layer single-leader multi-follower game is proposed in this work. Based on our game-theoretic analysis, the expected behaviors of both the attacker and the security entities are derived and the utility-privacy tradeoff curve is obtained. In addition, the existence of a Nash equilibrium (NE) for the collaborative entities is proven, and an asynchronous dynamic update algorithm is proposed to compute the entities' optimal collaboration strategies. Furthermore, the presence of Byzantine entities is considered and its influence is investigated. Finally, simulation results are presented to validate the analysis.
Critical business applications in domains ranging from technical support to healthcare increasingly rely on large-scale, automatically constructed knowledge graphs. These applications use the results of complex queries over knowledge graphs to help users take crucial decisions, such as which drug to administer or whether certain actions are compliant with all regulatory requirements. However, these knowledge graphs constantly evolve, and newer versions may adversely impact the results of queries on which previously taken business decisions were based. We propose a framework based on provenance polynomials to track the impact of knowledge graph changes on arbitrary SPARQL query results. Focusing on the deletion of facts, we show how to efficiently determine the queries impacted by a change, develop ways to incrementally maintain these polynomials, and present an efficient implementation on top of RDF graph databases. Our experimental evaluation over large-scale RDF/SPARQL benchmarks shows the effectiveness of our proposal.
We propose an approach to enforce security in disruption- and delay-tolerant networks (DTNs), where long delays, high packet drop rates, and the unavailability of a central trusted entity make traditional approaches infeasible. We use a trust model based on subjective logic to continuously evaluate the trustworthiness of security credentials issued in a distributed manner by network participants, in order to deal with the absence of centralised trusted authorities.
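For illustration, the standard evidence-to-opinion mapping of subjective logic can be sketched as below; this is the generic formulation (positive and negative observations mapped to belief, disbelief and uncertainty with prior weight 2), not the authors' DTN-specific protocol, and the credential-evidence gathering is not shown.

from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    @classmethod
    def from_evidence(cls, r: float, s: float, base_rate: float = 0.5) -> "Opinion":
        """Map r positive and s negative observations to a subjective-logic opinion."""
        k = r + s + 2.0                      # 2.0 is the non-informative prior weight
        return cls(r / k, s / k, 2.0 / k, base_rate)

    def expected_trust(self) -> float:
        """Probability expectation used when deciding whether to accept a credential."""
        return self.belief + self.base_rate * self.uncertainty

# Example: 8 positive and 1 negative interaction with a credential issuer.
trust = Opinion.from_evidence(8, 1).expected_trust()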
The Universally Composable (UC) framework provides the strongest security notion for designing fully trusted cryptographic protocols, and applying UC security to the design of RFID mutual authentication protocols is very challenging. In this paper, we formulate the necessary conditions for achieving UC-secure RFID mutual authentication protocols that can be fully trusted in an arbitrary environment, and we show the inadequacy of some existing schemes under the UC framework. We define the ideal functionality for RFID mutual authentication and propose the first UC-secure RFID mutual authentication protocol based on public key encryption and certain trusted third parties, which can be modeled as functionalities. We prove the security of our protocol under the strongest adversary model, allowing corruption of both tags and readers. We also present two (public) key update protocols for the case of multiple readers: one uses a Message Authentication Code (MAC) and the other uses trusted certificates in a Public Key Infrastructure (PKI). Furthermore, we address the relations between our UC framework and the zero-knowledge privacy model proposed by Deng et al. [1].
User privacy is an important issue in blockchain-based transaction systems. Bitcoin, one of the most widely used blockchain-based transaction systems, fails to provide enough protection of users' privacy. Many subsequent studies focus on establishing systems that hide the linkage between the identities (pseudonyms) of users and the transactions they carry out, in order to provide a high level of anonymity; examples include Zerocoin and Zerocash. It thus becomes an interesting question whether such new transaction systems indeed provide enough protection of users' privacy. In this paper, we propose a novel and effective approach for de-anonymizing these transaction systems by leveraging information in the system that is not directly related, including the number of transactions made by each identity and the timestamps of sending and receiving. Combining probabilistic analysis with optimization tools, we establish a model that allows us to determine, among all possible ways of linking transactions to identities, the one that is most likely to be true. Subsequent transaction graph analysis can then be carried out, leading to de-anonymization of the system. To solve the model, we provide exact algorithms based on mixed integer linear programming. Our research also establishes interesting relationships between the de-anonymization problem and other problems studied in theoretical computer science, e.g., the graph matching problem and the scheduling problem.
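A heavily simplified sketch of the linkage step (an assignment relaxation for illustration only; the paper uses exact mixed integer linear programming): identities and transaction clusters are matched by minimizing a cost built from mismatches in transaction counts and timing. The cost terms and their weighting are assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment

def link(identity_stats, cluster_stats):
    """Each stats entry: (num_transactions, mean_timestamp in seconds).
    Returns (identity_index, cluster_index) pairs of the minimum-cost matching."""
    I = np.asarray(identity_stats, dtype=float)
    C = np.asarray(cluster_stats, dtype=float)
    count_gap = np.abs(I[:, [0]] - C[:, 0])           # |count_i - count_j|
    time_gap = np.abs(I[:, [1]] - C[:, 1]) / 3600.0   # hours apart
    cost = count_gap + time_gap                       # illustrative weighting
    rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm
    return list(zip(rows.tolist(), cols.tolist()))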
This paper presents an approach to formalizing and enforcing a class of use privacy properties in data-driven systems. In contrast to prior work, we focus on use restrictions on proxies (i.e., strong predictors) of protected information types. Our definition relates proxy use to intermediate computations that occur in a program, and identifies two essential properties that characterize this behavior: 1) its result is strongly associated with the protected information type in question, and 2) it is likely to causally affect the final output of the program. For a specific instantiation of this definition, we present a program analysis technique that detects instances of proxy use in a model and provides a witness identifying which parts of the corresponding program exhibit the behavior. Recognizing that not all instances of proxy use of a protected information type are inappropriate, we make use of a normative judgment oracle that makes this inappropriateness determination for a given witness. Our repair algorithm uses the witness of an inappropriate proxy use to transform the model into one that provably does not exhibit proxy use, while avoiding changes that unduly affect classification accuracy. Using a corpus of social datasets, our evaluation shows that these algorithms are able to detect proxy use instances that would be difficult to find using existing techniques, and subsequently remove them while maintaining acceptable classification performance.
In this talk, I will discuss my lab's work in the emerging field of adversarial stylometry and machine learning. Machine learning algorithms are increasingly being used in security and privacy domains, in areas that go beyond intrusion or spam detection. For example, in digital forensics, questions often arise about the authors of documents: their identity, demographic background, and whether they can be linked to other documents. The field of stylometry uses linguistic features and machine learning techniques to answer these questions. We have applied stylometry to difficult domains such as underground hacker forums, open source projects (code), and tweets. I will discuss our Doppelgänger Finder algorithm, which enables us to group Sybil accounts on underground forums and detect blogs from Twitter feeds and reddit comments. In addition, I will discuss our work attributing unknown source code and binaries.
Many organizations process personal information in the course of normal operations. Improper disclosure of this information can be damaging, so organizations must obey privacy laws and regulations that impose restrictions on its release or risk penalties. Since electronic management of personal information must be held in strict compliance with the law, software systems designed for such purposes must have some guarantee of compliance. To support this, we develop a general methodology for designing and implementing verifiable information systems. This paper develops the design of the History Aware Programming Language into a framework for creating systems that can be mechanically checked against privacy specifications. We apply this framework to create and verify a prototypical Electronic Medical Record System (EMRS) expressed as a set of actor components and first-order linear temporal logic specifications in assume-guarantee form. We then show that the implementation of the EMRS provably enforces a formalized Health Insurance Portability and Accountability Act (HIPAA) policy using a combination of model checking and static analysis techniques.