Bibliography
Applications in mobile marketplaces may leak private user information without notification. Existing mobile platforms provide little information on how applications use private user data, making it difficult for experts to validate applications and for users to grant applications access to their private data. We propose a user-aware privacy control approach, which reveals how private information is used inside applications. We compute static information flows and classify them as safe/unsafe based on a tamper analysis that tracks whether private data is obscured before escaping through output channels. This flow information enables platforms to provide default settings that expose private data for only safe flows, thereby preserving privacy and minimizing the decisions required from users. We build our approach into TouchDevelop, an application-creation environment that allows users to write scripts on mobile devices and install scripts published by other users. We evaluate our approach by studying 546 scripts published by 194 users, and the results show that our approach effectively reduces the need to make access-granting choices to only 10.1% (54) of all scripts. We also conduct a user survey involving 50 TouchDevelop users to assess the effectiveness and usability of our approach. The results show that 90% of the users consider our approach useful in protecting their privacy, and 54% prefer our approach over other privacy-control approaches.
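The safe/unsafe flow classification could be sketched as follows (a minimal illustration; the flow representation, names, and prompting policy are assumptions, not TouchDevelop's actual analysis):

```python
# Toy sketch: a static flow is "safe" when the private data is obscured
# (tampered with, e.g. hashed or aggregated) before reaching an output
# channel; the platform only prompts the user when an unsafe flow exists.
from dataclasses import dataclass

@dataclass
class Flow:
    source: str        # private data source, e.g. "location"
    sink: str          # output channel, e.g. "web", "sms"
    obscured: bool     # did an obscuring operation occur on the path?

def classify(flows):
    """Partition static flows into safe and unsafe sets."""
    safe = [f for f in flows if f.obscured]
    unsafe = [f for f in flows if not f.obscured]
    return safe, unsafe

def needs_user_decision(flows):
    """Prompt the user only when at least one unsafe flow exists."""
    _, unsafe = classify(flows)
    return len(unsafe) > 0
```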
Hash-based biometric template protection schemes (BTPS), such as fuzzy commitment, fuzzy vault, and secure sketch, address the privacy-leakage concern of storing plain biometric templates in a database by using cryptographic hash calculations for template verification. However, cryptographic hashes offer only computational security; once cracked, they leak the biometric features protected by these BTPS. Furthermore, existing BTPS are rarely able to detect during a verification process whether a probe template has been leaked from the database (i.e., whether it is being used by an impostor or a genuine user). In this paper we tailor the "honeywords" idea, originally proposed to detect the cracking of hashed passwords, to make biometric template database leakage detectable. Unlike passwords, however, biometric features encoded in a template cannot be renewed after being cracked, and thus cannot be protected by the honeyword idea directly. To apply the honeyword idea to biometrics, diversifiability (and thus renewability) of the biometric features is required. We propose to use BTPS for this purpose and present a machine-learning-based protected template generation protocol that ensures the best anonymity of the generated sugar template (from a user's genuine biometric feature) among the honey ones (from synthesized biometric features).
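The honeyword-style leak detection described above can be sketched roughly as follows (the names, hashing scheme, and honeychecker interface are illustrative assumptions; the paper's protocol additionally uses machine learning to make the honey templates indistinguishable):

```python
# Toy sketch: a user's genuine (sugar) template is stored, hashed,
# among decoy (honey) templates; a separate honeychecker records only
# which index is genuine. A match on a honey template signals that the
# template database has leaked.
import hashlib
import secrets

class Honeychecker:
    """Hardened service that stores only the genuine template's index."""
    def __init__(self):
        self._sugar_index = {}

    def register(self, user, index):
        self._sugar_index[user] = index

    def check(self, user, matched_index):
        return self._sugar_index[user] == matched_index

def enroll(user, sugar_template, honey_templates, checker):
    """Store hashed sugar + honey templates in shuffled order."""
    templates = [sugar_template] + list(honey_templates)
    order = list(range(len(templates)))
    secrets.SystemRandom().shuffle(order)
    stored = [hashlib.sha256(templates[i]).hexdigest() for i in order]
    checker.register(user, order.index(0))  # position of the sugar template
    return stored

def verify(user, probe, stored, checker):
    h = hashlib.sha256(probe).hexdigest()
    for i, t in enumerate(stored):
        if t == h:
            return "ok" if checker.check(user, i) else "ALARM: database leak"
    return "reject"
```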
The smart grid aims to improve the efficiency, reliability, and safety of the electric system via modern communication systems. To process and store the resulting data, it is necessary to utilize cloud computing; integrating the smart grid with cloud computing is a promising paradigm. However, access to a cloud computing system also brings data security issues. This paper focuses on protecting user privacy in smart meter systems through data combination privacy and a trusted third party. The paper examines the security issues of smart grid communication systems and of cloud computing respectively, and illustrates the security issues arising from their integration. We introduce data chunk storage and chunk relationship confusion to protect user privacy, and we propose a chunk information list system for inserting and searching data.
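The chunk storage and chunk information list ideas might look roughly like this (a toy sketch; the chunk size, identifiers, and trusted-third-party interface are assumptions, not the paper's design):

```python
# Toy sketch: a record is split into chunks stored in the cloud under
# unlinkable random ids; only the trusted third party's chunk information
# list can relate the chunks back to a record.
import secrets

CHUNK = 4  # bytes per chunk (illustrative)

class TrustedThirdParty:
    """Keeps the chunk information list: record -> ordered chunk ids."""
    def __init__(self):
        self._list = {}

    def insert(self, record_id, chunk_ids):
        self._list[record_id] = chunk_ids

    def search(self, record_id):
        return self._list[record_id]

def store(record_id, data, cloud, ttp):
    """Split data into chunks stored under random, unrelated ids."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    ids = []
    for c in chunks:
        cid = secrets.token_hex(8)
        cloud[cid] = c          # the cloud sees unrelated chunks only
        ids.append(cid)
    ttp.insert(record_id, ids)  # only the TTP can relate them

def retrieve(record_id, cloud, ttp):
    return b"".join(cloud[cid] for cid in ttp.search(record_id))
```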
Most existing approaches focus on examining which values are dangerous for information flow within inter-suspicious modules of cloud applications (apps) on a host by using malware threat analysis, rather than the risk posed by suspicious apps connected to the cloud computing server. Accordingly, this paper proposes a taint propagation analysis model incorporating a weighted spanning tree analysis scheme to track data with taint marking using several taint checking tools. In the proposed model, Android programs perform dynamic taint propagation to analyse the spread of, and risks posed by, suspicious apps connected to the cloud computing server. In determining the risk of taint propagation, risk and defence capability are computed for each taint path to assist a defender in recognising the attack results of network threats caused by malware infection and in estimating the losses of the associated taint sources. Finally, a threat analysis of a typical cyber security attack is presented to demonstrate the proposed approach. Our approach verified the details of an attack sequence for malware infection by incorporating a finite state machine (FSM) to appropriately reflect real situations under various configuration settings and safeguard deployments. The experimental results show that the threat analysis model allows a defender to convert the spread of taint propagation into loss and to practically estimate the risk of a specific threat by using behavioural analysis with real malware infection.
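A toy sketch of weighted taint-path risk scoring in the spirit of the model above (the graph representation, edge probabilities, and loss formula are illustrative assumptions, not the paper's exact scheme):

```python
# Toy sketch: taint propagates over a weighted graph; each edge carries a
# success probability, a path's risk is the product of its edges, and the
# loss estimate for a taint source uses its highest-risk path to the sink.

def taint_paths(graph, node, sink, path=None):
    """Enumerate all taint propagation paths from node to sink (DFS)."""
    path = (path or []) + [node]
    if node == sink:
        return [path]
    paths = []
    for nxt, _ in graph.get(node, []):
        if nxt not in path:  # avoid cycles
            paths.extend(taint_paths(graph, nxt, sink, path))
    return paths

def path_risk(graph, path):
    """Risk of a path = product of its edge success probabilities."""
    risk = 1.0
    for a, b in zip(path, path[1:]):
        risk *= dict(graph[a])[b]
    return risk

def expected_loss(graph, source, sink, loss):
    """Loss estimate: the max-risk taint path times the source's loss value."""
    risks = [path_risk(graph, p) for p in taint_paths(graph, source, sink)]
    return loss * max(risks, default=0.0)
```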
Distributed Denial of Service (DDoS) attacks are one of the most important threats to network systems. Owing to their distributed nature, DDoS attacks are very hard to detect, while they also have the destructive potential of classical denial of service attacks. In this study, a novel two-step system is proposed for the detection of DDoS attacks. In the first step, anomaly detection is performed on the destination IP traffic. If an anomaly is detected in the network, the system proceeds to the second step, where a decision about every user is made based on behaviour models. Hence, it is possible to detect attacks in the network that diverge from users' behaviour models.
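A minimal sketch of the two-step scheme, assuming simple (mean, standard deviation) rate profiles as the behaviour models (thresholds and profile shapes are illustrative assumptions):

```python
# Toy sketch: step 1 flags an anomaly in the aggregate destination traffic;
# step 2 then makes a per-user decision against each user's behaviour model.
import statistics

def traffic_anomaly(history, current, k=3.0):
    """Step 1: flag the destination's traffic when it deviates more than
    k sigmas from its historical packet-rate baseline."""
    mu = statistics.fmean(history)
    sigma = statistics.pstdev(history) or 1e-9
    return abs(current - mu) / sigma > k

def classify_users(user_rates, profiles, k=3.0):
    """Step 2: flag users whose current rate diverges from their
    (mean, std) behaviour profile."""
    suspects = []
    for user, rate in user_rates.items():
        mu, sigma = profiles.get(user, (0.0, 1e-9))
        if abs(rate - mu) / max(sigma, 1e-9) > k:
            suspects.append(user)
    return suspects
```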
Privacy is a paramount concern in many contexts, especially for sensitive data, and databases are targeted incessantly for vulnerabilities. A database must be persistently monitored to ensure comprehensive security. The proposed model is intended to preserve database privacy by thwarting intrusions and inferences. The Database Static Protection and Intrusion Tolerance Subsystems proposed in the architecture support this practice. This paper presents the Privacy Cherished Database architecture model and shows how it achieves security under various circumstances.
In many Twitter applications, developers collect only a limited sample of tweets and a local portion of the Twitter network. Given such Twitter applications with limited data, how can we classify Twitter users as either bots or humans? We develop a collection of network-, linguistic-, and application-oriented variables that could be used as possible features, and identify specific features that distinguish well between humans and bots. In particular, by analyzing a large dataset relating to the 2014 Indian election, we show that a number of sentiment-related factors are key to the identification of bots, significantly increasing the Area Under the ROC Curve (AUROC). The same method may be used for other applications as well.
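The AUROC reported above can be computed directly from classifier scores via the rank-sum formulation; a minimal sketch (the sentiment features themselves are not reproduced here):

```python
# AUROC as the probability that a randomly chosen bot scores higher than
# a randomly chosen human, counting ties as one half (Mann-Whitney view).

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]   # bots
    neg = [s for s, y in zip(scores, labels) if y == 0]   # humans
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```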
As any veteran of the editor wars can attest, Unix users can be fiercely and irrationally attached to the commands they use and the manner in which they use them. In this work, we investigate the problem of identifying users out of a large set of candidates (25-97) through their command-line histories. Using standard algorithms and feature sets inspired by the natural language authorship attribution literature, we demonstrate conclusively that individual users can be identified with a high degree of accuracy through their command-line behavior. Further, we report on the best performing feature combinations, from the many thousands that are possible, both in terms of accuracy and generality. We validate our work by experimenting on three user corpora comprising data gathered over three decades at three distinct locations. These are the Greenberg user profile corpus (168 users), the Schonlau masquerading corpus (50 users), and the Cal Poly command history corpus (97 users). The first two are well-known corpora published in 1991 and 2001, respectively. The last was developed by the authors in a year-long study in 2014 and represents the most recent corpus of its kind. For a 50-user configuration, we find feature sets that can successfully identify users with over 90% accuracy on the Cal Poly, Greenberg, and one variant of the Schonlau corpus, and over 87% on the other Schonlau variant.
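One feature family from the authorship-attribution literature, command bigrams, can be sketched with a simple nearest-profile classifier (the similarity measure and toy profiles are illustrative; the papers evaluate many thousands of feature combinations):

```python
# Toy sketch: build a command-bigram frequency profile per user, then
# attribute an unseen history to the user with the most similar profile.
import math
from collections import Counter

def bigrams(commands):
    """Command bigram counts as the feature vector."""
    return Counter(zip(commands, commands[1:]))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def identify(history, profiles):
    """Attribute a command history to the most similar user profile."""
    feats = bigrams(history)
    return max(profiles, key=lambda u: cosine(feats, profiles[u]))
```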
Communication in a Mobile Ad hoc Network (MANET) takes place over a shared wireless channel with no Central Authority (CA) to monitor it. The responsibility of maintaining the integrity and secrecy of data therefore falls on the nodes in the network. To attain the goal of trusted communication in MANETs, many approaches using key management have been implemented. This work proposes a composite identity and trust based model (CIDT) which depends on the public key, physical identity, and trust of a node, and which supports secure data transfer over wireless channels. CIDT is a modified DSR routing protocol for achieving security. The trust factor of a node, along with its key pair and identity, is used to authenticate the node in the network. An experience-based trust factor (TF) is used to decide the authenticity of a node, and a valid certificate is generated for each authentic node to carry out communication in the network. The proposed method also works well as a self-certification scheme for nodes in the network.
The hardware and low-level software in many mobile devices are capable of mobile-to-mobile communication, including ad-hoc 802.11, Bluetooth, and cognitive radios. We have started to leverage this capability to provide interpersonal communication both over infrastructure networks (the Internet), and over ad-hoc and delay-tolerant networks composed of the mobile devices themselves. This network is decentralized in the sense that it can function without any infrastructure, but does take advantage of infrastructure connections when available. All interpersonal communication is encrypted and authenticated so packets may be carried by devices belonging to untrusted others. The decentralized model of security builds a flexible trust network on top of the social network of communicating individuals. This social network can be used to prioritize packets to or from individuals closely related by the social network. Other packets are prioritized to favor packets likely to consume fewer network resources. Each device also has a policy that determines how many packets may be forwarded, with the goal of providing useful interpersonal communications using at most 1% of any given resource on mobile devices. One challenge in a fully decentralized network is routing. Our design uses Rendezvous Points (RPs) and Distributed Hash Tables (DHTs) for delivery over infrastructure networks, and hop-limited broadcast and Delay Tolerant Networking (DTN) within the wireless ad-hoc network.
One of the primary challenges when developing or implementing a security framework for any particular environment is determining the efficacy of the implementation. Does the implementation address all of the potential vulnerabilities in the environment, or are there still unaddressed issues? Further, if there is a choice between two frameworks, what objective measure can be used to compare the frameworks? To address these questions, we propose utilizing a technique of attack graph analysis to map the attack surface of the environment and identify the most likely avenues of attack. We show that with this technique we can quantify the baseline state of an application and compare that to the attack surface after implementation of a security framework, while simultaneously allowing for comparison between frameworks in the same environment or a single framework across multiple applications.
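Quantifying the attack surface before and after a mitigation might be sketched as path counting over an attack graph (a toy metric; the analysis described above is richer than raw path counts):

```python
# Toy sketch: model the environment as a directed attack graph, count the
# distinct attack paths from an entry point to a target asset, and compare
# the count before and after removing a mitigated attack step.

def attack_paths(graph, node, target, seen=()):
    """Count distinct acyclic attack paths from node to the target asset."""
    if node == target:
        return 1
    seen = seen + (node,)
    return sum(attack_paths(graph, n, target, seen)
               for n in graph.get(node, []) if n not in seen)

def mitigate(graph, edge):
    """Remove one attack step (e.g. a patched vulnerability) from the graph."""
    src, dst = edge
    return {k: [n for n in v if not (k == src and n == dst)]
            for k, v in graph.items()}
```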
In this work, a new fingerprinting-based localization algorithm is proposed for an underwater medium by utilizing ultra-wideband (UWB) signals. In many conventional underwater systems, localization is accomplished by utilizing acoustic waves. Electromagnetic waves, on the other hand, have not been employed for underwater localization due to the high attenuation of the signal in water. However, it is possible to use UWB signals for short-range underwater localization. In this work, the feasibility of localization in an underwater medium is illustrated using a fingerprinting-based approach. By employing the concept of compressive sampling, we propose a sparsity-based localization method, for which we define a system model that exploits spatial sparsity.
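With a 1-sparse position vector over the fingerprint grid, a single matching-pursuit step reduces to picking the best-correlated fingerprint; a toy sketch (the grid and dictionary values are invented):

```python
# Toy sketch: the position vector is 1-sparse over the grid of surveyed
# points, so one orthogonal-matching-pursuit iteration selects the grid
# point whose stored fingerprint best matches the measurement.
import math

def correlate(measurement, fingerprint):
    dot = sum(x * y for x, y in zip(measurement, fingerprint))
    norm = math.sqrt(sum(y * y for y in fingerprint))
    return abs(dot) / norm if norm else 0.0

def localize(measurement, fingerprints):
    """Return the grid point with the best-matching fingerprint."""
    return max(fingerprints,
               key=lambda p: correlate(measurement, fingerprints[p]))
```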
Recent events have brought to light the increasingly intertwined nature of modern infrastructures. As a result, much effort is being put towards protecting these vital infrastructures, without which modern society suffers dire consequences. These infrastructures, due to their intricate nature, behave in complex ways. Improving their resilience and understanding their behavior requires a collaborative effort between the private sector that operates these infrastructures and the government sector that regulates them. This collaboration, in the form of information sharing, requires a new type of information network whose goal is twofold: to enable infrastructure operators to share status information among interdependent infrastructure nodes, and to allow the sharing of vital information concerning threats and other contingencies in the form of alerts. A communication model that meets these requirements while maintaining flexibility and scalability is presented in this paper.
Formal techniques provide exhaustive design verification, but computational margins have an important negative impact on their efficiency. Sequential equivalence checking is an effective approach, but traditionally it has been applied only between circuit descriptions with a one-to-one correspondence of states. Applying it between RTL descriptions and high-level reference models requires removing signals, variables, and states exclusive to the RTL description so as to comply with the state-correspondence restriction. In this paper, we extend a previous formal methodology for RTL verification with high-level models to also check the signals and protocol implemented in the RTL design. The protocol implementation is compared formally against a description captured from the specification. Thus, we can thoroughly prove the sequential behavior of a design under verification.
This paper presents verification and model-based checking of the Trivial File Transfer Protocol (TFTP). Model checking is a technique for software verification that can detect concurrency defects within appropriate constraints by performing an exhaustive state-space search on a software design or implementation, alerting the implementing organization to potential design deficiencies that are otherwise difficult to discover. TFTP is implemented on top of the Internet User Datagram Protocol (UDP) or any other datagram protocol. We aim to create a design model of the TFTP protocol with an added window size, using Promela, to simulate it and to validate specified properties using SPIN. The verification has been carried out with the model checking tool SPIN, which accepts design specifications written in the verification language Promela. The results show that TFTP is free of livelocks.
Cloud computing is gaining ground and becoming one of the fastest growing segments of the IT industry. However, while its numerous advantages are mainly used to support legitimate activity, it is now also exploited for a use it was not meant for: malicious users leverage its power and fast provisioning to turn it into an attack support. Botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use, since they can be set up on demand and at very large scale without requiring a long dissemination phase or expensive deployment costs. For cloud service providers, preventing their infrastructure from being turned into an Attack-as-a-Service delivery model is very challenging, since it requires detecting threats at the source, in a highly dynamic and heterogeneous environment. In this paper, we present the results of an experiment campaign we performed in order to understand the operational behavior of a botcloud used for a DDoS attack. The originality of our work resides in the consideration of system metrics that, while never considered in state-of-the-art botnet detection, can be leveraged in the context of a cloud to enable source-based detection. Our study considers attacks based on both TCP-flood and UDP-storm, and for each of them we provide statistical results, based on a principal component analysis, that highlight the recognizable behavior of a botcloud as compared to other legitimate workloads.
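The principal component analysis of system metrics can be sketched with plain power iteration (the metrics and numbers below are invented; they only illustrate how a botcloud's traffic-heavy samples separate from legitimate workloads along the first component):

```python
# Toy sketch: center metric samples, find the dominant PCA direction by
# power iteration on the covariance matrix, and project samples onto it.

def center(data):
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    return [[row[j] - means[j] for j in range(d)] for row in data]

def principal_component(data, iters=200):
    """Dominant eigenvector of the sample covariance (up to sign)."""
    x = center(data)
    d = len(x[0])
    v = [1.0] * d
    for _ in range(iters):
        # w = (X^T X) v; the overall scale is irrelevant after normalization
        proj = [sum(row[j] * v[j] for j in range(d)) for row in x]
        w = [sum(p * row[j] for p, row in zip(proj, x)) for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

def score(sample, means, v):
    """Projection of one metric sample onto the first component."""
    return sum((s - m) * c for s, m, c in zip(sample, means, v))
```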
Recently, there has been a pronounced increase of interest in the field of renewable energy. In this area, power inverters are crucial building blocks in the segment of energy converters, since they change direct current (DC) to alternating current (AC). Grid-connected power inverters should operate in synchronism with the grid voltage. In this paper, the structure of a power system based on adaptive filtering is described. The main purpose of the adaptive filter is to adapt the output signal of the inverter to the corresponding load and/or grid signal. By involving adaptive filtering, the response time decreases and the quality of power delivered to the load or grid increases. A comparative analysis of power system operation with and without adaptive filtering is given. In addition, the impact of variable load impedance on the quality of delivered power is considered. Results relating to the total harmonic distortion (THD) factor are obtained with Matlab/Simulink software.
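The adaptation step can be illustrated with a standard least-mean-squares (LMS) filter tracking a grid reference signal (a generic LMS sketch, not the paper's controller; the tap count and step size are arbitrary):

```python
# Toy sketch: an LMS adaptive FIR filter adjusts its weights so the
# filtered inverter signal follows the grid reference; the error should
# shrink as the filter converges.
import math

def lms_track(reference, signal, taps=4, mu=0.1):
    """Run LMS adaptation; return the error at each time step."""
    w = [0.0] * taps
    buf = [0.0] * taps
    errors = []
    for x, d in zip(signal, reference):
        buf = [x] + buf[:-1]                      # shift in the new sample
        y = sum(wi * xi for wi, xi in zip(w, buf))  # filter output
        e = d - y                                 # tracking error
        w = [wi + mu * e * xi for wi, xi in zip(w, buf)]  # LMS update
        errors.append(e)
    return errors
```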
This paper presents a unified approach for the detection of network anomalies. Current state-of-the-art methods are often able to detect one class of anomalies only at the cost of others. Our approach is based on using a Linear Dynamical System (LDS) to model network traffic. An LDS is the continuous-valued counterpart of the Hidden Markov Model (HMM) and can be computed using incremental methods to manage the high throughput (volume) and velocity that characterize Big Data. Detailed experiments on synthetic and real network traces show a significant improvement in detection capability over competing approaches. In the process, we also address the robustness of network anomaly detection systems in a principled fashion.
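The incremental LDS filtering can be sketched with a scalar Kalman recursion that flags large normalized innovations (the model parameters and threshold are illustrative assumptions, not the paper's configuration):

```python
# Toy sketch: model traffic as x_t = a*x_{t-1} + w, y_t = x_t + v, filter
# it incrementally, and flag a time step when its normalized innovation
# (one-step prediction error) exceeds k standard deviations.

def lds_anomalies(series, a=0.9, q=1.0, r=1.0, k=4.0):
    x, p = series[0], 1.0          # state estimate and its variance
    flags = []
    for y in series[1:]:
        x_pred = a * x             # predict
        p_pred = a * a * p + q
        innov = y - x_pred         # innovation and its variance
        s = p_pred + r
        flags.append(abs(innov) / s ** 0.5 > k)
        gain = p_pred / s          # update
        x = x_pred + gain * innov
        p = (1 - gain) * p_pred
    return flags
```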