Biblio
Privacy is a very active subject of research and of debate in political circles. To make good decisions about privacy, we need systems for measuring it. Most traditional measures, such as k-anonymity, lack expressiveness in many cases. We present a privacy measurement framework that can be used to measure the value of privacy to an individual and to evaluate the efficacy of privacy enhancing technologies. Our method is centered on a subject, whose privacy is measured through the amount and value of information learned about the subject by some observers. This gives rise to interesting probabilistic models for the value of privacy and measures for privacy enhancing technologies.
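As a hedged illustration of the core idea (not the authors' exact model), the information an observer learns about a subject can be quantified as the reduction in the observer's uncertainty over a sensitive attribute; the attribute and the probabilities below are hypothetical.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical example: an observer's belief about a subject's illness,
# before and after observing a released (e.g., anonymized) record.
prior = {"flu": 0.25, "cold": 0.25, "cancer": 0.25, "healthy": 0.25}
posterior = {"flu": 0.70, "cold": 0.20, "cancer": 0.05, "healthy": 0.05}

# Privacy loss measured as the observer's information gain about the subject.
privacy_loss = entropy(prior) - entropy(posterior)
print(f"information learned by observer: {privacy_loss:.2f} bits")
```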
We report on our research on proving the security of multi-party cryptographic protocols using the EASYCRYPT proof assistant. We work in the computational model using the sequence-of-games approach, and define honest-but-curious (semi-honest) security using a variation of the real/ideal paradigm in which, for each protocol party, an adversary chooses protocol inputs in an attempt to distinguish the party's real and ideal games. Our proofs are information-theoretic, rather than being based on complexity theory and computational assumptions. We employ oracles (e.g., random oracles for hashing) whose encapsulated states depend on dynamically made, non-programmable random choices. By limiting an adversary's oracle use, one may obtain concrete upper bounds on the distances between a party's real and ideal games, expressed in terms of game parameters. Furthermore, our proofs work for adaptive adversaries: ones that, when choosing the value of a protocol input, may condition this choice on their current protocol view and oracle knowledge. We provide an analysis in EASYCRYPT of a three-party private count retrieval protocol, and emphasize the lessons learned from completing this proof.
Provenance forgery and packet drop attacks are regarded as threats in large-scale wireless sensor networks, which are deployed for diverse application domains. The variety of data sources makes it necessary to ensure the trustworthiness of data, so that only truthful information is considered in the decision process. Details about the sensor nodes play a major role in computing their trust values. In this paper, a novel lightweight secure provenance method is introduced for improving the security of provenance data transmission. The proposed system comprises provenance verification and reconstruction at the base station, by means of secure provenance encoding based on the Merkle-Hellman knapsack algorithm in a Bloom filter framework. Side Channel Monitoring (SCM) is used to detect the presence of selfish nodes and packet drop behavior. This lightweight secure provenance method decreases energy and bandwidth utilization while providing efficient storage and secure data transmission. The experimental results establish the efficacy and efficiency of the secure provenance system by reliably detecting provenance forgery and packet drop attacks, as seen from the assessment in terms of provenance verification failure rate, collection error, packet drop rate, space complexity, energy consumption, true positive rate, false positive rate and packet drop attack detection.
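As an illustrative sketch of the Bloom filter side of such a design (the Merkle-Hellman knapsack encoding and the SCM module are omitted, and the filter size, hash construction and node IDs are assumptions for the example): each forwarding node embeds its ID in an in-packet Bloom filter, and the base station verifies the expected path against it.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter; size and hash construction are illustrative only."""
    def __init__(self, m=256, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _indices(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits |= 1 << idx

    def __contains__(self, item):
        return all(self.bits >> idx & 1 for idx in self._indices(item))

# Each forwarding sensor node embeds its ID into the in-packet Bloom filter;
# the base station later verifies the expected path against the filter.
ibf = BloomFilter()
for node_id in ["n17", "n42", "n03"]:        # packet's actual path
    ibf.add(node_id)

expected_path = ["n17", "n42", "n03"]
print("provenance verified:", all(n in ibf for n in expected_path))
print("forged hop detected:", "n99" not in ibf)
```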
This paper combines the FMEA and N² approaches to create a methodology for determining the risks associated with the components of an underwater system. The methodology is based on defining the risk level of each of the components and interfaces belonging to a complex underwater system. As far as the authors know, this approach has not been reported before. The information resulting from the two procedures is combined to find the system's critical elements and the interfaces most affected by each failure mode. Finally, a calculation is performed to determine the severity level of each failure mode based on the system's critical elements.
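A minimal sketch of how FMEA scores and N² interface counts might be combined; the severity × occurrence × detection risk priority number is standard FMEA, but the components, scores and the coupling-weighted combination rule below are illustrative assumptions, not the paper's exact formula.

```python
# Hypothetical components of an underwater system with FMEA scores (1-10)
# and an interface count read off an N^2 chart. Weighting the RPN by the
# interface count is an illustrative assumption, not the paper's rule.
components = {
    #  name          severity  occurrence  detection  interfaces
    "thruster":      (8,        4,          3,          5),
    "pressure_hull": (10,       2,          6,          2),
    "sonar":         (6,        5,          4,          4),
}

for name, (sev, occ, det, ifaces) in components.items():
    rpn = sev * occ * det                 # classic FMEA risk priority number
    criticality = rpn * ifaces            # weight by N^2 interface coupling
    print(f"{name:14s} RPN={rpn:4d} criticality={criticality}")
```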
Existing generative adversarial networks (GANs) use different criteria to distinguish between real and fake samples, such as probability [9], energy [44], or other losses [30]. In this paper, by employing the merits of deep metric learning, we propose a novel metric-based generative adversarial network (MBGAN), which uses a distance criterion to distinguish between real and fake samples. Specifically, the discriminator of MBGAN adopts a triplet structure and learns a deep nonlinear transformation that maps input samples into a new feature space. In the transformed space, the distance between real samples is minimized, while the distance between real and fake samples is maximized. Similar to the adversarial procedure of existing GANs, a generator is trained to produce synthesized examples that are close to real examples, while a discriminator is trained to push the distance between real and fake samples beyond a large margin. Meanwhile, instead of using a fixed margin, we adopt a data-dependent margin [30], so that the generator can focus on improving synthesized samples of poor quality instead of wasting energy on well-produced samples. Our proposed method is verified on various benchmarks, such as CIFAR-10, SVHN and CelebA, and generates high-quality samples.
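A minimal sketch of a triplet-style discriminator objective with a data-dependent margin, in the spirit of the description above; the embeddings are random stand-ins and the margin rule is an assumption, not MBGAN's exact formulation.

```python
import numpy as np

def triplet_d_loss(f_anchor, f_real, f_fake, margin):
    """Discriminator loss: pull real embeddings together, push fake embeddings
    beyond a margin. Inputs are batches of embeddings of shape (N, d)."""
    d_pos = np.linalg.norm(f_anchor - f_real, axis=1)   # real vs. real
    d_neg = np.linalg.norm(f_anchor - f_fake, axis=1)   # real vs. fake
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

# Data-dependent margin (an assumption in the spirit of [30]): scale the
# margin by how far the current fakes already are from the reals, so training
# focuses on poor-quality samples rather than well-produced ones.
rng = np.random.default_rng(0)
f_a, f_r = rng.normal(size=(32, 64)), rng.normal(size=(32, 64))
f_f = rng.normal(loc=0.5, size=(32, 64))
margin = np.linalg.norm(f_a - f_f, axis=1).mean()       # data-dependent
print("D loss:", triplet_d_loss(f_a, f_r, f_f, margin))
# The generator is trained with the opposite objective: minimize d_neg.
```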
Denial-of-Service attacks have rapidly increased in frequency and intensity, steadily becoming one of the biggest threats to Internet stability and reliability. However, a rigorous comprehensive characterization of this phenomenon, and of countermeasures to mitigate the associated risks, faces many infrastructure and analytic challenges. We make progress toward this goal by introducing and applying a new framework to enable a macroscopic characterization of attacks, attack targets, and DDoS Protection Services (DPSs). Our analysis leverages data from four independent global Internet measurement infrastructures over the last two years: backscatter traffic to a large network telescope; logs from amplification honeypots; a DNS measurement platform covering 60% of the current namespace; and a DNS-based data set focusing on DPS adoption. Our results reveal the massive scale of the DoS problem, including the eye-opening statistic that one-third of all /24 networks recently estimated to be active on the Internet have suffered at least one DoS attack over the last two years. We also discovered that targets are often simultaneously hit by different types of attacks. In our data, Web servers were the most prominent attack target; an average of 3% of the Web sites in .com, .net, and .org were involved in attacks daily. Finally, we shed light on factors influencing migration to a DPS.
In this paper, we focus on security issues and challenges in the smart grid. Smart grid security must address not only deliberate attacks, but also inadvertent compromises of the information infrastructure due to user errors, equipment failures, and natural disasters. An important component of the smart grid is the advanced metering infrastructure (AMI), which is critical for supporting two-way communication of real-time information for better electricity generation, distribution and consumption. These reasons make security a prominent concern for the AMI. In recent times, attacks on the smart grid have been modelled using attack trees. The attack tree has been extensively used as an efficient and effective tool to model security threats and vulnerabilities in systems where the ultimate goal of an attacker can be divided into a set of multiple concrete or atomic sub-goals. The sub-goals are related to each other as either AND-siblings or OR-siblings, which essentially indicates whether some or all of the sub-goals must be attained for the attacker to reach the goal. On the other hand, a security professional needs to find the most effective way to address the security issues in the system under consideration. It is reasonable to assume that each attack prevention strategy incurs some cost, and the utility company will always look to minimize it. We present a cost-effective mechanism to identify the minimum number of potential atomic attacks in an attack tree.
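A minimal sketch of the AND/OR semantics: the smallest set of atomic attacks that reaches the attacker's goal is the cheapest choice among children at OR nodes and the union of children at AND nodes. The AMI attack tree below is invented for illustration.

```python
# Compute the smallest set of atomic (leaf) attacks that suffices to reach
# the attacker's goal. OR nodes need one satisfied child; AND nodes need all.

def min_attack_set(node):
    if "leaf" in node:                       # atomic attack
        return {node["leaf"]}
    child_sets = [min_attack_set(c) for c in node["children"]]
    if node["op"] == "OR":                   # cheapest single child suffices
        return min(child_sets, key=len)
    return set().union(*child_sets)          # AND: every child is required

goal = {"op": "OR", "children": [
    {"op": "AND", "children": [{"leaf": "spoof meter ID"},
                               {"leaf": "inject false readings"}]},
    {"op": "AND", "children": [{"leaf": "steal operator creds"},
                               {"leaf": "access head-end system"},
                               {"leaf": "issue disconnect cmd"}]},
]}
print("attacker's cheapest plan:", min_attack_set(goal))
# Preventing (at some cost) any one atomic attack in this set blocks the plan.
```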
Understanding and fending off attack campaigns against organizations, companies and individuals has become a global struggle. As today's threat actors become more determined and organized, isolated efforts to detect and reveal threats are no longer effective. Although challenging, this situation can be significantly changed if information about security incidents is collected, shared and analyzed across organizations. To this end, different exchange data formats such as STIX, CyBOX, or IODEF have been proposed recently, and numerous CERTs are adopting these threat intelligence standards to share tactical and technical threat insights. However, managing, analyzing and correlating the vast amount of data available from different sources to identify relevant attack patterns still remains an open problem. In this paper we present Mantis, a platform for threat intelligence that enables the unified analysis of different standards and the correlation of threat data through a novel type-agnostic similarity algorithm based on attributed graphs. Its unified representation allows the security analyst to discover similar and related threats by linking patterns shared between seemingly unrelated attack campaigns through queries of different complexity. We evaluate the performance of Mantis as an information retrieval system for threat intelligence in different experiments. In an evaluation with over 14,000 CyBOX objects, the platform enables retrieving relevant threat reports with a mean average precision of 80%, given only a single object from an incident, such as a file or an HTTP request. We further illustrate the performance of this analysis in two case studies with the attack campaigns Stuxnet and Regin.
Detecting botnets and advanced persistent threats is a major challenge for network administrators. An important component of such malware is the command and control channel, which enables the malware to respond to controller commands. Detecting malware command and control channels could help prevent further malicious activity by the cyber criminals using the malware. Detection of malware in network traffic is traditionally carried out by identifying specific patterns in packet payloads. However, bot writers now encrypt the command and control payloads, making pattern recognition a less effective form of detection. This paper focuses instead on an effective anomaly-based detection technique for bots and advanced persistent threats, using a data mining approach combined with applied classification algorithms. After additional tuning, in a final test on an unseen dataset, false positive rates of 0% with malware detection rates of 100% were achieved for the two examined malware threats, with promising results on a number of other threats.
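A hedged sketch of the general approach (not the paper's exact features, dataset or classifier): train a classifier on flow-level features that survive payload encryption, such as packet sizes and timing, and flag C&C-like flows. The data below is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic stand-in data: [mean pkt size, pkt interval variance, out/in ratio].
# C&C beacons tend to be small, regular and asymmetric; values are invented.
benign = rng.normal([800, 0.5, 1.0], [200, 0.2, 0.3], size=(500, 3))
cnc    = rng.normal([120, 0.05, 4.0], [40, 0.02, 1.0], size=(500, 3))
X = np.vstack([benign, cnc])
y = np.array([0] * 500 + [1] * 500)      # 0 = benign flow, 1 = C&C flow

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))
```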
Like any other software engineering activity, assessing the security of a software system entails prioritizing resources and minimizing risks. Techniques ranging from manual inspection to automated static and dynamic analyses are commonly employed to identify security vulnerabilities prior to the release of the software. However, none of these techniques is perfect: static analysis is prone to producing many false positives and false negatives, while dynamic analysis and manual inspection are unwieldy in terms of both required time and cost. This research aims to improve these techniques by mining relevant information from vulnerabilities found in the app markets. The approach relies on the fact that many modern software systems, in particular mobile software, are developed using rich application development frameworks (ADFs), allowing us to raise the level of abstraction for detecting vulnerabilities and thereby making it possible to classify the types of vulnerabilities encountered in a given category of application. By coupling this type of information with the severity of the vulnerabilities, we are able to improve the efficiency of static and dynamic analyses, and to target the manual effort on the riskiest vulnerabilities.
Relationship-based access control (ReBAC) provides a high level of expressiveness and flexibility that promotes security and information sharing. We formulate ReBAC as an object-oriented extension of attribute-based access control (ABAC) in which relationships are expressed using fields that refer to other objects, and path expressions are used to follow chains of relationships between objects. ReBAC policy mining algorithms have the potential to significantly reduce the cost of migration from legacy access control systems to ReBAC, by partially automating the development of a ReBAC policy from an existing access control policy and attribute data. This paper presents an algorithm for mining ReBAC policies from access control lists (ACLs) and attribute data represented as an object model, along with an evaluation of the algorithm on four sample policies and two large case studies. Our algorithm can be adapted to mine ReBAC policies from access logs and object models. It is the first algorithm for these problems.
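A minimal sketch of the ReBAC formulation described above, with relationships as object fields and a path expression following a chain between objects; the objects and the sample rule are hypothetical.

```python
# Relationships are fields referring to other objects; a policy rule grants
# access when a path expression from the resource back to the subject holds.

class Obj:
    def __init__(self, **fields):
        self.__dict__.update(fields)

def follow(obj, path):
    """Follow a dotted chain of relationship fields, e.g. 'ward.doctor'."""
    for field in path.split("."):
        obj = getattr(obj, field, None)
        if obj is None:
            return None
    return obj

alice  = Obj(name="alice")
ward   = Obj(doctor=alice)
record = Obj(patient=Obj(ward=ward))

# Hypothetical rule: a doctor may read a record of a patient in their ward.
def may_read(subject, resource):
    return follow(resource, "patient.ward.doctor") is subject

print(may_read(alice, record))   # True
```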
The notion of style is pivotal to literature. The choice of a certain writing style moulds and enhances the overall character of a book. Stylometry uses statistical methods to analyze literary style. This work aims to build a recommendation system based on the similarity of stylometric cues across authors. The problem at hand is closely related to the authorship attribution problem. We follow a supervised approach, using an initial corpus of books labelled with their respective authors as the training set, and generate recommendations based on the misclassified books. The resulting book similarities are substantiated by domain experts.
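A hedged sketch of stylometric similarity for recommendation; the feature set (function-word rates, sentence length) and the nearest-neighbour ranking are illustrative choices, not the paper's exact cues or classifier.

```python
import numpy as np

def style_features(text):
    """Crude stylometric cues: function-word rates and mean sentence length."""
    words = text.lower().split()
    sents = [s for s in text.split(".") if s.strip()]
    func_words = ("the", "of", "and", "to", "in")
    rates = [words.count(w) / max(len(words), 1) for w in func_words]
    avg_sent_len = len(words) / max(len(sents), 1)
    return np.array(rates + [avg_sent_len / 50.0])   # rough normalization

# Toy corpus; in practice the vectors would come from full labelled books.
books = {
    "book_a": "The sea was calm. The ship of the line moved to the north.",
    "book_b": "Rain fell in the valley and the river rose to the old bridge.",
    "book_c": "Run. Hide. Now.",
}
feats = {title: style_features(text) for title, text in books.items()}

query = "book_a"
ranked = sorted((np.linalg.norm(feats[query] - v), t)
                for t, v in feats.items() if t != query)
print("recommendations for", query, "->", [t for _, t in ranked])
```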
Let us consider a scenario in which a data holder (e.g., a hospital) encrypts data (e.g., a medical record) associated with a keyword (e.g., a disease name), and sends the ciphertext to a server. We suppose that not only the data but also the keyword should be kept private. A receiver sends a query to the server (e.g., the average body weight of cancer patients). Then, the server performs a homomorphic operation on the ciphertexts of the corresponding medical records, and returns the resulting ciphertext. In this scenario, the server should NOT be allowed to perform homomorphic operations on ciphertexts associated with different keywords. If such a mis-operation happens, medical records of different diseases are unexpectedly mixed. However, in conventional homomorphic encryption, there is no way to prevent such an unexpected homomorphic operation; the fact may only become visible after decrypting a ciphertext, or, in the most serious case, it might never be detected. To circumvent this problem, in this paper we propose mis-operation resistant homomorphic encryption, where even if one performs homomorphic operations on ciphertexts associated with keywords ω and ω', where ω ≠ ω', the evaluation algorithm detects this fact. Moreover, even if one (intentionally or accidentally) performs homomorphic operations on such ciphertexts, a ciphertext associated with a random keyword is generated, and the decryption algorithm rejects it, so the receiver can recognize that such a mis-operation happened in the evaluation phase. In addition to mis-operation resistance, we adopt secure search functionality for keywords, since this is desirable when one would like to delegate homomorphic operations to a third party. We call the proposed primitive mis-operation resistant searchable homomorphic encryption (MR-SHE). We also give implementation results for inner products of encrypted vectors. When both vectors are encrypted, the running time of the receiver is on the order of milliseconds for relatively low-dimensional (e.g., 2^6) vectors. When one vector is encrypted, the running time of the receiver is approximately 5 msec even for relatively high-dimensional (e.g., 2^13) vectors.
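A toy model of MR-SHE's observable behavior only, with no real cryptography: ciphertexts carry a keyword tag, mismatched tags are detected during evaluation, and a forced mis-operation yields a ciphertext under a random keyword that decryption rejects.

```python
import hashlib, secrets

def tag(keyword):
    return hashlib.sha256(keyword.encode()).hexdigest()

def enc(value, keyword):
    # Stand-in for encryption: in MR-SHE both value and keyword are hidden.
    return {"ct": value, "tag": tag(keyword)}

def add(c1, c2):
    """Homomorphic addition; mixing keywords yields a random-keyword result."""
    if c1["tag"] != c2["tag"]:                         # mis-operation detected
        return {"ct": None, "tag": secrets.token_hex(32)}
    return {"ct": c1["ct"] + c2["ct"], "tag": c1["tag"]}

def dec(c, keyword):
    if c["tag"] != tag(keyword):
        raise ValueError("reject: mis-operation happened during evaluation")
    return c["ct"]

a, b = enc(70, "cancer"), enc(80, "cancer")
print(dec(add(a, b), "cancer"))                        # 150
try:
    dec(add(a, enc(60, "diabetes")), "cancer")         # mixed keywords
except ValueError as e:
    print(e)
```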
Missing values are common in many machine learning problems, and much effort has been made to handle missing data so as to improve the performance of the learned model. Sometimes, however, our task is not to train a model on unlabeled/labeled data with missing values, but to process examples according to the values of some specified features, so there is a pressing need for methods that predict the missing values themselves. In this paper, we focus on learning from the known values to predict each missing value as closely as possible to the true one. Predicting missing values is difficult because we do not know the structure of the data matrix, and some missing values may be related to other missing values. We solve the problem by recovering the complete data matrix under three reasonable constraints: feature relationship, an upper recovery error bound, and class relationship. The proposed algorithm can deal with both unlabeled and labeled data, and a generative adversarial idea is used on labeled data to transfer knowledge. Extensive experiments demonstrate the effectiveness of the proposed algorithms.
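A hedged sketch of one ingredient of such recovery, exploiting feature relationships through a low-rank assumption on the data matrix; the paper's error-bound and class-relationship constraints are omitted, and the data is synthetic.

```python
import numpy as np

def lowrank_impute(X, mask, rank=2, iters=100):
    """Recover missing entries by iterating SVD-truncate-and-refill.
    mask[i, j] is True where X[i, j] is observed."""
    filled = np.where(mask, X, X[mask].mean())        # init with global mean
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        filled = np.where(mask, X, approx)            # keep known values fixed
    return filled

rng = np.random.default_rng(0)
true = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 8))  # rank-2 data
mask = rng.random(true.shape) > 0.3                        # ~30% missing
est = lowrank_impute(true, mask)
print("RMSE on missing entries:",
      np.sqrt(np.mean((est[~mask] - true[~mask]) ** 2)))
```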
Driven by CA compromises and the risk of man-in-the-middle attacks, new security features have been added to TLS, HTTPS, and the web PKI over the past five years. These include Certificate Transparency (CT), for making the CA system auditable; HSTS and HPKP headers, to harden the HTTPS posture of a domain; the DNS-based extensions CAA and TLSA, for control over certificate issuance and pinning; and SCSV, for protocol downgrade protection. This paper presents the first large-scale investigation of these improvements to the HTTPS ecosystem, explicitly accounting for their combined usage. In addition to collecting passive measurements at the Internet uplinks of large university networks on three continents, we perform the largest domain-based active Internet scan to date, covering 193M domains. Furthermore, we track the long-term deployment history of new TLS security features by leveraging passive observations dating back to 2012. We find that while deployment of new security features has picked up in general, only SCSV (49M domains) and CT (7M domains) have gained enough momentum to improve the overall security of HTTPS. Features with higher complexity, such as HPKP, are scarcely and often incorrectly deployed. We place our empirical findings in the context of the risk, deployment effort, and benefit of these new technologies, and propose actionable steps for improvement. We cross-correlate the use of features and find some techniques with significant correlation in deployment. We support reproducible research and publicly release our data and code.
Embedded systems are prone to security attacks owing to the limited resources available for self-protection and to the unsafe languages typically used for application programming. Attacks targeting control flow are among the most common exploits against embedded systems. We propose a hardware-level, effective, and low-overhead countermeasure to mitigate these attacks. In the proposed method, a Built-in Secure Register Bank (BSRB) is added to the processor micro-architecture to store the return addresses of subroutines. Any inconsistency in the return addresses directs the processor to select a clean copy to resume the normal control flow, mitigating the security threat. The proposed countermeasure is inaccessible to the programmer and does not require any compiler support, thus achieving better flexibility than software-based countermeasures. Experimental results show that the proposed method increases area and power by only 3.8% and 4.4%, respectively, over the baseline OpenRISC processor.
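A software simulation of the BSRB idea (the actual mechanism is a hardware register bank): on a call, the return address is stored both on the normal stack and in a secure bank; on return, a mismatch redirects execution to the clean copy. Addresses and structures here are illustrative.

```python
normal_stack, secure_bank = [], []

def call(return_addr):
    normal_stack.append(return_addr)
    secure_bank.append(return_addr)      # programmer-inaccessible copy

def ret():
    addr, clean = normal_stack.pop(), secure_bank.pop()
    if addr != clean:                    # stack smashed, e.g. buffer overflow
        print(f"mismatch: 0x{addr:x} -> resuming at clean 0x{clean:x}")
        return clean
    return addr

call(0x8000_1234)
normal_stack[-1] = 0xdead_beef           # simulated control-flow hijack
print(f"control flow resumes at 0x{ret():x}")
```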
Distributed Denial of Service (DDoS) attacks on web applications have been a persistent threat. Existing approaches for mitigating application layer DDoS attacks have limitations such as low detection rates and an inability to detect attacks targeting resource files. In this work, we propose an application layer DDoS (App-DDoS) attack detection framework that leverages the concepts of Term Frequency-Inverse Document Frequency (TF-IDF) and Latent Semantic Indexing (LSI). The approach involves analyzing web server logs to identify popular pages using TF-IDF; building a normal resource access profile; generating queries of accessed resources; and applying the LSI technique to determine the similarity between a given session and known good sessions. A high level of dissimilarity triggers a DDoS attack warning. We apply the proposed approach to traffic generated from three PHP applications. Initial results suggest that the proposed approach can identify ongoing DDoS attacks against web applications.
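A minimal sketch of the detection pipeline as described: TF-IDF vectors over the resources requested in each session, an LSI projection via truncated SVD, and a dissimilarity check against known good sessions. The sessions and the warning threshold are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Each "document" is the sequence of resources one session requested.
good_sessions = [
    "/index.php /style.css /login.php /dashboard.php",
    "/index.php /style.css /products.php /cart.php",
    "/index.php /products.php /product.php /cart.php /checkout.php",
]
suspect = ["/xmlrpc.php /xmlrpc.php /xmlrpc.php /xmlrpc.php"]   # bot-like

vec = TfidfVectorizer(token_pattern=r"\S+")
X = vec.fit_transform(good_sessions + suspect)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

sim = cosine_similarity(lsi[-1:], lsi[:-1]).max()
print(f"max similarity to good profile: {sim:.2f}")
if sim < 0.5:                      # threshold is an illustrative assumption
    print("DDoS attack warning")
```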
In Advanced Metering Infrastructure (AMI) networks, power data collection from smart meters follows static schedules. Due to this static nature, attackers may predict the transmission behavior of the smart meters, which can be exploited to launch selective jamming attacks that block the transmissions. To avoid such attack scenarios and increase the resilience of AMI networks, in this paper we propose dynamic data reporting schedules for smart meters based on the moving target defense (MTD) paradigm. The idea behind MTD-based schedules is to randomize the transmission times so that attackers cannot guess these schedules. Specifically, we assign a time slot to each smart meter, and in each round we shuffle the slots with the Fisher-Yates shuffle algorithm, which has been shown to provide secure randomness. We also take into account the periodicity of the data transmissions that may be needed by the utility company. With the proposed approach, a smart meter is guaranteed to send its data in a different time slot in each round. We implemented the proposed approach in ns-3 using the IEEE 802.11s wireless mesh standard as the communication infrastructure. Simulation results showed that our protocol can secure the network against selective jamming attacks without sacrificing performance, providing similar or even better collection time, packet delivery ratio and end-to-end delay compared to previously proposed protocols.
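A minimal sketch of the shuffled schedule; Fisher-Yates is the algorithm named above, while random.SystemRandom stands in for the secure randomness the scheme requires, and the extra constraint guaranteeing each meter a slot different from its previous one is omitted.

```python
import random

rng = random.SystemRandom()    # stands in for the secure randomness required

def fisher_yates(items):
    """In-place uniformly random permutation of transmission slots."""
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)              # uniform in [0, i]
        items[i], items[j] = items[j], items[i]
    return items

meters = ["m1", "m2", "m3", "m4", "m5"]
for round_no in range(3):                  # slots re-drawn every round, so an
    order = fisher_yates(meters[:])        # attacker cannot predict them
    print(f"round {round_no}:",
          {f"slot{s}": m for s, m in enumerate(order)})
```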
A Mobile Ad-hoc Network (MANET) is an infrastructure-less network in which nodes can move arbitrarily without the support of any fixed infrastructure. Due to its open boundary, lack of centralized administration, dynamic topology and wireless links, it is vulnerable to various types of attacks, and it faces more threats than conventional networks. AODV (Ad-hoc On-demand Distance Vector) is the most widely used routing protocol in MANETs, and it is threatened by the black hole attack, a serious attack that can be mounted against AODV with little effort. A black hole node falsely replies to every route request despite having no active route to the targeted destination, and drops all packets received from other nodes. If such malicious nodes cooperate with each other as a group, the damage can be severe. In this paper, we present a review of various existing techniques for the detection and mitigation of black hole attacks.
Data deduplication [3] effectively identifies and eliminates redundant data, maintaining only a single copy of files and chunks. Hence, it is widely used in cloud storage systems to save users' network bandwidth when uploading data. However, the occurrence of deduplication can be easily identified by monitoring and analyzing network traffic, which creates a risk of user privacy leakage. An attacker can carry out a very dangerous side channel attack, the learn-the-remaining-information (LRI) attack, to reveal users' private information by exploiting the network traffic side channel of deduplication [1]. In the LRI attack, the attacker knows a large part of a target file in the cloud and tries to learn the remaining unknown parts by uploading all possible versions of the file's content. For example, the attacker knows all the contents of the target file X except the sensitive information θ. To learn the sensitive information, the attacker uploads m files covering all possible values of θ. If the file X_d with the value θ_d is deduplicated and the other files are not, the attacker learns that θ = θ_d. In the threat model of the LRI attack, we consider a general cloud storage service model that includes two entities, the user and the cloud storage server. The attack is launched by users who aim to steal the private information of other users [1]. The attacker can act as a user via its own account, or use multiple accounts to disguise itself as multiple users. The cloud storage server communicates with the users over the Internet. The connections from the clients to the cloud storage server are encrypted with the SSL or TLS protocol. Hence, the attacker can monitor and measure the amount of network traffic between client and server, but cannot intercept and analyze the contents of the transmitted data due to the encryption. The attacker can then perform sophisticated traffic analysis with sufficient computing resources. We propose a simple yet effective scheme, called the randomized redundant chunk scheme (RRCS), to significantly mitigate the risk of the LRI attack while maintaining the high bandwidth efficiency of deduplication. The basic idea behind RRCS is to add randomized redundant chunks to mix up the real deduplication states of the files used for the LRI attack, which effectively obfuscates the view of an attacker attempting to exploit the network traffic side channel. RRCS includes three key function modules: range generation (RG), secure bounds setting (SBS), and security-irrelevant redundancy elimination (SRE). When uploading the randomly chosen number of redundant chunks, RRCS first uses RG to generate a fixed range [0, λN] (λ ∈ (0, 1]), from which the number of added redundant chunks is randomly chosen, where N is the total number of chunks in a file and λ is a system parameter. However, the fixed range may cause a security issue; SBS handles the bounds of the fixed range to avoid it. There may also exist security-irrelevant redundant chunks in RRCS; SRE reduces them to improve deduplication efficiency. The design details are presented in our technical report [5]. Our security analysis demonstrates that RRCS can significantly reduce the risk of the LRI attack [5].
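A minimal sketch of the RG module's obfuscation step as described above (SBS and SRE are omitted, and the parameters are illustrative): the number of redundant chunks added to an upload is drawn uniformly from [0, λN], so observed traffic no longer reveals the true deduplication state.

```python
import secrets

def chunks_to_upload(n_total, n_unique, lam=0.5):
    """n_unique chunks genuinely need uploading; pad with random redundancy
    drawn uniformly from [0, lam * n_total]."""
    bound = int(lam * n_total)
    redundant = secrets.randbelow(bound + 1)     # uniform in [0, bound]
    return min(n_total, n_unique + redundant)

# Repeated uploads of the same 100-chunk file now differ in observable
# traffic, obfuscating whether (and how much of) the file was deduplicated.
for trial in range(3):
    print("chunks sent:", chunks_to_upload(n_total=100, n_unique=10))
```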
We examine the performance of RRCS using three real-world trace-based datasets, i.e., Fslhomes [2], MacOS [2], and Onefull [4], and compare RRCS with the randomized threshold scheme (RTS) [1]. Our experimental results show that source-based deduplication eliminates 100% of the data redundancy but offers no security guarantee. File-level (chunk-level) RTS eliminates only 8.1% – 16.8% (9.8% – 20.3%) of the redundancy, since it only eliminates the redundancy of files (chunks) that have many copies. RRCS with λ = 0.5 eliminates 76.1% – 78.0% of the redundancy, and RRCS with λ = 1 eliminates 47.9% – 53.6%.
Although the telecommunication protocol ARP provides an essential service for data transmission in the network, resolving any host's network layer address to its physical layer address, its stateless nature remains one of the best-known opportunities for the attacker community and an enduring threat to the hosts in the network. ARP cache poisoning results in numerous attacks, of which the most noteworthy are MITM, host impersonation and DoS attacks. This paper surveys various recent mitigation methods and proposes a novel mitigation system for ARP cache poisoning attacks. The proposed system works as follows: for any ARP request or reply message, a time stamp is generated. When such a message is received or sent by a host, the host performs cross-layer inspection and IP-MAC pair matching against the ARP table entry. If the ARP table entry matches and cross-layer consistency is ensured, an ARP reply with the time stamp is sent. If either check marks the packet as bogus, the IP-MAC pair is added to the untrusted list and further packet inspection is carried out to ensure that no attack has been deployed on the network. A time stamp is also recorded for each entry made in the ARP table, which makes ARP stateful. The system is evaluated against criteria specified by the researchers.
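A minimal sketch of the stateful check described above: an incoming ARP reply is validated against the timestamped ARP table and a cross-layer view (the MAC observed in the Ethernet frame header), and mismatches are quarantined on an untrusted list. The data structures are illustrative.

```python
import time

arp_table = {"10.0.0.5": ("aa:bb:cc:dd:ee:01", time.time())}  # ip -> (mac, ts)
untrusted = set()

def on_arp_reply(ip, mac, eth_src_mac):
    entry = arp_table.get(ip)
    cross_layer_ok = (mac == eth_src_mac)       # ARP payload vs. frame header
    if entry and entry[0] == mac and cross_layer_ok:
        arp_table[ip] = (mac, time.time())      # refresh the entry's timestamp
        return "accept"
    untrusted.add((ip, mac))                    # candidate poisoning attempt
    return "quarantine for further packet inspection"

print(on_arp_reply("10.0.0.5", "aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:01"))
print(on_arp_reply("10.0.0.5", "66:66:66:66:66:66", "aa:bb:cc:dd:ee:01"))
```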
Mixr is a novel moving target defense (MTD) system that improves on the traditional address space layout randomization (ASLR) security technique by giving security architects the tools to add "runtime ASLR" to existing software programs and libraries without access to their source code or debugging information, and without requiring changes to the host's linker, loader or kernel. Runtime ASLR systems rerandomize the code of a program/library throughout execution, at rerandomization points and with a particular granularity. The security professional deploying Mixr on a program/library has the flexibility to specify both the frequency of runtime rerandomization and its granularity. For example, she/he can specify that the program rerandomizes itself on 60-byte boundaries every time the write() system call is invoked. The Mixr MTD of runtime ASLR protects binary programs and software libraries that are vulnerable to information leaks and to attacks based on the leaked information. Mixr improves on the state of the art in runtime ASLR systems: it gives the security architect the flexibility to specify the rerandomization points and granularity, and it requires neither access to the target program/library's source code, debugging information or other metadata, nor changes to the host's linker, loader or kernel to execute the protected software. No existing runtime ASLR system offers those capabilities. The tradeoff is that applying the Mixr MTD of runtime ASLR requires successful disassembly of the program, which is not always possible. Moreover, the runtime overhead of a Mixr-protected program is non-trivial. Besides being a tool for implementing the MTD of runtime ASLR, Mixr has the potential to further improve software security in other ways. For example, Mixr could be deployed to inject noise into software to thwart side-channel attacks based on differential power analysis.
Smartphones have become the pervasive personal computing platform, and recent years have witnessed exponential growth in research and development of secure and usable authentication schemes for them. Several explicit (e.g., PIN-based) and implicit (e.g., biometrics-based) authentication methods have been designed and published in the literature, and some have been embedded in commercial mobile products as well. However, the published studies report only the brighter side of the proposed schemes, e.g., the high accuracy attained by the proposed mechanism, while other operational issues, such as computational overhead, robustness to different environmental conditions and attacks, and usability, are intentionally or unintentionally ignored. More specifically, most published frameworks do not discuss or explore any evaluation criteria, usability measures, or environment-related measures beyond accuracy under the zero-effort threat model. Their baseline results thus usually give a false sense of progress. This paper, therefore, presents guidelines to researchers for designing, implementing, and evaluating smartphone user authentication methods, so as to have a positive impact on future technological developments.
Social networking sites such as Flickr, YouTube, and Facebook contain huge amounts of user-contributed data for a variety of real-world events. We describe an unsupervised approach to the problem of automatically detecting subgroups of people holding similar tastes. Item and taste tags play an important role in detecting groups and subgroups: if two or more persons share the same opinion on an item or taste, they tend to use similar content. We consider the latter to be an implicit attitude. In this paper, we investigate the impact of implicit and explicit attitudes in two genres of social media discussion data: more formal Wikipedia discussions and a much more informal debate discussion forum. Experimental results strongly suggest that implicit attitude is an important complement to explicit attitudes (expressed via sentiment) and that it can improve subgroup detection performance independent of genre. We also propose taste-based groups, which can enhance quality of service.
In this paper, we propose principles of information control and sharing that support ORCON (ORiginator COntrolled access control) models while simultaneously improving the components of confidentiality, availability, and integrity needed to inherently support, when needed, responsibility-to-share policies, rapid information dissemination, data provenance, and data redaction. This new paradigm, which provides unfettered and unimpeded access to information by authorized users while making access by unauthorized users impossible, contrasts with historical approaches to information sharing that have focused on the need to know rather than the need (or responsibility) to share.