Biblio

Found 2387 results

Filters: Keyword is human factors
2018-02-15
Fraser, J. G., Bouridane, A..  2017.  Have the security flaws surrounding BITCOIN effected the currency's value? 2017 Seventh International Conference on Emerging Security Technologies (EST). :50–55.

When Bitcoin was first introduced to the world in 2008 by an enigmatic programmer going by the pseudonym Satoshi Nakamoto, it was billed as the world's first decentralized virtual currency. Offering the first credible incarnation of a digital currency, Bitcoin was based on the principle of peer-to-peer transactions involving a complex public address and a private key that only the owner of the coin would know. This paper investigates how the usage and value of Bitcoin are affected by current events in the cyber environment. Is an advancement in the digital security of Bitcoin reflected in the value of the currency and, conversely, does a major security breach have a negative effect? By analyzing statistical data on the market value of Bitcoin at specific points where the currency has fluctuated dramatically, it is believed that trends can be found. This paper proposes that, based on the data analyzed, the current integrity of Bitcoin security is trusted by general users and the value and usage of the currency are growing. All the major fluctuations of the currency can be linked to significant events within the digital security environment; however, these fluctuations are beginning to decrease in frequency and severity. Bitcoin is still a volatile currency, but this paper concludes that this is a result of security flaws in Bitcoin services as opposed to the Bitcoin protocol itself.

Dai, F., Shi, Y., Meng, N., Wei, L., Ye, Z..  2017.  From Bitcoin to cybersecurity: A comparative study of blockchain application and security issues. 2017 4th International Conference on Systems and Informatics (ICSAI). :975–979.

With the accelerated iteration of technological innovation, blockchain has rapidly become one of the hottest Internet technologies in recent years. As a decentralized and distributed data management solution, blockchain has redefined trust through its embedded cryptography and consensus mechanism, providing security, anonymity and data integrity without the need for any third party. But there still exist technical challenges and limitations in blockchain. This paper conducts a systematic study of current blockchain applications in cybersecurity. To address the security issues, the paper analyzes the advantages blockchain brings to cybersecurity and summarizes current research and applications of blockchain in cybersecurity-related areas. Through in-depth analysis and summary of existing work, the paper identifies four major security issues of blockchain and performs a more granular analysis of each problem. Adopting an attribute-based encryption method, the paper also puts forward an enhanced access control strategy.

Zhu, J., Liu, P., He, L..  2017.  Mining Information on Bitcoin Network Data. 2017 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData). :999–1003.

Bitcoin, a major virtual currency, has attracted users' attention in recent years through its novel operating model. With blockchain as its underlying technology, Bitcoin possesses strong security features that anonymize users' identities to protect their private information. However, some criminals exploit Bitcoin for illegal activities, posing a serious security threat to society. It is therefore necessary to understand current Bitcoin trends and work toward de-anonymization. In this paper, we design and implement a system to analyze Bitcoin from two aspects: blockchain data and network traffic data. We parse the blockchain data to analyze Bitcoin from the perspective of Bitcoin addresses, and simulate the Bitcoin P2P protocol to evaluate it from the perspective of IP addresses. Finally, with our system, we analyze current trends and trace transactions by computing statistics on Bitcoin transactions and addresses, tracing transaction flows, and de-anonymizing some Bitcoin addresses to IPs.
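The abstract does not spell out its de-anonymization method; a common starting point for address-level Bitcoin analysis of this kind is the multi-input (common-input-ownership) heuristic, which assumes that addresses spent together as inputs of one transaction share an owner. A minimal union-find sketch of that general heuristic, on made-up transaction data rather than anything from the paper, might look like:

```python
def cluster_addresses(transactions):
    """transactions: list of input-address lists, one list per transaction."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for inputs in transactions:
        for addr in inputs:
            find(addr)                  # register every address
        for addr in inputs[1:]:
            union(inputs[0], addr)      # co-spent inputs share an owner

    clusters = {}
    for addr in parent:
        clusters.setdefault(find(addr), set()).add(addr)
    return list(clusters.values())

# Hypothetical transactions: A1+A2 spent together, then A2+A3, B1 alone.
txs = [["A1", "A2"], ["A2", "A3"], ["B1"]]
print(cluster_addresses(txs))  # {'A1','A2','A3'} merged; {'B1'} separate
```

Real systems layer further heuristics (change-address detection, temporal linking) on top of this basic clustering.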

Zhang, Ren, Preneel, Bart.  2017.  On the Necessity of a Prescribed Block Validity Consensus: Analyzing Bitcoin Unlimited Mining Protocol. Proceedings of the 13th International Conference on Emerging Networking EXperiments and Technologies. :108–119.

Bitcoin has not only attracted many users but also been considered a technical breakthrough by academia. However, the expanding potential of Bitcoin is largely untapped due to its limited throughput. The Bitcoin community is now facing its biggest crisis in history as it splits on how to increase the throughput. Among various proposals, Bitcoin Unlimited (BU) recently became the most popular candidate, as it allows miners to collectively decide the block size limit according to the real network capacity. However, the security of BU is heatedly debated and no consensus has been reached, as the issue is discussed under different miner incentive models. In this paper, we systematically evaluate BU's security under three incentive models by testing the two major arguments of BU supporters: that the block validity consensus is not necessary for BU's security, and that such consensus would emerge in BU out of economic incentives. Our results invalidate both arguments and therefore disprove BU's security claims. Our paper further contributes to the field by addressing the necessity of a prescribed block validity consensus for cryptocurrencies.

Green, Matthew, Miers, Ian.  2017.  Bolt: Anonymous Payment Channels for Decentralized Currencies. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :473–489.
Bitcoin owes its success to the fact that transactions are transparently recorded in the blockchain, a global public ledger that removes the need for trusted parties. Unfortunately, recording every transaction in the blockchain causes privacy, latency, and scalability issues. Building on recent proposals for "micropayment channels" — two-party associations that use the ledger only for dispute resolution — we introduce techniques for constructing anonymous payment channels. Our proposals allow for secure, instantaneous and private payments that substantially reduce the storage burden on the payment network. Specifically, we introduce three channel proposals, including a technique that allows payments via untrusted intermediaries. We build a concrete implementation of our scheme and show that it can be deployed via a soft fork to existing anonymous currencies such as ZCash.
Gentilal, Miraje, Martins, Paulo, Sousa, Leonel.  2017.  TrustZone-backed Bitcoin Wallet. Proceedings of the Fourth Workshop on Cryptography and Security in Computing Systems. :25–28.
With the increasing popularity of virtual currencies, it has become more important to have highly secure devices in which to store private-key information. Furthermore, ARM has made available an extension of its processor architectures, designated TrustZone, which allows for the separation of trusted and non-trusted environments while ensuring the integrity of the OS code. In this paper, we propose the exploitation of this technology to implement a flexible and reliable bitcoin wallet that is more resilient to dictionary and side-channel attacks. Making use of the TrustZone comes with the downside that writing and reading operations become slower, due to the encrypted storage, but we show that cryptographic operations can in fact be executed more efficiently as a result of platform-specific optimizations.
Han, Jordan W., Hoe, Ong J., Wing, Joseph S., Brohi, Sarfraz N..  2017.  A Conceptual Security Approach with Awareness Strategy and Implementation Policy to Eliminate Ransomware. Proceedings of the 2017 International Conference on Computer Science and Artificial Intelligence. :222–226.

Undeterred by the numerous efforts of antivirus software to shield users from various security threats, ransomware is constantly evolving as technology advances. Its impact includes hackers blocking users' access to their data until a ransom is paid to retrieve it. Ransomware also targets multimillion-dollar organizations, where it can cause colossal data loss; such organizations could face catastrophic consequences and cease business operations. This research contributes by spreading awareness of ransomware so that people are alerted and equipped to tackle it. The solution proposed in this research is the conceptual development of a browser extension that warns users of plausible dangers while surfing the Internet, allowing them to browse the web safely. Since the contribution of this research is conceptual, we can assume that technology users will adopt the proposed idea to prevent ransomware attacks on their personal computers once the solution is fully implemented in future research.

Han, Shuchu, Hu, Yifan, Skiena, Steven, Coskun, Baris, Liu, Meizhu, Qin, Hong, Perez, Jaime.  2017.  Generating Look-alike Names For Security Challenges. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. :57–67.
Motivated by the need to automatically generate behavior-based security challenges to improve user authentication for web services, we consider the problem of large-scale construction of realistic-looking names to serve as aliases for real individuals. We aim to use these names to construct security challenges, where users are asked to identify their real contacts among a presented pool of names. We seek these look-alike names to preserve name characteristics like gender, ethnicity, and popularity, while being unlinkable back to the source individual, thereby making the real contacts not easily guessable by attackers. To achieve this, we introduce the technique of distributed name embeddings, representing names in a high-dimensional space such that distance between name components reflects the degree of cultural similarity between these strings. We present different approaches to construct name embeddings from contact lists observed at a large web-mail provider, and evaluate their cultural coherence. We demonstrate that name embeddings strongly encode gender and ethnicity, as well as name popularity. We applied this algorithm to generate imitation names in an email contact-list challenge. Our controlled user study verified that the proposed technique reduced the attacker's success rate to 26.08%, indistinguishable from random guessing, compared to a success rate of 62.16% for previous name generation algorithms. Finally, we use these embeddings to produce an open synthetic name resource of 1 million names for security applications, constructed to respect both cultural coherence and U.S. census name frequencies.
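The paper's embeddings are trained on large-scale contact lists; as a toy illustration of the nearest-neighbor idea behind look-alike selection (not the authors' model), one can pick, for a source name, the closest other name by cosine similarity in some embedding space. The names and vectors below are invented for demonstration:

```python
import math

# Made-up 3-dimensional "embeddings"; real name embeddings are
# high-dimensional and learned from data.
EMBEDDINGS = {
    "maria": [0.90, 0.10, 0.00],
    "marta": [0.85, 0.15, 0.05],
    "john":  [0.10, 0.90, 0.10],
    "jon":   [0.15, 0.85, 0.10],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def look_alike(name, embeddings=EMBEDDINGS):
    """Return the most similar *other* name in embedding space."""
    src = embeddings[name]
    candidates = ((cosine(src, vec), other)
                  for other, vec in embeddings.items() if other != name)
    return max(candidates)[1]

print(look_alike("maria"))  # "marta" with these toy vectors
```

The paper's actual contribution is in learning embeddings where such neighbors preserve gender, ethnicity, and popularity while remaining unlinkable; this sketch only shows the lookup step.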
Brkan, Maja.  2017.  AI-supported Decision-making Under the General Data Protection Regulation. Proceedings of the 16th Edition of the International Conference on Articial Intelligence and Law. :3–8.
The purpose of this paper is to analyse the rules of the General Data Protection Regulation on automated decision making in the age of Big Data and to explore how to ensure transparency of such decisions, in particular those taken with the help of algorithms. The GDPR, in its Article 22, prohibits automated individual decision-making, including profiling. At first impression, it seems that this provision strongly protects individuals and potentially even hampers the future development of AI in decision making. However, it can be argued that this prohibition, containing numerous limitations and exceptions, looks like a Swiss cheese with giant holes in it. Moreover, in the case of automated decisions involving personal data of the data subject, the GDPR obliges the controller to provide the data subject with 'meaningful information about the logic involved' (Articles 13 and 14). If we link this information to the rights of the data subject, we can see that the information about the logic involved needs to enable him/her to express his/her point of view and to contest the automated decision. While this requirement fits well within the broader framework of the GDPR's quest for a high level of transparency, it also raises several queries, particularly in cases where the decision is taken with the help of algorithms: What exactly needs to be revealed to the data subject? How can an algorithm-based decision be explained? Apart from technical obstacles, we also face intellectual property and state secrecy obstacles to this 'algorithmic transparency'.
Jia, Ruoxi, Dong, Roy, Sastry, S. Shankar, Spanos, Costas J..  2017.  Privacy-enhanced Architecture for Occupancy-based HVAC Control. Proceedings of the 8th International Conference on Cyber-Physical Systems. :177–186.

Large-scale sensing and actuation infrastructures have allowed buildings to achieve significant energy savings; at the same time, these technologies introduce significant privacy risks that must be addressed. In this paper, we present a framework for modeling the trade-off between improved control performance and increased privacy risks due to occupancy sensing. More specifically, we consider occupancy-based HVAC control as the control objective and the location traces of individual occupants as the private variables. Previous studies have shown that individual location information can be inferred from occupancy measurements. To ensure privacy, we design an architecture that distorts the occupancy data in order to hide individual occupant location information while maintaining HVAC performance. Using mutual information between the individual's location trace and the reported occupancy measurement as a privacy metric, we are able to optimally design a scheme to minimize privacy risk subject to a control performance guarantee. We evaluate our framework using real-world occupancy data: first, we verify that our privacy metric accurately assesses the adversary's ability to infer private variables from the distorted sensor measurements; then, we show that control performance is maintained through simulations of building operations using these distorted occupancy readings.
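The privacy metric above, mutual information between the private variable (an occupant's location trace) and the released measurement (reported occupancy), can be computed directly from a joint distribution. A small self-contained sketch, with a hypothetical two-state occupancy example rather than the paper's data:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint pmf given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    mi = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Hypothetical worst case: the occupant's location X fully determines
# the reported occupancy Y, so one full bit of location leaks.
leaky = {("home", 1): 0.5, ("away", 0): 0.5}
print(mutual_information(leaky))  # 1.0
```

A distortion scheme like the one in the paper would be chosen to drive this quantity down (toward 0 bits) while keeping the reported occupancy useful for HVAC control.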

Wang, Junjue, Amos, Brandon, Das, Anupam, Pillai, Padmanabhan, Sadeh, Norman, Satyanarayanan, Mahadev.  2017.  A Scalable and Privacy-Aware IoT Service for Live Video Analytics. Proceedings of the 8th ACM on Multimedia Systems Conference. :38–49.

We present OpenFace, our new open-source face recognition system that approaches state-of-the-art accuracy. Integrating OpenFace with inter-frame tracking, we build RTFace, a mechanism for denaturing video streams that selectively blurs faces according to specified policies at full frame rates. This enables privacy management for live video analytics while providing a secure approach for handling retrospective policy exceptions. Finally, we present a scalable, privacy-aware architecture for large camera networks using RTFace.

Klow, Jeffrey, Proby, Jordan, Rueben, Matthew, Sowell, Ross T., Grimm, Cindy M., Smart, William D..  2017.  Privacy, Utility, and Cognitive Load in Remote Presence Systems. Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. :167–168.
As teleoperated robot technology becomes cheaper, more powerful, and more reliable, remotely-operated telepresence robots will become more prevalent in homes and businesses, allowing visitors and business partners to be present without the need to travel. Hindering adoption is the issue of privacy: an Internet-connected telepresence robot has the ability to spy on its local area, either for the remote operator or a third party with access to the video data. Additionally, since the remote operator may move about and manipulate objects without local-user intervention, certain typical privacy-protecting techniques such as moving objects to a different room or putting them in a cabinet may prove insufficient. In this paper, we examine the effects of three whole-image filters on the remote operator's ability to discern details while completing a navigation task.
Bittner, Daniel M., Sarwate, Anand D., Wright, Rebecca N..  2017.  Differentially Private Noisy Search with Applications to Anomaly Detection (Abstract). Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. :53–53.
We consider the problem of privacy-sensitive anomaly detection - screening to detect individuals, behaviors, areas, or data samples of high interest. What defines an anomaly is context-specific; for example, a spoofed rather than genuine user attempting to log in to a web site, a fraudulent credit card transaction, or a suspicious traveler in an airport. The unifying assumption is that the number of anomalous points is quite small with respect to the population, so that deep screening of all individual data points would potentially be time-intensive, costly, and unnecessarily invasive of privacy. Such privacy violations can raise concerns due to the sensitive nature of the data being used, raise fears about violations of data use agreements, and make people uncomfortable with anomaly detection methods. Anomaly detection is well studied, but methods to provide anomaly detection along with privacy are less well studied. Our overall goal in this research is to provide a framework for identifying anomalous data while guaranteeing quantifiable privacy in a rigorous sense. Once identified, such anomalies could warrant further data collection and investigation, depending on the context and relevant policies. In this research, we focus on privacy protection during the deployment of anomaly detection. Our main contribution is a differentially private access mechanism for finding anomalies using a search algorithm based on adaptive noisy group testing. To achieve this, we take as our starting point the notion of group testing [1], which was most famously used to screen US military draftees for syphilis during World War II. In group testing, individuals are tested in groups to limit the number of tests. Using multiple rounds of screenings, a small number of positive individuals can be detected very efficiently.
Group testing has the added benefit of providing privacy to individuals through plausible deniability - since the group tests use aggregate data, individual contributions to the test are masked by the group. We follow on these concepts by demonstrating a search model utilizing adaptive queries on aggregated group data. Our work takes the first steps toward strengthening and formalizing these privacy concepts by achieving differential privacy [2]. Differential privacy is a statistical measure of disclosure risk that captures the intuition that an individual's privacy is protected if the results of a computation have at most a very small and quantifiable dependence on that individual's data. In the last decade, differential privacy has seen practical adoption by high-profile companies such as Apple, Google, and Uber. In order to make differential privacy meaningful in the context of a task that seeks to specifically identify some (anomalous) individuals, we introduce the notion of anomaly-restricted differential privacy. Using ideas from information theory, we show that noise can be added to group query results in a way that provides differential privacy for non-anomalous individuals and still enables efficient and accurate detection of the anomalous individuals. Our method uses differentially private aggregation of groups of points, providing privacy to individuals within a group while refining the group selection until we can probabilistically narrow attention to a small number of individuals or samples for further attention. To summarize: we introduce a new notion of anomaly-restricted differential privacy, which may be of independent interest, and we provide a noisy group-based search algorithm that satisfies the anomaly-restricted differential privacy definition.
We provide both theoretical and empirical analysis of our noisy search algorithm, showing that it performs well in some cases, and exhibits the usual privacy/accuracy tradeoff of differentially private mechanisms. Potential anomaly detection applications for our work might include spatial search for outliers: this would rely on new sensing technologies that can perform queries in aggregate to reveal and isolate anomalous outliers. For example, this could lead to privacy-sensitive methods for searching for outlying cell phone activity patterns or Internet activity patterns in a geographic location.
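The abstract outlines the approach but not code; a toy sketch of adaptive noisy group testing, with Laplace noise added to each group count and a fixed decision threshold (the splitting strategy, threshold, and parameters are illustrative assumptions, not the authors' mechanism), might look like:

```python
import math, random

def noisy_group_search(items, is_anomalous, epsilon, rng, threshold=0.5):
    """Adaptive noisy group testing: recursively split the population,
    query a Laplace-noised count of anomalies per group, and descend
    only into groups whose noisy count clears the threshold."""
    def noisy_count(group):
        true_count = sum(1 for i in group if is_anomalous(i))
        u = rng.random() - 0.5  # inverse-CDF Laplace sample, scale 1/epsilon
        noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
        return true_count + noise

    found = []
    stack = [list(items)]
    while stack:
        group = stack.pop()
        if noisy_count(group) < threshold:
            continue                      # group looks clean; skip its members
        if len(group) == 1:
            found.append(group[0])        # isolated a likely anomaly
        else:
            mid = len(group) // 2
            stack.extend([group[:mid], group[mid:]])
    return found

rng = random.Random(0)
print(noisy_group_search(range(16), lambda i: i == 7, epsilon=50.0, rng=rng))
```

With a generous epsilon the noise is small and the single planted anomaly is isolated in a logarithmic number of queries; tightening epsilon trades detection accuracy for stronger privacy, the tradeoff the abstract describes. A proper anomaly-restricted DP accounting across the adaptive rounds is the paper's contribution and is not reproduced here.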
Vu, Xuan-Son, Jiang, Lili, Brändström, Anders, Elmroth, Erik.  2017.  Personality-based Knowledge Extraction for Privacy-preserving Data Analysis. Proceedings of the Knowledge Capture Conference. :44:1–44:4.
In this paper, we present a differential privacy preserving approach that extracts personality-based knowledge to support privacy-guaranteed data analysis on sensitive personal data. Based on this approach, we further implement an end-to-end privacy-guarantee system, KaPPA, that provides researchers with iterative data analysis on sensitive data. The key challenge for differential privacy is determining a reasonable privacy budget to balance privacy preservation and data utility. Most previous work applies a uniform privacy budget to all individual data, which leads to insufficient privacy protection for some individuals while over-protecting others. In KaPPA, the proposed personality-based privacy preserving approach automatically calculates a privacy budget for each individual. Our experimental evaluations show a favorable trade-off between sufficient privacy protection and data utility.
van Do, Thanh, Engelstad, Paal, Feng, Boning, Do, Van Thuan.  2017.  A Near Real Time SMS Grey Traffic Detection. Proceedings of the 6th International Conference on Software and Computer Applications. :244–249.
Lately, mobile operators have experienced threats from SMS grey routes, which fraudsters use to evade SMS fees, denying operators millions in revenue. More serious still are the threats to users' security and privacy, and consequently to the operator's reputation. It is therefore crucial for operators to have adequate solutions to protect both their network and their customers against this kind of fraud. Unfortunately, so far there is no sufficiently efficient countermeasure against grey routes. This paper proposes a near real time SMS grey traffic detection scheme that uses Counting Bloom Filters combined with a blacklist and whitelist to detect SMS grey traffic on the fly and block it. The proposed detection has been implemented and proved to be quite efficient. The paper also provides a comprehensive explanation of SMS grey routes and the challenges in detecting them; the implementation and verification are likewise described thoroughly.
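The paper does not publish its implementation; a counting Bloom filter of the general kind it describes supports insertion, deletion, and approximate membership queries, for example over recently observed sender IDs. A minimal sketch (structure, sizing, and the phone-number example are illustrative assumptions):

```python
import hashlib

class CountingBloomFilter:
    """Minimal counting Bloom filter: each of k hash functions maps an
    item to a counter; add increments, remove decrements, and membership
    holds iff all k counters are non-zero (false positives possible,
    false negatives not, absent counter underflow)."""
    def __init__(self, size=1024, hashes=4):
        self.size = size
        self.hashes = hashes
        self.counters = [0] * size

    def _indexes(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for idx in self._indexes(item):
            self.counters[idx] += 1

    def remove(self, item):
        for idx in self._indexes(item):
            self.counters[idx] -= 1

    def __contains__(self, item):
        return all(self.counters[idx] > 0 for idx in self._indexes(item))

cbf = CountingBloomFilter()
cbf.add("+4791234567")         # hypothetical sender seen recently
print("+4791234567" in cbf)    # True
cbf.remove("+4791234567")      # counting variant supports deletion
print("+4791234567" in cbf)    # False
```

Unlike a plain Bloom filter, the counters make deletion possible, which is what lets such a structure track a sliding window of recent senders in near real time; the black/whitelist check in the paper would sit alongside this filter.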
Chanyaswad, T., Al, M., Chang, J. M., Kung, S. Y..  2017.  Differential mutual information forward search for multi-kernel discriminant-component selection with an application to privacy-preserving classification. 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP). :1–6.

In machine learning, feature engineering has been a pivotal stage in building a high-quality predictor. This work explores the multiple Kernel Discriminant Component Analysis (mKDCA) feature-map and its variants. However, seeking the right subset of kernels for the mKDCA feature-map can be challenging. Therefore, we consider the problem of kernel selection and propose an algorithm based on Differential Mutual Information (DMI) and incremental forward search. DMI serves as an effective metric for selecting kernels, as is theoretically supported by mutual information and Fisher's discriminant analysis. On the other hand, incremental forward search plays a role in removing redundancy among kernels. Finally, we illustrate the potential of the method via an application to privacy-aware classification, and show on three mobile-sensing datasets that selecting an effective set of kernels for mKDCA feature-maps can enhance classification utility while successfully preserving data privacy. Specifically, the results show that the proposed DMI forward search method can perform better than the state-of-the-art and, at much smaller computational cost, can perform as well as the optimal, yet computationally expensive, exhaustive search.

Phan, N., Wu, X., Hu, H., Dou, D..  2017.  Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning. 2017 IEEE International Conference on Data Mining (ICDM). :385–394.

In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) The privacy budget consumption is totally independent of the number of training steps; (2) It has the ability to adaptively inject noise into features based on the contribution of each to the output; and (3) It could be applied in a variety of different deep neural networks. To achieve this, we figure out a way to perturb affine transformations of neurons, and loss functions used in deep neural networks. In addition, our mechanism intentionally adds "more noise" into features which are "less relevant" to the model output, and vice-versa. Our theoretical analysis further derives the sensitivities and error bounds of our mechanism. Rigorous experiments conducted on MNIST and CIFAR-10 datasets show that our mechanism is highly effective and outperforms existing solutions.
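The abstract's core idea, injecting more noise into less relevant features, can be illustrated by splitting a privacy budget across features in proportion to relevance scores. The sketch below is a simplified illustration of that budget-allocation idea only, not the paper's mechanism (which perturbs affine transformations and loss functions inside the network); sensitivity is taken as 1 for simplicity:

```python
import math, random

def adaptive_laplace_perturb(features, relevance, epsilon, rng):
    """Split the total budget epsilon across features in proportion to
    their relevance, so less relevant features get a smaller share and
    therefore larger Laplace noise (scale = 1 / epsilon_j)."""
    total = sum(relevance)
    perturbed = []
    for x, r in zip(features, relevance):
        eps_j = epsilon * r / total          # budget share for this feature
        scale = 1.0 / eps_j
        u = rng.random() - 0.5               # inverse-CDF Laplace sample
        noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
        perturbed.append(x + noise)
    return perturbed

rng = random.Random(42)
noisy = adaptive_laplace_perturb([1.0, 1.0], relevance=[9.0, 1.0],
                                 epsilon=1.0, rng=rng)
# noisy[1] will, on average, deviate far more from 1.0 than noisy[0]
```

Sequential composition makes the total privacy cost the sum of the per-feature budgets, which is why proportional allocation keeps the overall epsilon fixed while redistributing accuracy toward the relevant features.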

Yonetani, R., Boddeti, V. N., Kitani, K. M., Sato, Y..  2017.  Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption. 2017 IEEE International Conference on Computer Vision (ICCV). :2059–2069.

We propose a privacy-preserving framework for learning visual classifiers by leveraging distributed private image data. This framework is designed to aggregate multiple classifiers updated locally using private data and to ensure that no private information about the data is exposed during and after its learning procedure. We utilize a homomorphic cryptosystem that can aggregate the local classifiers while they are encrypted and thus kept secret. To overcome the high computational cost of homomorphic encryption of high-dimensional classifiers, we (1) impose sparsity constraints on local classifier updates and (2) propose a novel efficient encryption scheme named doubly-permuted homomorphic encryption (DPHE) which is tailored to sparse high-dimensional data. DPHE (i) decomposes sparse data into its constituent non-zero values and their corresponding support indices, (ii) applies homomorphic encryption only to the non-zero values, and (iii) employs double permutations on the support indices to make them secret. Our experimental evaluation on several public datasets shows that the proposed approach achieves comparable performance against state-of-the-art visual recognition methods while preserving privacy and significantly outperforms other privacy-preserving methods.

2018-02-14
Filip, G., Meng, X., Burnett, G., Harvey, C..  2017.  Human factors considerations for cooperative positioning using positioning, navigational and sensor feedback to calibrate trust in CAVs. 2017 Forum on Cooperative Positioning and Service (CPGPS). :134–139.

Given the complexities involved in the sensing, navigational and positioning environment on board automated vehicles, we conduct an exploratory survey and identify factors capable of influencing users' trust in such systems. After analyzing the survey data, the Situational Awareness of the Vehicle (SAV) emerges as an important factor capable of influencing users' trust. We follow up by conducting semi-structured interviews with 12 experts in the CAV field, focusing on the importance of the SAV, the factors that matter most when discussing it, and the need to keep users informed regarding its status. We conclude that in the context of Connected and Automated Vehicles (CAVs), the importance of the SAV can now be expanded beyond its technical necessity of making vehicles function to a human factors area: calibrating users' trust.

2018-02-06
Allodi, Luca, Massacci, Fabio.  2017.  Attack Potential in Impact and Complexity. Proceedings of the 12th International Conference on Availability, Reliability and Security. :32:1–32:6.

Vulnerability exploitation is reportedly one of the main attack vectors against computer systems. Yet, most vulnerabilities remain unexploited by attackers. It is therefore of central importance to identify vulnerabilities that carry a high 'potential for attack'. In this paper we rely on Symantec data on real attacks detected in the wild to identify a trade-off between the Impact and Complexity of a vulnerability in terms of the attacks it generates; exploiting this effect, we devise a readily computable estimator of a vulnerability's Attack Potential that reliably estimates the expected volume of attacks against it. We evaluate our estimator's performance against standard patching policies by measuring foiled attacks and the required workload, expressed as the number of vulnerabilities that need to be patched. We show that our estimator significantly improves over standard patching policies by ruling out low-risk vulnerabilities, while maintaining invariant levels of coverage against attacks in the wild. Our estimator can be used as a first aid for vulnerability prioritisation to focus assessment efforts on high-potential vulnerabilities.

Pan, Liuxuan, Tomlinson, Allan, Koloydenko, Alexey A..  2017.  Time Pattern Analysis of Malware by Circular Statistics. Proceedings of the Symposium on Architectures for Networking and Communications Systems. :119–130.

Circular statistics offer a new technique for analysing the time patterns of events in the field of cyber security. We apply this technique to analyse incidents of malware infections detected by network monitoring. In particular we are interested in the daily and weekly variations of these events. Based on "live" data provided by Spamhaus, we examine the hypothesis that attacks on four countries are distributed uniformly over 24 hours. Specifically, we use the Rayleigh and Watson tests. While our results are mainly exploratory, we are able to demonstrate that the attacks are not uniformly distributed, nor do they follow a Poisson distribution as reported in other research. Our objective is to identify a distribution that can be used to establish risk metrics. Moreover, our approach provides a visual overview of the variation in time patterns, indicating when attacks are most likely. This will assist decision makers in cyber security to allocate resources or estimate the cost of system monitoring during high-risk periods. Our results also reveal that the time patterns are influenced by the total number of attacks. Networks subject to a large volume of attacks exhibit bimodality, while one case, where attacks occurred at a relatively lower rate, showed a multi-modal daily variation.
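The Rayleigh test mentioned above checks whether angles (here, times of day mapped onto a circle) are uniformly distributed. A compact sketch using the standard statistic Z = n*R^2 with a first-order p-value approximation, on invented timestamps rather than the Spamhaus data:

```python
import math

def rayleigh_test(hours):
    """Rayleigh test for uniformity of event times over 24 h: map each
    time to an angle, compute the mean resultant length R, and return
    Z = n * R**2 (large Z rejects uniformity) plus the first-order
    large-sample p-value approximation exp(-Z)."""
    n = len(hours)
    angles = [2 * math.pi * h / 24.0 for h in hours]
    c = sum(math.cos(a) for a in angles) / n
    s = sum(math.sin(a) for a in angles) / n
    r = math.hypot(c, s)     # mean resultant length, 0 (uniform) to 1 (concentrated)
    z = n * r * r
    return z, math.exp(-z)

# Hypothetical infection timestamps clustered around 14:00.
clustered = [13.5, 14.0, 14.2, 14.5, 13.8, 14.1, 13.9, 14.3]
z, p = rayleigh_test(clustered)
print(p < 0.01)  # True: tight clustering, uniformity rejected
```

Perfectly spread timestamps give R near 0 and Z near 0 (uniformity not rejected), which is the null hypothesis the paper tests against its four-country attack data.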

Jonker, Mattijs, King, Alistair, Krupp, Johannes, Rossow, Christian, Sperotto, Anna, Dainotti, Alberto.  2017.  Millions of Targets Under Attack: A Macroscopic Characterization of the DoS Ecosystem. Proceedings of the 2017 Internet Measurement Conference. :100–113.

Denial-of-Service attacks have rapidly increased in terms of frequency and intensity, steadily becoming one of the biggest threats to Internet stability and reliability. However, a rigorous comprehensive characterization of this phenomenon, and of countermeasures to mitigate the associated risks, faces many infrastructure and analytic challenges. We make progress toward this goal by introducing and applying a new framework to enable a macroscopic characterization of attacks, attack targets, and DDoS Protection Services (DPSs). Our analysis leverages data from four independent global Internet measurement infrastructures over the last two years: backscatter traffic to a large network telescope; logs from amplification honeypots; a DNS measurement platform covering 60% of the current namespace; and a DNS-based data set focusing on DPS adoption. Our results reveal the massive scale of the DoS problem, including an eye-opening statistic that one-third of all /24 networks recently estimated to be active on the Internet have suffered at least one DoS attack over the last two years. We also discovered that targets are often simultaneously hit by different types of attacks. In our data, Web servers were the most prominent attack target; an average of 3% of the Web sites in .com, .net, and .org were involved with attacks daily. Finally, we shed light on factors influencing migration to a DPS.

Jain, Bhushan, Tsai, Chia-Che, Porter, Donald E..  2017.  A Clairvoyant Approach to Evaluating Software (In)Security. Proceedings of the 16th Workshop on Hot Topics in Operating Systems. :62–68.

Nearly all modern software has security flaws, either known or unknown to its users. However, metrics for evaluating software security (or the lack thereof) are noisy at best. Common evaluation methods include counting a program's past vulnerabilities or comparing the size of the Trusted Computing Base (TCB), measured in lines of code (LoC) or binary size. Other than deleting large swaths of code from a project, it is difficult to assess whether a code change decreased the likelihood of a future security vulnerability. Developers need a practical, constructive way of evaluating security. This position paper argues that we already have all the tools needed to design a better, empirical method of security evaluation. We discuss related work that estimates the severity and vulnerability of certain attack vectors based on code properties that can be determined via static analysis. This paper proposes a grand, unified model that can predict the risk and severity of vulnerabilities in a program. Our prediction model uses machine learning to correlate code features of open-source applications with the history of vulnerabilities reported in the CVE (Common Vulnerabilities and Exposures) database. Based on this model, one can incorporate into the standard development cycle an analysis that predicts whether the code is becoming more or less prone to vulnerabilities.
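The kind of model the paper proposes can be sketched with synthetic data; the feature names, labels, and plain gradient-descent logistic regression below are illustrative placeholders, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-module static-analysis features, normalized to [0, 1]:
# lines of code, cyclomatic complexity, count of unsafe API calls.
X = rng.uniform(0.0, 1.0, size=(n, 3))
true_w = np.array([0.5, 2.0, 3.0])
# Synthetic ground truth: complex modules with many unsafe calls are
# more likely to have an entry in the CVE history.
y = (rng.uniform(size=n)
     < 1.0 / (1.0 + np.exp(-(X @ true_w - 2.5)))).astype(float)

# Fit logistic regression by gradient descent on the log-loss.
w = np.zeros(3)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted CVE probability
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

# Per-module risk scores; a rising score across commits would flag code
# that is becoming more prone to vulnerabilities.
risk = 1.0 / (1.0 + np.exp(-(X @ w + b)))
```

In the paper's setting the labels would come from the CVE database and the features from static analysis of real open-source projects rather than from a random generator.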

Bullough, Benjamin L, Yanchenko, Anna K, Smith, Christopher L, Zipkin, Joseph R.  2017.  Predicting Exploitation of Disclosed Software Vulnerabilities Using Open-Source Data. Proceedings of the 3rd ACM International Workshop on Security and Privacy Analytics (IWSPA '17).

Each year, thousands of software vulnerabilities are discovered and reported to the public. Unpatched known vulnerabilities are a significant security risk. It is imperative that software vendors quickly provide patches once vulnerabilities are known and that users install those patches as soon as they are available. However, most vulnerabilities are never actually exploited. Since writing, testing, and installing software patches can involve considerable resources, it would be desirable to prioritize the remediation of vulnerabilities that are likely to be exploited. Several published research studies have reported moderate success in applying machine learning techniques to the task of predicting whether a vulnerability will be exploited. These approaches typically use features derived from vulnerability databases (such as the summary text describing the vulnerability) or from social media posts that mention the vulnerability by name. However, these prior studies share multiple methodological shortcomings that inflate the predictive power of these approaches. We replicate key portions of the prior work, compare their approaches, and show how the selection of training and test data critically affects the estimated performance of predictive models. The results of this study point to important methodological considerations that should be taken into account so that results reflect real-world utility.
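The central methodological point, that train/test splits must respect disclosure time, can be sketched as follows; the CVE identifiers and dates are invented:

```python
from datetime import date

# When predicting exploitation, a random train/test split lets the model
# train on vulnerabilities disclosed *after* some test items, leaking
# future information and inflating measured performance.  A time-based
# split mirrors real deployment.
records = [
    {"cve": f"CVE-2016-{i:04d}", "disclosed": date(2016, 1 + i % 12, 1)}
    for i in range(1, 25)
]

cutoff = date(2016, 10, 1)
train = [r for r in records if r["disclosed"] < cutoff]
test = [r for r in records if r["disclosed"] >= cutoff]

# Every training record now predates every test record.
assert max(r["disclosed"] for r in train) < min(r["disclosed"] for r in test)
```

Feature extraction must obey the same constraint: social-media or database fields observed after the cutoff cannot be used to describe training examples.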

Boukoros, Spyros, Katzenbeisser, Stefan.  2017.  Measuring Privacy in High Dimensional Microdata Collections. Proceedings of the 12th International Conference on Availability, Reliability and Security. :15:1–15:8.

Microdata is collected by companies in order to enhance their quality of service as well as the accuracy of their recommendation systems. These data often become publicly available after they have been sanitized. Recent re-identification attacks on publicly available, sanitized datasets illustrate the privacy risks involved in microdata collections. Currently, users have to trust the provider that their data will be safe if the data is published or a privacy breach occurs. In this work, we empower users by developing a novel, user-centric tool for privacy measurement and a new lightweight privacy metric. The goal of our tool is to estimate users' privacy level prior to sharing their data with a provider, so that users can consciously decide whether to contribute their data. Our tool estimates an individual's privacy level based on published popularity statistics regarding the items in the provider's database and the user's microdata. In this work, we describe the architecture of our tool as well as a novel privacy metric, which is necessary for our setting, where we do not have access to the provider's database. Our tool is user friendly, relying on smart visual results that raise privacy awareness. We evaluate our tool using three real-world datasets collected from major providers. We demonstrate strong correlations between the average anonymity set per user and the privacy score obtained by our metric. Our results illustrate that our tool, which uses minimal information from the provider, estimates users' privacy levels nearly as well as if it had access to the actual database.
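In the spirit of the paper's setting, where only published popularity statistics are available rather than the provider's database, a popularity-based anonymity-set estimate can be sketched as follows (this naive version assumes item choices are independent, which real data will violate; the item names and figures are invented):

```python
def expected_anonymity_set(user_items, popularity, population):
    """Estimate how many users are expected to share this item profile.

    popularity maps item -> fraction of the population holding it.
    Under an independence assumption, the matching fraction is the
    product of the per-item popularities.
    """
    frac = 1.0
    for item in user_items:
        frac *= popularity[item]
    return population * frac

popularity = {"item_a": 0.5, "item_b": 0.1, "item_c": 0.01}

# A user revealing one common and one rare item:
# 1,000,000 * 0.5 * 0.01 = 5,000 expected matches.
size = expected_anonymity_set(["item_a", "item_c"], popularity, 1_000_000)
```

A small expected anonymity set signals high re-identification risk, so a user-facing tool could warn before such a profile is shared.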