Biblio

Found 5882 results

Filters: Keyword is composability
2017-05-16
Calix, Ricardo A., Cabrera, Armando, Iqbal, Irshad.  2016.  Analysis of Parallel Architectures for Network Intrusion Detection. Proceedings of the 5th Annual Conference on Research in Information Technology. :7–12.

Intrusion detection systems need to be both accurate and fast. Speed is especially important when operating at the network level. Additionally, many intrusion detection systems rely on signature-based detection approaches. However, machine learning can also be helpful for intrusion detection. One key challenge when using machine learning, aside from detection accuracy, is finding algorithms that are fast. In this paper, several processing architectures are considered for use in machine learning based intrusion detection systems. These architectures include standard CPUs, GPUs, and cognitive processors. Results of their processing speeds are compared and discussed.

2017-08-18
Trivedi, Munesh Chandra, Sharma, Shivani, Yadav, Virendra Kumar.  2016.  Analysis of Several Image Steganography Techniques in Spatial Domain: A Survey. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :84:1–84:7.

Steganography enables a user to hide confidential data in any digital medium such that its existence cannot be detected by a third party. A great deal of research is being conducted to improve the efficiency of steganography algorithms. Recent trends in computing technology use steganography as an important tool for hiding confidential data. This paper summarizes some of the research conducted in the field of image steganography in the spatial domain, along with its advantages and disadvantages. Future research directions and experimental results for some techniques are also discussed. The key goal is to show the powerful impact of steganography in the information hiding and image processing domains.
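
As a rough illustration of the simplest spatial-domain technique covered by surveys of this kind, the sketch below embeds and extracts a message using least-significant-bit (LSB) substitution. It is a minimal sketch, not the scheme of any particular paper; the file names and message are placeholders, and practical schemes add encryption and smarter pixel selection.

```python
# Minimal least-significant-bit (LSB) embedding sketch -- illustrative only,
# not the scheme of any specific surveyed paper. Assumes an 8-bit grayscale
# cover image; file names and message are placeholders.
import numpy as np
from PIL import Image

def embed_lsb(cover_path, message, stego_path):
    pixels = np.array(Image.open(cover_path).convert("L"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = pixels.flatten()
    if bits.size > flat.size:
        raise ValueError("message too long for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    Image.fromarray(flat.reshape(pixels.shape)).save(stego_path)

def extract_lsb(stego_path, n_chars):
    flat = np.array(Image.open(stego_path).convert("L"), dtype=np.uint8).flatten()
    bits = flat[:n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

# embed_lsb("cover.png", "secret", "stego.png")
# print(extract_lsb("stego.png", 6))
```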

2018-02-02
Khari, M., Vaishali, Kumar, M..  2016.  Analysis of software security testing using metaheuristic search technique. 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom). :2147–2152.

Metaheuristic search is a more advanced approach than traditional heuristic search. Selecting one option among several alternatives is not difficult; what is difficult is guaranteeing that the selection is cost-effective. Metaheuristic search addresses this problem with the help of a fitness function, a crucial metric that helps decide which solution is optimal among the available set of test sets. This paper discusses hill climbing, simulated annealing, tabu search, genetic algorithms, and particle swarm optimization in detail, explaining each with the help of its algorithm. Combining metaheuristic search techniques with security testing methods would yield a search technique that is both more effective and more secure. This paper primarily focuses on the metaheuristic search techniques.
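
As a hedged illustration of the fitness-driven search the paper surveys, the sketch below applies plain hill climbing to a test-case selection problem. The fitness function (detection value gained minus a weighted execution cost) is a hypothetical stand-in, not the metric used by the authors.

```python
# Hill-climbing sketch for test-case selection -- illustrative only.
# The fitness function (value gained minus weighted cost) is a hypothetical
# stand-in, not the metric defined in the paper.
import random

def fitness(selection, value, cost, cost_weight=0.1):
    """Score a 0/1 selection vector over the available test cases."""
    gained = sum(v for v, s in zip(value, selection) if s)
    spent = sum(c for c, s in zip(cost, selection) if s)
    return gained - cost_weight * spent

def hill_climb(value, cost, iterations=1000, seed=0):
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in value]
    best = fitness(current, value, cost)
    for _ in range(iterations):
        neighbour = current[:]
        i = rng.randrange(len(neighbour))
        neighbour[i] ^= 1                      # flip one test in or out
        score = fitness(neighbour, value, cost)
        if score > best:                       # keep only improvements
            current, best = neighbour, score
    return current, best

# Example with synthetic per-test value/cost data:
# selection, score = hill_climb(value=[3, 1, 4, 1, 5], cost=[2, 7, 1, 8, 2])
```

Simulated annealing, tabu search, genetic algorithms, and particle swarm optimization differ mainly in how the next candidate is generated and when a worse candidate is accepted; the fitness function plays the same role in all of them.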

2017-08-22
Arathy, P. J., Nair, Vrinda V..  2016.  Analysis of Spoofing Detection Using Video Subsection Processing. Proceedings of the International Conference on Informatics and Analytics. :76:1–76:6.

Imposters gain unauthorized access to biometric recognition systems using fake biometric data of the legitimate user; this is termed spoofing. Face recognition systems are spoofed using photographs, 3D models, and videos of the user. An attack video contains noise from the acquisition process. In this work, we use the noise residual content of the video in order to detect spoofed videos. We take advantage of the wavelet transform to represent the noise video. A sample of the noise video, termed the visual rhythm image, is created for each video. Local Binary Pattern (LBP) and uniform Local Binary Pattern (LBPu2) features are extracted from the visual rhythm image, followed by classification using a Support Vector Machine (SVM). Because a large number of frames from a long video are used for analysis, execution time is high. In this work, the spoof detection algorithm is applied to various levels of subsections of the video frames, resulting in reduced execution time with reasonable detection accuracy.
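
The feature-extraction and classification stage described above can be sketched with off-the-shelf libraries. The sketch below is a rough approximation only: it computes a uniform-LBP histogram with scikit-image and classifies it with scikit-learn's SVM, omits the paper's wavelet noise residual and visual rhythm construction, and assumes `images` is a list of 2-D grayscale arrays with matching 0/1 labels.

```python
# LBP-histogram + SVM pipeline sketch -- illustrative only. It skips the
# paper's wavelet noise-residual and visual-rhythm steps and assumes
# `images` is a list of 2-D grayscale arrays with 0/1 `labels`.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(image, points=8, radius=1):
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    n_bins = points + 2                        # uniform patterns + "other" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def train_spoof_detector(images, labels):
    features = np.array([lbp_histogram(img) for img in images])
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(features, labels)                  # labels: 1 = spoofed, 0 = genuine
    return clf

# clf = train_spoof_detector(train_images, train_labels)
# prediction = clf.predict([lbp_histogram(test_frame)])
```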

2017-05-30
Ikram, Muhammad, Vallina-Rodriguez, Narseo, Seneviratne, Suranga, Kaafar, Mohamed Ali, Paxson, Vern.  2016.  An Analysis of the Privacy and Security Risks of Android VPN Permission-enabled Apps. Proceedings of the 2016 Internet Measurement Conference. :349–364.

Millions of users worldwide resort to mobile VPN clients to either circumvent censorship or to access geo-blocked content, and more generally for privacy and security purposes. In practice, however, users have few, if any, guarantees about the corresponding security and privacy settings, and perhaps no practical knowledge about the entities accessing their mobile traffic. In this paper we provide a first comprehensive analysis of 283 Android apps that use the Android VPN permission, which we extracted from a corpus of more than 1.4 million apps on the Google Play store. We perform a number of passive and active measurements designed to investigate a wide range of security and privacy features and to study the behavior of each VPN-based app. Our analysis includes investigation of possible malware presence, third-party library embedding, and traffic manipulation, as well as gauging user perception of the security and privacy of such apps. Our experiments reveal several instances of VPN apps that expose users to serious privacy and security vulnerabilities, such as use of insecure VPN tunneling protocols, as well as IPv6 and DNS traffic leakage. We also report on a number of apps actively performing TLS interception. Of particular concern are instances of apps that inject JavaScript programs for tracking, advertising, and for redirecting e-commerce traffic to external partners.

2017-04-20
Alvarez, E. D., Correa, B. D., Arango, I. F..  2016.  An analysis of XSS, CSRF and SQL injection in Colombian software and web site development. 2016 8th Euro American Conference on Telematics and Information Systems (EATIS). :1–5.

Software development and web applications have become fundamental in our lives. Millions of users access these applications to communicate, obtain information, and perform transactions. However, these users are exposed to many risks, commonly due to developers' lack of experience with security protocols. Although there is much research on web security and protection against hacking, plenty of websites remain vulnerable. This article focuses on analyzing three main hacking techniques, XSS, CSRF, and SQL injection, over a representative group of Colombian websites. Our goal is to obtain information about how much relevance Colombian companies and organizations give (or do not give) to security, and how the end user could be affected.
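
To make the SQL injection risk concrete, here is a minimal, self-contained sketch (not drawn from the article's dataset; the table and column names are placeholders) contrasting a vulnerable string-built query with a parameterized one using Python's sqlite3 module.

```python
# SQL injection illustration -- placeholder schema, not the article's data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: attacker-controlled input is concatenated into the SQL text,
# so the WHERE clause always evaluates to true.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE password = '" + user_input + "'").fetchall()

# Safer: the driver binds the value as data, never as SQL.
parameterized = conn.execute(
    "SELECT name FROM users WHERE password = ?", (user_input,)).fetchall()

print(vulnerable)      # [('alice',)] -- injection succeeded
print(parameterized)   # []           -- input treated as a literal string
```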

2017-08-02
Harbach, Marian, De Luca, Alexander, Egelman, Serge.  2016.  The Anatomy of Smartphone Unlocking: A Field Study of Android Lock Screens. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :4806–4817.

To prevent unauthorized parties from accessing data stored on their smartphones, users have the option of enabling a "lock screen" that requires a secret code (e.g., PIN, drawing a pattern, or biometric) to gain access to their devices. We present a detailed analysis of the smartphone locking mechanisms currently available to billions of smartphone users worldwide. Through a month-long field study, we logged events from a panel of users with instrumented smartphones (N=134). We are able to show how existing lock screen mechanisms provide users with distinct tradeoffs between usability (unlocking speed vs. unlocking frequency) and security. We find that PIN users take longer to enter their codes, but commit fewer errors than pattern users, who unlock more frequently and are very prone to errors. Overall, PIN and pattern users spent the same amount of time unlocking their devices on average. Additionally, unlock performance seemed unaffected for users enabling the stealth mode for patterns. Based on our results, we identify areas where device locking mechanisms can be improved to result in fewer human errors – increasing usability – while also maintaining security.

2017-08-22
Ma, Xiao, Hancock, Jeff, Naaman, Mor.  2016.  Anonymity, Intimacy and Self-Disclosure in Social Media. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :3857–3869.

Self-disclosure is rewarding and provides significant benefits for individuals, but it also involves risks, especially in social media settings. We conducted an online experiment to study the relationship between content intimacy and willingness to self-disclose in social media, and how identification (real name vs. anonymous) and audience type (social ties vs. people nearby) moderate that relationship. Content intimacy is known to regulate self-disclosure in face-to-face communication: people self-disclose less as content intimacy increases. We show that such regulation persists in online social media settings. Further, although anonymity and an audience of social ties are both known to increase self-disclosure, it is unclear whether they (1) increase the self-disclosure baseline for content of all intimacy levels, or (2) weaken intimacy's regulation effect, making people more willing to disclose intimate content. We show that intimacy always regulates self-disclosure, regardless of settings. We also show that anonymity mainly increases the self-disclosure baseline and (sometimes) weakens the regulation. On the other hand, an audience of social ties increases the baseline but strengthens the regulation. Finally, we demonstrate that anonymity has a more salient effect on content of negative valence. The results are critical to understanding the dynamics and opportunities of self-disclosure in social media services that vary levels of identification and types of audience.

Lazarenko, Aleksandr, Avdoshin, Sergey.  2016.  Anonymity of Tor: Myth and Reality. Proceedings of the 12th Central and Eastern European Software Engineering Conference in Russia. :10:1–10:5.

Privacy enhancing technologies (PETs) are ubiquitous nowadays. They are beneficial for a wide range of users. However, PETs are not always used for legal activity. The present paper focuses on the deanonymization of Tor users using out-of-the-box technologies and a basic machine learning algorithm. The aim of the work is to show that it is possible to deanonymize a small fraction of users without extensive resources or state-of-the-art machine learning techniques. Deanonymization is a very important task from the point of view of national security. To address this issue, we use a website fingerprinting attack.
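
A website fingerprinting attack of the kind described can be sketched as a supervised classifier over coarse traffic features. The sketch below is an assumption-laden illustration, not the authors' pipeline: the feature set (packet and byte counts per direction) and the k-nearest-neighbours classifier are stand-ins for whatever basic machine learning algorithm the paper actually used, and a trace is assumed to be a list of signed packet sizes (positive = outgoing, negative = incoming).

```python
# Website-fingerprinting sketch -- features and classifier are illustrative
# assumptions, not the authors' exact setup. A trace is a list of signed
# packet sizes: positive = outgoing, negative = incoming.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def trace_features(trace):
    sizes = np.array(trace, dtype=float)
    outgoing = sizes[sizes > 0]
    incoming = sizes[sizes < 0]
    return [
        len(sizes),          # total packets
        len(outgoing),       # outgoing packets
        len(incoming),       # incoming packets
        outgoing.sum(),      # bytes sent
        -incoming.sum(),     # bytes received
    ]

def train_fingerprinter(traces, site_labels):
    X = np.array([trace_features(t) for t in traces])
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X, site_labels)
    return clf

# clf = train_fingerprinter(training_traces, training_sites)
# guessed_site = clf.predict([trace_features(observed_trace)])
```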

2017-04-24
Wu, Fei, Yang, Yang, Zhang, Ouyang, Srinivasan, Kannan, Shroff, Ness B..  2016.  Anonymous-query Based Rate Control for Wireless Multicast: Approaching Optimality with Constant Feedback. Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing. :191–200.

For a multicast group of n receivers, existing techniques either achieve high throughput at the cost of prohibitively large (e.g., O(n)) feedback overhead, or achieve low feedback overhead but without either optimal or near-optimal throughput guarantees. Simultaneously achieving good throughput guarantees and low feedback overhead has been an open problem and could be the key reason why wireless multicast has not been successfully deployed in practice. In this paper, we develop a novel anonymous-query based rate control, which approaches the optimal throughput with a constant feedback overhead independent of the number of receivers. In addition to our theoretical results, through implementation on a software-defined radio platform, we show that the anonymous-query based algorithm achieves low overhead and robustness in practice.

2017-05-17
Martin, Paul D., Rushanan, Michael, Tantillo, Thomas, Lehmann, Christoph U., Rubin, Aviel D..  2016.  Applications of Secure Location Sensing in Healthcare. Proceedings of the 7th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics. :58–67.

Secure location sensing has the potential to improve healthcare processes regarding security, efficiency, and safety. For example, enforcing close physical proximity to a patient when using a barcode medication administration system (BCMA) can mitigate the consequences of unsafe barcode scanning workarounds. We present Beacon+, a Bluetooth Low Energy (BLE) device that extends the design of Apple's popular iBeacon specification with unspoofable, temporal, and authenticated advertisements. Our prototype Beacon+ design enables secure location sensing applications such as real-time tracking of hospital assets (e.g., infusion pumps). We implement this exact real-time tracking system and use it as a foundation for a novel application that applies location-based restrictions on access control.
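
The abstract does not spell out how Beacon+ achieves unspoofable, temporal advertisements, so the following is only a generic sketch of the idea of a time-stamped, MAC-authenticated broadcast; the key handling, payload layout, and freshness window are assumptions and do not reflect the actual Beacon+ design or BLE frame format.

```python
# Generic sketch of a temporal, authenticated beacon advertisement --
# NOT the Beacon+ design or wire format; key, layout, and freshness
# window are illustrative assumptions.
import hmac, hashlib, struct, time

SHARED_KEY = b"demo-key-not-for-production"
FRESHNESS_WINDOW = 10   # seconds a broadcast stays acceptable

def make_advertisement(beacon_id, now=None):
    ts = int(now if now is not None else time.time())
    payload = struct.pack(">IQ", beacon_id, ts)          # 4-byte ID, 8-byte time
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()[:8]
    return payload + tag                                 # truncated MAC appended

def verify_advertisement(adv, now=None):
    payload, tag = adv[:12], adv[12:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        return False                                     # forged or corrupted
    _, ts = struct.unpack(">IQ", payload)
    current = now if now is not None else time.time()
    return abs(current - ts) <= FRESHNESS_WINDOW         # stale broadcasts rejected

# adv = make_advertisement(beacon_id=42)
# assert verify_advertisement(adv)
```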

2017-08-18
Lakhdhar, Yosra, Rekhis, Slim, Boudriga, Noureddine.  2016.  An Approach To A Graph-Based Active Cyber Defense Model. Proceedings of the 14th International Conference on Advances in Mobile Computing and Multi Media. :261–268.

Securing cyber systems is a major concern as security attacks become more and more sophisticated. We develop in this paper a novel graph-based Active Cyber Defense (ACD) model to proactively respond to cyber attacks. The proposed model is based on the use of a semantically rich graph to describe cyber systems, the types of interconnection used between them, and security-related data useful for developing active defense strategies. The developed model takes into consideration the probabilistic nature of cyber attacks and their degree of complexity. In this context, analytics are provided to proactively test the impact of increases in vulnerabilities/threats on the system, analyze the consequent behavior of cyber systems and security solutions, and decide on the security state of the whole cyber system. Our model integrates in the same framework decisions made by cyber defenders based on their expertise and knowledge, and decisions that are automatically generated using security analytic rules.

2017-04-20
Clarke, Daniel, McGregor, Graham, Rubin, Brianna, Stanford, Jonathan, Graham, T.C. Nicholas.  2016.  Arcaid: Addressing Situation Awareness and Simulator Sickness in a Virtual Reality Pac-Man Game. Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts. :39–45.

This paper describes the challenges of converting the classic Pac-Man arcade game into a virtual reality game. Arcaid provides players with the tools to maintain sufficient situation awareness in an environment where, unlike the classic game, they do not have full view of the game state. We also illustrate methods that can be used to reduce a player's simulation sickness by providing visual focal points for players and designing user interface elements that do not disrupt immersion.

2017-10-13
Denysyuk, Oksana, Woelfel, Philipp.  2016.  Are Shared Objects Composable Under an Oblivious Adversary? Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing. :335–344.

Linearizability [5] of a concurrent object ensures that operations on that object appear to execute atomically. It is well known that linearizable implementations are composable: in an algorithm designed to work with atomic objects, replacing any atomic object with a linearizable implementation preserves the correctness of the original algorithm. However, replacing atomic objects with linearizable ones in a randomized algorithm can break the original probabilistic guarantees [3]. With an adaptive adversary, this problem is solved by using strongly linearizable [3] objects in the composition. But what about an oblivious adversary? In this paper, we ask the fundamental question of what property makes implementations composable under an oblivious adversary. It turns out that the property depends on the entire collection of objects used in the algorithm. We show that the composition of every randomized algorithm with a collection of linearizable objects OL is sound if and only if OL satisfies a property called library homogeneity. Roughly, this property says that, for each process, every operation on OL has the same length and linearization point. This result has several important implications. First, for an oblivious adversary, there is nothing analogous to linearizability to ensure that the atomic objects of an algorithm can be replaced with their implementations. Second, in general, algorithms cannot use implemented objects alongside atomic objects provided by the system, such as registers. These results show that, with an oblivious adversary, it is much harder to implement reusable object types than previously believed.

2017-05-19
Park, Jiyong, Kim, Junetae, Lee, Byungtae.  2016.  Are Uber Really to Blame for Sexual Assault?: Evidence from New York City. Proceedings of the 18th Annual International Conference on Electronic Commerce: E-Commerce in Smart Connected World. :12:1–12:7.

With the boom of ride-sharing platforms, there has been a growing debate on ride-sharing regulations. In particular, allegations of rape against ride-sharing drivers put sexual assault at the center of this debate. However, there is no systematic and society-wide evidence regarding ride-sharing and sexual assault. Building on a theory of crime victimization, this study examines the effect of ride-sharing on sexual assault incidents using comprehensive data on Uber transactions and crime incidents in New York City over the period from January to March 2015. Our findings demonstrate that the Uber availability is negatively associated with the likelihood of rape, after controlling for endogeneity. Moreover, the deterrent effect of Uber on sexual assault is entirely driven by the taxi-sparse areas, namely outside Manhattan. This study sheds light on the potential of ride-sharing platforms and sharing economy to improve social welfare beyond economic gains.

2017-03-20
Nunes, Eric, Shakarian, Paulo, Simari, Gerardo I., Ruef, Andrew.  2016.  Argumentation models for cyber attribution. :837–844.

A major challenge in cyber-threat analysis is combining information from different sources to find the person or the group responsible for the cyber-attack. It is one of the most important technical and policy challenges in cybersecurity. The lack of ground truth for an individual responsible for an attack has limited previous studies. In this paper, we take a first step towards overcoming this limitation by building a dataset from the capture-the-flag event held at DEFCON, and propose an argumentation model based on a formal reasoning framework called DeLP (Defeasible Logic Programming) designed to aid an analyst in attributing a cyber-attack. We build models from latent variables to reduce the search space of culprits (attackers), and show that this reduction significantly improves the performance of classification-based approaches from 37% to 62% in identifying the attacker.

2017-11-27
Parate, M., Tajane, S., Indi, B..  2016.  Assessment of System Vulnerability for Smart Grid Applications. 2016 IEEE International Conference on Engineering and Technology (ICETECH). :1083–1088.

The smart grid is an electrical grid with duplex communication between the utility and the consumer. Digital systems, automation systems, computers, and controls are its various subsystems, and it finds applications in a wide variety of settings. Some of its applications have been designed to reduce the risk of power system blackout. Dynamic vulnerability assessment is done to identify, quantify, and prioritize the vulnerabilities in a system. This paper presents a novel approach for classifying data into one of two classes, called vulnerable or non-vulnerable, by carrying out Dynamic Vulnerability Assessment (DVA) based on data mining techniques such as Multichannel Singular Spectrum Analysis (MSSA) and Principal Component Analysis (PCA), and a machine learning tool, the Support Vector Machine Classifier (SVM-C), with learning algorithms that can analyze data. The developed methodology is tested on the IEEE 57-bus system, where the cause of vulnerability is transient instability. The results show that data mining tools can effectively analyze the patterns of the electric signals, and that SVM-C can use those patterns to classify the system data as vulnerable or non-vulnerable and determine the system vulnerability status.
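
The dimensionality-reduction-then-classification step can be sketched with standard tooling; the sketch below is illustrative only, using scikit-learn's PCA and SVM on a placeholder feature matrix, and it does not reproduce the MSSA stage or the IEEE 57-bus signals used in the paper.

```python
# PCA + SVM classification sketch -- illustrative only; the MSSA stage and
# the IEEE 57-bus measurements are not reproduced. `signals` is assumed to
# be an (n_samples, n_features) array with 0/1 labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_vulnerability_classifier(signals, labels, n_components=5):
    model = make_pipeline(
        StandardScaler(),                  # put all channels on one scale
        PCA(n_components=n_components),    # keep the dominant signal patterns
        SVC(kernel="rbf", gamma="scale"),  # vulnerable vs. non-vulnerable
    )
    model.fit(signals, labels)             # labels: 1 = vulnerable, 0 = not
    return model

# model = train_vulnerability_classifier(training_signals, training_labels)
# status = model.predict(new_signal_window.reshape(1, -1))
```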

2017-08-02
Madi, Taous, Majumdar, Suryadipta, Wang, Yushun, Jarraya, Yosr, Pourzandi, Makan, Wang, Lingyu.  2016.  Auditing Security Compliance of the Virtualized Infrastructure in the Cloud: Application to OpenStack. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :195–206.

Cloud service providers typically adopt the multi-tenancy model to optimize resource usage and achieve the promised cost-effectiveness. Sharing resources between different tenants and the underlying complex technology increase the necessity of transparency and accountability. In this regard, auditing security compliance of the provider's infrastructure against standards, regulations, and customers' policies takes on increasing importance in the cloud to boost the trust between the stakeholders. However, virtualization and scalability make compliance verification challenging. In this work, we propose an automated framework that allows auditing the cloud infrastructure from the structural point of view while focusing on virtualization-related security properties and consistency between multiple control layers. Furthermore, to show the feasibility of our approach, we integrate our auditing system into OpenStack, one of the most widely used cloud infrastructure management systems. To show the scalability and validity of our framework, we present our experimental results on assessing several properties related to auditing inter-layer consistency, virtual machine co-residence, and virtual resource isolation.

2017-04-20
Egner, Alexandru Ionut, Luu, Duc, den Hartog, Jerry, Zannone, Nicola.  2016.  An Authorization Service for Collaborative Situation Awareness. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :136–138.

In international military coalitions, situation awareness is achieved by gathering critical intel from different authorities. Authorities want to retain control over their data, as they are sensitive by nature, and, thus, usually employ their own authorization solutions to regulate access to them. In this paper, we highlight that harmonizing authorization solutions at the coalition level raises many challenges. We demonstrate how we address authorization challenges in the context of a scenario defined by military experts using a prototype implementation of SAFAX, an XACML-based architectural framework tailored to the development of authorization services for distributed systems.

2017-08-18
Gupta, Arpit, Feamster, Nick, Vanbever, Laurent.  2016.  Authorizing Network Control at Software Defined Internet Exchange Points. Proceedings of the Symposium on SDN Research. :16:1–16:6.

Software Defined Internet Exchange Points (SDXes) increase the flexibility of interdomain traffic delivery on the Internet. Yet, an SDX inherently requires multiple participants to have access to a single, shared physical switch, which creates the need for an authorization mechanism to mediate this access. In this paper, we introduce a logic and mechanism called FLANC (A Formal Logic for Authorizing Network Control), which authorizes each participant to control forwarding actions on a shared switch and also allows participants to delegate forwarding actions to other participants at the switch (e.g., a trusted third party). FLANC extends the "says" and "speaks for" logics that were previously designed for operating system objects to handle expressions involving network traffic flows. We describe FLANC, explain how participants can use it to express authorization policies for realistic interdomain routing settings, and demonstrate that it is efficient enough to operate in operational settings.

2017-03-20
Pinho, Armando J., Pratas, Diogo, Ferreira, Paulo J. S. G..  2016.  Authorship Attribution Using Relative Compression. :329–338.

Authorship attribution is a classical classification problem. We use it here to illustrate the performance of a compression-based measure that relies on the notion of relative compression. Besides comparing with recent approaches that use multiple discriminant analysis and support vector machines, we compare it with the Normalized Conditional Compression Distance (a direct approximation of the Normalized Information Distance) and the popular Normalized Compression Distance. The Normalized Relative Compression (NRC) attained 100% correct classification in the data set used, showing consistency between the compression ratio and the classification performance, a characteristic not always present in other compression-based measures.
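
As a hedged illustration of the compression-based measures mentioned above, the sketch below computes the popular Normalized Compression Distance with zlib and attributes a text to the candidate whose reference corpus is closest; it is not the authors' Normalized Relative Compression implementation, and the corpus names are placeholders.

```python
# Normalized Compression Distance (NCD) sketch -- illustrates the general
# family of compression-based measures; it is NOT the paper's Normalized
# Relative Compression. zlib stands in for the compressor.
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def attribute(unknown_text: str, corpora: dict) -> str:
    """Return the candidate author whose reference corpus is closest under NCD."""
    unknown = unknown_text.encode()
    return min(corpora, key=lambda author: ncd(unknown, corpora[author].encode()))

# corpora = {"author_a": reference_text_a, "author_b": reference_text_b}
# print(attribute(disputed_text, corpora))
```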

Graupner, Hendrik, Jaeger, David, Cheng, Feng, Meinel, Christoph.  2016.  Automated Parsing and Interpretation of Identity Leaks. Proceedings of the ACM International Conference on Computing Frontiers. :127–134.

Identity data leaks on the Internet are more relevant than ever. Almost every month, the news reports the leak of a database with more than a million users. Smaller, but no less dangerous, leaks happen multiple times a day. The public availability of such leaked data is a major threat to the victims, but it also creates the opportunity to learn about the security of service providers as well as the behavior of users when choosing passwords. Our goal is to analyze this data and generate knowledge that can be used to increase security awareness and security, respectively. This paper presents a novel approach to the automatic analysis of the vast majority of bigger and smaller leaks. Our contribution is the concept and a prototype implementation of a parser, composed of a syntactic and a semantic module, and a data analyzer for identity leaks. In this context, we deal with two major challenges: the huge number of different formats and the recognition of leaks' unknown data types. Based on the data collected, this paper reveals how easy it is for criminals to collect large numbers of passwords, which are stored in plain text or only weakly hashed.
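
A minimal sketch of the syntactic/semantic split described above might look like the following; the `email:credential` line format and the hash-length heuristics are assumptions made for illustration and are not the paper's actual parser modules.

```python
# Leak-line parsing sketch -- the "email:credential" format and the
# hash-length heuristics are illustrative assumptions, not the paper's
# syntactic and semantic modules.
import re

EMAIL_RE = re.compile(r"^[^@:\s]+@[^@:\s]+\.[^@:\s]+$")

def classify_credential(value):
    """Semantic step: guess what kind of credential a field holds."""
    if re.fullmatch(r"[0-9a-fA-F]{32}", value):
        return "md5-like hash"
    if re.fullmatch(r"[0-9a-fA-F]{40}", value):
        return "sha1-like hash"
    if re.fullmatch(r"\$2[aby]\$\d{2}\$[./0-9A-Za-z]{53}", value):
        return "bcrypt hash"
    return "plaintext password"

def parse_leak_line(line):
    """Syntactic step: split a line into (email, credential) if it fits."""
    parts = line.strip().split(":", 1)
    if len(parts) == 2 and EMAIL_RE.match(parts[0]):
        email, credential = parts
        return email, credential, classify_credential(credential)
    return None  # unknown format, left for other parsing rules

# parse_leak_line("alice@example.com:5f4dcc3b5aa765d61d8327deb882cf99")
# -> ('alice@example.com', '5f4dcc3b5aa765d61d8327deb882cf99', 'md5-like hash')
```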
