Biblio

Eric Yuan, Naeem Esfahani, Sam Malek.  2014.  A Systematic Survey of Self-Protecting Software Systems. ACM Transactions on Autonomous and Adaptive Systems (TAAS) - Special Section on Best Papers from SEAMS 2012. 8(4)

Self-protecting software systems are a class of autonomic systems capable of detecting and mitigating security threats at runtime. They are growing in importance, as the stovepipe static methods of securing software systems have been shown to be inadequate for the challenges posed by modern software systems. Self-protection, like other self-* properties, allows the system to adapt to a changing environment through autonomic means without much human intervention, and can thereby be responsive, agile, and cost-effective. While existing research has made significant progress towards autonomic and adaptive security, gaps and challenges remain. This article presents a significant extension of our preliminary study in this area. In particular, unlike our preliminary study, here we have followed a systematic literature review process, which has broadened the scope of our study and strengthened the validity of our conclusions. By proposing and applying a comprehensive taxonomy to classify and characterize the state-of-the-art research in this area, we have identified key patterns, trends, and challenges in the existing approaches, which reveal a number of opportunities that will shape the focus of future research efforts.

Alain Forget, Saranga Komanduri, Alessandro Acquisti, Nicolas Christin, Lorrie Cranor, Rahul Telang.  2014.  Building the security behavior observatory: an infrastructure for long-term monitoring of client machines. HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security.

We present an architecture for the Security Behavior Observatory (SBO), a client-server infrastructure designed to collect a wide array of data on user and computer behavior from hundreds of participants over several years. The SBO infrastructure had to be carefully designed to fulfill several requirements. First, the SBO must scale with the desired length, breadth, and depth of data collection. Second, we must take extraordinary care to ensure the security of the collected data, which will inevitably include intimate participant behavioral data. Third, the SBO must serve our research interests, which will inevitably change as collected data is analyzed and interpreted. This short paper summarizes some of the benefits of our design and implementation, and discusses a few hurdles and trade-offs to consider when designing such a data collection system.

Alireza Sadeghi, Naeem Esfahani, Sam Malek.  2014.  Mining the Categorized Software Repositories to Improve the Analysis of Security Vulnerabilities. Proceedings of the 17th International Conference on Fundamental Approaches to Software Engineering. 8411

Security has become the Achilles’ heel of most modern software systems. Techniques ranging from manual inspection to automated static and dynamic analyses are commonly employed to identify security vulnerabilities prior to the release of the software. However, these techniques are time consuming and cannot keep up with the complexity of ever-growing software repositories (e.g., Google Play and Apple App Store). In this paper, we aim to improve the status quo and increase the efficiency of static analysis by mining relevant information from vulnerabilities found in the categorized software repositories. The approach relies on the fact that many modern software systems are developed using rich application development frameworks (ADF), allowing us to raise the level of abstraction for detecting vulnerabilities and thereby making it possible to classify the types of vulnerabilities that are encountered in a given category of application. We used open-source software repositories comprising more than 7 million lines of code to demonstrate how our approach can improve the efficiency of static analysis, and in turn, vulnerability detection.
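
To make the mining idea concrete, here is a minimal sketch (not the paper's code; the category names and counts are hypothetical) that ranks vulnerability types by how often they were found in a given repository category, so a static analyzer could run the most likely checkers first:

```python
from collections import Counter, defaultdict

# Hypothetical mined data: (app_category, vulnerability_type) pairs
# drawn from vulnerabilities found in categorized repositories.
mined = [
    ("messaging", "improper-intent-handling"),
    ("messaging", "data-leak"),
    ("finance", "weak-crypto"),
    ("finance", "weak-crypto"),
    ("finance", "data-leak"),
]

by_category = defaultdict(Counter)
for category, vuln in mined:
    by_category[category][vuln] += 1

def prioritized_checks(category, top_n=2):
    """Rank vulnerability types by frequency in this app category,
    so the analyzer tries the most likely checkers first."""
    return [v for v, _ in by_category[category].most_common(top_n)]

print(prioritized_checks("finance"))  # ['weak-crypto', 'data-leak']
```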

Eric Yuan, Naeem Esfahani, Sam Malek.  2014.  Automated Mining of Software Component Interactions for Self-Adaptation. SEAMS 2014 Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems. :27-36.

A self-adaptive software system should be able to monitor and analyze its runtime behavior and make adaptation decisions accordingly to meet certain desirable objectives. Traditional software adaptation techniques and recent “models@runtime” approaches usually require an a priori model of a system’s dynamic behavior. Oftentimes the model is difficult to define and labor-intensive to maintain, and tends to get out of date due to adaptation and architecture decay. We propose an alternative approach that does not require defining the system’s behavior model beforehand, but instead involves mining software component interactions from system execution traces to build a probabilistic usage model, which is in turn used to analyze, plan, and execute adaptations. Our preliminary evaluation of the approach against an Emergency Deployment System shows that the association mining model can be used to effectively address a variety of adaptation needs, including (1) safely applying dynamic changes to a running software system without creating inconsistencies, (2) identifying potentially malicious (abnormal) behavior for self-protection, and (3) improving the deployment of software components in a distributed setting for performance self-optimization, which is the subject of our ongoing research.
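
A minimal sketch of the mining step (an illustration under assumed toy traces and component names, not the authors' implementation): compute the support of component-interaction pairs across execution traces; interaction pairs with unusually low support are candidates for the abnormal-behavior flagging the abstract mentions.

```python
from collections import Counter
from itertools import combinations

# Hypothetical execution traces: each is a set of observed
# component interactions during one run of the system.
traces = [
    ["UI->Auth", "Auth->DB", "UI->Report"],
    ["UI->Auth", "Auth->DB"],
    ["UI->Auth", "Auth->DB", "UI->Report"],
]

pair_counts = Counter()
for trace in traces:
    for a, b in combinations(set(trace), 2):
        pair_counts[frozenset((a, b))] += 1

def support(a, b):
    """Fraction of traces in which both interactions co-occur."""
    return pair_counts[frozenset((a, b))] / len(traces)

# High-support pairs reflect usual behavior; a newly observed pair
# with very low support could be flagged as potentially abnormal.
print(support("UI->Auth", "Auth->DB"))   # 1.0
print(support("Auth->DB", "UI->Report")) # ~0.67
```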

Alain Forget, Saranga Komanduri, Alessandro Acquisti, Nicolas Christin, Lorrie Cranor, Rahul Telang.  2014.  Security Behavior Observatory: Infrastructure for Long-term Monitoring of Client Machines.

Much of the data researchers usually collect about users’ privacy and security behavior comes from short-term studies and focuses on specific, narrow activities. We present a design architecture for the Security Behavior Observatory (SBO), a client-server infrastructure designed to collect a wide array of data on user and computer behavior from a panel of hundreds of participants over several years. The SBO infrastructure had to be carefully designed to fulfill several requirements. First, the SBO must scale with the desired length, breadth, and depth of data collection. Second, we must take extraordinary care to ensure the security and privacy of the collected data, which will inevitably include intimate details about our participants’ behavior. Third, the SBO must serve our research interests, which will inevitably change over the course of the study as the collected data are analyzed and interpreted and suggest further lines of inquiry. We describe in detail the SBO infrastructure, its secure data collection methods, the benefits of our design and implementation, as well as the hurdles and tradeoffs to consider when designing such a data collection system.

Marwan Abi-Antoun, Sumukhi Chandrashekar, Radu Vanciu, Andrew Giang.  2014.  Are Object Graphs Extracted Using Abstract Interpretation Significantly Different from the Code? SCAM '14 Proceedings of the 2014 IEEE 14th International Working Conference on Source Code Analysis and Manipulation.

To evolve object-oriented code, one must understand both the code structure in terms of classes, and the runtime structure in terms of abstractions of objects that are being created and relations between those objects. To help with this understanding, static program analysis can extract heap abstractions such as object graphs. But the extracted graphs can become too large if they do not sufficiently abstract objects, or too imprecise if they abstract objects excessively to the point of being similar to a class diagram, where one box for a class represents all the instances of that class. One previously proposed solution uses both annotations and abstract interpretation to extract a global, hierarchical, abstract object graph that conveys both abstraction and design intent, but can still be related to the code structure. In this paper, we define metrics that relate nodes and edges in the object graph to elements in the code structure, to measure how they differ and whether the differences are indicative of language or design features such as encapsulation, polymorphism and inheritance. We compute the metrics across eight systems totaling over 100 KLOC, and show a statistically significant difference between the code and the object graph. In several cases, the magnitude of this difference is large.
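
As a rough illustration of the kind of measurement involved (an example with made-up data, not the paper's metric definitions), one can compare the number of object-graph nodes against the number of classes:

```python
# Hypothetical extracted data: classes in the code structure, and
# abstract objects in the extracted object graph keyed by their class.
classes = {"List", "Node", "Scheduler"}
object_graph_nodes = ["List", "List", "Node", "Node", "Node", "Scheduler"]

# A simple metric in the spirit of the paper: object-graph nodes per
# class. A ratio near 1.0 means the object graph looks like a class
# diagram; a larger ratio reflects instance-level distinctions
# (e.g., two List objects used for different design purposes).
objects_per_class = len(object_graph_nodes) / len(classes)
print(f"objects per class: {objects_per_class:.2f}")  # 2.00
```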

Marwan Abi-Antoun, Sumukhi Chandrashekar, Radu Vanciu, Andrew Giang.  2014.  Are Object Graphs Extracted Using Abstract Interpretation Significantly Different from the Code? Extended Version. SCAM '14 Proceedings of the 2014 IEEE 14th International Working Conference on Source Code Analysis and Manipulation.

To evolve object-oriented code, one must understand both the code structure in terms of classes, and the runtime structure in terms of abstractions of objects that are being created and relations between those objects. To help with this understanding, static program analysis can extract heap abstractions such as object graphs. But the extracted graphs can become too large if they do not sufficiently abstract objects, or too imprecise if they abstract objects excessively to the point of being similar to a class diagram that shows one box for a class to represent all the instances of that class. One previously proposed solution uses both annotations and abstract interpretation to extract a global, hierarchical, abstract object graph that conveys both abstraction and design intent, but can still be related to the code structure. In this paper, we define metrics that relate nodes and edges in the object graph to elements in the code structure to measure how they differ and whether the differences are indicative of language or design features such as encapsulation, polymorphism and inheritance. We compute the metrics across eight systems totaling over 100 KLOC, and show a statistically significant difference between the code and the object graph. In several cases, the magnitude of this difference is large.

Christian Hinrichs, Sebastian Lehnhoff, Michael Sonnenschein.  2014.  COHDA: A combinatorial optimization heuristic for distributed agents. International Conference on Agents and Artificial Intelligence 2013.

Victor Heorhiadi, Seyed Kaveh Fayaz, Michael K. Reiter, Vyas Sekar.  2014.  SNIPS: A Software-Defined Approach for Scaling Intrusion Prevention Systems via Offloading. 10th International Conference on Information Systems Security, ICISS 2014. 8880

Growing traffic volumes and the increasing complexity of attacks pose a constant scaling challenge for network intrusion prevention systems (NIPS). In this respect, offloading NIPS processing to compute clusters offers an immediately deployable alternative to expensive hardware upgrades. In practice, however, NIPS offloading is challenging on three fronts in contrast to passive network security functions: (1) NIPS offloading can impact other traffic engineering objectives; (2) NIPS offloading impacts user perceived latency; and (3) NIPS actively change traffic volumes by dropping unwanted traffic. To address these challenges, we present the SNIPS system. We design a formal optimization framework that captures tradeoffs across scalability, network load, and latency. We provide a practical implementation using recent advances in software-defined networking without requiring modifications to NIPS hardware. Our evaluations on realistic topologies show that SNIPS can reduce the maximum load by up to 10× while only increasing the latency by 2%.
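
The paper's framework is a formal optimization; the toy greedy sketch below only illustrates the load/latency tradeoff it captures, with made-up node names, latencies, and traffic volumes:

```python
# Assign each traffic aggregate to the on-path NIPS or an offload
# cluster, trying to keep the maximum load low while tracking the
# extra latency that offloading adds. All numbers are illustrative.
aggregates = [("a", 40), ("b", 30), ("c", 30)]  # (flow id, load units)
nodes = {"onpath": 0.0, "cluster1": 2.0, "cluster2": 2.0}  # added latency (ms)

load = {n: 0 for n in nodes}
assignment, total_extra_latency = {}, 0.0
for flow, demand in aggregates:
    # pick the node that ends up least loaded; ties favor lower latency
    target = min(nodes, key=lambda n: (load[n] + demand, nodes[n]))
    load[target] += demand
    assignment[flow] = target
    total_extra_latency += nodes[target]

print(assignment)          # {'a': 'onpath', 'b': 'cluster1', 'c': 'cluster2'}
print(max(load.values()))  # maximum load after offloading: 40
```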

Shweta Subramani, Mladen A. Vouk, Laurie Williams.  2014.  An Analysis of Fedora Security Profile. HotSoS 2014 Symposium and Bootcamp on the Science of Security (HotSoS). :169-171.

In our previous work we showed that for Fedora, under normal operational conditions, security problem discovery appears to be a random process. While in the case of Fedora, and a number of other open source products, classical reliability models can be adapted to estimate the number of residual security problems under “normal” operational usage (not attacks), the predictive ability of the model is lower for security faults due to the rarity of security events and because there appears to be no real security reliability growth. The ratio of security to non-security faults is an indicator that the process needs improving, but it also may be leveraged to assess the vulnerability profile of a release and possibly guide testing of its next version. We manually analyzed randomly sampled problems for four different versions of Fedora and classified them into security vulnerability categories. We also analyzed the distribution of these problems over the software’s lifespan and found that they exhibit a symmetry that can perhaps be used in estimating the number of residual security problems in the software. Based on our work, we believe that an approach to vulnerability elimination based on a combination of “classical” and some non-operational “bounded” high-assurance testing along the lines previously discussed may yield good vulnerability elimination results, as well as a way of estimating the vulnerability level of a release. Classical SRE methods, metrics, and models can be used to track both non-security and security problem detection under a normal operational profile. We can then model the reliability growth, if any, and estimate the number of residual faults by estimating the lower and upper bounds on the total number of faults of a certain type. In production, there may be a simpler alternative: just count the vulnerabilities and project over the next period assuming a constant vulnerability discovery rate. In the testing phase, to accelerate the process, one might leverage collected vulnerability information to generate non-operational test cases aimed at vulnerability categories. The observed distributions of security problems reported under normal “operational” usage appear to support the above approach, i.e., what is learned, say, in the first x weeks can then be leveraged in selecting test cases in the next stage. Similarly, what is learned about a product Y weeks after its release may be very indicative of its vulnerability profile for the rest of its life, given the assumption of a constant vulnerability discovery rate.
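
The "simpler alternative" of projecting at a constant discovery rate amounts to a one-line calculation; a sketch with illustrative numbers (not Fedora data):

```python
# Count vulnerabilities observed so far and project forward,
# assuming the discovery rate stays constant.
weeks_observed = 26
vulns_observed = 13

rate = vulns_observed / weeks_observed        # vulnerabilities per week
projected_next_quarter = rate * 13            # next 13 weeks
print(f"rate: {rate:.2f}/week, next 13 weeks: {projected_next_quarter:.1f}")
```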

Shweta Subramani.  2014.  Security Profile of Fedora. Computer Science. MS:105.

The process of software development and evolution has proven difficult to improve. For example, well-documented security issues such as SQL injection (SQLi), after more than a decade, still top most vulnerability lists. Quantitative security process and quality metrics are often subdued due to lack of time and resources. Security problems are hard to quantify and even harder to predict or relate to any process improvement activity. The goal of this thesis is to assess the usefulness of “classical” software reliability engineering (SRE) models in the context of open source software security, the conditions under which they may be useful, and the information that they can provide with respect to the security quality of a software product. We start with security problem reports for the open source Fedora series of software releases. We illustrate how one can learn from the normal operational profile about the non-operational processes related to security problems. One aspect is classification of security problems based on the human traits that contribute to the injection of problems into code, whether due to poor practices or limited knowledge (epistemic errors), or due to random accidental events (aleatoric errors). Knowing the distribution aids in development of an attack profile. In the case of Fedora, the distribution of security problems found post-release was consistent across four different releases of the software. The security problem discovery rate appears to be roughly constant but much lower than the initial non-security problem discovery rate. Previous work has shown that non-operational testing can help accelerate and focus the problem discovery rate and that it can be successfully modeled. We find that some classical reliability models can be used with success to estimate the residual number of security problems, and through that provide a measure of the security characteristics of the software. We propose an agile software testing process that combines operational and non-operational (or attack-related) testing with the intent of finding more security problems faster.

D. Arney, J. Plourde, R. Schrenker, P. Mattegunta, S.F. Whitehead, J.M. Goldman.  2014.  Design Pillars for Medical Cyber-Physical System Middleware. Proceedings of the 5th Workshop on Medical Cyber-Physical Systems (MCPS 2014). :124-132.

P.C. Shahare, N.A. Chavhan.  2014.  An Approach to Secure Sink Node's Location Privacy in Wireless Sensor Networks. 2014 Fourth International Conference on Communication Systems and Network Technologies (CSNT). :748-751.

Wireless Sensor Networks have a wide range of applications, including environmental monitoring and data gathering in hostile environments. This kind of network is easily exposed to various external and internal attacks because of its open nature. The sink node is a receiving and collection point that gathers data from the sensor nodes present in the network; thus, it forms a bridge between the sensors and the user. A complete sensor network can be rendered useless if this sink node is attacked. To ensure continuous usage, it is very important to preserve the location privacy of sink nodes. This paper proposes an approach for securing the location privacy of the sink node. The proposed scheme modifies the traditional Blast technique by adding a shortest-path algorithm and an efficient clustering mechanism to the network, aiming to minimize energy consumption and packet delay.
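
The abstract names a shortest-path algorithm as one ingredient; below is a minimal Dijkstra sketch over a hypothetical cluster-head graph (an illustration only, not the paper's actual scheme):

```python
import heapq

graph = {  # cluster head -> {neighbor: combined energy/delay cost}
    "CH1": {"CH2": 2, "CH3": 5},
    "CH2": {"CH3": 1, "sink": 7},
    "CH3": {"sink": 2},
    "sink": {},
}

def dijkstra(src, dst):
    """Lowest-cost route from a cluster head toward the sink."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(dijkstra("CH1", "sink"))  # (5, ['CH1', 'CH2', 'CH3', 'sink'])
```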

2015-05-05
Z. Stanisavljevic, J. Stanisavljevic, P. Vuletic, Z. Jovanovic.  2014.  COALA - System for Visual Representation of Cryptography Algorithms. IEEE Transactions on Learning Technologies. 7:178-190.

Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition, typically through visual representation and simulation of complex algorithms and mechanisms or hardware systems that are often not available to the educational institutions. This paper presents a novel software system for CryptOgraphic ALgorithm visuAl representation (COALA), which was developed to support a Data Security course at the School of Electrical Engineering, University of Belgrade. The system allows users to follow the execution of several complex algorithms (DES, AES, RSA, and Diffie-Hellman) on real-world examples in a detailed step-by-step view, with the possibility of forward and backward navigation. Benefits of the COALA system for students are observed through the increase in the percentage of students who passed the exam and in the average exam grade during one school year.
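
For example, a Diffie-Hellman walkthrough of the kind COALA visualizes can be traced step by step on toy numbers (tiny parameters chosen for readability; this is an illustration, not COALA's code):

```python
p, g = 23, 5            # public modulus and generator
a, b = 6, 15            # Alice's and Bob's private keys

A = pow(g, a, p)        # step 1: Alice sends A = g^a mod p  -> 8
B = pow(g, b, p)        # step 2: Bob   sends B = g^b mod p  -> 19
shared_alice = pow(B, a, p)   # step 3: Alice computes B^a mod p
shared_bob = pow(A, b, p)     # step 4: Bob   computes A^b mod p
assert shared_alice == shared_bob == 2
print("shared secret:", shared_alice)
```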

H. Huang, C. C. Ni, X. Ban, J. Gao, A. T. Schneider, S. Lin.  2014.  Connected wireless camera network deployment with visibility coverage. IEEE INFOCOM 2014 - IEEE Conference on Computer Communications. :1204-1212.

M.T. Soleimani, M. Kahvand.  2014.  Defending packet dropping attacks based on dynamic trust model in wireless ad hoc networks. 2014 17th IEEE Mediterranean Electrotechnical Conference (MELECON). :362-366.

Rapid advances in wireless ad hoc networks have led to an increase in their real-life applications. Since wireless ad hoc networks have no centralized infrastructure and management, they are vulnerable to several security threats. Malicious packet dropping is a serious attack against these networks: an adversary node tries to drop all or part of the received packets instead of forwarding them to the next hop along the path. A dangerous type of this attack is called the black hole: after absorbing network traffic, the malicious node drops all received packets to form a denial-of-service (DoS) attack. In this paper, a dynamic trust model to defend the network against this attack is proposed. In this approach, a node initially trusts all immediate neighbors. Using feedback on its neighbors' behavior, a node updates the corresponding trust values. Simulation results with NS-2 show that the attack is detected successfully with a low false-positive probability.
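
A minimal sketch of such a trust update (an illustration; the thresholds and weights are assumptions, not the paper's parameters):

```python
# Start with full trust in every immediate neighbor, then raise or
# lower trust based on observed forwarding behavior.
TRUST_INIT, TRUST_MIN, BLACK_HOLE_THRESHOLD = 1.0, 0.0, 0.3
REWARD, PENALTY = 0.05, 0.2

trust = {"n1": TRUST_INIT, "n2": TRUST_INIT}

def observe(neighbor, forwarded):
    """Update trust after watching whether a neighbor forwarded a packet."""
    if forwarded:
        trust[neighbor] = min(TRUST_INIT, trust[neighbor] + REWARD)
    else:
        trust[neighbor] = max(TRUST_MIN, trust[neighbor] - PENALTY)
    if trust[neighbor] < BLACK_HOLE_THRESHOLD:
        print(f"{neighbor} flagged as a suspected black hole")

for _ in range(4):
    observe("n2", forwarded=False)  # n2 keeps dropping packets
# trust: 1.0 -> 0.8 -> 0.6 -> 0.4 -> 0.2, flagged on the 4th observation
```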

S. Munir, J. A. Stankovic.  2014.  DepSys: Dependency aware integration of cyber-physical systems for smart homes. 2014 ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS). :127-138.

S. Amin, T. Clark, R. Offutt, K. Serenko.  2014.  Design of a cyber security framework for ADS-B based surveillance systems. 2014 Systems and Information Engineering Design Symposium (SIEDS). :304-309.

The need for increased surveillance due to the growth in flight volume in remote or oceanic regions outside the range of traditional radar coverage has been met by the advent of space-based Automatic Dependent Surveillance-Broadcast (ADS-B) surveillance systems. ADS-B systems have the capability of providing air traffic controllers with highly accurate real-time flight data. ADS-B is dependent on digital communications between aircraft and ground stations of the air route traffic control center (ARTCC); however, these communications are not secured. Anyone with the appropriate capabilities and equipment can interrogate the signal and transmit their own false data; this is known as spoofing. The possibility of this type of attack decreases the situational awareness of United States airspace. The purpose of this project is to design a secure transmission framework that prevents ADS-B signals from being spoofed. Three alternative methods of securing ADS-B signals are evaluated: hashing, symmetric encryption, and asymmetric encryption. The security strength of the design alternatives is determined from research. Feasibility criteria are determined by comparative analysis of alternatives. Economic implications and possible collision risk are determined from simulations that model the United States airspace over the Gulf of Mexico and part of the airspace under attack, respectively. The ultimate goal of the project is to show that if ADS-B signals can be secured, situational awareness can improve and the ARTCC can use information from this surveillance system to decrease the separation between aircraft and ultimately maximize the use of United States airspace.
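
Of the three alternatives, the keyed-hash option can be sketched with standard primitives (an illustration; the message format and shared key are assumptions, and key distribution among aircraft and ground stations is the hard problem a real design must solve):

```python
import hashlib, hmac, os

key = os.urandom(32)  # hypothetical secret shared with the ARTCC
msg = b"ICAO=A1B2C3;lat=29.5;lon=-89.9;alt=35000;t=1700000000"

# Sender: append a keyed hash (HMAC) so the report can be authenticated.
tag = hmac.new(key, msg, hashlib.sha256).digest()
wire = msg + b";tag=" + tag.hex().encode()

# Receiver: recompute and compare in constant time; a spoofed
# message produced without the key fails this check.
recv_msg, recv_tag = wire.rsplit(b";tag=", 1)
expected = hmac.new(key, recv_msg, hashlib.sha256).hexdigest().encode()
ok = hmac.compare_digest(expected, recv_tag)
print("authentic" if ok else "spoofed")
```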

S. Kannan, N. Karimi, R. Karri, O. Sinanoglu.  2014.  Detection, diagnosis, and repair of faults in memristor-based memories. 2014 IEEE 32nd VLSI Test Symposium (VTS). :1-6.

Memristors are an attractive option for use in future memory architectures due to their non-volatility, high density and low power operation. Notwithstanding these advantages, memristors and memristor-based memories are prone to high defect densities due to the non-deterministic nature of nanoscale fabrication. The typical approach to fault detection and diagnosis in memories entails testing one memory cell at a time. This is time consuming and does not scale for the dense, memristor-based memories. In this paper, we integrate solutions for detecting and locating faults in memristors, and ensure post-silicon recovery from memristor failures. We propose a hybrid diagnosis scheme that exploits sneak-paths inherent in crossbar memories, and uses March testing to test and diagnose multiple memory cells simultaneously, thereby reducing test time. We also provide a repair mechanism that prevents faults in the memory from being activated. The proposed schemes enable and leverage sneak paths during fault detection and diagnosis modes, while still maintaining a sneak-path free crossbar during normal operation. The proposed hybrid scheme reduces fault detection and diagnosis time by ~44%, compared to traditional March tests, and repairs the faulty cell with minimal overhead.
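
For background, a March test applies a fixed read/write sequence to every cell in ascending and descending address order; a toy software simulation follows (an illustration with an injected stuck-at fault, without the sneak-path parallelism that gives the paper its speedup):

```python
N = 8
mem = [0] * N
stuck_at_zero = {5}  # hypothetical faulty cell index

def write(i, v):
    mem[i] = 0 if i in stuck_at_zero else v  # fault injection

def read(i):
    return mem[i]

faults = []
for i in range(N):              # M0: ascending, write 0
    write(i, 0)
for i in range(N):              # M1: ascending, read 0 then write 1
    if read(i) != 0:
        faults.append(i)
    write(i, 1)
for i in range(N - 1, -1, -1):  # M2: descending, read 1 then write 0
    if read(i) != 1:
        faults.append(i)
    write(i, 0)

print("faulty cells:", sorted(set(faults)))  # [5]
```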

T. Koyanagi, Y. Shinjo.  2014.  A fast and compact hybrid memory resident datastore for text analytics with autonomic memory allocation. 2014 5th International Conference on Information and Communication Systems (ICICS). :1-7.

This paper describes a high-performance and space-efficient memory-resident datastore for text analytics systems, based on a hash table for fast access, a dynamic trie for staging, and a list of Level-Order Unary Degree Sequence (LOUDS) tries for compactness. We achieve efficient memory allocation and data placement by placing frequently accessed keys in the hash table and infrequently accessed keys in the LOUDS tries, without using conventional cache algorithms. Our algorithm also dynamically changes the memory allocation sizes for these data structures according to the remaining available memory. This technique yields 38.6% to 52.9% better throughput than a double-array trie, a conventional fast and compact datastore.
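
A minimal sketch of the hot/cold split (an illustration: a Python dict and a sorted list stand in for the real hash table and LOUDS tries, and demotion by access frequency is simplified):

```python
import bisect

class HybridStore:
    def __init__(self, hot_capacity=2):
        self.hot = {}                            # hash table: hot keys
        self.cold_keys, self.cold_vals = [], []  # compact sorted tier
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        self.hot[key] = value
        if len(self.hot) > self.hot_capacity:
            # demote an arbitrary key; the real system demotes by frequency
            k, v = self.hot.popitem()
            i = bisect.bisect_left(self.cold_keys, k)
            self.cold_keys.insert(i, k)
            self.cold_vals.insert(i, v)

    def get(self, key):
        if key in self.hot:                      # fast path
            return self.hot[key]
        i = bisect.bisect_left(self.cold_keys, key)  # compact path
        if i < len(self.cold_keys) and self.cold_keys[i] == key:
            return self.cold_vals[i]
        return None

store = HybridStore()
for word in ["the", "of", "trie", "louds"]:
    store.put(word, len(word))
print(store.get("the"), store.get("louds"))  # 3 5
```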

Daesung Choi, Sungdae Hong, Hyoung-Kee Choi.  2014.  A group-based security protocol for Machine Type Communications in LTE-Advanced. 2014 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :161-162.

We propose Authentication and Key Agreement (AKA) for Machine Type Communications (MTC) in LTE-Advanced. The protocol is based on the idea of grouping devices, which reduces signaling congestion in the access network and the load on the single authentication server. Using a software verification tool, we verified that the protocol is secure against many attacks. Furthermore, performance evaluation suggests that the protocol is efficient with respect to authentication overhead and handover delay.
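
A sketch of the grouping idea using a generic HMAC-based key-derivation chain (an assumption for illustration, not the paper's actual AKA message flow):

```python
import hashlib, hmac, os

# One group secret is provisioned for a whole device group; each
# device key is derived from it, so the server can authenticate a
# batch of MTC devices without per-device round trips.
group_secret = os.urandom(32)

def device_key(group_secret, device_id):
    return hmac.new(group_secret, device_id.encode(), hashlib.sha256).digest()

def auth_response(dev_key, challenge):
    return hmac.new(dev_key, challenge, hashlib.sha256).hexdigest()

challenge = os.urandom(16)  # one challenge covers the whole group
for dev in ["mtc-001", "mtc-002", "mtc-003"]:
    k = device_key(group_secret, dev)
    # the server recomputes k from the group secret and verifies
    print(dev, auth_response(k, challenge)[:16])
```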

Yingmeng Xiang, Yichi Zhang, Lingfeng Wang, Weiqing Sun.  2014.  Impact of UPFC on power system reliability considering its cyber vulnerability. 2014 IEEE PES T&D Conference and Exposition. :1-5.

The unified power flow controller (UPFC) has attracted much attention recently because of its capability in controlling the active and reactive power flows. The normal operation of the UPFC is dependent on both its physical part and the associated cyber system; thus, malicious cyber attacks may impact its reliability. As more information and communication technologies are being integrated into the current power grid, more frequent occurrences of cyber attacks are possible. In this paper, the cyber architecture of the UPFC is analyzed, and possible attack scenarios are considered and discussed. Based on the interdependency of the physical part and the cyber part, an integrated reliability model for the UPFC is proposed and analyzed. The impact of the UPFC on the overall system reliability is examined, and it is shown that cyber attacks against the UPFC may yield an adverse influence.
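
A first-order way to express the physical/cyber interdependency is a series-availability combination (an illustration with made-up numbers, far simpler than the paper's model):

```python
# The UPFC is up only if both its physical part and its cyber part
# are up, so availabilities multiply; attacks lower the cyber term.
a_physical = 0.995           # availability of the physical part
a_cyber_no_attack = 0.999    # cyber part, benign conditions
a_cyber_attack = 0.90        # cyber part, under sustained attack
p_attack = 0.05              # fraction of time under attack

a_cyber = (1 - p_attack) * a_cyber_no_attack + p_attack * a_cyber_attack
a_upfc = a_physical * a_cyber
print(f"UPFC availability: {a_upfc:.4f}")  # drops as p_attack grows
```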

R.D. Shinganjude, D.P. Theng.  2014.  Inspecting the Ways of Source Anonymity in Wireless Sensor Network. 2014 Fourth International Conference on Communication Systems and Network Technologies (CSNT). :705-707.

Sensor networks are mainly deployed to monitor and report real events, which makes event source anonymity very difficult and expensive to achieve, as sensor networks are very limited in resources. Data obscurity, i.e., the source anonymity problem, requires that an unauthorized observer be unable to detect the origin of events by analyzing the network traffic; this problem has emerged as an important topic in the security of wireless sensor networks. This work inspects the different approaches taken to attain source anonymity in wireless sensor networks, covering a variety of techniques based on different adversarial assumptions. The approach achieving the best result in source anonymity is proposed for further improvement of source location privacy. The paper suggests implementing the prominent and effective LSB steganography technique for this improvement.
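
A minimal LSB-steganography sketch of the suggested technique (an illustration; the cover bytes are hypothetical sensor readings):

```python
# Hide message bits in the least significant bit of each byte of a
# cover payload, so an observer analyzing traffic sees only
# plausible-looking data.
def embed(cover: bytes, message_bits: str) -> bytes:
    out = bytearray(cover)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & 0xFE) | int(bit)  # overwrite the LSB
    return bytes(out)

def extract(stego: bytes, nbits: int) -> str:
    return "".join(str(b & 1) for b in stego[:nbits])

cover = bytes([200, 13, 77, 94, 151, 62, 101, 40])  # hypothetical readings
bits = "10110010"
stego = embed(cover, bits)
assert extract(stego, 8) == bits
print(list(stego))  # each byte differs from the cover by at most 1
```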

N. Skoberne, O. Maennel, I. Phillips, R. Bush, J. Zorz, M. Ciglaric.  2014.  IPv4 Address Sharing Mechanism Classification and Tradeoff Analysis. IEEE/ACM Transactions on Networking. 22:391-404.

The growth of the Internet has made IPv4 addresses a scarce resource. Due to slow IPv6 deployment, IANA-level IPv4 address exhaustion was reached before the world could transition to an IPv6-only Internet. The continuing need for IPv4 reachability will only be supported by IPv4 address sharing. This paper reviews ISP-level address sharing mechanisms, which allow Internet service providers to connect multiple customers who share a single IPv4 address. Some mechanisms come with severe and unpredicted consequences, and all of them come with tradeoffs. We propose a novel classification, which we apply to existing mechanisms such as NAT444 and DS-Lite and proposals such as 4rd, MAP, etc. Our tradeoff analysis reveals insights into many problems including: abuse attribution, performance degradation, address and port usage efficiency, direct intercustomer communication, and availability.
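
As a worked example of the address and port usage-efficiency tradeoff the analysis covers (illustrative numbers, not from the paper): with static port-range sharing, the block size fixes how many customers can share one IPv4 address.

```python
# Static port-range allocation: each customer gets a fixed block of
# ports, so one address supports only a limited number of customers.
TOTAL_PORTS = 65536
RESERVED = 1024                 # well-known ports kept off-limits
ports_per_customer = 2048       # illustrative allocation

customers_per_address = (TOTAL_PORTS - RESERVED) // ports_per_customer
print(customers_per_address)    # 31 customers share one address
```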