Biblio

Found 1080 results

Filters: First Letter Of Title is T
2018-05-14
Preoteasa, Viorel, Tripakis, Stavros.  2016.  Towards Compositional Feedback in Non-Deterministic and Non-Input-Receptive Systems. Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science, LICS '16, New York, NY, USA, July 5-8, 2016. :768–777.
2017-05-18
Wang, Huangxin, Li, Fei, Chen, Songqing.  2016.  Towards Cost-Effective Moving Target Defense Against DDoS and Covert Channel Attacks. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :15–25.

Traditionally, network and system configurations are static. Attackers have plenty of time to exploit the system's vulnerabilities, and thus they are able to choose wisely when to launch attacks to maximize the damage. An unpredictable system configuration can significantly raise the bar for attackers to conduct successful attacks. In recent years, moving target defense (MTD) has been advocated for this purpose. An MTD mechanism aims to introduce dynamics to the system by changing its configuration continuously over time; we call these changes adaptations. Though promising, dynamic system reconfiguration introduces overhead to the applications currently running in the system. It is critical to determine the right time to conduct adaptations, balancing the overhead incurred against the security levels guaranteed. This problem is known as the MTD timing problem. Little prior work has investigated the right time to make adaptations. In this paper, we take the first step to both theoretically and experimentally study the timing problem in moving target defenses. For a broad family of attacks, including DDoS attacks and cloud covert channel attacks, we model this problem as a renewal reward process and propose an optimal algorithm for deciding when to adapt, with the objective of minimizing the long-term cost rate. In our experiments, both DDoS attacks and cloud covert channel attacks are studied. Simulations based on real network traffic traces demonstrate that our proposed algorithm outperforms known adaptation schemes.
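The renewal-reward framing described in the abstract can be illustrated with a small numeric sketch. This is an illustrative reconstruction, not the authors' algorithm; the adaptation cost `c_adapt` and the linear damage-rate constant `k` are hypothetical parameters.

```python
# Illustrative sketch of the MTD timing trade-off (not the paper's
# algorithm). Assume each adaptation costs c_adapt and attack damage
# accrues at rate d(t) = k * t since the last adaptation. Over a renewal
# cycle of length T, the long-run cost rate is
#   (c_adapt + integral_0^T k*t dt) / T = (c_adapt + 0.5*k*T^2) / T.

def cost_rate(T, c_adapt=10.0, k=0.5):
    """Long-run cost per unit time when adapting every T time units."""
    return (c_adapt + 0.5 * k * T * T) / T

def best_period(candidates, c_adapt=10.0, k=0.5):
    """Candidate period minimizing the long-run cost rate."""
    return min(candidates, key=lambda T: cost_rate(T, c_adapt, k))

# For this damage model the minimizer is T* = sqrt(2 * c_adapt / k):
# adapting too often pays the adaptation cost too frequently; adapting
# too rarely lets the accumulated damage dominate.
```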

2017-03-20
Malecha, Gregory, Ricketts, Daniel, Alvarez, Mario M., Lerner, Sorin.  2016.  Towards foundational verification of cyber-physical systems. :1–5.

The safety-critical aspects of cyber-physical systems motivate the need for rigorous analysis of these systems. In the literature this work is often done using idealized models of systems where the analysis can be carried out using high-level reasoning techniques such as Lyapunov functions and model checking. In this paper we present VERIDRONE, a foundational framework for reasoning about cyber-physical systems at all levels from high-level models to C code that implements the system. VERIDRONE is a library within the Coq proof assistant enabling us to build on its foundational implementation, its interactive development environments, and its wealth of libraries capturing interesting theories ranging from real numbers and differential equations to verified compilers and floating point numbers. These features make proof assistants in general, and Coq in particular, a powerful platform for unifying foundational results about safety-critical systems and ensuring interesting properties at all levels of the stack.

2017-04-20
Yang, Kai, Wang, Jing, Bao, Lixia, Ding, Mei, Wang, Jiangtao, Wang, Yasha.  2016.  Towards Future Situation-Awareness: A Conceptual Middleware Framework for Opportunistic Situation Identification. Proceedings of the 12th ACM Symposium on QoS and Security for Wireless and Mobile Networks. :95–101.

Opportunistic Situation Identification (OSI) is a new paradigm for situation-aware systems, in which contexts for situation identification are sensed through sensors that happen to be available rather than pre-deployed, application-specific ones. OSI extends the application usage scale and reduces system costs. However, designing and implementing the OSI module of situation-aware systems encounters several challenges, including the uncertainty of context availability, vulnerable network connectivity, and privacy threats. This paper proposes a novel middleware framework to tackle these challenges. Its intuition is to perform situation reasoning locally on a smartphone without relying on the cloud, thus reducing the dependency on the network and better preserving privacy. To realize this, we propose a hybrid learning approach that maximizes reasoning accuracy within the phone's limited storage space by combining two state-of-the-art techniques. Specifically, this paper provides a genetic-algorithm-based optimization approach to determine which pre-computed models are selected for storage under the storage constraints. Validation of the approach based on an open dataset indicates that the proposed approach achieves higher accuracy with comparatively small storage cost. Further, the proposed utility function for model selection performs better than three baseline utility functions.
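The genetic-algorithm model-selection step described above amounts to a constrained subset-selection (knapsack-style) problem. The sketch below is a generic GA under that reading, not the paper's encoding or operators; the model sizes, utilities, and storage budget are hypothetical.

```python
import random

# Generic GA sketch for storage-constrained model selection (hypothetical
# sizes/utilities; not the paper's operators). A bit-string marks which
# pre-computed models to keep; selections over budget score zero.

def fitness(bits, sizes, utils, budget):
    """Total utility of the selection, or 0 if it exceeds the budget."""
    size = sum(s for b, s in zip(bits, sizes) if b)
    return sum(u for b, u in zip(bits, utils) if b) if size <= budget else 0.0

def ga_select(sizes, utils, budget, pop=30, gens=60, seed=1):
    random.seed(seed)
    n = len(sizes)
    popl = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=lambda b: fitness(b, sizes, utils, budget), reverse=True)
        keep = popl[: pop // 2]                 # elitist truncation selection
        children = []
        while len(keep) + len(children) < pop:
            a, b = random.sample(keep, 2)
            cut = random.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(n)] ^= 1     # point mutation
            children.append(child)
        popl = keep + children
    return max(popl, key=lambda b: fitness(b, sizes, utils, budget))
```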

2017-09-05
Antonioli, Daniele, Agrawal, Anand, Tippenhauer, Nils Ole.  2016.  Towards High-Interaction Virtual ICS Honeypots-in-a-Box. Proceedings of the 2Nd ACM Workshop on Cyber-Physical Systems Security and Privacy. :13–22.

In this work, we address the problem of designing and implementing honeypots for Industrial Control Systems (ICS). Honeypots are vulnerable systems that are set up with the intent to be probed and compromised by attackers. Analysis of those attacks then allows the defender to learn about novel attacks and the general strategy of the attacker. Honeypots for ICS need to satisfy both traditional ICT requirements, such as cost and maintainability, and more specific ICS requirements, such as time and determinism. We propose the design of a virtual, high-interaction, server-based ICS honeypot to satisfy these requirements, and the deployment of a realistic, cost-effective, and maintainable ICS honeypot. An attacker model is introduced to complete the problem statement and requirements. Based on our design and the MiniCPS framework, we implemented a honeypot mimicking a water treatment testbed. To the best of our knowledge, the presented honeypot implementation is the first academic work targeting Ethernet/IP based ICS honeypots, the first virtual ICS honeypot that is high-interaction without the use of full virtualization technologies (such as a network of virtual machines), and the first ICS honeypot that can be managed with a Software-Defined Network (SDN) controller.

2017-03-07
Purohit, Suchit S., Bothale, Vinod M., Gandhi, Savita R..  2016.  Towards M-gov in Solid Waste Management Sector Using RFID, Integrated Technologies. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :61:1–61:4.

Due to the explosive increase in teledensity and the penetration of mobile networks in urban as well as rural areas, m-governance in India is growing from infancy to a more mature shape. Various steps have been taken by the Indian government to offer citizen services through the mobile platform, providing a smooth transition from web-based e-gov services to more pervasive mobile-based services. Municipalities and municipal corporations in India already provide m-gov services such as property and professional tax transactions, birth and death registration, marriage registration, and dues of taxes and charges through SMS alerts or via call centers. To the best of our knowledge, no municipality offers mobile-based services in the solid waste management sector. This paper proposes an m-gov service implemented as an Android mobile application for the SWM department, AMC, Ahmedabad. The application operates on real-time data collected from a fully automated solid waste collection process integrated using RFID, GPS, GIS and GPRS, proposed in preceding work by the authors. The mobile application enables citizens to interactively view the status of the cleaning process in their area, file complaints in case of failure, and follow up on the status of their complaints, which can be handled by SWM officials using the same application. The application also enables SWM officials to observe and analyze the real-time status of the collection process and generate reports.

2017-09-05
Queiroz, Rodrigo, Berger, Thorsten, Czarnecki, Krzysztof.  2016.  Towards Predicting Feature Defects in Software Product Lines. Proceedings of the 7th International Workshop on Feature-Oriented Software Development. :58–62.

Defect-prediction techniques can enhance the quality assurance activities for software systems. For instance, they can be used to predict bugs in source files or functions. In the context of a software product line, such techniques could ideally be used for predicting defects in features or combinations of features, which would allow developers to focus quality assurance on the error-prone ones. In this preliminary case study, we investigate how defect prediction models can be used to identify defective features using machine-learning techniques. We adapt process metrics and evaluate and compare three classifiers using an open-source product line. Our results show that the technique can be effective. Our best scenario achieves an accuracy of 73% in predicting features as defective or clean using a Naive Bayes classifier. Based on the results, we discuss directions for future work.
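As a rough illustration of the classification step, a Gaussian Naive Bayes classifier over process metrics can be sketched from scratch. The two metrics below (commit count and author count per feature) are hypothetical stand-ins, not the paper's adapted metric set.

```python
import math

# Hedged sketch (not the paper's exact setup): tiny Gaussian Naive Bayes
# labeling features as defective/clean from two hypothetical process
# metrics, e.g. (commits, authors) per feature.

def fit(rows, labels):
    """Per-class priors plus per-metric mean/variance."""
    model = {}
    for c in set(labels):
        subset = [r for r, l in zip(rows, labels) if l == c]
        stats = []
        for col in zip(*subset):
            m = sum(col) / len(col)
            v = sum((x - m) ** 2 for x in col) / len(col) + 1e-9
            stats.append((m, v))
        model[c] = (len(subset) / len(labels), stats)
    return model

def predict(model, row):
    """Class with the highest log-posterior under independence."""
    def logpost(c):
        prior, stats = model[c]
        s = math.log(prior)
        for x, (m, v) in zip(row, stats):
            s += -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
        return s
    return max(model, key=logpost)
```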

2017-03-07
Krishnan, Sanjay, Haas, Daniel, Franklin, Michael J., Wu, Eugene.  2016.  Towards Reliable Interactive Data Cleaning: A User Survey and Recommendations. Proceedings of the Workshop on Human-In-the-Loop Data Analytics. :9:1–9:5.

Data cleaning is frequently an iterative process tailored to the requirements of a specific analysis task. The design and implementation of iterative data cleaning tools presents novel challenges, both technical and organizational, to the community. In this paper, we present results from a user survey (N = 29) of data analysts and infrastructure engineers from industry and academia. We highlight three important themes: (1) the iterative nature of data cleaning, (2) the lack of rigor in evaluating the correctness of data cleaning, and (3) the disconnect between the analysts who query the data and the infrastructure engineers who design the cleaning pipelines. We conclude by presenting a number of recommendations for future work in which we envision an interactive data cleaning system that accounts for the observed challenges.

2017-09-26
Rothberg, Valentin, Dietrich, Christian, Ziegler, Andreas, Lohmann, Daniel.  2016.  Towards Scalable Configuration Testing in Variable Software. Proceedings of the 2016 ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences. :156–167.

Testing a software product line such as Linux implies building the source with different configurations. Manual approaches to generating configurations that enable code of interest are doomed to fail due to the high number of variation points distributed over the feature model, the build system and the source code. Research has proposed various approaches to generate covering configurations, but the algorithms show many drawbacks related to run time, exhaustiveness and the number of generated configurations. Hence, analyzing an entire Linux source tree can yield more than 30 thousand configurations, thereby exceeding the limited budget and resources for build testing. In this paper, we present an approach to fill the gap between the systematic generation of configurations and the necessity to fully build software in order to test it. By merging previously generated configurations, we reduce the number of necessary builds and enable global variability-aware testing. We reduce the problem of merging configurations to finding maximum cliques in a graph. We evaluate the approach on the Linux kernel, compare the results to common practices in industry, and show that our implementation scales even when facing graphs with millions of edges.
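The merging idea can be sketched as grouping pairwise-compatible partial configurations and taking the union of each group. This greedy sketch is illustrative only: the paper reduces merging to maximum-clique finding, whereas the code below forms compatible groups greedily; the option names are hypothetical.

```python
# Hedged sketch of configuration merging (greedy grouping, not an exact
# maximum-clique solver). Two partial configurations are compatible if
# they assign no option conflicting values; each group of pairwise-
# compatible configurations merges into a single build.

def compatible(a, b):
    """No shared option is assigned different values by a and b."""
    return all(b.get(k, v) == v for k, v in a.items())

def merge_greedy(configs):
    """Greedily group pairwise-compatible configs and merge each group."""
    groups = []
    for cfg in configs:
        for g in groups:
            if all(compatible(cfg, other) for other in g):
                g.append(cfg)
                break
        else:
            groups.append([cfg])
    merged = []
    for g in groups:
        m = {}
        for cfg in g:
            m.update(cfg)   # safe: group members never conflict
        merged.append(m)
    return merged
```

Each merged dictionary stands for one build that exercises every configuration in its group, which is how merging reduces the total number of builds.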

2017-04-03
Savola, Reijo M., Savolainen, Pekka, Salonen, Jarno.  2016.  Towards Security Metrics-supported IP Traceback. Proceedings of the 10th European Conference on Software Architecture Workshops. :32:1–32:5.

The threat of DDoS and other cyberattacks has increased during the last decade. In addition to the radical increase in the number of attacks, they are also becoming more sophisticated, with targets ranging from ordinary users to service providers and even critical infrastructure. According to some sources, the sophistication of attacks is increasing faster than the mitigating actions against them. For example, determining the location of the attack origin is becoming impossible, as cyber attackers employ specific means to evade detection of the attack origin by default, such as using proxy services and source address spoofing. The purpose of this paper is to initiate discussion about the effective Internet Protocol traceback mechanisms that are needed to overcome this problem. We propose an approach for traceback that is based on extensive use of security metrics before (proactive) and during (reactive) the attacks.

2017-06-05
Shukla, Apoorv, Schmid, Stefan, Feldmann, Anja, Ludwig, Arne, Dudycz, Szymon, Schuetze, Andre.  2016.  Towards Transiently Secure Updates in Asynchronous SDNs. Proceedings of the 2016 ACM SIGCOMM Conference. :597–598.

Software-Defined Networks (SDNs) promise to overcome the often complex and error-prone operation of traditional computer networks, by enabling programmability, automation and verifiability. Yet, SDNs also introduce new challenges, for example due to the asynchronous communication channel between the logically centralized control platform and the switches in the data plane. In particular, the asynchronous communication of network update commands (e.g., OpenFlow FlowMod messages) may lead to transient inconsistencies, such as loops or bypassed waypoints (e.g., firewalls). One approach to ensure transient consistency even in asynchronous environments is to employ smart scheduling algorithms: algorithms which update subsets of switches in each communication round only, where each subset in itself guarantees consistency. In this demo, we show how to change routing policies in a transiently consistent manner. We demonstrate two algorithms, namely, Wayup [5] and Peacock [4], which partition the network updates sent from the SDN controller towards OpenFlow software switches into multiple rounds as per the respective algorithms. Later, the barrier messages are utilized to ensure reliable network updates.

2017-03-07
Ceolin, Davide, Noordegraaf, Julia, Aroyo, Lora, van Son, Chantal.  2016.  Towards Web Documents Quality Assessment for Digital Humanities Scholars. Proceedings of the 8th ACM Conference on Web Science. :315–317.

We present a framework for assessing the quality of Web documents, and a baseline of three quality dimensions: trustworthiness, objectivity and basic scholarly quality. Assessing Web document quality is a "deep data" problem necessitating approaches to handle both data size and complexity.

2017-10-03
Zhang, Fan, Cecchetti, Ethan, Croman, Kyle, Juels, Ari, Shi, Elaine.  2016.  Town Crier: An Authenticated Data Feed for Smart Contracts. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :270–282.

Smart contracts are programs that execute autonomously on blockchains. Their key envisioned uses (e.g. financial instruments) require them to consume data from outside the blockchain (e.g. stock quotes). Trustworthy data feeds that support a broad range of data requests will thus be critical to smart contract ecosystems. We present an authenticated data feed system called Town Crier (TC). TC acts as a bridge between smart contracts and existing web sites, which are already commonly trusted for non-blockchain applications. It combines a blockchain front end with a trusted hardware back end to scrape HTTPS-enabled websites and serve source-authenticated data to relying smart contracts. TC also supports confidentiality. It enables private data requests with encrypted parameters. Additionally, in a generalization that executes smart-contract logic within TC, the system permits secure use of user credentials to scrape access-controlled online data sources. We describe TC's design principles and architecture and report on an implementation that uses Intel's recently introduced Software Guard Extensions (SGX) to furnish data to the Ethereum smart contract system. We formally model TC and define and prove its basic security properties in the Universal Composability (UC) framework. Our results include definitions and techniques of general interest relating to resource consumption (Ethereum's "gas" fee system) and TCB minimization. We also report on experiments with three example applications. We plan to launch TC soon as an online public service.

2017-11-13
Juliato, M., Gebotys, C., Sanchez, I. A..  2016.  TPM-supported key agreement protocols for increased autonomy in constellation of spacecrafts. 2016 IEEE Aerospace Conference. :1–9.

The incorporation of security mechanisms to protect a spacecraft's TT&C and payload links is becoming a constant requirement in many space missions. More advanced mission concepts will allow spacecraft to have higher levels of autonomy, which includes performing key management operations independently of control centers. This is especially beneficial for supporting missions operating far from Earth. To support such levels of autonomy, key agreement is one approach that allows spacecraft to establish new cryptographic keys as they deem necessary. This work introduces an approach based on a trusted platform module that allows key agreement to be performed with minimal computational effort and few protocol iterations. Besides, it allows for opportunistic control center reporting while avoiding man-in-the-middle and replay attacks.
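The key-agreement primitive itself can be illustrated with textbook Diffie-Hellman: both parties derive the same secret without ever transmitting it. This toy sketch is not the paper's TPM-backed protocol; the parameters are unauthenticated and far too weak for real use.

```python
# Textbook Diffie-Hellman sketch (NOT the paper's TPM-backed protocol;
# toy parameters, unauthenticated, do not use in practice).
P = 2**127 - 1   # a Mersenne prime, used here only for illustration
G = 3            # public generator

def public_key(secret):
    """Value a party transmits over the link: G^secret mod P."""
    return pow(G, secret, P)

def shared_key(their_public, my_secret):
    """Both parties compute the same value: G^(a*b) mod P."""
    return pow(their_public, my_secret, P)
```

Because (G^a)^b = (G^b)^a mod P, the two spacecraft-side computations agree, and only the public values ever cross the link.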

2017-05-30
Sun, Pengfei, Han, Rui, Zhang, Mingbo, Zonouz, Saman.  2016.  Trace-free Memory Data Structure Forensics via Past Inference and Future Speculations. Proceedings of the 32Nd Annual Conference on Computer Security Applications. :570–582.

A yet-to-be-solved but very vital problem in forensics analysis is accurate memory dump data type reverse engineering, where the target process is not a priori specified and could be any of the running processes within the system. We present ReViver, a lightweight system-wide solution that extracts data type information from the memory dump without its past execution traces. ReViver constructs the dump's accurate data structure layout through collection of statistical information about possible past traces, forensics inspection of the present memory dump, and speculative investigation of potential future executions of the suspended process. First, ReViver analyzes a heavily instrumented set of execution paths of the same executable that end in the same state as the memory dump (the eip and call stack), and collects statistical information about the potential data structure instances on the captured dump. Second, ReViver uses the statistical information and performs a word-by-word data type forensics inspection of the captured memory dump. Finally, ReViver revives the dump's execution and explores its potential future execution paths symbolically. ReViver traces the executions, including library/system calls, for their known argument/return data types, and performs backward taint analysis to mark the dump bytes with relevant data type information. ReViver's experimental results on real-world applications are very promising (98.1%), and show that ReViver improves the accuracy of past trace-free memory forensics solutions significantly while maintaining a negligible runtime performance overhead (1.8%).

2017-06-05
Kirchler, Matthias, Herrmann, Dominik, Lindemann, Jens, Kloft, Marius.  2016.  Tracked Without a Trace: Linking Sessions of Users by Unsupervised Learning of Patterns in Their DNS Traffic. Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security. :23–34.

Behavior-based tracking is an unobtrusive technique that allows observers to monitor user activities on the Internet over long periods of time – in spite of changing IP addresses. Previous work has employed supervised classifiers in order to link the sessions of individual users. However, classifiers need labeled training sessions, which are difficult to obtain for observers. In this paper we show how this limitation can be overcome with an unsupervised learning technique. We present a modified k-means algorithm and evaluate it on a realistic dataset that contains the Domain Name System (DNS) queries of 3,862 users. For this purpose, we simulate an observer that tries to track all users, and an Internet Service Provider that assigns a different IP address to every user on every day. The highest tracking accuracy is achieved within the subgroup of highly active users. Almost all sessions of 73% of the users in this subgroup can be linked over a period of 56 days. 19% of the highly active users can be traced completely, i.e., all their sessions are assigned to a single cluster. This fraction increases to 40% for shorter periods of seven days. As service providers may engage in behavior-based tracking to complement their existing profiling efforts, it constitutes a severe privacy threat for users of online services. Users can defend against behavior-based tracking by changing their IP address frequently, but this is cumbersome at the moment.
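The clustering idea can be sketched with plain k-means over normalized domain-frequency vectors, where sessions of the same user should land in the same cluster. This is vanilla k-means, not the authors' modified variant, and the toy "sessions" below are hypothetical two-domain count vectors.

```python
import random

# Hedged sketch (vanilla k-means, not the paper's modified algorithm):
# cluster sessions by their normalized domain-query frequencies; sessions
# of one user are expected to share a cluster.

def normalize(v):
    """Scale a count vector to relative frequencies."""
    s = sum(v) or 1
    return [x / s for x in v]

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centers[i])))
            clusters[i].append(p)
        centers = [
            [sum(col) / len(c) for col in zip(*c)] if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters
```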

2017-04-24
Hauweele, David.  2016.  Tracking Inefficient Power Usages in WSN by Monitoring the Network Firmware: Ph.D. Forum Abstract. Proceedings of the 15th International Conference on Information Processing in Sensor Networks. :32:1–32:2.

The specification and implementation of network protocols are difficult tasks. This is particularly the case for WSN nodes, which are typically implemented on low-power microcontrollers with limited processing capabilities. Those platforms do not have the resources to run a full-fledged operating system; instead they are programmed using a Real Time Operating System (RTOS) specialized in low-power wireless communications. The most popular are Contiki and TinyOS. These RTOSes support a fully compliant IPv6 stack including the 6LoWPAN adaptation layer [1], several radio duty-cycling MAC protocols [2], [3] and multiple routing protocols [4].

2016-10-06
Xiong, Aiping, Proctor, Robert, Zou, Wanling, Li, Ninghui.  2016.  Tracking users’ fixations when evaluating the validity of a web site.

Phishing refers to attacks over the Internet that often proceed in the following manner. An unsolicited email is sent by the deceiver posing as a legitimate party, with the intent of getting the user to click on a link that leads to a fraudulent webpage. This webpage mimics the authentic one of a reputable organization and requests personal information such as passwords and credit card numbers from the user. If the phishing attack is successful, that personal information can then be used for various illegal activities by the perpetrator. The most reliable sign of a phishing website may be that its domain name is incorrect in the address bar. In recognition of this, all major web browsers now use domain highlighting, that is, the domain name is shown in bold font. Domain highlighting is based on the assumption that users will attend to the address bar and that they will be able to distinguish legitimate from illegitimate domain names. We previously found little evidence for the effectiveness of domain highlighting, even when participants were directed to look at the address bar, in a study with many participants conducted online through Mechanical Turk. The present study was conducted in a laboratory setting that allowed us to have better control over the viewing conditions and measure the parts of the display at which the users looked. We conducted a laboratory experiment to assess whether directing users to attend to the address bar and the use of domain highlighting assist them at detecting fraudulent webpages. An Eyelink 1000plus eye tracker was used to monitor participants’ gaze patterns throughout the experiment. 48 participants were recruited from an undergraduate subject pool; half had been phished previously and half had not. They were required to evaluate the trustworthiness of webpages (half authentic and half fraudulent) in two trial blocks. In the first block, participants were instructed to judge the webpage’s legitimacy by any information on the page. 
In the second block, they were directed specifically to look at the address bar. Whether or not the domain name was highlighted in the address bar was manipulated between subjects. Results confirmed that the participants could differentiate the legitimate and fraudulent webpages to a significant extent. Participants rarely looked at the address bar during the trial block in which they were not directed to the address bar. The percentage of time spent looking at the address bar increased significantly when the participants were directed to look at it. The number of fixations on the address bar also increased, with both measures indicating that more attention was allocated to the address bar when it was emphasized. When participants were directed to look at the address bar, correct decisions were improved slightly for fraudulent webpages (“unsafe”) but not for the authentic ones (“safe”). Domain highlighting had little influence even when participants were directed to look at the address bar, suggesting that participants do not rely on the domain name for their decisions about webpage legitimacy. Without the general knowledge of domain names and specific knowledge about particular domain names, domain highlighting will not be effective.

2018-05-17
Hubicki, Christian M., Aguilar, Jeffrey J., Goldman, Daniel I., Ames, Aaron D.  2016.  Tractable terrain-aware motion planning on granular media: An impulsive jumping study. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :3887–3892.