Biblio


2017-10-25
Slavin, Rocky, Wang, Xiaoyin, Hosseini, Mitra Bokaei, Hester, James, Krishnan, Ram, Bhatia, Jaspreet, Breaux, Travis D., Niu, Jianwei.  2016.  Toward a Framework for Detecting Privacy Policy Violations in Android Application Code. Proceedings of the 38th International Conference on Software Engineering. :25–36.

Mobile applications frequently access sensitive personal information to meet user or business requirements. Because such information is sensitive in general, regulators increasingly require mobile-app developers to publish privacy policies that describe what information is collected. Furthermore, regulators have fined companies when these policies are inconsistent with the actual data practices of mobile apps. To help mobile-app developers check their privacy policies against their apps' code for consistency, we propose a semi-automated framework that consists of a policy terminology-API method map that links policy phrases to API methods that produce sensitive information, and information flow analysis to detect misalignments. We present an implementation of our framework based on a privacy-policy-phrase ontology and a collection of mappings from API methods to policy phrases. Our empirical evaluation on 477 top Android apps discovered 341 potential privacy policy violations.
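As a hedged illustration of the policy-to-code consistency check described above, the sketch below maps policy phrases to sensitive Android API methods and flags observed calls that no phrase covers; the phrases, method names, and helper are illustrative assumptions, not the paper's actual ontology or mappings.

POLICY_PHRASE_TO_APIS = {
    "location information": {"LocationManager.getLastKnownLocation"},
    "device identifier": {"TelephonyManager.getDeviceId"},
}

def potential_violations(policy_phrases, observed_api_calls):
    """Flag sensitive API calls that no phrase in the policy covers."""
    covered = set()
    for phrase in policy_phrases:
        covered |= POLICY_PHRASE_TO_APIS.get(phrase, set())
    return [call for call in observed_api_calls if call not in covered]

# The policy mentions location, but the app also reads the device ID:
print(potential_violations(
    ["location information"],
    ["LocationManager.getLastKnownLocation", "TelephonyManager.getDeviceId"]))
# -> ['TelephonyManager.getDeviceId']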

Chefranov, Alexander G., Narimani, Amir.  2016.  Participant Authenticating, Error Detecting, and 100% Multiple Errors Repairing Chang-Chen-Wang's Secret Sharing Method Enhancement. Proceedings of the 9th International Conference on Security of Information and Networks. :112–115.

Chang-Chen-Wang's (3,n) Secret grayscale image Sharing method between n grayscale cover images with participant Authentication and damaged-pixel Repairing (SSAR) properties is analyzed; it restores the secret image from any three of the cover images used. We show that SSAR may fail, is unable to recognize a fake participant, and has a repairing ability limited to 62.5%. We propose a (4,n) enhancement, SSAR-E, allowing 100% exact restoration of a corrupted pixel using any four of the n covers, and recognition of a fake participant with the help of cryptographic hash functions with 5-bit values, which allows better error detection than 4-bit values. Using a special permutation with only one loop including all the secret image pixels, SSAR-E is able to restore all the damaged pixels of the secret image with just one correct pixel left. Contrary to SSAR, SSAR-E allows restoring the secret image to authorized parties only. The performance and size of cover images for SSAR-E are the same as for SSAR.
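For background on the threshold property SSAR/SSAR-E rely on, here is a minimal Shamir-style (k, n) sharing of a single pixel value over GF(251); this is an illustrative sketch under assumed parameters, not the paper's construction.

import random

P = 251  # prime field; pixel values above 250 would need special handling

def make_shares(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret)
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(123, k=4, n=6)
print(reconstruct(random.sample(shares, 4)))  # any 4 of 6 shares -> 123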

Amin, Maitri.  2016.  A Survey of Financial Losses Due to Malware. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :145:1–145:4.

A general survey states that the main damage malware can cause is to slow down PCs and perhaps crash some websites, which is quite wrong. The Russian antivirus software developer recently teamed up with B2B International for a worldwide study, which showed that 36% of users lose money online as a result of a malware attack. Currently, malware cannot be detected by traditional anti-malware tools due to its polymorphic and/or metamorphic nature. Here we improve a current malware detection technique based on mining Application Programming Interface (API) calls, and we have developed the first public dataset to promote malware research.
• In a survey of cyber-attacks, 6.2% of financial attacks were due to malware, an increase of 1.3% in 2013 compared to 2012.
• Attacks involving financial data theft rose by 27.6% to reach 28,400,000. Victims of this targeted malware numbered 3,800,000, which is 18.6% greater than the previous year.
• Finance-oriented malware associated with Bitcoin has demonstrated the most dynamic development, whereas Zeus still tops the list for its role in stealing banking credentials.
A Solutionary study states that companies are spending a staggering amount of money in the aftermath of damaging attacks: $6,500 per hour to recover from DDoS attacks, and more than $3,000 each day for up to 30 days to mitigate and recover from malware attacks. [1]
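A hedged sketch of the API-call-mining idea the abstract mentions: represent each sample by the API calls it makes and train a simple classifier; the traces, labels, and model choice below are illustrative assumptions, not the paper's dataset or method.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB

traces = [
    "CreateFileW WriteFile CloseHandle",                 # benign-looking (toy)
    "CreateProcessW VirtualAllocEx WriteProcessMemory",  # injection-like (toy)
]
labels = [0, 1]  # 0 = benign, 1 = malicious
vec = CountVectorizer(binary=True, token_pattern=r"\S+")
clf = BernoulliNB().fit(vec.fit_transform(traces), labels)
print(clf.predict(vec.transform(["VirtualAllocEx WriteProcessMemory CloseHandle"])))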

Azevedo, Ernani, Machado, Marcos, Melo, Rodrigo, Aschoff, Rafael, Sadok, Djamel, Carmo, Ubiratan do.  2016.  Adopting Security Routines in Legacy Organizations. Proceedings of the 2016 Workshop on Fostering Latin-American Research in Data Communication Networks. :55–57.

Security is a well-known critical issue, and exploitation of vulnerabilities is increasing in number, sophistication and damage. Furthermore, legacy systems tend to offer difficulty when upgrades are needed, especially when security recommendations are proposed. This paper presents a strategy for legacy systems based on three disciplines which guide the adoption of secure routines while avoiding drops in production. We present a prototype framework and discuss its success in providing security to the network of a power plant.

Mallik, Nilanjan, Wali, A. S., Kuri, Narendra.  2016.  Damage Location Identification Through Neural Network Learning from Optical Fiber Signal for Structural Health Monitoring. Proceedings of the 5th International Conference on Mechatronics and Control Engineering. :157–161.

The present work deals with prediction of damage location in a composite cantilever beam using the signal from an optical fiber sensor coupled with a neural network with a back-propagation-based learning mechanism. The experimental study uses a glass/epoxy composite cantilever beam. A notch perpendicular to the axis of the beam and spanning the width of the beam is introduced at three different locations: at the middle of the span, towards the free end of the beam, and towards the fixed end of the beam. A plastic optical fiber of 6 cm gage length is mounted on the top surface of the beam along its axis, exactly at mid-span. A He-Ne laser is used as the light source for the optical fiber, and the light emitted from the other end of the fiber is converted to an electrical signal through a converter. A three-layer feed-forward neural network architecture is adopted, having one input layer, one hidden layer and one output layer. Three features are extracted from the signal: resonance frequency, normalized amplitude and normalized area under the resonance frequency. These three features act as inputs to the neural network input layer. The outputs qualitatively identify the location of the notch.
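A minimal sketch of the described setup, assuming three signal features in and a one-hot notch location (mid-span / free end / fixed end) out; the layer sizes, toy data, and learning rate are illustrative, not the paper's trained network.

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((30, 3))                 # [resonance freq, norm. amplitude, norm. area]
Y = np.eye(3)[rng.integers(0, 3, 30)]   # one-hot damage location (toy labels)

W1, W2 = rng.normal(0, 0.5, (3, 8)), rng.normal(0, 0.5, (8, 3))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                   # plain back-propagation
    H = sigmoid(X @ W1)
    O = sigmoid(H @ W2)
    dO = (O - Y) * O * (1 - O)          # output-layer delta
    dH = (dO @ W2.T) * H * (1 - H)      # hidden-layer delta
    W2 -= 0.5 * H.T @ dO
    W1 -= 0.5 * X.T @ dH

print("predicted location of first sample:", O[0].argmax())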

Dinçer, B. Taner, Macdonald, Craig, Ounis, Iadh.  2016.  Risk-Sensitive Evaluation and Learning to Rank Using Multiple Baselines. Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval. :483–492.

A robust retrieval system ensures that user experience is not damaged by the presence of poorly-performing queries. Such robustness can be measured by risk-sensitive evaluation measures, which assess the extent to which a system performs worse than a given baseline system. However, using a particular, single system as the baseline suffers from the fact that retrieval performance highly varies among IR systems across topics. Thus, a single system would in general fail in providing enough information about the real baseline performance for every topic under consideration, and hence it would in general fail in measuring the real risk associated with any given system. Based upon the Chi-squared statistic, we propose a new measure ZRisk that exhibits more promise since it takes into account multiple baselines when measuring risk, and a derivative measure called GeoRisk, which enhances ZRisk by also taking into account the overall magnitude of effectiveness. This paper demonstrates the benefits of ZRisk and GeoRisk upon TREC data, and how to exploit GeoRisk for risk-sensitive learning to rank, thereby making use of multiple baselines within the learning objective function to obtain effective yet risk-averse/robust ranking systems. Experiments using 10,000 topics from the MSLR learning to rank dataset demonstrate the efficacy of the proposed Chi-square statistic-based objective function.
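A hedged sketch of the ZRisk/GeoRisk idea, computing chi-squared standardized residuals over a systems-by-topics effectiveness matrix with an extra penalty on topics where a system falls below expectation; the exact weighting and normalisation here are assumptions to be checked against the paper.

import numpy as np
from scipy.stats import norm

def zrisk_georisk(E, alpha=1.0):
    """E: effectiveness matrix, rows = systems, cols = topics."""
    expected = np.outer(E.sum(1), E.sum(0)) / E.sum()        # chi-squared expectation
    z = (E - expected) / np.sqrt(expected)                   # standardized residuals
    zrisk = z.sum(1) + alpha * np.where(z < 0, z, 0).sum(1)  # penalize shortfalls
    georisk = np.sqrt(E.mean(1) * norm.cdf(zrisk / E.shape[1]))
    return zrisk, georisk

E = np.array([[0.30, 0.55, 0.20],
              [0.25, 0.55, 0.35]])
print(zrisk_georisk(E))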

Mondal, Tamal, Roy, Jaydeep, Bhattacharya, Indrajit, Chakraborty, Sandip, Saha, Arka, Saha, Subhanjan.  2016.  Smart Navigation and Dynamic Path Planning of a Micro-jet in a Post Disaster Scenario. Proceedings of the Second ACM SIGSPATIALInternational Workshop on the Use of GIS in Emergency Management. :14:1–14:8.

Small-sized unmanned aerial vehicles (UAVs) play major roles in a variety of applications, including aerial exploration and surveillance, transport, videography/photography and other areas. However, other real-life applications of UAVs have also been studied. One of them is as a 'Disaster Response' component. In a post-disaster situation, UAVs can be used for search and rescue, damage assessment, rapid response and other emergency operations. However, in a disaster response situation it is very challenging to predict whether the climatic conditions are suitable to fly the UAV. An efficient dynamic path planning technique is also necessary for effective damage assessment. In this paper, such dynamic path planning algorithms are proposed for a micro-jet, a small-sized fixed-wing UAV, for data collection and dissemination in a post-disaster situation. The proposed algorithms have been implemented on the paparazziUAV simulator considering different environment settings (wind speed, wind direction, etc.) and calibration parameters of the UAV, like battery level and flight duration. The results have been obtained and compared with the baseline algorithm used in the paparazziUAV simulator for navigation. It has been observed that the proposed navigation techniques work well in terms of different calibration parameters (flight duration, battery level) and can be effective not only for shelter point detection but also for conserving battery level and flight time for the micro-jet in a post-disaster scenario. The proposed techniques take approximately 20% less time and consume approximately 19% less battery power than the baseline navigation technique. From analysis of the produced results, it has been observed that the proposed work can be helpful for estimating the feasibility of flying a UAV in a disaster response situation. Finally, the proposed path planning techniques have been carried out during a field test using a micro-jet. It has been observed that our proposed dynamic path planning algorithms give results proximate to the simulation in terms of flight duration and battery level consumption.

Moura, Giovane C.M., Schmidt, Ricardo de O., Heidemann, John, de Vries, Wouter B., Muller, Moritz, Wei, Lan, Hesselman, Cristian.  2016.  Anycast vs. DDoS: Evaluating the November 2015 Root DNS Event. Proceedings of the 2016 Internet Measurement Conference. :255–270.
Distributed Denial-of-Service (DDoS) attacks continue to be a major threat on the Internet today. DDoS attacks overwhelm target services with requests or other traffic, causing requests from legitimate users to be shut out. A common defense against DDoS is to replicate a service in multiple physical locations/sites. If all sites announce a common prefix, BGP will associate users around the Internet with a nearby site, defining the catchment of that site. Anycast defends against DDoS both by increasing aggregate capacity across many sites, and allowing each site's catchment to contain attack traffic, leaving other sites unaffected. IP anycast is widely used by commercial CDNs and for essential infrastructure such as DNS, but there is little evaluation of anycast under stress. This paper provides the first evaluation of several IP anycast services under stress with public data. Our subject is the Internet's Root Domain Name Service, made up of 13 independently designed services ("letters", 11 with IP anycast) running at more than 500 sites. Many of these services were stressed by sustained traffic at 100× normal load on Nov. 30 and Dec. 1, 2015. We use public data for most of our analysis to examine how different services respond to stress, and identify two policies: sites may absorb attack traffic, containing the damage but reducing service to some users, or they may withdraw routes to shift both good and bad traffic to other sites. We study how these deployment policies resulted in different levels of service to different users during the events. We also show evidence of collateral damage on other services located near the attacks.
Strasburg, Chris, Basu, Samik, Wong, Johnny.  2016.  A Cross-Domain Comparable Measurement Framework to Quantify Intrusion Detection Effectiveness. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :11:1–11:8.
As the frequency, severity, and sophistication of cyber attacks increase, along with our dependence on reliable computing infrastructure, the role of Intrusion Detection Systems (IDS) is gaining importance. One of the challenges in deploying an IDS stems from selecting a combination of detectors that are relevant and accurate for the environment where security is being considered. In this work, we propose a new measurement approach to address two key obstacles: the base-rate fallacy and the unit of analysis problem. Our key contribution is to utilize the notion of a 'signal', an indicator of an event that is observable to an IDS, as the measurement target, and to apply the multiple instance paradigm (from machine learning) to enable cross-comparable measures regardless of the unit of analysis. To support our approach, we present a detailed case study and provide empirical examples of the effectiveness of both the model and the measure by demonstrating the automated construction, optimization, and correlation of signals from different domains of observation (e.g. network based, host based, application based) and using different IDS techniques (signature based, anomaly based).
Sharkov, George.  2016.  From Cybersecurity to Collaborative Resiliency. Proceedings of the 2016 ACM Workshop on Automated Decision Making for Active Cyber Defense. :3–9.
This paper presents a holistic approach to cyber resilience as a means of preparing for the "unknown unknowns". Principles of augmented cyber risk management and a resilience management model at the national level are presented, with elaboration on multi-stakeholder engagement and partnership for the implementation of a national cyber resilience collaborative framework. The complementarity of governance, law, and business/industry initiatives is outlined, with examples of the collaborative resilience model for the Bulgarian national strategy and its multi-national engagements.
Nykänen, Riku, Kärkkäinen, Tommi.  2016.  Supporting Cyber Resilience with Semantic Wiki. Proceedings of the 12th International Symposium on Open Collaboration. :21:1–21:8.

Cyber resilient organizations, their functions and computing infrastructures, should be tolerant towards rapid and unexpected changes in the environment. Information security is an organization-wide common mission whose success strongly depends on efficient knowledge sharing. For this purpose, semantic wikis have proved their strength as flexible collaboration and knowledge sharing platforms. However, there has not been notable academic research on how semantic wikis could be used as an information security management platform in organizations for improved cyber resilience. In this paper, we propose to use a semantic wiki as an agile information security management platform. More precisely, the wiki contents are based on the structured model of the NIST Special Publication 800-53 information security control catalogue, which is extended in this research with additional properties that support information security management and especially security control implementation. We present common use cases for managing information security in organizations and show how the use cases can be implemented using the semantic wiki platform. As organizations seek cyber resilience, where the focus is on the availability of cyber-related assets and services, we extend the control selection with an option to focus on availability. The results of the study show that a semantic-wiki-based information security management and collaboration platform can provide a cost-efficient solution for improved cyber resilience, especially for small and medium-sized organizations that struggle to develop information security with limited resources.
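A hedged sketch of the structured control model described above: a NIST SP 800-53 control record extended with implementation-tracking and availability properties, as it might back a semantic wiki page; the field names and example metadata are illustrative assumptions, not the paper's schema.

from dataclasses import dataclass

@dataclass
class SecurityControl:
    control_id: str                      # e.g. "CP-9"
    title: str
    baseline: list                       # LOW / MODERATE / HIGH
    supports_availability: bool = False  # extension for the resilience focus
    implementation_status: str = "planned"
    responsible_role: str = ""

backup = SecurityControl("CP-9", "System Backup", ["LOW", "MODERATE", "HIGH"],
                         supports_availability=True,
                         implementation_status="implemented",
                         responsible_role="IT Operations")

# Availability-focused control selection, per the extension described above:
catalogue = [backup]
print([c.control_id for c in catalogue if c.supports_availability])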

Sonowal, Gunikhan, Kuppusamy, K. S..  2016.  MASPHID: A Model to Assist Screen Reader Users for Detecting Phishing Sites Using Aural and Visual Similarity Measures. Proceedings of the International Conference on Informatics and Analytics. :87:1–87:6.

Phishing is one of the major issues in cyber security. In phishing, attackers steal sensitive information from users by impersonating legitimate websites. The information captured by the phisher is used in a variety of scenarios, such as buying goods through illegal online transactions, or is sometimes sold to illegal sources. To date, various detection techniques have been proposed by different researchers, but phishing detection remains a challenging problem. While phishing remains a threat to all users, persons with visual impairments fall under the soft-target category, as they primarily depend on non-visual web access modes. Persons with visual impairments depend solely on the audio generated by screen readers to identify and comprehend a web page. This weak link can be harnessed by attackers to create impersonating sites that produce the same audio output but are visually different. This paper proposes a model titled "MASPHID" (Model for Assisting Screenreader users in Phishing Detection) to assist persons with visual impairments in detecting phishing sites which are aurally similar but visually dissimilar. The proposed technique is designed in such a manner that phishing detection is carried out without burdening users with technical details. The model works against zero-day phishing attacks and evaluates with high accuracy.

Yu Wang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2017.  Differential Privacy and Minimum-Variance Unbiased Estimation in Multi-agent Control Systems. 20th World Congress, The International Federation of Automatic Control (IFAC).

In a discrete-time linear multi-agent control system, where the agents are coupled via an environmental state, knowledge of the environmental state is desirable to control the agents locally. However, since the environmental state depends on the behavior of the agents, sharing it directly among these agents jeopardizes the privacy of the agents' profiles, defined as the combination of the agents' initial states and the sequence of local control inputs over time. A commonly used solution is to randomize the environmental state before sharing; this leads to a natural trade-off between the privacy of the agents' profiles and the variance of estimating the environmental state. By treating the multi-agent system as a probabilistic model of the environmental state parametrized by the agents' profiles, we show that when the agents' profiles are ε-differentially private, there is a lower bound on the ℓ1-induced norm of the covariance matrix of the minimum-variance unbiased estimator of the environmental state. This lower bound is achieved by a randomized mechanism that uses Laplace noise.
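A minimal sketch of the Laplace mechanism the abstract refers to, adding noise of scale sensitivity/ε to the shared environmental state; the sensitivity and ε values below are assumed for illustration.

import numpy as np

def laplace_mechanism(state, sensitivity, epsilon, rng=np.random.default_rng()):
    # Scale sensitivity/epsilon gives epsilon-differential privacy for queries
    # with L1 sensitivity `sensitivity` (the values used below are assumptions).
    return state + rng.laplace(scale=sensitivity / epsilon, size=np.shape(state))

print(laplace_mechanism(np.array([0.8, 1.2]), sensitivity=0.1, epsilon=0.5))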

2017-10-24
Yu Wang, University of Illinois at Urbana-Champaign, Matthew Hale, University of Illinois at Urbana-Champaign, Magnus Egerstedt, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2017.  Differentially Private Objective Functions in Distributed Cloud-based Optimization. 20th World Congress of the International Federations of Automatic Control (IFAC 2017 World Congress).

In this work, we study the problem of keeping the objective functions of individual agents ε-differentially private in cloud-based distributed optimization, where agents are subject to global constraints and seek to minimize local objective functions. The communication architecture between agents is cloud-based: instead of communicating directly with each other, they coordinate by sharing states through a trusted cloud computer. In this problem, the difficulty is twofold: the objective functions are used repeatedly in every iteration, and the influence of perturbing them extends to other agents and lasts over time. To solve the problem, we analyze the propagation of perturbations on objective functions over time, and derive an upper bound on them. With the upper bound, we design a noise-adding mechanism that randomizes the cloud-based distributed optimization algorithm to keep the individual objective functions ε-differentially private. In addition, we study the trade-off between the privacy of objective functions and the performance of the new cloud-based distributed optimization algorithm with noise. We present simulation results to numerically verify the theoretical results.

John C. Mace, Newcastle University, Nipun Thekkummal, Newcastle University, Charles Morisset, Newcastle University, Aad Van Moorsel, Newcastle University.  2017.  ADaCS: A Tool for Analysing Data Collection Strategies. European Workshop on Performance Engineering (EPEW 2017).

Given a model with multiple input parameters, and multiple possible sources for collecting data for those parameters, a data collection strategy is a way of deciding from which sources to sample data, in order to reduce the variance on the output of the model. Cain and Van Moorsel have previously formulated the problem of the optimal data collection strategy, when each parameter can be associated with a prior normal distribution, and when sampling is associated with a cost. In this paper, we present ADaCS, a new tool built as an extension of PRISM, which automatically analyses all possible data collection strategies for a model and selects the optimal one. We illustrate ADaCS on attack trees, which are a structured approach to analyse the impact and the likelihood of success of attacks and defenses on computer and socio-technical systems. Furthermore, we introduce a new strategy exploration heuristic that significantly improves on a brute force approach.
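A hedged sketch of the underlying optimization problem, assuming a delta-method approximation in which the output variance is sum_i w_i^2 * s_i^2 / n_i; the greedy allocation, weights, variances, and costs below are illustrative, not ADaCS's PRISM-based analysis.

def greedy_strategy(w, s2, cost, budget):
    """Buy, one sample at a time, whichever source gives the best variance
    reduction per unit cost, until the budget is exhausted."""
    n = [1] * len(w)  # at least one sample per parameter
    spent = sum(cost)
    while True:
        gains = [(w[i]**2 * s2[i] * (1/n[i] - 1/(n[i]+1)) / cost[i], i)
                 for i in range(len(w)) if spent + cost[i] <= budget]
        if not gains:
            return n
        _, i = max(gains)
        n[i] += 1
        spent += cost[i]

print(greedy_strategy(w=[2.0, 0.5], s2=[1.0, 4.0], cost=[3.0, 1.0], budget=20.0))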

Atul Bohara, University of Illinois at Urbana-Champaign, Mohammad A. Noureddine, University of Illinois at Urbana-Champaign, Ahmed Fawaz, University of Illinois at Urbana-Champaign, William Sanders, University of Illinois at Urbana-Champaign.  2017.  An Unsupervised Multi-Detector Approach for Identifying Malicious Lateral Movement. IEEE 36th Symposium on Reliable Distributed Systems (SRDS).

Lateral movement-based attacks are increasingly leading to compromises in large private and government networks, often resulting in information exfiltration or service disruption. Such attacks are often slow and stealthy and usually evade existing security products. To enable effective detection of such attacks, we present a new approach based on graph-based modeling of the security state of the target system and correlation of diverse indicators of anomalous host behavior. We believe that irrespective of the specific attack vectors used, attackers typically establish a command and control channel to operate, and move in the target system to escalate their privileges and reach sensitive areas. Accordingly, we identify important features of command and control and lateral movement activities and extract them from internal and external communication traffic. Driven by the analysis of the features, we propose the use of multiple anomaly detection techniques to identify compromised hosts. These methods include Principal Component Analysis, k-means clustering, and Median Absolute Deviation-based outlier detection. We evaluate the accuracy of identifying compromised hosts by using injected attack traffic in a real enterprise network dataset, for various attack communication models. Our results show that the proposed approach can detect infected hosts with high accuracy and a low false positive rate.
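A minimal sketch of combining the three named detectors on per-host feature vectors, fused by majority vote; the features, thresholds, and fusion rule are illustrative assumptions, not the paper's configuration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X = np.random.default_rng(1).normal(size=(100, 6))  # per-host features (toy)

# Detector 1: PCA reconstruction error
pca = PCA(n_components=2).fit(X)
resid = np.linalg.norm(X - pca.inverse_transform(pca.transform(X)), axis=1)
pca_flag = resid > np.percentile(resid, 95)

# Detector 2: k-means, treating the smaller cluster as suspicious
km = KMeans(n_clusters=2, n_init=10).fit(X)
km_flag = km.labels_ == np.argmin(np.bincount(km.labels_))

# Detector 3: Median Absolute Deviation outliers on any feature
med = np.median(X, axis=0)
mad = np.median(np.abs(X - med), axis=0) + 1e-9
mad_flag = (np.abs(X - med) / mad > 3.5).any(axis=1)

# Simple fusion: flag hosts that at least two detectors agree on
print(np.where(pca_flag.astype(int) + km_flag + mad_flag >= 2)[0])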

2017-10-19
Knote, Robin, Baraki, Harun, Söllner, Matthias, Geihs, Kurt, Leimeister, Jan Marco.  2016.  From Requirement to Design Patterns for Ubiquitous Computing Applications. Proceedings of the 21st European Conference on Pattern Languages of Programs. :26:1–26:11.
Ubiquitous Computing describes a concept where computing appears around us at any time and any location. Respective systems rely on context-sensitivity and adaptability, meaning that they constantly collect data about the user and their context to adapt their functionality to the situation at hand. Hence, the development of Ubiquitous Computing systems is not only a technical issue and must also be considered from privacy, legal and usability perspectives. This indicates a need for several experts from different disciplines to participate in the development process, stating requirements and evaluating design alternatives. To capture the knowledge of these interdisciplinary teams and make it reusable for similar problems, a pattern logic can be applied. In the early phase of a development project, requirement patterns are used to describe recurring requirements for similar problems, whereas in a more advanced development phase, design patterns are deployed to find a suitable design for recurring requirements. However, existing literature does not give sufficient insight into how the two concepts are related and how the process of deriving design patterns from requirements (patterns) looks in practice. In our work, we show how trust-related requirements for Ubiquitous Computing applications evolve into interdisciplinary design patterns. We elaborate on a six-step process using an example requirement pattern. With this contribution, we shed light on the relation between interdisciplinary requirement and design patterns and provide practitioners and scholars experienced in UC application development with a way to utilize patterns systematically and effectively.
Zhang, Chenwei, Xie, Sihong, Li, Yaliang, Gao, Jing, Fan, Wei, Yu, Philip S..  2016.  Multi-source Hierarchical Prediction Consolidation. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :2251–2256.
In big data applications such as healthcare data mining, due to privacy concerns, it is necessary to collect predictions from multiple information sources for the same instance, with raw features being discarded or withheld when aggregating multiple predictions. Besides, crowd-sourced labels need to be aggregated to estimate the ground truth of the data. Due to the imperfection caused by predictive models or human crowdsourcing workers, noisy and conflicting information is ubiquitous and inevitable. Although state-of-the-art aggregation methods have been proposed to handle label spaces with flat structures, as the label space becomes more and more complicated, aggregation under a hierarchical label structure becomes necessary but has been largely ignored. These label hierarchies can be quite informative, as they are usually created by domain experts to make sense of highly complex label correlations, such as protein functionality interactions or disease relationships. We propose a novel multi-source hierarchical prediction consolidation method to effectively exploit the complicated hierarchical label structures to resolve the noisy and conflicting information that inherently originates from multiple imperfect sources. We formulate the problem as an optimization problem with a closed-form solution. The consolidation result is inferred in a totally unsupervised, iterative fashion. Experimental results on both synthetic and real-world data sets show the effectiveness of the proposed method over existing alternatives.
Grushka - Cohen, Hagit, Sofer, Oded, Biller, Ofer, Shapira, Bracha, Rokach, Lior.  2016.  CyberRank: Knowledge Elicitation for Risk Assessment of Database Security. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :2009–2012.
Security systems for databases produce numerous alerts about anomalous activities and policy rule violations. Prioritizing these alerts will help security personnel focus their efforts on the most urgent alerts. Currently, this is done manually by security experts who rank the alerts or define static risk scoring rules. Existing solutions are expensive, consume valuable expert time, and do not dynamically adapt to changes in policy. Adopting a learning approach for ranking alerts is complex due to the efforts required by security experts to initially train such a model. The more features used, the more accurate the model is likely to be, but this requires the collection of a greater amount of user feedback and prolongs the calibration process. In this paper, we propose CyberRank, a novel algorithm for automatic preference elicitation that is effective in situations with limited expert time and outperforms other algorithms for initial training of the system. We generate synthetic examples and annotate them using a model produced by the Analytic Hierarchy Process (AHP) to bootstrap a preference learning algorithm. We evaluate different approaches with a new dataset of expert-ranked pairs of database transactions, in terms of their risk to the organization. Evaluated using manual risk assessments of transaction pairs, CyberRank outperforms all other methods in the cold start scenario, with an error reduction of 20%.
Cerf, Sophie, Robu, Bogdan, Marchand, Nicolas, Boutet, Antoine, Primault, Vincent, Mokhtar, Sonia Ben, Bouchenak, Sara.  2016.  Toward an Easy Configuration of Location Privacy Protection Mechanisms. Proceedings of the Posters and Demos Session of the 17th International Middleware Conference. :11–12.

The widespread adoption of Location-Based Services (LBSs) has come with controversy about privacy. While leveraging location information leads to improved services through geo-contextualization, it raises privacy concerns, as new knowledge can be inferred from location records, such as home/work places, habits or religious beliefs. To overcome this problem, several Location Privacy Protection Mechanisms (LPPMs) have been proposed in recent years. However, every mechanism comes with its own configuration parameters that directly impact the privacy guarantees and the resulting utility of the protected data. In this context, it can be difficult for a non-expert system designer to choose the configuration parameters appropriate for the expected privacy and utility. In this paper, we present a framework enabling easy configuration of LPPMs. To achieve that, our framework performs an offline, in-depth automated analysis of LPPMs to provide the formal relationship between their configuration parameters and both the privacy and utility metrics. The framework is modular: by using different metrics, a system designer is able to fine-tune her LPPM according to her expected privacy and utility guarantees (i.e., the guarantee itself and the level of this guarantee). To illustrate the capability of our framework, we analyse Geo-Indistinguishability (a well-known differentially private LPPM) and provide the formal relationship between its ε configuration parameter and two privacy and utility metrics.
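A hedged sketch of the planar Laplace mechanism underlying Geo-Indistinguishability, whose ε parameter is exactly the kind of knob the framework above tunes: the noisy angle is uniform and the radius is drawn by inverting its CDF with the Lambert W function (planar coordinates are an assumption for illustration).

import numpy as np
from scipy.special import lambertw

def planar_laplace(xy, epsilon, rng=np.random.default_rng()):
    theta = rng.uniform(0, 2 * np.pi)     # uniform direction
    p = rng.uniform()                     # inverse-CDF sampling of the radius
    r = -(lambertw((p - 1) / np.e, k=-1).real + 1) / epsilon
    return xy + r * np.array([np.cos(theta), np.sin(theta)])

print(planar_laplace(np.array([0.0, 0.0]), epsilon=0.05))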

Dupree, Janna Lynn, Devries, Richard, Berry, Daniel M., Lank, Edward.  2016.  Privacy Personas: Clustering Users via Attitudes and Behaviors Toward Security Practices. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :5228–5239.
A primary goal of research in usable security and privacy is to understand the differences and similarities between users. While past researchers have clustered users into different groups, past categories of users have proven to be poor predictors of end-user behaviors. In this paper, we perform an alternative clustering of users based on their behaviors. Through the analysis of data from surveys and interviews of participants, we identify five user clusters that emerge from end-user behaviors: Fundamentalists, Lazy Experts, Technicians, Amateurs and the Marginally Concerned. We examine the stability of our clusters through a survey-based study of an alternative sample, showing that the clustering remains consistent. We conduct a small-scale design study to demonstrate the utility of our clusters in design. Finally, we argue that our clusters complement past work in understanding privacy choices, and that our categorization technique can aid in the design of new computer security technologies.
Ko, Wilson K.H., Wu, Yan, Tee, Keng Peng.  2016.  LAP: A Human-in-the-loop Adaptation Approach for Industrial Robots. Proceedings of the Fourth International Conference on Human Agent Interaction. :313–319.

In the last few years, a shift from mass production to mass customisation has been observed in industry. Easily reprogrammable robots that can perform a wide variety of tasks are desired to keep up with the trend of mass customisation while saving costs and development time. Learning by Demonstration (LfD) is an easy way to program robots in an intuitive manner and provides a solution to this problem. In this work, we discuss and evaluate LAP, a three-stage LfD method that conforms to the criteria of high-mix-low-volume (HMLV) industrial settings. The algorithm learns a trajectory in the task space, after which small segments can be adapted on-the-fly using a human-in-the-loop approach. The human operator acts as a high-level adaptation, correction and evaluation mechanism to guide the robot. This way, no sensors or complex feedback algorithms are needed to improve robot behaviour, so errors and inaccuracies induced by these subsystems are avoided. Once the system performs at a satisfactory level after adaptation, the operator is removed from the loop. The robot then proceeds in a feed-forward fashion to optimise for speed. We demonstrate this method by simulating an industrial painting application. A KUKA LBR iiwa is taught how to draw a figure eight, which is reshaped by the operator during adaptation.

Wu, Wenfei, Zhang, Ying, Banerjee, Sujata.  2016.  Automatic Synthesis of NF Models by Program Analysis. Proceedings of the 15th ACM Workshop on Hot Topics in Networks. :29–35.

Network functions (NFs), such as firewalls, NAT, and IDS, are widely deployed in today's networks. However, currently there is no standard specification or modeling language that can accurately describe the complexity and diversity of different NFs. There have recently been research efforts to propose NF models; however, they are often generated manually and are thus error-prone. This paper proposes a method to automatically synthesize NF models via program analysis. We develop a tool called NFactor, which conducts code refactoring and program slicing on NF source code in order to generate its forwarding model. We demonstrate its usefulness on two NFs and evaluate its correctness. A few applications of NFactor are described, including network verification.

Zhang, Peng, Li, Hao, Hu, Chengchen, Hu, Liujia, Xiong, Lei, Wang, Ruilong, Zhang, Yuemei.  2016.  Mind the Gap: Monitoring the Control-Data Plane Consistency in Software Defined Networks. Proceedings of the 12th International on Conference on Emerging Networking EXperiments and Technologies. :19–33.

Debugging large networks is always a challenging task. Software Defined Networking (SDN) offers a centralized control platform where operators can statically verify network policies, instead of checking configuration files device by device. While such static verification is useful, it is still not enough: due to data plane faults, packets may not be forwarded according to control plane policies, resulting in network faults at runtime. To address this issue, we present VeriDP, a tool that can continuously monitor what we call control-data plane consistency, defined as the consistency between control plane policies and data plane forwarding behaviors. We prototype VeriDP with small modifications of both hardware and software SDN switches, and show that it can achieve a verification speed of 3 μs per packet, with a false negative rate as low as 0.1%, for the Stanford backbone and Internet2 topologies. In addition, when verification fails, VeriDP can localize faulty switches with a probability as high as 96% for fat tree topologies.

Nikravesh, Ashkan, Hong, David Ke, Chen, Qi Alfred, Madhyastha, Harsha V., Mao, Z. Morley.  2016.  QoE Inference Without Application Control. Proceedings of the 2016 Workshop on QoE-based Analysis and Management of Data Communication Networks. :19–24.
Network quality-of-service (QoS) does not always directly translate to users' quality-of-experience (QoE), e.g., changes in a video streaming app's frame rate in reaction to changes in packet loss rate depend on various factors such as the adaptation strategy used by the app and the app's use of forward error correction (FEC) codes. Therefore, knowledge of user QoE is desirable in several scenarios that have traditionally operated on QoS information. Examples include traffic management by ISPs and resource allocation by the operating system (OS). However, today, entities such as ISPs and OSes that implement these optimizations typically do not have a convenient way of obtaining input from applications on user QoE. To address this problem, we propose offline generation of per-application models mapping application-independent QoS metrics to corresponding application-specific QoE metrics, thereby enabling entities (such as ISPs and OSes) that can observe a user's network traffic to infer the user's QoE, in the absence of direct input. In this paper, we describe how such models can be generated and present our results from two popular video applications with significantly different QoE metrics. We also showcase the use of these models for ISPs to perform QoE-aware traffic management and for the OS to offer an efficient QoE diagnosis service.
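A minimal sketch of the offline per-application modeling idea: fit a regressor from application-independent QoS metrics to an application-specific QoE metric, then apply it to passively observed traffic; the features, training points, and model choice below are illustrative assumptions, not the paper's models.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Offline: labelled (QoS, QoE) pairs collected with application cooperation
qos = np.array([[0.00, 20], [0.01, 40], [0.05, 80], [0.10, 150]])  # [loss, RTT ms]
qoe = np.array([30.0, 29.0, 22.0, 12.0])                           # frame rate (fps)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(qos, qoe)

# Online: an ISP or OS infers QoE from passively measured QoS, no app input
print(model.predict([[0.02, 60]]))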