Biblio
The rising prevalence of algorithmic interfaces, such as curated feeds in online news, raises new questions for designers, scholars, and critics of media. This work focuses on how transparent design of algorithmic interfaces can promote awareness and foster trust. A two-stage process of how transparency affects trust was hypothesized, drawing on theories of information processing and procedural justice. In an online field experiment, three levels of system transparency were tested in the high-stakes context of peer assessment. Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust. Attitudes of individuals whose expectations were met did not vary with transparency. Results are discussed in terms of a dual-process model of attitude change and the depth of justification of perceived inconsistency. Designing for trust requires balanced interface transparency: not too little and not too much.
Reputation systems in current electronic marketplaces can easily be manipulated by malicious sellers in order to appear more reputable than appropriate. We conducted a controlled experiment with 40 UK and 41 German participants on their ability to detect malicious behavior by means of an eBay-like feedback profile versus a novel interface involving an interactive visualization of reputation data. The results show that participants using the new interface could better detect and understand malicious behavior in three out of four attacks (overall detection accuracy was 77% with the new interface vs. 56% with the old one). Moreover, with the new interface only 7% of the users decided to buy from the malicious seller (the options being to buy from one of the available sellers or to abstain from buying), as opposed to 30% in the old interface condition.
Human-machine trust is a critical mitigating factor in many HCI instances. Lack of trust in a system can lead to system disuse, whilst over-trust can lead to inappropriate use. Whilst human-machine trust has been examined extensively within a techno-social framework, few efforts have been made to link the dynamics of trust within a steady-state operator-machine environment to the existing literature on the psychology of learning. We set out to recreate a commonly reported learning phenomenon within a trust acquisition environment: users learning which algorithms can and cannot be trusted to reduce traffic in a city. We failed to replicate (after repeated efforts) the learning phenomenon of "blocking", finding instead that people consistently make a very specific error in trust assignment to cues under conditions of uncertainty. This error can be seen as a cognitive bias and has important implications for HCI.
The concept of digital rights management (DRM) has become extremely important in current mobile environments. This paper shows how partial bitstream encryption can allow the secure distribution of hardware applications, resembling the mechanisms of traditional software DRM. Building on recent developments towards the secure distribution of hardware cores, the paper demonstrates a prototypical implementation of a mobile user device supporting such distribution mechanisms. The prototype extends the Android operating system with support for hardware reconfigurability and showcases the interplay of novel security concepts enabled by hardware DRM, the advantages of a design flow based on high-level synthesis, and the opportunities provided by current software-rich reconfigurable Systems-on-Chips. Relying on this prototype, we also collected extensive quantitative results demonstrating the limited overhead incurred by the secure distribution architecture.
Many of the game-changing innovations the Internet brought, and continues to bring, to our daily professional and private lives come with privacy-related costs. The more day-to-day activities are based on the Internet, the more personal data are generated, collected, stored and used. Big Data, the Internet of Things, cyber-physical systems and similar trends will be based on even more of the personal information all of us use and produce constantly. Three major points are to be noted here: First, there is no common European or even worldwide agreement on whether and to what extent these collections need to be limited. There is, though, no common privacy law, neither in Europe nor worldwide. Second, the laws that do exist constantly fail to steer these developments. Technology innovations come so fast, are so disruptive and so market-demand driven, that ex-post control by law and courts constantly comes late and/or is circumvented and/or ignored. Third, lack of consensus and lack of steering lead to huge data accumulations and market monopolies built up very quickly and held by very few companies working on a global level with data-driven business models. These early movers are in many cases in very dominant market positions, making it not only more difficult to regulate their behavior but also to keep the markets open for future competitors. This workshop will evaluate current European and international attempts to deal with this situation. Although all four panelists have a legal background, the meeting will be less interested in an in-depth review of existing laws and their impact, and more in the underlying technological and ethical principles (and their inconsistencies) leading to the situation described. Specific attention will be given to technology-driven attempts to deal with the situation, such as privacy by design, privacy by default, usable privacy, etc.
Cloud storage services such as Dropbox [1] and Google Drive [2] are becoming more and more popular. On the one hand, they provide users with mobility, scalability, and convenience; on the other hand, privacy issues arise when the storage is no longer fully controlled by users. Although modern encryption schemes are effective at protecting the content of data, the encryption-before-outsourcing approach has two drawbacks: first, one kind of sensitive information, the access pattern of the data, is left unprotected; moreover, encryption usually makes the data difficult to use. In this paper, we propose AIS (Access Indistinguishable Storage), the first client-side system that can partially conceal the access pattern of cloud storage in constant time. Besides data content, AIS can conceal information about the number of initial files and the length of each initial file. When it comes to the access phase after initiation, AIS can effectively conceal the behavior (read or write) and the target file of the current access. Moreover, the existence and length of each file will remain confidential as long as there is no access after initiation. One application of AIS is SSE (Searchable Symmetric Encryption), which makes encrypted data searchable. Based on AIS, we propose SBA (SSE Built on AIS). To the best of our knowledge, SBA is safer than any other SSE system of the same complexity, and SBA is the first to conceal whether the current keyword was queried before, the first to conceal whether the current operation is an addition or a deletion, and the first to support direct modification of files.
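To illustrate the general flavor of access-pattern concealment (this is a minimal sketch of the generic idea, not the AIS construction itself), the client can make every operation look identical to the server by always performing a fetch followed by a re-encrypted write-back. The server interface and the encrypt/decrypt stubs below are hypothetical placeholders.

```python
# Hypothetical sketch: hide whether an access is a read or a write by issuing
# the same fixed get/put pair for every operation. Not the AIS protocol.
import os

class ObliviousishStore:
    """Toy client that issues an identical read+write pair for every access."""

    def __init__(self, server):
        self.server = server  # assumed to expose get(block_id) / put(block_id, data)

    def access(self, block_id, op, new_data=None):
        blob = self.server.get(block_id)   # always read
        data = decrypt(blob)               # decrypt locally (stub)
        if op == "write":
            data = new_data
        # Always re-encrypt with fresh randomness and write back, so the server
        # observes the same access pattern regardless of the operation.
        self.server.put(block_id, encrypt(data))
        return data

def encrypt(data):   # stand-in for a real IND-CPA scheme (e.g., AES-GCM with a fresh nonce)
    return os.urandom(16) + data

def decrypt(blob):
    return blob[16:]
```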
With the emergence of the Internet of Things (IoT) and participatory sensing (PS) paradigms, the trustworthiness of remotely sensed data has become a vital research question. In this work, we present the design of a trusted sensor, which uses physically unclonable functions (PUFs) as an anchor to ensure integrity, authenticity and non-repudiation guarantees on the sensed data. We propose trusted sensors for mobile devices to address the problem of potential manipulation of mobile sensor readings through exploitation of mobile OS vulnerabilities in participatory sensing for IoT applications. Preliminary results from our implementation of a trusted visual sensor node show that the proposed security solution can be realized without consuming a significant amount of the sensor node's resources.
As demand for wireless mobile connectivity continues to explode, cellular network infrastructure capacity requirements continue to grow. While 5G tries to address capacity requirements at the radio layer, the load on the cellular core network infrastructure (the Evolved Packet Core (EPC)) stresses the network infrastructure. Our work examines the architecture and protocols of current cellular infrastructures and the workload on the EPC. We study the challenges in dimensioning capacity and review the design alternatives to support the significant scale-up desired, even for the near future. We break down the workload on the network infrastructure into its components: signaling event transactions, database or lookup transactions, and packet processing. We quantitatively show the control-plane and data-plane load on the various components of the EPC and estimate how future 5G cellular network workloads will scale. This analysis helps us understand the scalability challenges for future 5G EPC network components. Other efforts to scale the 5G cellular network take a system view where the control plane is separated from the data path and is terminated on a centralized SDN controller. The SDN controller configures the data path on a widely distributed switching infrastructure. Our analysis of the workload informs us on the feasibility of various design alternatives and motivates our efforts to develop our clean-slate approach, called CleanG.
Modern web browsers are incredibly complex, with millions of lines of code and over one thousand JavaScript functions and properties available to website authors. This work investigates how these browser features are used on the modern, open web. We find that JavaScript features differ wildly in popularity, with over 50% of provided features never used on the web's 10,000 most popular sites according to Alexa. We also look at how popular ad and tracking blockers change the features used by sites, and identify a set of approximately 10% of features that are disproportionately blocked (prevented from executing by these extensions at least 90% of the time they are used). We additionally find that in the presence of these blockers, over 83% of available features are executed on less than 1% of the 10,000 most popular websites. We further measure other aspects of browser feature usage on the web, including how many features websites use, how the length of time a feature has been in the browser relates to its usage on the web, and how many security vulnerabilities have been associated with related browser features.
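A statistic like the "disproportionately blocked" share can be reproduced from per-site feature logs with a few lines of post-processing. The sketch below is illustrative only and assumes a hypothetical data model (per-site sets of executed features, recorded with and without a blocker); it is not the authors' measurement pipeline.

```python
# Hypothetical post-processing: estimate the share of JavaScript features that
# are prevented from executing at least `threshold` of the time a blocker is on.
from collections import Counter

def blocked_share(usage_plain, usage_blocked, threshold=0.9):
    """usage_plain / usage_blocked: dict site -> set of features that executed."""
    used = Counter()
    blocked = Counter()
    for site, feats in usage_plain.items():
        for f in feats:
            used[f] += 1
            if f not in usage_blocked.get(site, set()):
                blocked[f] += 1
    flagged = [f for f in used if blocked[f] / used[f] >= threshold]
    return len(flagged) / len(used) if used else 0.0

# Toy data: "canvas" is blocked on both sites, "fetch" is not.
plain = {"a.com": {"fetch", "canvas"}, "b.com": {"canvas"}}
blocked = {"a.com": {"fetch"}, "b.com": set()}
print(blocked_share(plain, blocked))   # -> 0.5
```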
The success or failure of a mobile application ('app') is largely determined by user ratings. Users frequently make their app choices based on the ratings of apps in comparison with similar, often competing apps. Users also expect apps to continually provide new features while maintaining quality, or the ratings drop. At the same time, apps must also be secure, but is there a historical trade-off between security and ratings? Or are app store ratings a more all-encompassing measure of product maturity? We used static analysis tools to collect security-related metrics for 38,466 Android apps from the Google Play store. We compared the rate of an app's permission misuse, the number of requested permissions, and the Androrisk score against its user rating. We found that high-rated apps have statistically significantly higher security risk metrics than low-rated apps, but the correlations are weak. This result supports the conventional wisdom that users are not factoring security risks into their ratings in a meaningful way. This could be due to several reasons, including users not placing much emphasis on security, or the typical user being unable to gauge the security risk level of the apps they use every day.
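As an illustration of the kind of comparison described (not the authors' actual analysis code), the following sketch contrasts a security-risk metric between high- and low-rated apps; the file name, column names, and rating cut-offs are assumptions.

```python
# Illustrative sketch with hypothetical column names (rating, androrisk):
# test whether high-rated apps carry higher security-risk scores than
# low-rated apps, as reported in the abstract.
import pandas as pd
from scipy.stats import mannwhitneyu

apps = pd.read_csv("apps.csv")            # assumed columns: rating, androrisk

high = apps[apps.rating >= 4.0].androrisk
low = apps[apps.rating <= 2.5].androrisk

u, p = mannwhitneyu(high, low, alternative="greater")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.3g}")
print("high-rated apps riskier at the 5% level?", p < 0.05)
```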
Honeypot systems are an effective method for defending production systems against security breaches and for gaining detailed information about attackers' motivation, tactics, software and infrastructure. In this paper we present how different types of honeypots can be employed to gain valuable information about attacks and attackers, and also outline new and innovative possibilities for future research.
Defending information systems against advanced attacks is a challenging task; even if all systems have been properly updated and all known vulnerabilities have been patched, there is still the possibility of a previously unknown zero-day attack compromising the system. Honeypots offer a more proactive tool for detecting possible attacks. What is more, they can act as a tool for understanding attackers' intentions. In this paper, we propose a design for a diversified honeypot. By increasing the variability present in software, diversification decreases the number of assumptions an attacker can make about the target system.
Black-hole, gray-hole, and wormhole attacks are devastating to the correct operation of any network. These attacks (among others) are based on the premise that packets will travel through compromised nodes, and methods exist to coax routing into these traps. Detection of these attacks is mainly centered around finding the subversion in action. In networks, bottleneck nodes -- those that sit on many potential routes between sender and receiver -- are an optimal location for compromise. Finding naturally occurring path bottlenecks, however, does not entail network subversion, and as such they are more difficult to detect. The dynamic nature of mobile ad-hoc networks (MANETs) makes ubiquitous routing algorithms even more susceptible to this class of attacks. Compromising perceived bottlenecks in an OLSR-based MANET captures between 50% and 75% of the data. In this paper we propose a method of subtly expanding perceived bottlenecks into complete bottlenecks, raising the capture rate to as much as 99%, albeit at high cost. We further tune the method to reduce cost, and measure the corresponding capture rate.
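The notion of a "perceived bottleneck" can be approximated by centrality over the routing topology. The sketch below is an illustration rather than the paper's method: it ranks nodes by betweenness centrality on a graph that could, for instance, be reconstructed from OLSR topology control messages.

```python
# Hypothetical sketch: rank candidate bottleneck nodes by how many
# sender-receiver shortest paths pass through them.
import networkx as nx

def perceived_bottlenecks(topology_edges, top_k=3):
    g = nx.Graph()
    g.add_edges_from(topology_edges)
    centrality = nx.betweenness_centrality(g)
    return sorted(centrality, key=centrality.get, reverse=True)[:top_k]

# Toy topology: nodes "c" and "d" sit on most paths between the two halves.
edges = [("a", "c"), ("b", "c"), ("c", "d"), ("d", "e"), ("d", "f")]
print(perceived_bottlenecks(edges, top_k=2))
```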
Emergency evacuations during disasters minimize loss of lives and injuries. It is not surprising that emergency evacuation preparedness is mandatory for organizations in many jurisdictions. In the case of corporations, this requirement translates to considerable expenses, consisting of construction costs, equipment, recruitment, retention and training. In addition, the required regular evacuation drills cause recurring expenses and loss of productivity. Any automation to assist in these drills and in actual evacuations can mean savings of costs, time and lives. Evacuation assistance systems rely on attendance systems that often fall short in accuracy, particularly in environments with many "non-swipers" (customers, visitors, etc.). A critical question to answer in the case of an emergency is "How many people are still in the building?". This number is calculated by comparing the number of people gathered at the assembly point to the last known number of people inside the building. An IoT-based system can enhance the answer to that question by providing the number of people in the building and their last known locations in an automated fashion, and can even automate the reconciliation process. Our proposed system detects the people in the building automatically using multiple channels such as WiFi and motion detection. Such a system needs the ability to link specific identifiers to persons reliably. In this paper we present our statistics- and heuristics-based solutions for linking detected identifiers to actual persons in a privacy-preserving manner using IoT technologies.
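The reconciliation step itself is simple once identifiers have been linked to people reliably; a minimal sketch, assuming a hypothetical data model of identifier sets, is shown below.

```python
# Minimal reconciliation sketch (hypothetical identifiers): identifiers last
# detected inside the building minus identifiers re-detected at the assembly
# point gives who may still be inside.
def still_inside(last_seen_inside, seen_at_assembly):
    return set(last_seen_inside) - set(seen_at_assembly)

inside = {"dev:aa:01", "dev:aa:02", "dev:aa:03"}
assembly = {"dev:aa:01", "dev:aa:03"}
print(still_inside(inside, assembly))   # {'dev:aa:02'} -> follow up on this device's owner
```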
Radio network information is leaked well beyond the perimeter in which the radio network is deployed. We investigate attacks where a person's location can be inferred using the radio characteristics of wireless links (e.g., the received signal strength). An attacker can deploy a network of receivers which measure the received signal strength of the radio signals transmitted by the legitimate wireless devices inside a perimeter, allowing the attacker to learn the locations of people moving in the vicinity of the devices inside the perimeter. In this paper, we develop the first solution to this location privacy problem in which neither the attacker nodes nor the tracked moving object transmit any RF signals. We first model the radio network leakage attack using a Stackelberg game. Next, we define utility and cost functions related to the defender and attacker actions. Last, using our utility and cost functions, we find the optimal strategy for the defender by applying a greedy method. We evaluate our game-theoretic framework using experiments and find that our approach significantly reduces the chance of an attacker determining the location of people inside a perimeter.
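To illustrate what a greedy defender strategy looks like in such a leader-follower setting, the sketch below repeatedly picks the countermeasure with the best marginal utility-minus-cost until a budget is exhausted. The utility and cost functions here are placeholders, not the paper's actual payoff model.

```python
# Hypothetical greedy sketch of the defender's side: utility() would model the
# reduction in the attacker's localization accuracy; both functions are stand-ins.
def greedy_defense(candidates, utility, cost, budget):
    chosen, spent = [], 0.0
    remaining = set(candidates)
    while remaining:
        best = max(remaining, key=lambda c: utility(chosen + [c]) - utility(chosen) - cost(c))
        gain = utility(chosen + [best]) - utility(chosen) - cost(best)
        if gain <= 0 or spent + cost(best) > budget:
            break
        chosen.append(best)
        spent += cost(best)
        remaining.remove(best)
    return chosen

# Toy example with diminishing returns over three candidate placements.
print(greedy_defense(["p1", "p2", "p3"],
                     utility=lambda s: 10 * len(set(s)) ** 0.5,
                     cost=lambda c: 3.0,
                     budget=7.0))
```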
A major component of modern vehicles is the infotainment system, which interfaces with the vehicle's drivers and passengers. Other mobile devices, such as handheld phones and laptops, can relay information to the embedded infotainment system through Bluetooth and vehicle WiFi. The ability to extract information from these systems would help forensic analysts determine the general contents that are stored in an infotainment system. Based on the extracted data, analysts could then determine what stored information is relevant to law enforcement agencies and what information is non-essential when it comes to solving criminal activities relating to the vehicle itself. Overall, this would strengthen the Intelligent Transport System and Vehicular Ad Hoc Network infrastructure in combating crime through the use of vehicle forensics. Additionally, determining the content of these systems will allow forensic analysts to know whether they can learn anything about the end-user directly and/or indirectly.
The Common Vulnerability Scoring System (CVSS) is the de facto standard for vulnerability severity measurement today and is crucial in the analytics driving software fortification. Required by the U.S. National Vulnerability Database, over 75,000 vulnerabilities have been scored using CVSS. We compare how CVSS correlates with another, closely-related measure of security impact: bounties. Recent economic studies of vulnerability disclosure processes show a clear relationship between black market value and bounty payments. We analyzed the CVSS scores and the bounty awarded for 703 vulnerabilities across 24 products. We found a weak (Spearman's ρ = 0.34) correlation between CVSS scores and bounties, with CVSS being more likely to underestimate bounty. We believe such a negative result is a cause for concern. We investigated why these measurements were so discordant by (a) analyzing the individual questions of CVSS with respect to bounties and (b) conducting a qualitative study to find the similarities and differences between CVSS and the publicly-available criteria for awarding bounties. Among our findings were that the bounty criteria were more explicit about code execution and privilege escalation, whereas CVSS makes no explicit mention of those. We also found that bounty valuations are evaluated solely by project maintainers, whereas CVSS has little provenance in practice.
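The headline statistic is a straightforward rank correlation; a minimal sketch, assuming a hypothetical CSV with CVSS base scores and bounty amounts (not the authors' dataset or code), is:

```python
# Illustrative rank-correlation sketch with hypothetical file/column names.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("vulns.csv")             # assumed columns: cvss_base, bounty_usd
rho, p = spearmanr(df.cvss_base, df.bounty_usd)
print(f"Spearman rho = {rho:.2f} (paper reports ~0.34), p = {p:.3g}")

# One simple way to see "CVSS underestimates bounty": fraction of cases where
# the bounty rank sits far above the CVSS rank.
ranks = df.rank()
print((ranks.bounty_usd - ranks.cvss_base > len(df) * 0.25).mean())
```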
The recent growth of anonymous social network services, such as 4chan, Whisper, and Yik Yak, has brought online anonymity into the spotlight. For these services to function properly, the integrity of user anonymity must be preserved. If an attacker can determine the physical location from where an anonymous message was sent, then the attacker can potentially use side information (for example, knowledge of who lives at the location) to de-anonymize the sender of the message. In this paper, we investigate whether the popular anonymous social media application Yik Yak is susceptible to localization attacks, thereby putting user anonymity at risk. The problem is challenging because the Yik Yak application does not provide information about distances between the user and message origins, or any other message location information. We provide a comprehensive data collection and supervised machine learning methodology that does not require any reverse engineering of the Yik Yak protocol, is fully automated, and can be remotely run from anywhere. We show that we can accurately predict the locations of messages with a small average error of 106 meters. We also devise an experiment where each message emanates from one of nine dorm colleges on the University of California Santa Cruz campus. We are able to determine the correct dorm college that generated each message 100% of the time.
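A sketch of the supervised localization setup is given below for illustration only: the feature files, the choice of regressor, and the evaluation split are assumptions, not the paper's exact pipeline; accuracy is reported as the mean great-circle error in meters.

```python
# Hypothetical sketch: regress per-message features onto (lat, lon) and report
# the mean haversine error in meters.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def haversine_m(a, b):
    lat1, lon1, lat2, lon2 = map(np.radians, (a[:, 0], a[:, 1], b[:, 0], b[:, 1]))
    h = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * np.arcsin(np.sqrt(h))

X = np.load("features.npy")      # assumed: one feature vector per collected message
Y = np.load("locations.npy")     # assumed: true (lat, lon) per message
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, Y_tr)
pred = model.predict(X_te)
print("mean localization error (m):", haversine_m(pred, Y_te).mean())
```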
Content Security Policy (CSP) is a powerful client-side security layer that helps in mitigating and detecting a wide range of Web attacks, including cross-site scripting (XSS). However, utilizing CSP by site administrators is a fallible process and may require significant changes in web application code. In this paper, we propose an approach to help site administrators overcome these limitations in order to utilize the full benefits of the CSP mechanism, leading to sites that are more immune to XSS. The algorithm is implemented as a plugin that does not interfere with the Web application's original code and can be “installed” on any other web application with minimal effort. The algorithm can be implemented as part of the Web server layer, not as part of the business logic layer. The current approach inspects the static contents of URLs; it can be extended to support generating CSP for content that is modified by JavaScript after loading.
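As a simplified illustration of generating a policy from static content (this is not the authors' plugin), one can scan a page for the origins of its external resources and emit a matching Content-Security-Policy value:

```python
# Simplified sketch: collect script/style/img origins from static HTML and
# build a CSP header value that whitelists only those origins plus 'self'.
from html.parser import HTMLParser
from urllib.parse import urlparse

class SourceCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.sources = {"script-src": set(), "img-src": set(), "style-src": set()}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        directive = {"script": "script-src", "img": "img-src", "link": "style-src"}.get(tag)
        src = attrs.get("src") or attrs.get("href")
        if directive and src:
            origin = urlparse(src)
            self.sources[directive].add(f"{origin.scheme}://{origin.netloc}" if origin.netloc else "'self'")

def build_csp(html):
    collector = SourceCollector()
    collector.feed(html)
    parts = ["default-src 'self'"]
    for directive, origins in collector.sources.items():
        if origins:
            parts.append(f"{directive} " + " ".join(sorted({"'self'"} | origins)))
    return "; ".join(parts)

print(build_csp('<script src="https://cdn.example.com/a.js"></script><img src="/logo.png">'))
```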
Named Data Networking (NDN), a clean-slate data-oriented Internet architecture aimed at replacing IP, brings many potential benefits for content distribution. Real deployment of NDN is crucial to verify this new architecture and promote academic research, but work in this field is at an early stage. Due to the fundamental design paradigm difference between NDN and IP, deploying NDN as an IP overlay causes high overhead and inefficient transmission, particularly for streaming applications. Aiming at efficient NDN streaming distribution, this paper proposes a transitional NDN/IP hybrid network architecture dubbed Centaur, which embodies both NDN's smartness and scalability and IP's transmission efficiency and deployment feasibility. In Centaur, the upper NDN module acts as the smart head while the lower IP module functions as the powerful feet. The head is intelligent in content retrieval and self-control, while the IP feet are able to transport large amounts of media data faster than if NDN were directly overlaid on IP. To evaluate the performance of our proposal, we implement a real streaming prototype in ndnSIM and compare it with both NDN-Hippo and P2P under various experiment scenarios. The results show that Centaur can achieve better load balance with lower overhead, close to the performance that ideal NDN can achieve. All of this validates that our proposal is a promising choice for the incremental and compatible deployment of NDN.
Online Social Networks (OSNs) are continuously suffering from the negative impact of Cross-Site Scripting (XSS) vulnerabilities. This paper describes a novel framework for mitigating XSS attacks on OSN-based platforms. It is based entirely on a request authentication and view isolation approach. It detects XSS attacks by validating string values extracted from vulnerable checkpoints in the web page, using a string examination algorithm backed by an XSS attack vector repository. Any similarity (i.e., a string that fails validation) indicates the presence of malicious code injected by the attacker, and the framework then removes the script code to mitigate the XSS attack. To assess the defending ability of our designed model, we have tested it on an OSN-based web application, HumHub. The experimental results revealed that our model discovers XSS attack vectors with low false negative and false positive rates and tolerable performance overhead.
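A much-simplified sketch of such a string-examination step is shown below; the normalization steps and the tiny pattern repository are illustrative assumptions, not the framework's actual algorithm or vector set.

```python
# Simplified sketch: normalize a value extracted from a checkpoint and flag it
# if it matches any pattern in a small XSS attack-vector repository.
import html
import re
import urllib.parse

XSS_VECTOR_REPOSITORY = [
    r"<\s*script\b", r"\bon\w+\s*=", r"javascript\s*:", r"<\s*iframe\b", r"srcdoc\s*=",
]

def normalize(value):
    value = urllib.parse.unquote(value)   # undo %xx encoding
    value = html.unescape(value)          # undo &lt;-style escaping
    return value.lower()

def is_malicious(value):
    v = normalize(value)
    return any(re.search(p, v) for p in XSS_VECTOR_REPOSITORY)

print(is_malicious("%3Cscript%3Ealert(1)%3C/script%3E"))   # True
print(is_malicious("hello world"))                          # False
```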
Active defense is a popular defense technique based on systems that hinder an attacker's progress by design, rather than reactively responding to an attack only after its detection. Well-known active defense systems are honeypots. Honeypots are fake systems, designed to look like real production systems, aimed at trapping an attacker and analyzing the attacker's strategy and goals. These types of systems suffer from a major weakness: it is extremely hard to design them in such a way that an attacker cannot distinguish them from a real production system. In this paper, we advocate that, instead of adding additional fake systems to the corporate network, the production systems themselves should be instrumented to provide active defense capabilities. This perspective on active defense contains costs and complexity while providing the attacker with a more realistic-looking target and giving the Incident Response Team more time to identify the attacker. The proposed proof-of-concept prototype system can be used to implement active defense in any corporate production network, with little upfront work and little maintenance.
In this work, we give a lattice attack on the ECDSA implementation in the latest version of OpenSSL, which implements scalar multiplication using the windowed Non-Adjacent Form method. We propose a totally different but more efficient method of extracting and utilizing information from the side-channel results, remarkably improving on previous attacks. First, we develop a new efficient method which can extract almost all information from the side-channel results, obtaining 105.8 bits of information per signature on average for 256-bit ECDSA. Then, in order to make the utmost of our extracted information, we translate the problem of recovering the secret key into the Extended Hidden Number Problem, which can be solved by lattice reduction algorithms. Finally, we introduce methods of elimination, merging, most-significant-digit recovery and enumeration to improve the attack. Our attack is mounted against the secp256k1 curve, and the result shows that only 4 signatures would be enough to recover the secret key if the Flush+Reload attack is implemented perfectly without any error, which is much better than the best known result needing at least 13 signatures.
Recognition of the authenticity of facial expressions is quite troublesome for humans. Therefore, it is an interesting topic for the computer vision community, as algorithms for estimating facial expression authenticity may be used as indicators of deception. This paper discusses the state-of-the-art methods developed for smile veracity estimation and proposes a plan for the development and validation of a novel approach to automated discrimination between genuine and posed facial expressions. The proposed fully automated technique is based on extending high-dimensional Local Binary Patterns (LBP) to the spatio-temporal domain and combines them with the dynamics of facial landmark movements. The proposed technique will be validated on several existing smile databases and on a novel database created with the use of a high-speed camera. Finally, the developed framework will be applied to the detection of deception in real-life scenarios.
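A simplified sketch of how such appearance-plus-dynamics descriptors could be assembled per video is given below (per-frame LBP histograms pooled over time, combined with landmark-motion statistics). This is an illustration under assumptions, not the authors' spatio-temporal LBP extension.

```python
# Hypothetical, simplified feature pipeline for genuine-vs-posed classification.
import numpy as np
from skimage.feature import local_binary_pattern

def video_descriptor(frames, landmarks, n_points=8, radius=1):
    """frames: list of 2-D grayscale arrays; landmarks: (T, K, 2) array."""
    hists = []
    for f in frames:
        lbp = local_binary_pattern(f, n_points, radius, method="uniform")
        h, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
        hists.append(h)
    appearance = np.mean(hists, axis=0)        # pool per-frame LBP histograms over time
    motion = np.diff(landmarks, axis=0)        # frame-to-frame landmark displacements
    dynamics = np.concatenate([motion.mean(axis=(0, 1)), motion.std(axis=(0, 1))])
    return np.concatenate([appearance, dynamics])

# Training would then use a standard classifier over these descriptors,
# e.g. sklearn.svm.SVC, with labels 1 = genuine and 0 = posed.
```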