Biblio
We present a framework for assessing the quality of Web documents, and a baseline of three quality dimensions: trustworthiness, objectivity and basic scholarly quality. Assessing Web document quality is a "deep data" problem necessitating approaches to handle both data size and complexity.
Smart contracts are programs that execute autonomously on blockchains. Their key envisioned uses (e.g. financial instruments) require them to consume data from outside the blockchain (e.g. stock quotes). Trustworthy data feeds that support a broad range of data requests will thus be critical to smart contract ecosystems. We present an authenticated data feed system called Town Crier (TC). TC acts as a bridge between smart contracts and existing web sites, which are already commonly trusted for non-blockchain applications. It combines a blockchain front end with a trusted hardware back end to scrape HTTPS-enabled websites and serve source-authenticated data to relying smart contracts. TC also supports confidentiality. It enables private data requests with encrypted parameters. Additionally, in a generalization that executes smart-contract logic within TC, the system permits secure use of user credentials to scrape access-controlled online data sources. We describe TC's design principles and architecture and report on an implementation that uses Intel's recently introduced Software Guard Extensions (SGX) to furnish data to the Ethereum smart contract system. We formally model TC and define and prove its basic security properties in the Universal Composability (UC) framework. Our results include definitions and techniques of general interest relating to resource consumption (Ethereum's "gas" fee system) and TCB minimization. We also report on experiments with three example applications. We plan to launch TC soon as an online public service.
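The following minimal Python sketch (not the TC implementation; names such as fetch_and_attest and DATA_URL are illustrative assumptions) shows the authenticated-datafeed pattern the abstract describes: a trusted component fetches an HTTPS source and signs the result so a relying party can verify its origin.

```python
# Sketch of a source-authenticated data feed: fetch over HTTPS, sign the reply.
import json
import requests
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

DATA_URL = "https://example.com/quote"  # hypothetical HTTPS data source

enclave_key = Ed25519PrivateKey.generate()  # stands in for the enclave's signing key
attest_pub = enclave_key.public_key()       # published so relying parties can verify

def fetch_and_attest(request_id: int) -> dict:
    body = requests.get(DATA_URL, timeout=10).text  # TLS authenticates the source
    payload = json.dumps({"id": request_id, "data": body}).encode()
    return {"payload": payload, "sig": enclave_key.sign(payload)}

reply = fetch_and_attest(1)
attest_pub.verify(reply["sig"], reply["payload"])  # raises InvalidSignature if tampered
```

In the real system the signing key lives inside an SGX enclave and the verifier is an Ethereum contract; the sketch only captures the fetch-then-attest flow.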
We propose a framework for generating a signal control policy for a traffic network of signalized intersections to accomplish control objectives expressible using linear temporal logic. By applying techniques from model checking and formal methods, we obtain a correct-by-construction controller that is guaranteed to satisfy complex specifications. To apply these tools, we identify and exploit structural properties particular to traffic networks that allow for efficient computation of a finite-state abstraction. In particular, traffic networks exhibit a componentwise monotonicity property which enables reachable set computations that scale linearly with the dimension of the continuous state space.
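For concreteness, an illustrative LTL control objective of the kind such a framework consumes (the symbols are assumptions, not the paper's notation) might require every link's queue to stay below capacity while every intersection's green phase recurs infinitely often:

```latex
% Illustrative LTL objective: queues x_i never exceed capacities c_i (safety),
% and every green phase g_i recurs infinitely often (liveness).
\[
  \varphi \;=\; \bigwedge_{i} \Box\,(x_i \le c_i) \;\wedge\; \bigwedge_{i} \Box\Diamond\, g_i
\]
```

The always (□) conjuncts encode safety invariants; the always-eventually (□◇) conjuncts encode liveness, which is what makes correct-by-construction synthesis nontrivial.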
With the popularity of cloud computing, database outsourcing has been adopted by many companies. However, database owners may not fully trust their database service providers. As a result, database privacy becomes a key issue for protecting data from the database service providers. Much research has been conducted to address this issue, but little of it considers simultaneous, transparent support for existing DBMSs (Database Management Systems), applications and RADTs (Rapid Application Development Tools). A transparent framework based on an accessing bridge and a mobile app for protecting database privacy with PKI (Public Key Infrastructure) is therefore proposed to fill this gap. The framework uses PKI as its security base and encrypts sensitive data with data owners' public keys to protect data privacy. The mobile app controls the private key and decrypts data, so that access to sensitive data is completely controlled by data owners through a secure, independent channel. The accessing bridge utilizes the database access middleware standard to transparently support existing DBMSs, applications and RADTs. This paper presents the framework, analyzes its transparency and security, and evaluates its performance via experiments.
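A minimal sketch, assuming RSA-OAEP from the pyca/cryptography package as the PKI primitive, of the encryption pattern described: sensitive fields are encrypted under the data owner's public key before they reach the DBMS, and only the private key held by the owner's mobile app can decrypt them. The field value is illustrative.

```python
# Encrypt a sensitive field under the owner's public key; only the owner decrypts.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

owner_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
owner_public = owner_private.public_key()   # distributed via the PKI

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

stored = owner_public.encrypt(b"salary=120000", oaep)   # what the DBMS sees
print(owner_private.decrypt(stored, oaep))              # only the owner recovers it
```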
Firewall policies are notorious for misconfiguration errors that can defeat their intended purpose of protecting hosts in the network from malicious users. We believe this is because today's firewall policies are mostly monolithic. Inspired by ideas from modular programming and code refactoring, in this work we introduce three kinds of modules: primary, auxiliary, and template, which facilitate the refactoring of a firewall policy into smaller, reusable, comprehensible, and more manageable components. We present algorithms for generating each of the three modules for a given legacy firewall policy. We also develop ModFP, an automated tool for converting legacy firewall policies represented as access control lists into their modularized format. With the help of ModFP, when examining several real-world policies with sizes ranging from dozens to hundreds of rules, we were able to identify subtle errors.
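The refactoring idea can be pictured with a small sketch (this is not ModFP's actual algorithm; the rule fields and the grouping criterion are assumptions): rules that share a protected destination are extracted into a named, reusable module, so the monolithic policy reads as module references.

```python
# Group flat ACL rules into per-host "modules" to illustrate policy refactoring.
from collections import defaultdict

acl = [
    ("allow", "10.0.0.0/8",  "192.168.1.10", "tcp/443"),
    ("allow", "10.0.0.0/8",  "192.168.1.10", "tcp/22"),
    ("deny",  "0.0.0.0/0",   "192.168.1.10", "any"),
    ("allow", "10.1.0.0/16", "192.168.1.20", "tcp/80"),
]

modules = defaultdict(list)          # one module per protected destination
for action, src, dst, svc in acl:
    modules[dst].append((action, src, svc))

for host, rules in modules.items():
    print(f"module protect_{host.replace('.', '_')}:")
    for rule in rules:
        print("   ", rule)
```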
The threat from insiders is an ever-growing concern for organisations, and in recent years the harm that insiders pose has been widely demonstrated. This paper describes our recent work on how we might support insider threat detection when actions are taken that can be immediately determined to be of concern because they fall into one of two categories: they violate a policy which is specifically crafted to describe behaviours that are highly likely to be of concern if exhibited, or they follow the pattern of a known insider-threat attack. In particular, we view these concerning actions as something we can design and implement tripwires within a system to detect. We then orchestrate these tripwires in conjunction with an anomaly detection system and present an approach to formalising tripwires of both categories. Our intention is that, by having a single framework for describing them, alongside a library of existing tripwires in use, we can provide the community of practitioners and researchers with the basis to document and evolve this common understanding of tripwires.
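A minimal sketch, under assumed event fields, of how tripwires of both categories could be formalised as named predicates in a single registry:

```python
# Tripwires as named predicates over observed actions, kept in one registry.
from dataclasses import dataclass

@dataclass
class Action:
    user: str
    operation: str
    target: str
    hour: int

TRIPWIRES = {
    # Category 1: violates an explicitly crafted policy.
    "usb_after_hours": lambda a: a.operation == "usb_write" and not (8 <= a.hour <= 18),
    # Category 2: matches a known insider-attack pattern.
    "mass_export": lambda a: a.operation == "export" and a.target == "customer_db",
}

def check(action: Action) -> list[str]:
    return [name for name, trip in TRIPWIRES.items() if trip(action)]

print(check(Action("alice", "usb_write", "laptop-7", hour=2)))  # ['usb_after_hours']
```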
Trust plays an important role in various user-facing systems and applications. It is particularly important in the context of decision support systems, where the system's output serves as one of the inputs for users' decision-making processes. In this work, we study the dynamics of explicit and implicit user trust in a simulated automated quality monitoring system, as a function of the system accuracy. We establish that users correctly perceive the accuracy of the system and adjust their trust accordingly.
There has been a tremendous increase in the popularity and adoption of wearable fitness trackers. These fitness trackers predominantly use Bluetooth Low Energy (BLE) for communicating and syncing data with the user's smartphone. This paper presents a measurement-driven study of possible privacy leakage from BLE communication between the fitness tracker and the smartphone. Using real BLE traffic traces collected in the wild and in controlled experiments, we show that the majority of fitness trackers use an unchanged BLE address while advertising, making it feasible to track them. The BLE traffic of the fitness trackers is found to be correlated with the intensity of the user's activity, making it possible for an eavesdropper to determine the user's current activity (walking, sitting, idle or running) through BLE traffic analysis. Furthermore, we also demonstrate that the BLE traffic can reveal the user's gait, which is known to be distinct from user to user. This makes it possible to identify a person (from a small group of users) based on the BLE traffic of her fitness tracker. As BLE-based wearable fitness trackers become widely adopted, our aim is to identify important privacy implications of their usage and discuss prevention strategies.
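A minimal sketch, using the bleak BLE library, of why an unchanged advertising address enables tracking (the scan windows and the trackability criterion are assumptions): repeated scans that keep seeing the same address can link all sightings to one device.

```python
# Repeatedly scan for BLE advertisements and flag addresses seen in every window.
import asyncio
from collections import Counter
from bleak import BleakScanner

async def track(rounds: int = 5) -> None:
    sightings: Counter[str] = Counter()
    for _ in range(rounds):
        for device in await BleakScanner.discover(timeout=5.0):
            sightings[device.address] += 1
    for addr, count in sightings.items():
        if count == rounds:   # same address in every window -> trackable
            print(f"{addr} is trackable (seen in all {rounds} scan windows)")

asyncio.run(track())
```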
Wake locks are widely used in Android apps to protect critical computations from being disrupted by device sleeping. Inappropriate use of wake locks often seriously impacts user experience. However, little is known about how wake locks are used in real-world Android apps and the impact of their misuse. To bridge this gap, we conducted a large-scale empirical study on 44,736 commercial and 31 open-source Android apps. Through automated program analysis and manual investigation, we observed (1) common program points where wake locks are acquired and released, (2) 13 types of critical computational tasks that are often protected by wake locks, and (3) eight patterns of wake lock misuse that commonly cause functional and non-functional issues, only three of which had been studied by existing work. Based on our findings, we designed a static analysis technique, Elite, to detect the two most common patterns of wake lock misuse. Our experiments on real-world subjects showed that Elite is effective and can outperform two state-of-the-art techniques.
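An illustrative check (not the Elite analysis; the per-path call-trace input format is an assumption) for one of the reported misuse patterns, a wake lock acquired on a program path with no matching release:

```python
# Count wake locks still held at the end of a program path's API-call trace.
def unreleased_wakelocks(path_calls: list[str]) -> int:
    held = 0
    for call in path_calls:
        if call == "PowerManager.WakeLock.acquire":
            held += 1
        elif call == "PowerManager.WakeLock.release":
            held = max(0, held - 1)
    return held   # >0 means the path can leave the device awake

leaky_path = ["onCreate", "PowerManager.WakeLock.acquire", "downloadFile", "onDestroy"]
assert unreleased_wakelocks(leaky_path) == 1
```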
Cyber-attacks are cheap, easy to conduct and often pose little risk in terms of attribution, but their impact can be lasting. Attribution is low because tracing cyber-attacks is primitive in the current network architecture. Moreover, even when attribution is known, the absence of enforcement provisions in international law makes cyber-attacks tough to litigate, and hence attribution is hardly a deterrent. Rather than attributing attacks, we can view cyber-attacks as societal events associated with social, political, economic and cultural (SPEC) motivations. Because it is possible to observe SPEC motives on the internet, social media data could be valuable in understanding cyber-attacks. In this research, we use sentiment in Twitter posts to observe country-to-country perceptions, and Arbor Networks data to build ground truth of country-to-country DDoS cyber-attacks. Using this dataset, this research makes three important contributions: a) We evaluate the impact of heightened sentiment towards a country on the trend of cyber-attacks received by that country. We find that, for some countries, the probability of attacks increases by up to 27% while experiencing negative sentiment from other nations. b) Using the cyber-attack trend and the sentiment trend, we build a decision tree model to find attacks that could be related to extreme sentiments. c) To verify our model, we describe three examples in which cyber-attacks follow increased tension between nations, as perceived in social media.
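A minimal sketch, with synthetic data, of the decision tree step in contribution (b); the feature names and values are invented for illustration, not drawn from the paper's dataset.

```python
# Fit a shallow decision tree relating sentiment features to attack spikes.
from sklearn.tree import DecisionTreeClassifier

# Features per country pair: [negative-sentiment score, sentiment trend slope]
X = [[0.9, 0.4], [0.8, 0.3], [0.2, -0.1], [0.1, 0.0], [0.7, 0.5], [0.3, -0.2]]
y = [1, 1, 0, 0, 1, 0]   # 1 = attack spike observed after the sentiment window

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(model.predict([[0.85, 0.35]]))   # high negative sentiment -> predicts a spike
```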
This study examines the trusted third party (TTP) in Australia's B2C marketplace and the factors influencing consumers' trust behaviour, from the perspective of consumers' online trust. Based on a literature review, combined with the development status and background of Australia's e-commerce, and underpinned by the Theory of Planned Behaviour (TPB) and a conceptual trust model, this paper elaborates the specific factors and mechanisms by which TTPs influence consumers' trust behaviour. This paper also explains two different functions of TTPs in solving the online trust problem faced by consumers. Meanwhile, this paper summarizes five different types of services provided by TTPs during the establishment of the trust relationship. Finally, the study selects 100 B2C enterprises by simple random sampling and analyses their TTPs in detail, to verify the services and functions of the proposed TTP in the trust model. This study is of significance for comprehending the influence mechanisms, functions and services of TTPs on consumers' trust behaviour in the real Australian B2C environment.
Voice-controlled intelligent personal assistants, such as Cortana, Google Now, Siri and Alexa, are increasingly becoming a part of users' daily lives, especially on mobile devices. They introduce a significant change in information access, not only by introducing voice control and touch gestures but also by enabling dialogues where the context is preserved. This raises the need for evaluation of their effectiveness in assisting users with their tasks. However, in order to understand which types of user interaction reflect different degrees of user satisfaction, we need explicit judgements. In this paper, we describe a user study that was designed to measure user satisfaction over a range of typical scenarios of use: controlling a device, web search, and structured search dialogue. Using this data, we study how user satisfaction varied with different usage scenarios and what signals can be used for modeling satisfaction in the different scenarios. We find that the notion of satisfaction varies across different scenarios, and show that, in some scenarios (e.g. making a phone call), task completion is very important while for others (e.g. planning a night out), the amount of effort spent is key. We also study how the nature and complexity of the task at hand affects user satisfaction, and find that preserving the conversation context is essential and that overall task-level satisfaction cannot be reduced to query-level satisfaction alone. Finally, we shed light on the relative effectiveness and usefulness of voice-controlled intelligent agents, explaining their increasing popularity and uptake relative to the traditional query-response interaction.
In a wireless system, a signal map shows the signal strength at different locations termed reference points (RPs). As access points (APs) and their transmission power may change over time, keeping an updated signal map is important for applications such as Wi-Fi optimization and indoor localization. Traditionally, the signal map is obtained by a full site survey, which is time-consuming and costly. We address in this paper how to efficiently update a signal map given sparse samples randomly crowdsourced in the space (e.g., by signal monitors, explicit human input, or implicit user participation). We propose Compressive Signal Reconstruction (CSR), a novel learning system employing Bayesian compressive sensing (BCS) for online signal map update. CSR does not rely on any path loss model or line of sight, and is generic enough to serve as a plug-in for any wireless system. Besides signal map update, CSR also computes the estimation error of signals in terms of confidence interval. CSR models the signal correlation with a kernel function. Using it, CSR constructs a sensing matrix based on the newly sampled signals. The sensing matrix is then used to compute the signal change at all the RPs with any BCS algorithm. We have conducted extensive experiments on CSR on our university campus. Our results show that CSR outperforms other state-of-the-art algorithms by a wide margin (reducing signal error by about 30% and sampling points by 20%).
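A minimal sketch of the kernel-based sensing-matrix idea, with scikit-learn's ARDRegression (a sparse Bayesian learner) standing in for the paper's BCS solver; the RP coordinates, kernel width, and sparsity pattern are assumptions for illustration.

```python
# Recover a sparse signal change over all RPs from a few crowdsourced samples.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
rps = rng.uniform(0, 50, size=(100, 2))     # reference-point coordinates (metres)

def kernel(a, b, width=8.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))   # signal correlation decays with distance

true_w = np.zeros(100)
true_w[[5, 40, 77]] = [-9.0, 6.0, -4.0]     # sparse signal change (dB)

sampled = rng.choice(100, size=25, replace=False)   # crowdsourced sample points
Phi = kernel(rps[sampled], rps)                     # kernel-based sensing matrix
y = Phi @ true_w + rng.normal(0, 0.3, size=25)      # noisy sampled signals

solver = ARDRegression().fit(Phi, y)                # sparse Bayesian recovery
K_all = kernel(rps, rps)
estimate, std = solver.predict(K_all, return_std=True)  # map update + confidence
print(f"mean abs error over all RPs: {np.abs(estimate - K_all @ true_w).mean():.2f} dB")
```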
Event discovery from single pictures is a challenging problem that has raised significant interest in the last decade. During this time, a number of interesting solutions have been proposed to tackle event discovery in still images. However, a large-scale benchmark image dataset for the evaluation and comparison of single-image event discovery algorithms is still lacking. To this aim, in this paper we provide a large-scale, properly annotated and balanced dataset of 490,000 images, covering every aspect of 14 different types of social events, selected from among the most shared on social networks. Such a large-scale collection of event-related images is intended to become a powerful support tool for the research community in multimedia analysis by providing a common benchmark for training, testing, validation and comparison of existing and novel algorithms. In this paper, we provide a detailed description of how the dataset is collected and organized and how it can be beneficial for researchers in the multimedia analysis domain. Moreover, a deep-learning-based approach to event discovery from single images is introduced as one possible application of this dataset, in the belief that deep learning can prove to be a breakthrough in this research area as well. By providing this dataset, we hope to rally the research community in the multimedia and signal processing domains to advance this application.
Interactive systems are developed according to requirements, which may be, for instance, documentation, prototypes, diagrams, etc. The informal nature of system requirements may be a source of problems: it may be the case that a system does not implement the requirements as expected; thus, a way to validate whether an implementation follows the requirements is needed. We propose a novel approach to validating a system using formal models of the system. In this approach, a set of traces generated from the execution of the real interactive system is searched over the state space of the formal model. The scalability of the approach is demonstrated by an application to an industrial system in the nuclear plant domain. The combination of trace analysis and formal methods provides feedback that can bring improvements to both the real interactive system and the formal model.
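The core check can be pictured with a toy labeled transition system (the states and event labels are assumptions): an observed execution trace is accepted only if it corresponds to a path through the model's state space.

```python
# Check whether an observed trace is a path in a labeled transition system.
model = {                       # state -> {event_label: next_state}
    "idle":    {"press_start": "running"},
    "running": {"press_stop": "idle", "alarm": "fault"},
    "fault":   {"ack": "idle"},
}

def trace_in_model(trace: list[str], start: str = "idle") -> bool:
    state = start
    for event in trace:
        if event not in model.get(state, {}):
            return False        # trace diverges: implementation disagrees with model
        state = model[state][event]
    return True

assert trace_in_model(["press_start", "alarm", "ack"])
assert not trace_in_model(["press_start", "ack"])
```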
Currently, different forms of ransomware are increasingly threatening Internet users. Modern ransomware encrypts important user data, and it is only possible to recover it once a ransom has been paid. In this article we show how software-defined networking (SDN) can be utilized to improve ransomware mitigation. In more detail, we analyze the behavior of popular ransomware - CryptoWall - and, based on this knowledge, propose two real-time mitigation methods. We then describe the design of an SDN-based system, implemented using OpenFlow, that facilitates a timely reaction to this threat, which is crucial in the case of crypto ransomware. Importantly, such a design does not significantly affect overall network performance. Experimental results confirm that the proposed approach is feasible and efficient.
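A minimal sketch, using the Ryu OpenFlow controller framework, of the kind of reaction described: once an address associated with command-and-control traffic is known, a drop rule is installed on each switch. The blacklisted address is a placeholder from the documentation range, and the app is illustrative, not the article's system.

```python
# Ryu app: proactively install drop rules for blacklisted C&C destinations.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

CC_BLACKLIST = ["203.0.113.7"]   # placeholder C&C address

class RansomwareMitigator(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_up(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        for ip in CC_BLACKLIST:
            match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=ip)
            # An empty instruction list means matching packets are dropped.
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                          match=match, instructions=[]))
```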
With the proliferation of video editing tools, the trustworthiness of video information has become a highly sensitive issue. Today many devices, such as CCTV systems, digital cameras and mobile phones, can capture digital video, and these videos may be transmitted over the Internet or other insecure channels. As digital video can be used as supporting evidence, it has to be protected against manipulation or tampering. Most video authentication techniques are based on watermarking and digital signatures; these are effective for copyright purposes but difficult to implement in other cases such as video surveillance or videos captured by consumer cameras. In this paper we propose an intelligent technique for video authentication which uses the video's local information, making it useful for real-world applications. The proposed algorithm relies on the video's local statistical information and was applied to a dataset of videos captured by a range of consumer video cameras. The results show that the proposed algorithm has the potential to be a reliable intelligent technique for digital video authentication without the need for an SVM classifier, which makes it faster and less computationally expensive compared with other intelligent techniques.
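An illustrative sketch of the general idea (the paper's exact statistics are not reproduced; the block size, threshold, and file name are assumptions): summarize each frame by local block statistics and flag blocks whose statistics drift from a reference.

```python
# Compare per-block means between two frames to flag locally altered regions.
import cv2
import numpy as np

def block_stats(frame: np.ndarray, block: int = 16) -> np.ndarray:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = (gray.shape[0] // block) * block, (gray.shape[1] // block) * block
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))          # one mean per local block

cap = cv2.VideoCapture("input.mp4")          # placeholder file name
ok1, reference = cap.read()                  # statistics from the trusted frame
ok2, suspect = cap.read()
if ok1 and ok2:
    drift = np.abs(block_stats(reference) - block_stats(suspect))
    print("suspicious blocks:", int((drift > 25).sum()))   # assumed threshold
cap.release()
```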
This paper presents an approach, vTPM (virtual Trusted Platform Module) Dynamic Trust Extension (DTE), to satisfy the requirements of frequent migrations. With DTE, the vTPM is delegated the capability of signing attestation data from the underlying pTPM (physical TPM), with a valid time token issued by an Authentication Server (AS). DTE maintains a strong association between the vTPM and its underlying pTPM, and clearly distinguishes between vTPM and pTPM because of the different security strengths of the two types of TPM. In DTE, there is no need for the vTPM to re-acquire Identity Key (IK) certificate(s) after migration, and the pTPM can have its trust revoked in real time. Furthermore, DTE can provide forward security. Performance measurements of its prototype show that DTE is feasible.
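A minimal sketch, not the DTE protocol itself, of the time-token idea: the Authentication Server signs a (vTPM identity, expiry) pair, and verifiers accept the vTPM's attestations only while an unexpired token is presented, which is what enables trust revocation in real time. All field names are assumptions.

```python
# AS-issued, time-limited delegation tokens verified before trusting a vTPM.
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

as_key = Ed25519PrivateKey.generate()           # Authentication Server key
as_pub = as_key.public_key()

def issue_token(vtpm_id: str, ttl_s: int = 300) -> dict:
    body = json.dumps({"vtpm": vtpm_id, "exp": time.time() + ttl_s}).encode()
    return {"body": body, "sig": as_key.sign(body)}

def token_valid(token: dict) -> bool:
    as_pub.verify(token["sig"], token["body"])  # raises if forged
    return time.time() < json.loads(token["body"])["exp"]   # expiry = revocation

print(token_valid(issue_token("vtpm-42")))      # True while the token is fresh
```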
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
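A toy sketch of why rollback metrics matter in optimistic synchronization: events are processed eagerly on arrival, and a straggler with an earlier timestamp than events already processed forces a rollback. The event stream and the counting rule are invented for illustration, not ROSS internals.

```python
# Count rollbacks triggered by out-of-timestamp-order (straggler) events.
def count_rollbacks(arrivals: list[float]) -> int:
    processed_ts, rollbacks = [], 0
    for ts in arrivals:                      # optimistic: process on arrival
        late = [t for t in processed_ts if t > ts]
        if late:                             # straggler: undo later events
            rollbacks += 1
            processed_ts = [t for t in processed_ts if t <= ts]
        processed_ts.append(ts)
    return rollbacks

print(count_rollbacks([1.0, 3.0, 2.0, 5.0, 4.0]))   # 2 stragglers -> 2 rollbacks
```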
Two mainstream techniques are traditionally used to authorize access to a WiFi network. Small-scale networks usually rely on the offline distribution of a WPA/WPA2 static pre-shared secret key (PSK); security hence relies on the fact that this PSK is not leaked by end users, and is not disclosed via dictionary or brute-force attacks. On the other hand, enterprise and large-scale networks typically employ online authorization using an 802.1X-based authentication service leveraging a backend online infrastructure (e.g. Radius servers/proxies). In this work, we propose a new mechanism which requires neither online operation nor a backend access control infrastructure, yet does not force us to rely on a static pre-shared secret key. The idea is very simple, yet effective: directly broadcast in the WLAN beacons an encrypted version of the secret key required to access the WLAN network, so that only users who possess suitable authorization credentials can decrypt and use it. This proposed approach clearly decouples the management of authorization credentials, issued offline to the authorized end users, from the actual secret key used in the WLAN network, which can thus, in principle, be changed at each new user's access. The solution described in the paper relies on attribute-based encryption, and is designed to be compatible with WPA2 and deployable within standard 802.11 management frames. Since no user identification is required (access control is based on attributes rather than on user identity), the proposed approach further improves privacy. We demonstrate the feasibility of the proposed solution via a concrete implementation in Linux-based devices and via relevant testing in a real-world experimental setup.
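A runnable simplification of the decoupling the paper proposes (deliberately NOT real attribute-based encryption, which a deployment would implement with an ABE library such as charm-crypto; the group names and per-group key wrapping are stand-ins): beacons carry the WLAN secret encrypted under credentials issued offline, so rotating the secret never touches user credentials.

```python
# Wrap a fresh WLAN secret per credential group; clients unwrap it from beacons.
from cryptography.fernet import Fernet

# Per-group credentials issued offline (stand-in for ABE attribute keys).
group_keys = {"staff": Fernet.generate_key(), "guest_2024": Fernet.generate_key()}

wlan_psk = Fernet.generate_key()              # fresh secret for this rotation
beacon_blob = {g: Fernet(k).encrypt(wlan_psk) for g, k in group_keys.items()}

# A client holding the offline-issued "staff" credential recovers the PSK:
recovered = Fernet(group_keys["staff"]).decrypt(beacon_blob["staff"])
assert recovered == wlan_psk                  # rotation requires no new credentials
```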