Biblio
We present an automatic method to build a layered vector graphics structure, ready for animation, from a clean-line vector drawing of an organic, smooth shape. Inspired by 3D segmentation methods, we introduce a new metric, computed on the medial axis of a region, that identifies and quantifies the visual salience of a sub-region relative to the rest. This enables us to recursively separate each region into two closed sub-regions at the location of the most salient junction. The resulting structure, layered in depth, can be used to pose and animate the drawing with a regular 2D skeleton.
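At its core, the decomposition described here is a recursive bisection driven by a salience score. The toy sketch below shows only that control flow; the paper's actual contribution, the medial-axis salience metric, is replaced by precomputed scores, and all names are illustrative.

```python
# Toy region: a name plus candidate junctions, each carrying a precomputed
# salience score and the two closed sub-regions it would produce.
def split_recursively(region, min_salience=0.5):
    candidates = region.get("junctions", [])
    if not candidates:
        return {"layer": region["name"]}            # atomic region: one layer
    salience, junction, left, right = max(candidates, key=lambda c: c[0])
    if salience < min_salience:
        return {"layer": region["name"]}            # nothing salient enough to cut
    return {"cut_at": junction,                     # most salient junction first
            "front": split_recursively(left),
            "back": split_recursively(right)}

# Example: an arm overlapping the torso, cut apart at the shoulder junction.
arm = {"name": "arm", "junctions": []}
torso = {"name": "torso", "junctions": []}
body = {"name": "body", "junctions": [(0.9, "shoulder", arm, torso)]}
print(split_recursively(body))
```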
Chaotic public-key cryptography has recently attracted much attention from researchers because of the appealing properties of chaotic maps. Given the security and computational advantages of chaotic maps over other cryptosystems, this paper proposes a novel identity-based signcryption scheme using extended chaotic maps. The hardness of the chaos-based discrete logarithm (CDL) problem underpins the security of the proposed ECM-IBSC scheme.
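For context, extended chaotic-map schemes of this kind typically build on Chebyshev polynomials, whose semigroup property gives a Diffie-Hellman-like structure; the CDL problem is recovering n from T_n(x). A minimal sketch with toy parameters (not the scheme's actual setup):

```python
def chebyshev(n, x, p):
    """T_n(x) mod p via the recurrence T_n = 2x*T_{n-1} - T_{n-2}."""
    t0, t1 = 1, x % p
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, (2 * x * t1 - t0) % p
    return t1

p, x = 2_147_483_647, 12345   # toy prime modulus and public seed
r, s = 6789, 4321             # the two parties' private exponents
# Semigroup property T_r(T_s(x)) = T_{rs}(x) = T_s(T_r(x)) yields a shared secret.
assert chebyshev(r, chebyshev(s, x, p), p) == chebyshev(s, chebyshev(r, x, p), p)
```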
Detecting and repairing dirty data is one of the perennial challenges in data analytics, and failure to do so can result in inaccurate analytics and unreliable decisions. Over the past few years, there has been a surge of interest from both industry and academia in data cleaning problems, including new abstractions, interfaces, approaches for scalability, and statistical techniques. To better understand the new advances in the field, we will first present a taxonomy of the data cleaning literature in which we highlight the recent interest in techniques that use constraints, rules, or patterns to detect errors, which we call qualitative data cleaning. We will describe the state-of-the-art techniques and also highlight their limitations with a series of illustrative examples. While such approaches are traditionally distinct from quantitative approaches such as outlier detection, we also discuss recent work that casts them into a statistical estimation framework, including using machine learning to improve the efficiency and accuracy of data cleaning and considering the effects of data cleaning on statistical analysis.
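A minimal sketch of what qualitative error detection means in practice: flag tuples that violate a declared rule, here the classic functional dependency zip determines city. Real systems generalize this to denial constraints, conditional dependencies, and repair algorithms; the rule and data below are illustrative.

```python
from collections import defaultdict

def fd_violations(rows, lhs, rhs):
    """Return groups of rows that share `lhs` but disagree on `rhs`."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[lhs]].append(row)
    return {key: grp for key, grp in groups.items()
            if len({r[rhs] for r in grp}) > 1}

rows = [
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "NYC"},              # violates zip -> city
    {"zip": "94105", "city": "San Francisco"},
]
print(fd_violations(rows, "zip", "city"))         # {'10001': [...]}
```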
Hardware support for isolated execution (such as Intel SGX) enables development of applications that keep their code and data confidential even while running in a hostile or compromised host. However, automatically verifying that such applications satisfy confidentiality remains challenging. We present a methodology for designing such applications in a way that enables certifying their confidentiality. Our methodology consists of forcing the application to communicate with the external world through a narrow interface, compiling it with runtime checks that aid verification, and linking it with a small runtime that implements the narrow interface. The runtime includes services such as secure communication channels and memory management. We formalize this restriction on the application as Information Release Confinement (IRC), and we show that it allows us to decompose the task of proving confidentiality into (a) one-time, human-assisted functional verification of the runtime to ensure that it does not leak secrets, (b) automatic verification of the application's machine code to ensure that it satisfies IRC and does not directly read or corrupt the runtime's internal state. We present /CONFIDENTIAL: a verifier for IRC that is modular, automatic, and keeps our compiler out of the trusted computing base. Our evaluation suggests that the methodology scales to real-world applications.
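As a loose conceptual analogue (the actual work verifies SGX machine code, and every name below is illustrative), Information Release Confinement says the application may release data only through a narrow, trusted send interface:

```python
import os
from hashlib import sha256

class NarrowRuntime:
    """Stand-in for the trusted runtime: the sole exit point for data."""
    def __init__(self, channel_key: bytes):
        self._key = channel_key            # set up by the runtime's key exchange
    def send(self, plaintext: bytes) -> bytes:
        nonce = os.urandom(16)             # toy keystream, for illustration only
        stream = sha256(self._key + nonce).digest()
        return nonce + bytes(b ^ k for b, k in zip(plaintext, stream))

def app_main(runtime):
    secret = b"enclave secret"
    # IRC: the only way data like `secret` may leave is runtime.send();
    # the verifier checks that the app never writes outside its own memory.
    return runtime.send(secret)

print(app_main(NarrowRuntime(os.urandom(32))).hex())
```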
It is common practice for data scientists to acquire and integrate disparate data sources to achieve higher quality results. But even with a perfectly cleaned and merged data set, two fundamental questions remain: (1) is the integrated data set complete and (2) what is the impact of any unknown (i.e., unobserved) data on query results? In this work, we develop and analyze techniques to estimate the impact of the unknown data (a.k.a., unknown unknowns) on simple aggregate queries. The key idea is that the overlap between different data sources enables us to estimate the number and values of the missing data items. Our main techniques are parameter-free and do not assume prior knowledge about the distribution. Through a series of experiments, we show that estimating the impact of unknown unknowns is invaluable to better assess the results of aggregate queries over integrated data sources.
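The overlap idea can be made concrete with a classic species estimator: how often items are seen once versus twice across sources indicates how many items were never seen at all. A sketch using the Chao1 estimator, which is the flavor of reasoning involved, though not necessarily the paper's exact estimator:

```python
from collections import Counter

def chao1_estimate(observations):
    """observations: all item ids reported across sources, duplicates kept."""
    freq = Counter(observations)
    c = len(freq)                                   # distinct items observed
    f1 = sum(1 for v in freq.values() if v == 1)    # seen exactly once
    f2 = sum(1 for v in freq.values() if v == 2)    # seen exactly twice
    if f2 == 0:
        return c + f1 * (f1 - 1) / 2                # bias-corrected variant
    return c + f1 * f1 / (2 * f2)

# Three overlapping sources reporting business ids:
src_a, src_b, src_c = [1, 2, 3, 4], [3, 4, 5], [4, 5, 6]
print(chao1_estimate(src_a + src_b + src_c))        # estimate of the true count
```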
Garbage is an endemic problem in developing cities due to the continual influx of migrants from rural areas coupled with deficient municipal capacity planning. In cities like Dhaka, open waste dumps contribute to the prevalence of disease, environmental contamination, catastrophic flooding, and deadly fires. Recent interest in the garbage problem has prompted cursory proposals to introduce technology solutions for mapping and fundraising. Yet, the role of technology and its potential benefits are unexplored in this large-scale problem. In this paper, we contribute to the understanding of the waste ecology in Dhaka and how the various actors acquire, perform, negotiate, and coordinate their roles. Within this context, we explore design opportunities for using computing technologies to support collaboration between waste pickers and residents of these communities. We find opportunities in the presence of technology and the absence of mechanisms to facilitate coordination of community funding and crowd work.
The cloud has become an established and widespread paradigm. This success is due to the flexibility and savings provided by this technology. However, the main obstacle to full cloud adoption is security. The cloud, like many other systems taking advantage of the Internet, faces threats that compromise data confidentiality and availability. In addition, new cloud-specific attacks have emerged, and current intrusion detection and prevention mechanisms are not enough to protect the complex infrastructure of the cloud from these vulnerabilities. Furthermore, one of the promises of the cloud is Quality of Service (QoS) through continuous delivery, which must be ensured even in the case of an intrusion. This work presents an overview of the main cloud vulnerabilities, along with the solutions proposed in the context of the H2020 CLARUS project in terms of monitoring techniques for intrusion detection and prevention, including attack-tolerance mechanisms.
It is expected that DRAM memory will be augmented, and perhaps eventually replaced, by one of several up-and-coming memory technologies. These are all non-volatile, in that they retain their contents without power. This allows primary memory to be used as a fast disk replacement. It also enables more aggressive programming models that directly leverage persistence of primary memory. However, it is challenging to maintain consistency of memory in such an environment. There is no consensus on the right programming model for doing so, and subtle differences can have large, and sometimes surprising, effects on the implementation and its performance. The existing literature describes multiple programming systems that provide point solutions for selectively persisting user data structures. Real progress in this area requires a choice of programming model, which we cannot reasonably make without a real understanding of the design space. Point solutions are insufficient. We systematically explore what we consider to be the most promising part of the space, precisely defining semantics and identifying implementation costs. This allows us to be much more explicit and precise about semantic and implementation trade-offs that were usually glossed over in prior work. It also exposes some promising new design alternatives.
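The consistency problem named here (ordering updates so a crash never exposes a half-applied state) is the same one write-ahead logging solves for files, with fsync playing the role that cache write-back and fence instructions play for persistent memory. A minimal file-based sketch of a failure-atomic update, for intuition only:

```python
import os, json

def atomic_update(path, new_state):
    log = path + ".log"
    with open(log, "w") as f:        # 1. persist the intent first
        json.dump(new_state, f)
        f.flush()
        os.fsync(f.fileno())         # ordering point: log durable before data
    with open(path, "w") as f:       # 2. then update the data in place
        json.dump(new_state, f)
        f.flush()
        os.fsync(f.fileno())
    os.remove(log)                   # 3. commit: the log is no longer needed
    # Recovery: if a .log file exists at startup, replay it into `path`.

atomic_update("counter.json", {"value": 42})
```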
We study a sensor network setting in which samples are encrypted individually using different keys and maintained in cloud storage. For large systems, e.g., those that generate several million samples per day, fine-grained sharing of encrypted samples is challenging. Existing solutions, such as Attribute-Based Encryption (ABE) and the Key Aggregation Cryptosystem (KAC), can be utilized to address the challenge, but only to a certain extent. They are often computationally expensive and thus unlikely to operate at scale. We propose an algorithmic enhancement and two heuristics to improve KAC's key reconstruction cost while preserving its provable security. The improvement is particularly significant for range and down-sampling queries, accelerating the reconstruction cost from quadratic to linear running time. Our experimental study shows that for queries of 32k samples, the proposed fast reconstruction techniques speed up the original KAC by at least 90 times on range and down-sampling queries, and by eight times on general (arbitrary) queries. It also shows that at the expense of splitting the query into 16 sub-queries and correspondingly issuing that number of different aggregated keys, reconstruction time can be reduced by 19 times. As such, the proposed techniques make KAC more applicable in practical scenarios such as sensor networks or the Internet of Things.
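To see why contiguous queries admit a quadratic-to-linear speedup: per-item reconstruction multiplies together one group element for every other item in the query set, so m items cost O(m^2) group operations naively, but prefix/suffix products amortize this to O(m). The sketch below uses a toy modular group and illustrates the generic algorithmic idea only; it is not the paper's exact pairing-group construction.

```python
P = 2_147_483_647   # toy modulus standing in for the pairing group

def eval_prod(elems, skip):
    acc = 1
    for j, e in enumerate(elems):
        if j != skip:
            acc = acc * e % P
    return acc

def naive_products(elems):             # O(m^2): recompute the product per item
    return [eval_prod(elems, skip=i) for i in range(len(elems))]

def linear_products(elems):            # O(m): prefix and suffix products
    m = len(elems)
    pre, suf = [1] * (m + 1), [1] * (m + 1)
    for i in range(m):
        pre[i + 1] = pre[i] * elems[i] % P
    for i in range(m - 1, -1, -1):
        suf[i] = suf[i + 1] * elems[i] % P
    return [pre[i] * suf[i + 1] % P for i in range(m)]

elems = [7, 11, 13, 17, 19]
assert naive_products(elems) == linear_products(elems)
```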
Differential privacy has become the dominant standard in the research community for strong privacy protection. There has been a flood of research into query answering algorithms that meet this standard. Algorithms are becoming increasingly complex, and in particular, the performance of many emerging algorithms is data dependent, meaning the distribution of the noise added to query answers may change depending on the input data. Theoretical analysis typically only considers the worst case, making empirical study of average case performance increasingly important. In this paper we propose a set of evaluation principles which we argue are essential for sound evaluation. Based on these principles we propose DPBench, a novel evaluation framework for standardized evaluation of privacy algorithms. We then apply our benchmark to evaluate algorithms for answering 1- and 2-dimensional range queries. The result is a thorough empirical study of 15 published algorithms on a total of 27 datasets that offers new insights into algorithm behavior, in particular the influence of dataset scale and shape, and a more complete characterization of the state of the art. Our methodology is able to resolve inconsistencies in prior empirical studies and place algorithm performance in context through comparison to simple baselines. Finally, we pose open research questions which we hope will guide future algorithm design.
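One of the simple baselines such a study compares against is the Laplace mechanism over a histogram: each count has sensitivity 1 under add/remove-one-record neighbors, so noise of scale 1/epsilon suffices, and range queries sum the noisy counts. A minimal sketch (data and epsilon are illustrative):

```python
import numpy as np

def laplace_histogram(counts, epsilon, rng=np.random.default_rng(0)):
    # Sensitivity of each unit count is 1, so scale = 1/epsilon.
    return counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

def range_query(noisy_counts, lo, hi):      # 1-D range [lo, hi)
    return noisy_counts[lo:hi].sum()

true_counts = np.array([120, 45, 3, 0, 0, 18, 77, 200], dtype=float)
noisy = laplace_histogram(true_counts, epsilon=0.1)
print(range_query(noisy, 2, 6), "vs true", true_counts[2:6].sum())
```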
Android malware is becoming very effective at evading detection, and traditional malware detection techniques are demonstrating their weaknesses. Signature-based detection has at least two drawbacks: first, detection is possible only after the malware has been identified; second, the time needed to produce and distribute the signature gives attackers a window of opportunity to spread the malware in the wild. To address this problem, different approaches have emerged that try to characterize malicious behavior through the invoked system and API calls. Unfortunately, several evasion techniques have proven effective at evading detection based on system and API calls. In this paper, we propose an approach for capturing malicious behavior in terms of device resource consumption (using a thorough set of features), which is much more difficult to camouflage. We describe a procedure, and the corresponding practical setting, for extracting those features with the aim of maximizing their discriminative power. Finally, we describe the promising results we obtained experimenting on more than 2000 applications, on which our approach exhibited an accuracy greater than 99%.
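A sketch of the classification stage such an approach ends in: each application becomes a vector of resource-consumption features and an off-the-shelf classifier separates malware from goodware. The feature names and synthetic data below are made up for illustration; the paper's contribution lies in the feature set and extraction procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical columns: cpu_user, cpu_kernel, mem_pages, net_tx, net_rx
X_good = rng.normal([10, 5, 300, 1e4, 5e4], [3, 2, 80, 4e3, 2e4], (200, 5))
X_mal  = rng.normal([25, 12, 500, 8e4, 9e4], [6, 4, 120, 3e4, 4e4], (200, 5))
X = np.vstack([X_good, X_mal])
y = np.array([0] * 200 + [1] * 200)   # 0 = goodware, 1 = malware

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # accuracy on the toy data
```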
Mobile ad hoc networks (MANETs) suffer from network partitioning under group mobility and thus cannot efficiently provide connectivity to all nodes in the network. The Autonomous Mobile Mesh Network (AMMNET) is a new class of MANET that overcomes this weakness, in particular network partitioning. However, AMMNET is vulnerable to routing attacks such as the blackhole attack, in which a malicious node can pose as an intragroup, intergroup, or intergroup bridge router and disrupt the network. In an AMMNET, network survivability is an important aspect of maintaining connectivity and reliable communication, and maintaining security is a challenge given the self-organising nature of the topology. To address this weakness, the proposed approach measures the impact of a security enhancement on AMMNET based on a bait detection scheme. The modified bait approach prevents blackhole nodes from entering the network and helps maintain its reliability. The proposed scheme uses the idea of the Wumpus World concept from artificial intelligence; the modified bait scheme prevents blackhole attacks and secures the network.
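The intuition behind bait detection can be shown in a toy simulation: the source requests a route to a bait address that no node owns, so honest nodes stay silent while a blackhole, which answers every route request to attract traffic, replies and exposes itself. Details below are illustrative, not the paper's exact protocol.

```python
class Node:
    def __init__(self, name, blackhole=False):
        self.name, self.blackhole = name, blackhole
    def replies_to_rreq(self, dest):
        # A blackhole falsely claims a fresh route to any destination.
        return self.blackhole or dest == self.name

def bait_detect(nodes, bait_dest="addr-that-does-not-exist"):
    """Any node answering a RREQ for the bait address is a blackhole."""
    return [n.name for n in nodes if n.replies_to_rreq(bait_dest)]

nodes = [Node("a"), Node("b", blackhole=True), Node("c")]
print(bait_detect(nodes))   # ['b'] -> isolate before real routing begins
```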
The production and sale of counterfeit and substandard pharmaceutical products, such as essential medicines, is an important global public health problem. We describe a chemometric passport-based approach to improve the security of the pharmaceutical supply chain. Our method is based on applying nuclear quadrupole resonance (NQR) spectroscopy to authenticate the contents of medicine packets. NQR is a non-invasive, non-destructive, and quantitative radio frequency (RF) spectroscopic technique. It is sensitive to subtle features of the solid-state chemical environment and thus generates unique chemical fingerprints that are intrinsically difficult to replicate. We describe several advanced NQR techniques, including two-dimensional measurements, polarization enhancement, and spin density imaging, that further improve the security of our authentication approach. We also present experimental results that confirm the specificity and sensitivity of NQR and its ability to detect counterfeit medicines.
In this work we put forward a novel approach using graph partitioning and micro-community detection techniques. We first use algebraic connectivity (the Fiedler eigenvector) and spectral partitioning for community detection. We then use modularity maximization and micro-level clustering to detect micro-communities, based on the concept of community energy. We run the micro-community clustering algorithm recursively with modularity maximization, which helps us identify dense, deep, and hidden community structures. We evaluated our Micro-Community Clustering (MCC) algorithm on various types of complex technological and social community networks: directed weighted, directed unweighted, undirected weighted, and undirected unweighted. Notably, the algorithm is scalable.
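The first stage, spectral bisection via the Fiedler vector, is standard and compact enough to show concretely: build the graph Laplacian L = D - A, take the eigenvector of the second-smallest eigenvalue, and split vertices by its sign.

```python
import numpy as np

def fiedler_bisection(adj):
    """adj: symmetric adjacency matrix of an undirected (weighted) graph."""
    lap = np.diag(adj.sum(axis=1)) - adj      # Laplacian L = D - A
    vals, vecs = np.linalg.eigh(lap)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]                      # algebraic connectivity's eigenvector
    return fiedler >= 0                       # sign gives the two communities

# Two triangles joined by a single edge: a natural two-community graph.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0
print(fiedler_bisection(adj))   # e.g. [ True  True  True False False False]
```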
Massive Open Online Courses (MOOCs) provide a unique opportunity to reach students who would not normally be reached, by eliminating the need to be physically present in the classroom. However, teaching software security coursework outside of a classroom setting can be challenging. What are the challenges of converting security material from an on-campus course to the MOOC format? The goal of this research is to assist educators in constructing software security coursework by providing a comparison of classroom courses and MOOCs. In this work, we compare demographic information, student motivations, and student results from an on-campus software security course and a MOOC version of the same course. We found that the two populations of students differed, with the MOOC reaching a more diverse set of students than the on-campus course. We found that students in the on-campus course had, on average, higher quiz scores than students in the MOOC. Finally, we document our experience running the courses and what we would do differently to assist future educators constructing similar MOOCs.
Building trust among remote developers is challenging because trust typically grows through close face-to-face interaction. In this paper, we present the preparatory design of an empirical study aimed at assessing whether affective trust, established through social communication between developers, is a predictor of successful collaboration in distributed projects. Specifically, we intend to measure affective trust through sentiment analysis of pull-request comments.
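A sketch of what that measurement step could look like, scoring comments with an off-the-shelf analyzer (VADER via NLTK is used here for illustration; the study's actual tooling may differ):

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
sia = SentimentIntensityAnalyzer()

pull_request_comments = [
    "Great catch, thanks a lot! Merging now.",
    "This is wrong again. Please read the contributing guide.",
]
for text in pull_request_comments:
    # compound score in [-1, 1]; aggregate per developer pair as a trust proxy
    print(f"{sia.polarity_scores(text)['compound']:+.3f}  {text}")
```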
Embedded devices are becoming more widespread, interconnected, and web-enabled than ever. However, recent studies have shown that embedded devices are far from being secure. Moreover, many embedded systems rely on web interfaces for user interaction or administration. Web security is still difficult, and therefore the web interfaces of embedded systems represent a considerable attack surface. In this paper, we present the first fully automated framework that applies dynamic firmware analysis techniques to achieve, in a scalable manner, automated vulnerability discovery within embedded firmware images. We apply our framework to study the security of embedded web interfaces running in Commercial Off-The-Shelf (COTS) embedded devices, such as routers, DSL/cable modems, VoIP phones, and IP/CCTV cameras. We introduce a methodology and implement a scalable framework for discovery of vulnerabilities in embedded web interfaces regardless of the devices' vendor, type, or architecture. To reach this goal, we perform full system emulation to achieve the execution of firmware images in a software-only environment, i.e., without involving any physical embedded devices. Then, we automatically analyze the web interfaces within the firmware using both static and dynamic analysis tools. We also present some interesting case studies and discuss the main challenges associated with the dynamic analysis of firmware images and their web interfaces and network services. The observations we make in this paper shed light on an important aspect of embedded devices which was not previously studied at a large scale.
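A sketch of such a pipeline's front end: unpack a firmware image with binwalk and enumerate candidate web-interface files for the analyzers (the full framework additionally boots the extracted filesystem under QEMU for dynamic testing; the image name and hand-off below are hypothetical):

```python
import subprocess
from pathlib import Path

WEB_SUFFIXES = {".php", ".cgi", ".asp", ".html", ".js"}

def unpack_firmware(image: str) -> Path:
    # `binwalk -e` carves and extracts embedded filesystems from the image
    subprocess.run(["binwalk", "-e", image], check=True)
    return Path(f"_{Path(image).name}.extracted")

def find_web_interface(root: Path):
    return [p for p in root.rglob("*")
            if p.is_file() and p.suffix.lower() in WEB_SUFFIXES]

if __name__ == "__main__":
    extracted = unpack_firmware("router-fw.bin")   # hypothetical image
    for path in find_web_interface(extracted):
        print(path)   # candidates for static analysis / emulated web server
```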
The KP-ABE mechanism has emerged as one of the most suitable security schemes for asymmetric encryption and has been widely used to implement access control solutions. However, due to its expensive overhead, it is difficult to deploy this cryptographic scheme in resource-limited networks such as the IoT. As the cloud has become a key infrastructural support for IoT applications, it is attractive to exploit cloud resources to perform its heavy operations. In this paper, a collaborative variant of KP-ABE named C-KP-ABE for cloud-based IoT applications is proposed. Our proposal relies on the computing power and storage capacity of cloud servers and trusted assistant nodes to run heavy operations. A performance analysis is conducted to show the effectiveness of the proposed solution.
Cooperative MIMO communication is a promising technology that enables realistic solutions for improving communication performance with MIMO techniques in wireless networks composed of size- and cost-constrained devices. However, the security problems inherent to cooperative communication also arise. Cryptography can ensure confidentiality in the communication and routing between authorized participants, but it usually cannot prevent attacks from compromised nodes, which may corrupt communications by sending garbled signals. In this paper, we propose a cross-layer approach to enhance security in query-based cooperative MIMO sensor networks. The approach combines an efficient cryptographic technique implemented in the upper layer with a novel information-theoretic compromised-node detection algorithm in the physical layer. In the detection algorithm, a cluster of K cooperative nodes is used to identify up to K - 1 active compromised nodes. When compromised nodes are detected, key revocation is performed to isolate them and reconfigure the cooperative MIMO sensor network. During this process, beamforming is used to avoid information leakage. The proposed security scheme can be easily modified and applied to cognitive radio networks. Simulation results show that the proposed compromised-node detection algorithm is effective and efficient, and that the accuracy of the received information is significantly improved.
This paper designs three distribution devices for the strong smart grid: a novel transformer with a dc-bias-restraining function, an energy-saving contactor, and a controllable reactor with an adjustable intrinsic magnetic state, all based on a nanocomposite magnetic material core. The magnetic performance of this material was analyzed and the relationship between remanence and coercivity was determined. The magnetization and demagnetization circuit for the nanocomposite core was designed based on a three-phase rectification circuit combined with a capacitor charging circuit. The remanence of the nanocomposite core can neutralize the dc bias flux that occurs in the transformer's main core, pull in the movable core of the contactor in place of the traditional fixed core, and adjust the saturation level of the reactor core. The electromagnetic design of the three distribution devices was carried out, and simulation and experimental results verify the correctness of the design, which provides intelligent, energy-saving power equipment for the safe operation of smart power grids.
Mobile apps often collect and share personal data with untrustworthy third-party apps, which may lead to data misuse and privacy violations. Most of the collected data originates from sensors built into the mobile device, where some of the sensors are treated as sensitive by the mobile platform while others permit unconditional access. Examples of privacy-prone sensors are the microphone, camera and GPS system. Access to these sensors is always mediated by protected function calls. On the other hand, the light sensor, accelerometer and gyroscope are considered innocuous. All apps have unrestricted access to their data. Unfortunately, this gap is not always justified. State-of-the-art privacy mechanisms on Android provide inadequate access control and do not address the vulnerabilities that arise due to unmediated access to so-called innocuous sensors on smartphones. We have developed techniques to demonstrate these threats. As part of our demonstration, we illustrate possible attacks using the innocuous sensors on the phone. As a solution, we present ipShield, a framework that provides users with greater control over their resources at runtime so as to protect against such attacks. We have implemented ipShield by modifying the AOSP.
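A conceptual sketch of the kind of runtime mediation ipShield provides: a per-app, per-sensor policy decides whether a reading passes through, is blocked, or is perturbed before release (rules, app names, and the noise model below are illustrative; the real system enforces this inside the Android platform):

```python
import random

POLICY = {  # (app, sensor) -> action; anything unlisted is denied
    ("fitness-app", "accelerometer"): "allow",
    ("game-app", "accelerometer"): "perturb",
}

def mediate(app, sensor, reading):
    action = POLICY.get((app, sensor), "deny")
    if action == "allow":
        return reading
    if action == "perturb":   # degrade utility instead of blocking outright
        return [v + random.gauss(0, 0.5) for v in reading]
    return None               # deny

print(mediate("game-app", "accelerometer", [0.1, 9.8, 0.3]))
print(mediate("flashlight-app", "accelerometer", [0.1, 9.8, 0.3]))
```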
Databases have become one of the most important components in modern software systems. For example, web services, cloud computing systems, and online transaction processing systems all rely heavily on databases. To abstract the complexity of accessing a database, developers make use of Object-Relational Mapping (ORM) frameworks. ORM frameworks provide an abstraction layer between the application logic and the underlying database. Such an abstraction layer automatically maps objects in object-oriented languages to database records, which significantly reduces the amount of boilerplate code that needs to be written. Despite the advantages of using ORM frameworks, we observed several difficulties in maintaining ORM code (i.e., code that makes use of ORM frameworks) while cooperating with our industrial partner. After conducting studies on other open source systems, we find that such difficulties are common in other Java systems. Our study finds that i) ORM cannot completely encapsulate database accesses in objects or abstract the underlying database technology, which may cause ORM code changes to be more scattered; ii) ORM code changes are more frequent than regular code changes, but there is a lack of tools that help developers verify ORM code at compilation time; iii) changes to ORM code are more commonly due to performance or security reasons; however, traditional static code analyzers need to be extended to capture the peculiarities of ORM code in order to detect such problems. Our study highlights the hidden maintenance costs of using ORM frameworks and provides some initial insights about potential approaches to help maintain ORM code. Future studies should carefully examine ORM code, especially given the rising use of ORM in modern software systems.
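The abstraction at issue, in miniature (the study examined Java systems; SQLAlchemy is used below purely because the other sketches in this list are in Python): a class is mapped to a table, and object operations are silently translated to SQL, which is precisely why performance and security issues in ORM code are hard to spot at compile time.

```python
from sqlalchemy import create_engine, Column, Integer, String, select
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")       # in-memory database
Base.metadata.create_all(engine)          # table generated, not hand-written

with Session(engine) as session:
    session.add(User(name="ada"))         # no SQL written by the developer
    session.commit()
    # a SELECT is issued behind the scenes:
    print(session.scalars(select(User)).first().name)
```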
Human-machine trust is a critical mitigating factor in many HCI instances. Lack of trust in a system can lead to system disuse, whilst over-trust can lead to inappropriate use. Whilst human-machine trust has been examined extensively from within a technico-social framework, few efforts have been made to link the dynamics of trust within a steady-state operator-machine environment to the existing literature on the psychology of learning. We set out to recreate a commonly reported learning phenomenon within a trust acquisition environment: users learning which algorithms can and cannot be trusted to reduce traffic in a city. We failed to replicate (after repeated attempts) the learning phenomenon of "blocking", resulting in the finding that people consistently make a very specific error in assigning trust to cues under conditions of uncertainty. This error can be seen as a cognitive bias and has important implications for HCI.