Biblio
This paper explores experiences with ring and bracelet activity tracker form factors. During the first week of a 2-week field study, participants (n=6) wore non-functional mock-ups of ring and bracelet wellness trackers and provided feedback on their experiences. During the second week, participants used a commercial wellness tracking ring, which collected physical exercise and sleep data and visualized it in a mobile application. Our salient findings, based on 196 user diary entries, suggest that the ring form factor is considered beautiful and aesthetic and that it contributes to the wearer's image. However, the bracelet form factor is more practical for an active lifestyle and is preferred in situations where the hands perform tasks that require gripping objects, such as sports, cleaning the car, cooking, and washing dishes. Users strongly identified the ring form factor as jewellery that is intended to be seen, whereas bracelets were considered hidden and inconspicuous elements of the user's ensemble.
Face recognition has attained greater importance in biometric authentication due to its non-intrusive ability to identify individuals at varying stand-off distances. Face recognition based on multi-spectral imaging has recently gained prime importance due to its ability to capture spatial and spectral information across the spectrum. Our first contribution in this paper is to apply extended multi-spectral face recognition to two different age groups. The second contribution is to show empirically how face recognition performs for each of these groups. To this end, we developed a multi-spectral imaging sensor to capture a facial database for two different age groups (≤ 15 years and ≥ 20 years) at nine different spectral bands covering the 530 nm to 1000 nm range. We then collected a new facial image database for the two age groups, comprising 168 individuals. Extensive experimental evaluation is performed independently on the two age-group databases using four different state-of-the-art face recognition algorithms. We evaluate the verification and identification rates across individual spectral bands and the fused spectral band for both age groups. The results show a higher recognition rate for the ≥ 20 years group than for the ≤ 15 years group, indicating variation in face recognition performance across age groups.
In this work we present a study that evaluates and compares two block ciphers, AES and PRESENT, in the context of lightweight cryptography for smartphone security applications. To the best of our knowledge, this is the first comparison between these ciphers using a smartphone as the computing platform. AES is the standard for symmetric encryption, and PRESENT is one of the first ultra-lightweight ciphers proposed in the literature and is included in ISO/IEC 29192-2. In our study, we consider execution time, voltage consumption, and memory usage as the comparison metrics. The two block ciphers were evaluated through several experiments on a low-cost smartphone using Android's built-in tools. From the results we conclude that AES performs statistically better for general-purpose encryption, although PRESENT delivers better results block-to-block.
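As a hedged illustration of how the execution-time metric might be measured on Android (which runs Java code), the sketch below times AES on single 16-byte blocks with the standard javax.crypto API. PRESENT is not part of the Android platform, so a third-party implementation would be timed the same way (the PresentCipher name in the closing comment is hypothetical). This is a generic micro-benchmark sketch, not the paper's measurement setup, and it covers execution time only, not voltage or memory usage.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;

public class AesTimingSketch {
    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16];
        byte[] block = new byte[16];
        new SecureRandom().nextBytes(key);
        new SecureRandom().nextBytes(block);

        // AES with no padding on a single 16-byte block isolates the raw cipher cost.
        Cipher aes = Cipher.getInstance("AES/ECB/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));

        int iterations = 100_000;
        // Warm-up so JIT compilation does not distort the measurement.
        for (int i = 0; i < 10_000; i++) aes.doFinal(block);

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) aes.doFinal(block);
        long elapsed = System.nanoTime() - start;
        System.out.printf("AES: %.1f ns per 16-byte block%n", (double) elapsed / iterations);

        // PRESENT is not in javax.crypto; a third-party implementation (e.g. a
        // hypothetical PresentCipher class) would be benchmarked the same way,
        // adjusting for its 64-bit (8-byte) block size.
    }
}
```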
This paper explores the opportunities for incorporating shape-changing properties into everyday home appliances. Through a design research approach, the vacuum cleaner is used as a design case with the overall aim of enhancing the user experience by transforming the appliance into a sensing object. Three fully functional prototypes were developed to illustrate how shape change can fit into the context of our homes. The shape-changing functionalities are: 1) a digital power button that supports dynamic affordances, 2) an analog handle that mediates the amount of dust particles through haptic feedback, and 3) a body that behaves in a lifelike manner depending on how the user treats it. We report the development and implementation of the functional prototypes as well as technical limitations and initial user reactions to the prototypes.
Efficient implementation of double point multiplication is crucial for elliptic curve cryptographic systems. We propose efficient algorithms and architectures for the computation of double point multiplication on binary elliptic curves and provide a comparative analysis of their performance at the 112-bit security level. To the best of our knowledge, this is the first work in the literature that considers the design and implementation of simultaneous computation of double point multiplication. We first present algorithms for the three main double point multiplication methods. Then, we perform data-flow analysis and propose hardware architectures for the presented algorithms. Finally, we implement the proposed architectures on an FPGA platform for comparison purposes and report the area and timing results. Our results indicate that differential-addition-chain-based algorithms are better suited to compute double point multiplication over binary elliptic curves for high-performance applications.
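For context, one classical way to compute kP + lQ in a single pass over the scalar bits is Shamir's trick (Straus' algorithm); the sketch below illustrates it against an assumed Point abstraction (the twice/add methods and the infinity argument are placeholders, not a real library API). The differential addition chain methods favored by the paper's results proceed differently; this sketch only makes concrete what simultaneous double point multiplication computes.

```java
import java.math.BigInteger;

/** Minimal sketch of simultaneous double point multiplication R = kP + lQ via
 *  Shamir's trick (Straus' algorithm). The Point type is an assumed abstraction
 *  of a curve group element, not part of any particular library. */
interface Point {
    Point twice();          // point doubling
    Point add(Point other); // point addition
}

final class ShamirTrick {
    static Point doubleMul(BigInteger k, BigInteger l, Point P, Point Q, Point infinity) {
        Point PQ = P.add(Q);                     // precompute P + Q once
        Point R = infinity;
        int bits = Math.max(k.bitLength(), l.bitLength());
        for (int i = bits - 1; i >= 0; i--) {
            R = R.twice();                       // one shared doubling per bit position
            boolean kb = k.testBit(i), lb = l.testBit(i);
            if (kb && lb)      R = R.add(PQ);
            else if (kb)       R = R.add(P);
            else if (lb)       R = R.add(Q);
        }
        return R;
    }
}
```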
Many applications of mobile computing require the computation of the dot-product of two vectors. For example, the dot-product of an individual's genome data and the gene biomarkers of a health center can help detect diseases in m-Health, and that of the interests of two persons can facilitate friend discovery in mobile social networks. Nevertheless, exposing the inputs of dot-product computation discloses sensitive information about the two participants, leading to severe privacy violations. In this paper, we tackle the problem of privacy-preserving dot-product computation targeting mobile computing applications, in which secure channels are hard to establish and computational efficiency is highly desirable. We first propose two basic schemes and then present corresponding advanced versions to improve efficiency and enhance privacy-protection strength. Furthermore, we theoretically prove that our proposed schemes can simultaneously achieve privacy preservation, non-repudiation, and accountability. Our numerical results verify the performance of the proposed schemes in terms of communication and computational overheads.
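As background only, one textbook way to compute a dot product while keeping one party's vector encrypted is additively homomorphic encryption. The Paillier-based sketch below illustrates that idea; it is not one of the schemes proposed in the paper above (which target lighter-weight mobile settings), and all names, key sizes, and vectors are illustrative.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

/** Textbook Paillier encryption, used only to illustrate a privacy-preserving
 *  dot product; this is NOT the construction proposed in the paper above. */
final class PaillierSketch {
    private static final SecureRandom RNG = new SecureRandom();
    final BigInteger n, nSquared;            // public key
    private final BigInteger lambda, mu;     // private key

    PaillierSketch(int bits) {
        BigInteger p = BigInteger.probablePrime(bits / 2, RNG);
        BigInteger q = BigInteger.probablePrime(bits / 2, RNG);
        n = p.multiply(q);
        nSquared = n.multiply(n);
        BigInteger pm1 = p.subtract(BigInteger.ONE), qm1 = q.subtract(BigInteger.ONE);
        lambda = pm1.multiply(qm1).divide(pm1.gcd(qm1));    // lcm(p-1, q-1)
        mu = lambda.modInverse(n);                          // valid because g = n + 1
    }

    BigInteger encrypt(BigInteger m) {
        BigInteger r = new BigInteger(n.bitLength() - 1, RNG).add(BigInteger.ONE);
        // c = (1 + m*n) * r^n mod n^2, using the generator g = n + 1
        return BigInteger.ONE.add(m.multiply(n)).multiply(r.modPow(n, nSquared)).mod(nSquared);
    }

    BigInteger decrypt(BigInteger c) {
        BigInteger u = c.modPow(lambda, nSquared);
        return u.subtract(BigInteger.ONE).divide(n).multiply(mu).mod(n);
    }

    public static void main(String[] args) {
        long[] x = {3, 1, 4};                // Alice's private vector
        long[] y = {2, 7, 1};                // Bob's private vector
        PaillierSketch alice = new PaillierSketch(1024);

        // Alice sends element-wise encryptions of x to Bob; Bob never sees x.
        BigInteger[] cx = new BigInteger[x.length];
        for (int i = 0; i < x.length; i++) cx[i] = alice.encrypt(BigInteger.valueOf(x[i]));

        // Bob raises each ciphertext to y_i and multiplies: the result encrypts <x, y>.
        BigInteger acc = alice.encrypt(BigInteger.ZERO);
        for (int i = 0; i < y.length; i++)
            acc = acc.multiply(cx[i].modPow(BigInteger.valueOf(y[i]), alice.nSquared)).mod(alice.nSquared);

        System.out.println("dot product = " + alice.decrypt(acc));   // prints 17
    }
}
```

In this flow Bob learns nothing about x and Alice learns only the dot product, but the modular exponentiations involved are exactly the kind of cost that motivates lighter-weight schemes on mobile devices.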
Standardization and harmonization efforts have reached a consensus towards using a special-purpose Vehicular Public-Key Infrastructure (VPKI) in upcoming Vehicular Communication (VC) systems. However, there are still several technical challenges with no conclusive answers; one such important yet open challenge is the acquisition of short-term credentials (pseudonyms): how should each vehicle interact with the VPKI, e.g., how frequently and for how long? Should each vehicle itself determine the pseudonym lifetime? Answering these questions is far from trivial: each choice can affect both user privacy and system performance and possibly, as a result, security. In this paper, we make a novel systematic effort to address this multifaceted question. We craft three generally applicable policies and experimentally evaluate the VPKI system performance, leveraging two large-scale mobility datasets. We consider the pseudonym acquisition policies that are most promising in terms of efficiency; we find that, within this class of policies, the most promising policy in terms of privacy protection can be supported with moderate overhead. Moreover, in all cases, this work is the first to provide tangible evidence that a state-of-the-art VPKI can serve sizable areas or domains with modest computing resources.
Evolutionary Computation (EC) has been used with great success on various real-world problems. One domain abundant with difficult problems is cryptology. Cryptology can be divided into cryptography, which, informally speaking, studies methods for ensuring secrecy (but also authenticity, privacy, etc.), and cryptanalysis, which deals with methods for breaking cryptographic systems. Although not always in an obvious way, EC can be applied to problems from both domains. This tutorial will first give a brief introduction to cryptology intended for a general audience (therefore omitting proofs and the mathematics behind many concepts). Afterwards, we concentrate on several topics from cryptography that have been successfully tackled with EC so far and discuss why those topics are suitable for EC. However, care must be taken, since there exist a number of problems that seem impossible to solve with EC, and one needs to recognize the limitations of these heuristics. We will discuss the choice of appropriate EC techniques (GA, GP, CGP, ES, multi-objective optimization, etc.) for various problems and assess the importance of that choice. Furthermore, we will discuss the gap between the cryptographic community and the EC community and what that gap means for the results. In doing so, we will place special emphasis on the perspective that cryptography offers a source of benchmark problems for the EC community. To conclude, we will present a number of topics we consider to be strong research choices that can have real-world impact. In that part, we pay special attention to cryptographic problems where the cryptographic community has successfully applied EC but which have remained outside the focus of the EC community. This tutorial will also include live demos of EC in action on cryptographic problems. We will present several problems, ways of encoding solutions, and the impact of algorithm choice, and finally we will run some experiments to show the results and discuss how to assess them from a cryptographic perspective.
Media streaming largely dominates Internet traffic, and the trend will continue to grow in the coming years. Information-Centric Networking (ICN) has attracted many researchers as a way to distribute media content efficiently. Since end users usually obtain content from indeterminate caches in ICN, the publisher cannot enforce data security and access control through the caches. Hence, self-contained protection is important for cached content. Attribute-based encryption (ABE) is considered the preferred solution to achieve this goal. However, existing ABE schemes usually suffer from efficiency problems: the number of exponentiations in key generation and of pairing operations in decryption each grows linearly with the number of attributes involved, which makes them costly. In this paper, we propose an efficient key-policy ABE with fast key generation and decryption (FKP-ABE). In key generation, we eliminate exponentiation and require only multiplications/divisions for each attribute in the access policy. In decryption, we reduce the number of pairing operations to a constant, no matter how many attributes are used. The efficiency analysis indicates that our scheme performs better than existing KP-ABE schemes. Finally, we present an implementation framework that incorporates the proposed FKP-ABE into the ICN architecture.
Information Systems curricula require ongoing and frequent review [2] [11]. Furthermore, such curricula must be flexible because of the fast-paced, dynamic nature of the workplace. Such flexibility can be maintained by modernizing course content, including exchanging hardware or software for newer versions. Alternatively, flexibility can arise from incorporating new information into curricula from other disciplines. One field where the pace of change is extremely high is cybersecurity [3]. Students are left with outdated skills when curricula lag behind the pace of change in industry. For example, cryptography is a required learning objective in the DHS/NSA Center of Academic Excellence (CAE) knowledge criteria [1]. However, the overarching curriculum associated with basic ciphers has gone unchanged for decades. Indeed, a general problem in cybersecurity education is that students lack fundamental knowledge in areas such as ciphers [5]. In response, researchers have developed a variety of interactive classroom visualization tools [5] [8] [9]. Such tools visualize the standard approach to frequency analysis of simple substitution ciphers, which includes review of the most common single letters in ciphertext. While fundamental ciphers such as the monoalphabetic substitution cipher have not been updated (these are historical ciphers), collective understanding of how humans interact with language has changed. Updated understanding in both English language pedagogy [10] [12] and automated cryptanalysis of substitution ciphers [4] potentially renders the interactive classroom visualization tools incomplete or outdated. Classroom visualization tools are powerful teaching aids, particularly for abstract concepts. Existing research has established that such tools promote an active learning environment that translates not only to effective learning conditions but also to higher student retention rates [7]. However, visualization tools require extensive planning and design when used to actively engage students with detailed, specific knowledge units such as ciphers [7] [8]. Accordingly, we propose a heatmap-based frequency analysis visualization solution that (a) incorporates digraph and trigraph language-processing norms and (b) enhances the active learning pedagogy inherent in visualization tools. Preliminary results indicate that study participants take approximately 15% longer to learn the heatmap-based frequency analysis technique compared to traditional frequency analysis but demonstrate a 50% increase in efficacy when tasked with solving simple substitution ciphers. Further, a heatmap-based solution contributes positively to the field insofar as educators have an additional tool to use in the classroom. As well, the heatmap visualization tool may allow researchers to comparatively examine the efficacy of visualization tools in the cryptanalysis of monoalphabetic substitution ciphers.
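To make the digraph idea concrete, the sketch below (illustrative code only, not the authors' tool) builds the 26-by-26 digraph-frequency matrix that a heatmap visualization would color cell by cell; trigraphs would extend the same counting to three-letter windows.

```java
import java.util.Locale;

/** Minimal sketch: the 26x26 digraph-frequency matrix that a heatmap-based
 *  frequency analysis tool could visualize for a substitution-cipher ciphertext. */
public class DigraphHeatmapSketch {
    public static void main(String[] args) {
        // Toy ciphertext: "THIS IS AN EXAMPLE CIPHERTEXT" under a Caesar shift of 3.
        String ciphertext = "WKLVLVDQHADPSOHFLSKHUWHAW";
        String letters = ciphertext.toUpperCase(Locale.ROOT).replaceAll("[^A-Z]", "");

        int[][] counts = new int[26][26];
        for (int i = 0; i + 1 < letters.length(); i++) {
            counts[letters.charAt(i) - 'A'][letters.charAt(i + 1) - 'A']++;
        }

        // Print the matrix; a visualization tool would color each cell by its count so
        // that frequent ciphertext digraphs (candidates for TH, HE, IN, ...) stand out.
        System.out.printf("%3s", "");
        for (char c = 'A'; c <= 'Z'; c++) System.out.printf("%3s", c);
        System.out.println();
        for (int row = 0; row < 26; row++) {
            System.out.printf("%3s", (char) ('A' + row));
            for (int col = 0; col < 26; col++) {
                System.out.printf("%3s", counts[row][col] == 0 ? "." : String.valueOf(counts[row][col]));
            }
            System.out.println();
        }
    }
}
```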
It is common practice for data scientists to acquire and integrate disparate data sources to achieve higher-quality results. But even with a perfectly cleaned and merged data set, two fundamental questions remain: (1) is the integrated data set complete, and (2) what is the impact of any unknown (i.e., unobserved) data on query results? In this work, we develop and analyze techniques to estimate the impact of the unknown data (a.k.a. unknown unknowns) on simple aggregate queries. The key idea is that the overlap between different data sources enables us to estimate the number and values of the missing data items. Our main techniques are parameter-free and do not assume prior knowledge about the distribution. Through a series of experiments, we show that estimating the impact of unknown unknowns is invaluable to better assess the results of aggregate queries over integrated data sources.
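The overlap intuition can be illustrated with a simple Lincoln-Petersen-style (capture-recapture) estimate over two sources. The paper's estimators are more general, so treat the sketch below, with its made-up data, purely as an illustration of the idea.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Sketch: use the overlap between two independently collected sources to estimate
 *  how many items exist in total, and hence how many are still unobserved. */
public class UnknownUnknownsSketch {
    public static void main(String[] args) {
        // Two sources listing, say, restaurants in a city (made-up identifiers).
        List<String> sourceA = List.of("alfa", "bravo", "carlo", "delta", "echo");
        List<String> sourceB = List.of("carlo", "delta", "echo", "foxtrot");

        Set<String> overlap = new HashSet<>(sourceA);
        overlap.retainAll(new HashSet<>(sourceB));

        double nA = sourceA.size(), nB = sourceB.size(), both = overlap.size();
        double estimatedTotal = nA * nB / both;        // Lincoln-Petersen estimator

        Set<String> observed = new HashSet<>(sourceA);
        observed.addAll(sourceB);
        double unknownUnknowns = estimatedTotal - observed.size();

        System.out.printf("observed: %d, estimated total: %.1f, estimated missing: %.1f%n",
                observed.size(), estimatedTotal, unknownUnknowns);
        // A COUNT(*) over the integrated data could then be corrected by the estimated
        // number of missing items (and a SUM by their estimated values).
    }
}
```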
Systematic implementation of System-on-Chip (SoC) security policies typically involves smart wrappers that extract local security-critical events of interest from Intellectual Property (IP) blocks, together with a control engine that communicates with the wrappers to analyze the events for policy adherence. However, developing customized wrappers at each IP for security requirements may incur significant overhead in area and hardware resources. In this paper, we address this problem by exploiting the extensive design-for-debug (DfD) instrumentation already available on-chip. In addition to reducing the overall hardware overhead, the approach adds flexibility to the security architecture itself, e.g., permitting the use of on-field DfD instrumentation, survivability, and control hooks to patch the security policy implementation in response to bugs and attacks found post-silicon or to security requirements that change in the field. We show how to design a scalable interface between the security and debug architectures that provides the benefits of flexibility to security policy implementation without interfering with existing debug and survivability use cases and at minimal additional cost in energy and design complexity.
The current trend for large scientific computing problems is to align as much as possible to a Single Program Multiple Data (SPMD) scheme when the application algorithms are conducive to parallelization and vectorization. This reduces code complexity, because the processors (or computational nodes) perform the same instructions, and allows for better performance, as algorithms work on local data sets instead of continuously transferring data from one locality to another. However, certain applications, such as stencil problems, demonstrate the need to move data to or from remote localities. This involves an additional degree of complexity, as one must know with which localities to exchange data. To solve this issue, Fortran has extended its scalar element indexing approach to distributed structures of elements. In this extension, a structure of scalar elements is attributed a "co-index" and lives in a specific locality. A co-index provides the application with enough information to retrieve the corresponding data reference. In C++, containers present themselves as a "smarter" alternative to Fortran arrays, but there are still no corresponding standardized features similar to the Fortran co-indexing approach. In this paper, we present an implementation of such features in HPX, a general-purpose C++ runtime system for applications of any scale. We describe how the combination of the HPX features and the current C++ Standard makes it easy to define a high-performance API similar to Co-Array Fortran.
Mobile apps often collect and share personal data with untrustworthy third-party apps, which may lead to data misuse and privacy violations. Most of the collected data originates from sensors built into the mobile device, some of which are treated as sensitive by the mobile platform while others permit unconditional access. Examples of privacy-prone sensors are the microphone, camera, and GPS; access to these sensors is always mediated by protected function calls. On the other hand, the light sensor, accelerometer, and gyroscope are considered innocuous, and all apps have unrestricted access to their data. Unfortunately, this distinction is not always justified. State-of-the-art privacy mechanisms on Android provide inadequate access control and do not address the vulnerabilities that arise due to unmediated access to so-called innocuous sensors on smartphones. We have developed techniques to demonstrate these threats, illustrating possible attacks that use the innocuous sensors on the phone. As a solution, we present ipShield, a framework that provides users with greater control over their resources at runtime so as to protect against such attacks. We have implemented ipShield by modifying the AOSP.
Databases have become one of the most important components in modern software systems. For example, web services, cloud computing systems, and online transaction processing systems all rely heavily on databases. To abstract the complexity of accessing a database, developers make use of Object-Relational Mapping (ORM) frameworks. ORM frameworks provide an abstraction layer between the application logic and the underlying database. Such an abstraction layer automatically maps objects in object-oriented languages to database records, which significantly reduces the amount of boilerplate code that needs to be written. Despite the advantages of using ORM frameworks, we observed several difficulties in maintaining ORM code (i.e., code that makes use of ORM frameworks) while working with our industrial partner. After conducting studies on other open source systems, we find that such difficulties are common in other Java systems. Our study finds that i) ORM cannot completely encapsulate database accesses in objects or abstract the underlying database technology, which may cause ORM code changes to be more scattered; ii) ORM code changes are more frequent than regular code changes, yet there is a lack of tools that help developers verify ORM code at compilation time; and iii) changes to ORM code are more commonly due to performance or security reasons; however, traditional static code analyzers need to be extended to capture the peculiarities of ORM code in order to detect such problems. Our study highlights the hidden maintenance costs of using ORM frameworks and provides some initial insights about potential approaches to help maintain ORM code. Future studies should carefully examine ORM code, especially given the rising use of ORM in modern software systems.
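For readers unfamiliar with ORM code, the sketch below shows a minimal JPA-style mapping (one common Java ORM API); the entity, fields, and repository are invented for illustration. The point is that the SQL issued by persist() stays implicit, which is exactly what makes ORM code changes scattered and hard to verify at compilation time.

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Minimal JPA-style mapping: the annotations bind the class to a database table,
// so persisting an object issues the corresponding INSERT without hand-written SQL.
@Entity
public class User {
    @Id @GeneratedValue
    private Long id;
    private String name;

    public User() { }                    // JPA requires a no-argument constructor
    public User(String name) { this.name = name; }
}

class UserRepository {
    private final EntityManager em;
    UserRepository(EntityManager em) { this.em = em; }

    void save(User user) {
        // The ORM framework translates this call into SQL; which statements run, and
        // when, is hidden from the developer, which is the root of the maintenance
        // issues (scattered changes, no compile-time checks) discussed in the study above.
        em.getTransaction().begin();
        em.persist(user);
        em.getTransaction().commit();
    }
}
```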
We created detailed profiles of the energy consumed by common operations on Java List, Map, and Set abstractions. The results show that the alternative data types for these abstractions differ significantly in energy consumption depending on the operation. For example, an ArrayList consumes less energy than a LinkedList if items are inserted in the middle or at the end, but more energy than a LinkedList if items are inserted at the start of the list. To explain the results, we explored the memory usage and the bytecode executed during each operation. Expensive computation tasks in the analyzed bytecode traces appeared to have an energy impact, whereas memory usage did not contribute. We evaluated our profiles by using them to selectively replace the Collections types used in six applications and libraries. We found that choosing the wrong Collections type, as indicated by our profiles, can cost up to 300% more energy than the most efficient choice. Our work shows that the usage context of a data structure, together with our measured energy profiles, can be used to decide between alternative Collections implementations.
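A sketch of the kind of micro-benchmark that underlies such profiles is shown below. It measures elapsed time rather than energy (energy profiling would additionally require a power meter or an energy-measurement tool), and the workload size is arbitrary, so treat it purely as an illustration of the start-versus-end insertion asymmetry mentioned above.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

/** Sketch of a Collections micro-benchmark in the spirit of the profiles above:
 *  times insertions at the start vs. the end of a List implementation. */
public class CollectionsBenchSketch {
    static long timeInserts(List<Integer> list, boolean atStart, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            if (atStart) list.add(0, i);   // shifts all elements in an ArrayList
            else list.add(i);              // amortized O(1) append for an ArrayList
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 50_000;
        System.out.println("ArrayList  end   : " + timeInserts(new ArrayList<>(), false, n) + " ns");
        System.out.println("LinkedList end   : " + timeInserts(new LinkedList<>(), false, n) + " ns");
        System.out.println("ArrayList  start : " + timeInserts(new ArrayList<>(), true, n) + " ns");
        System.out.println("LinkedList start : " + timeInserts(new LinkedList<>(), true, n) + " ns");
    }
}
```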
We consider how the I-V characteristics of emerging transistors (particularly those sponsored by STARnet) might be employed to enhance hardware security. An emphasis of this work is to move beyond hardware implementations of physically unclonable functions (PUFs) and random number generators (RNGs). We highlight how new devices (i) may enable more sophisticated logic obfuscation for IP protection, (ii) could help to prevent fault injection attacks, and (iii) could prevent differential power analysis in lightweight cryptographic systems, among other uses.
In this paper, we focus on defining estimators to predict the number of method calls in Android apps. The estimation models are based on information from requirements specification documents (e.g., the number of actors, number of use cases, and number of classes in the conceptual model). We used a dataset containing information on 23 Android apps. After data cleaning, we applied linear regression to build estimation models on 21 data points. Results suggest that measures gathered from requirements specification documents can be considered good predictors for estimating the number of internal calls (i.e., methods invoking other methods present in the app) and external calls (i.e., invocations of APIs), as well as their sum.
The physical consequences of false data injection cyber-attacks on power systems are considered. Prior work has shown that the worst-case consequences of such an attack can be determined using a bi-level optimization problem, wherein an attack is chosen to maximize the physical power flow on a target line subsequent to re-dispatch. This problem can be solved as a mixed-integer linear program, but it is difficult to scale to large systems due to numerical challenges. Three new computationally efficient algorithms to solve this problem are presented. These algorithms provide lower and upper bounds on the system vulnerability, measured as the maximum power flow subsequent to an attack. Using these techniques, vulnerability assessments are conducted for the IEEE 118-bus system and the 2383-bus Polish system.
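For orientation, the bi-level problem described above has roughly the following schematic shape (generic placeholder notation, not the paper's formulation): the outer problem chooses a sparse, detection-evading measurement injection to maximize the post-re-dispatch flow on a target line, while the inner problem models the operator's re-dispatch under the corrupted measurements.

```latex
% Schematic bi-level form; all symbols are generic placeholders.
\begin{aligned}
\max_{a}\;    & \lvert P_{\ell^\star}(x^\star)\rvert
              && \text{attacker: flow on target line } \ell^\star \text{ after re-dispatch} \\
\text{s.t.}\; & \lVert a \rVert_0 \le k,\quad a \in \mathcal{A}
              && \text{limited, detection-evading injections} \\
              & x^\star \in \arg\min_{x \in \mathcal{X}(z + a)} c(x)
              && \text{operator: re-dispatch given corrupted measurements } z + a
\end{aligned}
```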
With the world population becoming increasingly urban and the multiplication of mega-cities, urban leaders have responded with plans calling for so-called smart cities, which rely on instantaneous access to information via mobile devices for intelligent management of resources. Coupled with the advent of the smartphone as the main platform for accessing the Internet, this has created the conditions for a looming wireless bandwidth crunch. This paper presents a content delivery infrastructure that relies on off-the-shelf technology and the public transportation network (PTN) to relieve the wireless bandwidth crunch in urban centers. Our solution installs WiFi access points on selected public bus stations and buses, using the latter as data mules and creating a delay-tolerant network capable of carrying content that users can access while using public transportation. Building such an infrastructure poses several challenges, including congestion points in major hubs and the cost of the additional hardware necessary for secure communications. To address these challenges, we propose a 3-Tier architecture that guarantees end-to-end delivery and minimizes hardware cost. Trace-based simulations from three major European cities (Paris, Helsinki, and Toulouse) demonstrate the viability of our design choices. In particular, the 3-Tier architecture is shown to guarantee end-to-end connectivity and to reduce the deployment cost severalfold while delivering at least as many packets as a baseline architecture.
As the number of small, battery-operated, wireless-enabled devices deployed in various applications of the Internet of Things (IoT), Wireless Sensor Networks (WSN), and Cyber-Physical Systems (CPS) rapidly increases, so does the number of data streams that must be processed. In cases where data do not need to be archived, centrally processed, or federated, in-network data processing is becoming more common. For this purpose, various platforms such as DRAGON, Innet, and CJF have been proposed. However, these platforms assume that all nodes in the network are the same, i.e., that the network is homogeneous. As Moore's law still applies, nodes are becoming smaller, more powerful, and more energy-efficient each year, a trend that will continue for the foreseeable future. Therefore, we can expect that as sensor networks are extended and updated, hardware heterogeneity will soon be common, the same trend seen in cloud computing infrastructures. This heterogeneity introduces new challenges in choosing an in-network data processing node, as not only its location but also its capabilities must be considered. This paper introduces a new methodology to tackle this challenge, comprising three new algorithms (Request, Traverse, and Mixed) for efficiently locating an in-network data processing node while taking into account not only its position within the network but also its hardware capabilities. The proposed algorithms are evaluated against a naïve approach and achieve up to a 90% reduction in network traffic during long-term data processing, while spending a similar amount of time in the discovery phase.