Biblio
Wearables, such as Fitbit, Apple Watch, and Microsoft Band, with their rich collection of sensors, facilitate the tracking of healthcare- and wellness-related metrics. However, the assessment of the physiological metrics collected by these devices could also be useful in identifying the user of the wearable, e.g., to detect unauthorized use or to correctly associate the data with a user if wearables are shared among multiple users. Further, researchers and healthcare providers often rely on these smart wearables to monitor research subjects and patients in their natural environments over extended periods of time. Here, it is important to associate the sensed data with the corresponding user and to detect if a device is being used by an unauthorized individual, to ensure study compliance. Existing one-time authentication approaches using credentials (e.g., passwords, certificates) or trait-based biometrics (e.g., face, fingerprints, iris, voice) might fail, since such credentials can easily be shared among users. In this paper, we present a continuous and reliable wearable-user authentication mechanism using coarse-grained minute-level physical activity (step counts) and physiological data (heart rate, calorie burn, and metabolic equivalent of task). From our analysis of 421 Fitbit users from a two-year-long health study, we are able to statistically distinguish nearly 100% of the subject-pairs and to identify subjects with an average accuracy of 92.97%.
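The abstract does not spell out the statistical procedure, but the pairwise-distinguishability idea can be sketched with a two-sample test on minute-level data. Below is a minimal illustration assuming synthetic heart-rate streams for two hypothetical subjects and a Kolmogorov-Smirnov test as the distinguishing statistic; both the data and the choice of test are assumptions, not the paper's method.

```python
# Illustrative sketch: statistically distinguishing a pair of wearable users
# from coarse minute-level heart-rate samples (synthetic data, assumed test).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical minute-level heart-rate streams for two subjects (one day each).
subject_a = rng.normal(loc=72, scale=8, size=1440)   # resting HR ~72 bpm
subject_b = rng.normal(loc=80, scale=10, size=1440)  # resting HR ~80 bpm

# Two-sample KS test: can the two minute-level distributions be told apart?
statistic, p_value = ks_2samp(subject_a, subject_b)
distinguishable = p_value < 0.01
print(f"KS statistic={statistic:.3f}, p={p_value:.2e}, distinguishable={distinguishable}")
```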
This paper develops a model of the Wells turbine using the Xilinx System Generator (XSG) toolbox of Matlab. The Wells turbine is very popular in oscillating water column (OWC) wave energy converters. Typically, the turbine behavior is emulated by a controlled DC or AC motor coupled with a generator. It is therefore necessary to model the OWC and Wells turbine in real-time software such as XSG, which generates the OWC turbine behavior in real time. Next, a PI control scheme is proposed for controlling the DC motor so as to emulate the Wells turbine efficiently. The overall performance of the system is tested with a squirrel cage induction generator (SCIG). The Pierson-Moskowitz and JONSWAP irregular wave models have been applied to validate the OWC model. Finally, the simulation results for the Wells turbine and the PI controller are discussed.
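As a rough illustration of the emulation control loop described above, the sketch below implements a discrete PI controller driving a first-order DC motor model toward a reference speed; the motor model, gains, and sample time are illustrative assumptions rather than the paper's XSG design.

```python
# Minimal sketch of a discrete PI speed controller for the emulating DC motor.
# The first-order motor model and the gains are illustrative assumptions.
kp, ki, dt = 0.8, 2.0, 0.01          # assumed PI gains and sample time
tau, gain = 0.5, 1.0                 # assumed motor time constant and gain
speed, integral = 0.0, 0.0

reference = 150.0                    # target rotor speed (rad/s) from the turbine model
for step in range(1000):
    error = reference - speed
    integral += error * dt
    voltage = kp * error + ki * integral          # PI control law
    # First-order (Euler) approximation of the DC motor speed response.
    speed += dt * (gain * voltage - speed) / tau

print(f"final speed = {speed:.1f} rad/s (reference {reference} rad/s)")
```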
Even though build automation tools help to reduce errors and enable rapid releases of software changes, the use of build automation tools is not widespread amongst software practitioners. Software practitioners perceive build automation tools as complex, which can hinder the adoption of these tools. How well founded such a perception is can be determined by a systematic exploration of the adoption factors that influence the usage of build automation tools. The goal of this paper is to aid software practitioners in increasing their usage of build automation tools by identifying the adoption factors that influence usage of these tools. We conducted a survey to empirically identify the adoption factors that influence usage of build automation tools. We obtained survey responses from 268 software professionals who work at NestedApps, Red Hat, as well as contribute to open source software. We observe that adoption factors related to complexity do not have the strongest influence on usage of build automation tools. Instead, we observe compatibility-related adoption factors, such as adjustment with existing tools and adjustment with the practitioner's existing workflow, to have a stronger influence on usage of build automation tools. Findings from our paper suggest that usage of build automation tools might increase if build automation tools fit well with practitioners' existing workflow and tool usage, and if usage of build automation tools is made more visible among practitioners' peers.
End-users in emerging markets experience poor web performance due to a combination of three factors: high server response time, limited edge bandwidth, and the complexity of web pages. The absence of cloud infrastructure in developing regions and the limited bandwidth experienced by edge nodes constrain the effectiveness of conventional caching solutions for these contexts. This paper describes the design, implementation, and deployment of xCache, a cloud-managed Internet caching architecture that aims to proactively profile popular web pages and maintain the liveness of popular content at software-defined edge caches to enhance the cache hit rate with minimal bandwidth overhead. xCache uses a Cloud Controller that continuously analyzes active cloud-managed web pages and derives an object-group representation of web pages based on the objects of a page. Using this object-group representation, xCache computes a bandwidth-aware utility measure to derive the most valuable configuration for each edge cache. Our preliminary real-world deployment across university campuses in three developing regions demonstrates its potential compared to conventional caching by improving cache hit rates by about 15%. Our evaluations of xCache have also shown that it can be applied in conjunction with other web optimization solutions such as Shandian, and can improve page load times by more than 50%.
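The bandwidth-aware utility computation can be pictured as a budgeted selection problem. The sketch below greedily picks hypothetical object groups by expected hits per byte of refresh bandwidth until an edge cache's budget is exhausted; the group names, numbers, and the greedy rule are assumptions, not xCache's actual utility function.

```python
# Illustrative sketch of a bandwidth-aware utility for an edge cache: greedily
# keep the object groups with the best expected-hit value per byte of refresh
# bandwidth, within a per-cache bandwidth budget.
def select_object_groups(groups, bandwidth_budget_bytes):
    # groups: list of (name, expected_hits_per_day, refresh_bytes_per_day)
    ranked = sorted(groups, key=lambda g: g[1] / g[2], reverse=True)
    chosen, used = [], 0
    for name, hits, refresh_bytes in ranked:
        if used + refresh_bytes <= bandwidth_budget_bytes:
            chosen.append(name)
            used += refresh_bytes
    return chosen, used

# Hypothetical object groups derived from page profiling.
groups = [
    ("news-homepage", 5000, 2_000_000),
    ("video-thumbnails", 1200, 8_000_000),
    ("course-portal", 3000, 1_000_000),
]
chosen, used = select_object_groups(groups, bandwidth_budget_bytes=5_000_000)
print("kept:", chosen, "refresh bytes/day:", used)
```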
Intrusion Detection Systems (IDS) have been in existence for many years now, but they fall short in efficiently detecting zero-day attacks. This paper presents an organic combination of Semantic Link Networks (SLN) and dynamic semantic graph generation for the on-the-fly discovery of zero-day attacks, using the Spark Streaming platform for parallel detection. In addition, a minimum redundancy maximum relevance (MRMR) feature selection algorithm is deployed to determine the most discriminating features of the dataset. Compared to previous studies on zero-day attack identification, the described method yields better results due to the semantic learning and reasoning on top of the training data and due to the use of collaborative classification methods. We also verified the scalability of our method in a distributed environment.
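A minimal sketch of MRMR-style feature selection is shown below, using mutual information for relevance and absolute correlation as a redundancy proxy on synthetic data; the paper's exact MRMR formulation and dataset are not reproduced here.

```python
# Sketch of minimum-redundancy maximum-relevance (MRMR) feature selection:
# pick features with high mutual information with the label and low redundancy
# (here approximated by correlation) with already selected features.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                       # synthetic feature matrix
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

relevance = mutual_info_classif(X, y, random_state=0)
corr = np.abs(np.corrcoef(X, rowvar=False))

selected = [int(np.argmax(relevance))]
while len(selected) < 4:
    candidates = [j for j in range(X.shape[1]) if j not in selected]
    # MRMR score: relevance minus mean redundancy with already selected features.
    scores = [relevance[j] - corr[j, selected].mean() for j in candidates]
    selected.append(candidates[int(np.argmax(scores))])

print("selected features:", selected)
```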
Large-scale mobile traffic analytics is becoming essential to digital infrastructure provisioning, public transportation, events planning, and other domains. Monitoring city-wide mobile traffic is however a complex and costly process that relies on dedicated probes. Some of these probes have limited precision or coverage, while others gather tens of gigabytes of logs daily, which independently offer limited insights. Extracting fine-grained patterns involves expensive spatial aggregation of measurements, storage, and post-processing. In this paper, we propose a mobile traffic super-resolution technique that overcomes these problems by inferring narrowly localised traffic consumption from coarse measurements. We draw inspiration from image processing and design a deep-learning architecture tailored to mobile networking, which combines Zipper Network (ZipNet) and Generative Adversarial neural Network (GAN) models. This makes it possible to uniquely capture spatio-temporal relations between traffic volume snapshots routinely monitored over broad coverage areas ('low-resolution') and the corresponding consumption at the 0.05 km² level ('high-resolution') usually obtained only after intensive computation. Experiments we conduct with a real-world data set demonstrate that the proposed ZipNet(-GAN) infers traffic consumption with remarkable accuracy and up to 100× higher granularity compared to standard probing, while outperforming existing data interpolation techniques. To our knowledge, this is the first time super-resolution concepts are applied to large-scale mobile traffic analysis, and our solution is the first to infer fine-grained urban traffic patterns from coarse aggregates.
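To make the super-resolution mapping concrete, the toy PyTorch sketch below upsamples coarse traffic snapshots to a finer grid with a small convolutional generator; it is only an illustration of the kind of mapping learned, not the ZipNet(-GAN) architecture itself.

```python
# Toy sketch of the super-resolution idea: a small convolutional generator that
# maps coarse ('low-resolution') traffic snapshots to a finer spatial grid.
import torch
import torch.nn as nn

class ToyTrafficSR(nn.Module):
    def __init__(self, upscale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=upscale, mode="nearest"),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, coarse):
        # coarse: (batch, 1, H, W) traffic volume snapshot
        return self.net(coarse)

model = ToyTrafficSR()
coarse = torch.rand(8, 1, 10, 10)   # hypothetical coarse-grained snapshots
fine = model(coarse)                # (8, 1, 40, 40) high-resolution estimate
print(fine.shape)
```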
In recent years, networking scenarios have been evolving hand-in-hand with new and varied applications with heterogeneous Quality of Service (QoS) requirements, which must be delivered efficiently and effectively. Given its static layered structure and almost complete lack of built-in QoS support, the current TCP/IP-based Internet hinders such an evolution. In contrast, the clean-slate Recursive InterNetwork Architecture (RINA) proposes a new recursive and programmable networking model capable of evolving with the network requirements, thereby solving most, if not all, of the TCP/IP protocol stack's limitations. Network providers can better deliver communication services across their networks by taking advantage of the RINA architecture and its support for QoS. This support makes it possible to provide complete information about the QoS needs of the supported traffic flows, so that fulfilment of these needs becomes possible. In this work, we focus on the importance of path selection to better ensure QoS guarantees in long-haul RINA networks. We propose and evaluate a programmable strategy for path selection based on flow QoS parameters, such as the maximum allowed latency and packet losses, comparing its performance against simple shortest-path, fastest-path, and connection-oriented solutions.
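The path-selection idea can be illustrated with a small constrained search: among candidate paths, keep those whose accumulated latency and loss satisfy the flow's QoS bounds, then pick the fastest. The topology, attribute values, and selection rule below are assumptions, not the proposed RINA policy.

```python
# Sketch of QoS-aware path selection: filter simple paths by total latency and
# cumulative loss, then return the lowest-latency path that meets both bounds.
import math
import networkx as nx

g = nx.DiGraph()
g.add_edge("A", "B", latency_ms=10, loss=0.001)
g.add_edge("B", "D", latency_ms=15, loss=0.002)
g.add_edge("A", "C", latency_ms=5, loss=0.010)
g.add_edge("C", "D", latency_ms=5, loss=0.010)

def qos_path(graph, src, dst, max_latency_ms, max_loss):
    best, best_latency = None, float("inf")
    for path in nx.all_simple_paths(graph, src, dst):
        edges = list(zip(path, path[1:]))
        latency = sum(graph[u][v]["latency_ms"] for u, v in edges)
        loss = 1 - math.prod(1 - graph[u][v]["loss"] for u, v in edges)
        if latency <= max_latency_ms and loss <= max_loss and latency < best_latency:
            best, best_latency = path, latency
    return best, best_latency

# The fastest path A-C-D violates the loss bound, so A-B-D is selected instead.
print(qos_path(g, "A", "D", max_latency_ms=30, max_loss=0.005))
```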
Attack graphs used in network security analysis are analyzed to determine sequences of exploits that lead to the successful acquisition of privileges or data at critical assets. An attack graph edge corresponds to a vulnerability, tacitly assuming that a connection exists and that the vulnerability is known to exist. In this paper we explore the use of uncertain graphs to extend the paradigm to account for a lack of certainty in the connection and/or the existence of a vulnerability. We extend the standard notion of an uncertain graph (in which the existence of each edge is probabilistically independent), however, because significant correlations among edge existence probabilities arise in practice, owing to common underlying causes for disconnectivity and/or the presence of vulnerabilities. Our extension describes each edge probability as a Boolean expression of independent indicator random variables. This paper (i) shows that this formalism is maximally descriptive in the sense that it can describe any joint probability distribution function of edge existence, (ii) shows that when these Boolean expressions are monotone we can easily perform uncertainty analysis of edge probabilities, and (iii) uses these results to model a partial attack graph of the Stuxnet worm and a small enterprise network and to answer important security-related questions in a probabilistic manner.
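A minimal sketch of the formalism follows: each edge's existence is a Boolean expression over independent indicator variables (shared variables induce correlation across edges), and Monte Carlo sampling estimates the probability that an attack path exists. The tiny graph, indicator names, and probabilities are hypothetical.

```python
# Uncertain-graph sketch: edge existence is a Boolean expression over independent
# indicator variables; Monte Carlo sampling estimates reachability probability.
import random

# Independent indicator variables and their probabilities of being True.
indicators = {"firewall_misconfig": 0.3, "cve_exploitable": 0.4, "shared_subnet": 0.5}

# Each edge exists iff its Boolean expression over the sampled indicators is True.
edges = {
    ("attacker", "hmi"): lambda s: s["firewall_misconfig"] and s["shared_subnet"],
    ("hmi", "plc"):      lambda s: s["cve_exploitable"],
    ("attacker", "plc"): lambda s: s["firewall_misconfig"] and s["cve_exploitable"],
}

def path_exists(sample, src, dst):
    # Simple reachability over the edges that exist in this sample.
    present = {e for e, expr in edges.items() if expr(sample)}
    frontier, seen = {src}, {src}
    while frontier:
        frontier = {v for (u, v) in present if u in frontier and v not in seen}
        seen |= frontier
    return dst in seen

random.seed(0)
trials, hits = 100_000, 0
for _ in range(trials):
    sample = {k: random.random() < p for k, p in indicators.items()}
    hits += path_exists(sample, "attacker", "plc")
print(f"P(attacker reaches plc) ~= {hits / trials:.3f}")
```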
A key use of software-defined networking is to enable scale-out of network data plane elements. Naively scaling networking elements, however, can cause incorrect behavior. For example, we show that an IDS system which operates correctly as a single network element can erroneously and permanently block hosts when it is replicated. In this paper, we provide a system, COCONUT, for seamless scale-out of network forwarding elements; that is, an SDN application programmer can program to what functionally appears to be a single forwarding element, but which may be replicated behind the scenes. To do this, we identify the key property for seamless scale-out, weak causality, and guarantee it through a practical and scalable implementation of vector clocks in the data plane. We prove that COCONUT enables seamless scale-out of networking elements, i.e., the user-perceived behavior of any COCONUT element implemented with a distributed set of concurrent replicas is provably indistinguishable from its singleton implementation. Finally, we build a prototype of COCONUT and experimentally demonstrate its correct behavior. We also show that its abstraction enables a more efficient implementation of seamless scale-out compared to a naive baseline.
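The weak-causality machinery rests on vector clocks; the sketch below shows the basic clock operations (local tick, merge on receive, happens-before test) in a host-side Python model. It illustrates the general vector-clock idea only, not COCONUT's data-plane implementation.

```python
# Minimal vector-clock model: one counter per replica, incremented on local events,
# merged on message receipt, with a happens-before comparison for ordering updates.
class VectorClock:
    def __init__(self, replica_id, n_replicas):
        self.replica_id = replica_id
        self.clock = [0] * n_replicas

    def tick(self):
        # Local event on this replica.
        self.clock[self.replica_id] += 1

    def merge(self, other_clock):
        # Receiving state from another replica: element-wise max, then tick.
        self.clock = [max(a, b) for a, b in zip(self.clock, other_clock)]
        self.tick()

    def happened_before(self, other_clock):
        return all(a <= b for a, b in zip(self.clock, other_clock)) and \
            self.clock != list(other_clock)

r0, r1 = VectorClock(0, 2), VectorClock(1, 2)
r0.tick()                 # event on replica 0
r1.merge(r0.clock)        # replica 1 receives replica 0's state
print(r0.clock, r1.clock, r0.happened_before(r1.clock))
```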
Enterprise networks today have highly diverse correctness requirements and relatively common performance objectives. As a result, the preferred abstractions for enterprise networks are those which allow matching the correctness specification while transparently managing performance. Existing SDN network management architectures, however, bundle correctness and performance into a single abstraction. We argue that this creates an SDN ecosystem that is unnecessarily hard to build, maintain, and evolve. We advocate a separation of the diverse correctness abstractions from generic performance optimization, to enable easier evolution of SDN controllers and platforms. We propose Oreo, a first step towards a common and relatively transparent performance optimization layer for SDN. Oreo performs the optimization by first building a model that describes every flow in the network, and then performing network-wide, multi-objective optimization based on this model without disrupting higher-level correctness.
Authentication and encryption within an embedded system environment using cameras, sensors, thermostats, autonomous vehicles, medical implants, RFID, etc. is becoming increasingly important with ubiquitous wireless connectivity. Hardware-based authentication and encryption offer several advantages in these types of resource-constrained applications, including smaller footprints and lower energy consumption. Bitstring and key generation implemented with Physical Unclonable Functions, or PUFs, can further reduce resource utilization for authentication and encryption operations and reduce overall system cost by eliminating on-chip non-volatile memory (NVM). In this paper, we propose a dynamic partial reconfiguration (DPR) strategy for implementing both authentication and encryption using a PUF for bitstring and key generation on FPGAs as a means of optimizing the utilization of the limited area resources. We show that the time and energy penalties associated with DPR are small in modern SoC-based architectures, such as the Xilinx Zynq SoC, and therefore, the overall approach is very attractive for emerging resource-constrained IoT applications.
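One common way to obtain a stable PUF-derived bitstring is to majority-vote over repeated noisy measurements; the sketch below simulates that idea with an assumed bit-flip noise model. It is an illustration of PUF-based bitstring generation in general, not the paper's enrollment scheme.

```python
# Sketch of deriving a stable bitstring from noisy PUF responses via majority voting.
import random

random.seed(1)
N_BITS, N_MEASUREMENTS, NOISE = 128, 15, 0.05

# Hypothetical "true" PUF response determined by manufacturing variation.
true_response = [random.randint(0, 1) for _ in range(N_BITS)]

def measure(response, noise):
    # Each measurement flips every bit independently with probability `noise`.
    return [b ^ (random.random() < noise) for b in response]

# Majority vote across repeated noisy measurements yields the enrolled bitstring.
votes = [0] * N_BITS
for _ in range(N_MEASUREMENTS):
    for i, bit in enumerate(measure(true_response, NOISE)):
        votes[i] += bit
bitstring = [1 if v > N_MEASUREMENTS // 2 else 0 for v in votes]

errors = sum(a != b for a, b in zip(bitstring, true_response))
print(f"bit errors after voting: {errors} / {N_BITS}")
```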
While we have long had principles describing how access control enforcement should be implemented, such as the reference monitor concept, imprecision in access control mechanisms and access control policies leads to risks that may enable exploitation. In practice, least privilege access control policies often allow information flows that may enable exploits. In addition, the implementation of access control mechanisms often tries to balance security with ease of use implicitly (e.g., with respect to determining where to place authorization hooks), and approaches to tighten access control, such as accounting for program context, are ad hoc. In this paper, we define four types of risks in access control enforcement and explore possible approaches and challenges in tracking those types of risks. In principle, we advocate runtime tracking to produce risk estimates for each of these types of risk. To better understand the potential of risk estimation for authorization, we propose risk estimate functions for each of the four types of risk, finding that benign program deployments accumulate risks in each of the four areas for the ten Android programs examined. As a result, we find that tracking of relative risk may be useful for guiding changes to security choices, such as authorized unsafe operations or the placement of authorization checks, when the risk differs from what is expected.
We present in this paper a security analysis of electronic devices which considers the lifecycle properties of embedded systems. We first define a generic model of the electronic device lifecycle, showing the complex interactions between the numerous assets and the actors. The method is illustrated through a case study: a connected insulin pump. The lifecycle-induced vulnerabilities are analyzed using the EBIOS methodology. An analysis of the associated countermeasures points out the lack of consideration of the lifecycle needed to provide an acceptable security level for each asset of the device.
Internet-connected embedded systems have limited capabilities to defend themselves against remote hacking attacks. The potential effects of such attacks, however, can have a significant impact in the context of the Internet of Things, industrial control systems, smart health systems, etc. Embedded systems cannot effectively utilize existing software-based protection mechanisms due to limited processing capabilities and energy resources. We propose a novel hardware-based monitoring technique that can detect if the embedded operating system or any running application deviates from the originally programmed behavior due to an attack. We present an FPGA-based prototype implementation that shows the effectiveness of such a security approach.
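The deviation-detection idea can be sketched in software as a whitelist of event-sequence fingerprints recorded from the originally programmed behavior, with any unseen window flagged at runtime. The traces, window length, and hashing below are assumptions; the paper realizes the monitor in FPGA hardware.

```python
# Host-side sketch of behavior-deviation monitoring: fingerprint sliding windows
# of a legitimate event trace at provisioning time, then flag unknown windows.
import hashlib

WINDOW = 3

def windows(trace, n=WINDOW):
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def fingerprint(window):
    return hashlib.sha256("|".join(window).encode()).hexdigest()

# Legitimate behavior observed during provisioning (hypothetical event trace).
golden_trace = ["read_sensor", "filter", "update_state", "send_report", "sleep",
                "read_sensor", "filter", "update_state", "send_report", "sleep"]
allowed = {fingerprint(w) for w in windows(golden_trace)}

# Runtime trace where an attack diverts execution to an unexpected routine.
runtime_trace = ["read_sensor", "filter", "exec_shell", "send_report", "sleep"]
alerts = [w for w in windows(runtime_trace) if fingerprint(w) not in allowed]
print("deviating windows:", alerts)
```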
Android applications pose security and privacy risks for end-users. These risks are often quantified by performing dynamic analysis and permission analysis of the Android applications after release. Prediction of security and privacy risks associated with Android applications at early stages of application development, e.g., when the developer(s) are writing the code of the application, might help Android application developers in releasing applications to end-users that have less security and privacy risk. The goal of this paper is to aid Android application developers in assessing the security and privacy risk associated with Android applications by using static code metrics as predictors. In our paper, we consider the security and privacy risk of an Android application as how susceptible the application is to leaking private information of end-users and to containing vulnerabilities. We investigate how effectively static code metrics that are extracted from the source code of Android applications can be used to predict the security and privacy risk of Android applications. We collected 21 static code metrics from 1,407 Android applications, and used the collected static code metrics to predict the security and privacy risk of the applications. As the oracle of security and privacy risk, we used Androrisk, a tool that quantifies the amount of security and privacy risk of an Android application using analysis of Android permissions and dynamic analysis. To accomplish our goal, we used statistical learners such as the radial-basis support vector machine (r-SVM). For r-SVM, we observe a precision of 0.83. Findings from our paper suggest that, with proper selection of static code metrics, r-SVM can be used effectively to predict the security and privacy risk of Android applications.
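The prediction setup can be sketched with a radial-basis SVM over a metrics matrix, as below; the synthetic features stand in for the 21 static code metrics and the labels stand in for Androrisk's output, so only the choice of learner follows the abstract.

```python
# Sketch of the prediction setup: an RBF-kernel SVM trained on (synthetic) static
# code metrics to predict a binary "risky" label, evaluated with precision.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1407, 21))                       # stand-in static code metrics
y = (X[:, 0] + 0.7 * X[:, 5] + rng.normal(scale=0.8, size=1407) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("precision:", round(precision_score(y_te, model.predict(X_te)), 2))
```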
Power grid infrastructures have been exposed to several terrorist and cyber attacks from different perspectives, which have resulted in critical system failures. Among different attack strategies, a simultaneous attack is feasible for the attacker if enough resources are available at the moment. In this paper, vulnerability analysis for simultaneous attacks is investigated, using a modified cascading failure simulator with reduced calculation time compared to existing methods. A new damage measurement matrix is proposed, combining the loss of generation power and the time to reach the steady-state condition. The combinations of attacks that can result in a total blackout in the shortest time are considered the strongest simultaneous attacks on the system from the attacker's viewpoint. The proposed approach can be used for general power system test cases. In this paper, we conducted experiments on the W&W 6-bus system and the IEEE 30-bus system to demonstrate the results. The modified simulator can automatically find the strongest attack combinations for reaching maximum damage in terms of generation power loss and time to reach blackout.
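The search for the strongest simultaneous attack can be pictured as enumerating outage combinations, scoring each with the cascading-failure simulator, and ranking by generation loss and settling time. The sketch below uses a placeholder damage function on a toy line set; only the enumeration-and-ranking structure reflects the abstract.

```python
# Sketch of ranking simultaneous attack combinations: enumerate k-line outages,
# evaluate each with a (placeholder) cascading-failure model, rank by damage.
from itertools import combinations

lines = ["L1", "L2", "L3", "L4", "L5", "L6"]

def simulate_cascade(attacked):
    # Placeholder for the cascading-failure simulator: returns
    # (generation power lost in MW, time to reach steady state in s).
    lost_mw = 40 * len(attacked) + (60 if {"L2", "L5"} <= set(attacked) else 0)
    settle_s = max(5, 30 - 5 * len(attacked))
    return lost_mw, settle_s

def rank_attacks(k):
    results = []
    for combo in combinations(lines, k):
        lost_mw, settle_s = simulate_cascade(combo)
        results.append((combo, lost_mw, settle_s))
    # Stronger attacks lose more generation and settle (black out) faster.
    return sorted(results, key=lambda r: (-r[1], r[2]))

for combo, lost_mw, settle_s in rank_attacks(2)[:3]:
    print(combo, f"{lost_mw} MW lost, settles in {settle_s} s")
```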
The evolution of cloud-computing imposes many challenges on performance testing and requires not only a different approach and methodology of performance evaluation and analysis, but also specialized tools and frameworks to support such work. In traditional performance testing, typically a single workload was run against a static test configuration. The main metrics derived from such experiments included throughput, response times, and system utilization at steady-state. While this may have been sufficient in the past, where in many cases a single application was run on dedicated hardware, this approach is no longer suitable for cloud-based deployments. Whether private or public cloud, such environments typically host a variety of applications on distributed shared hardware resources, simultaneously accessed by a large number of tenants running heterogeneous workloads. The number of tenants as well as their activity and resource needs dynamically change over time, and the cloud infrastructure reacts to this by reallocating existing or provisioning new resources. Besides metrics such as the number of tenants and overall resource utilization, performance testing in the cloud must be able to answer many more questions: How is the quality of service of a tenant impacted by the constantly changing activity of other tenants? How long does it take the cloud infrastructure to react to changes in demand, and what is the effect on tenants while it does so? How well are service level agreements met? What is the resource consumption of individual tenants? How can global performance metrics on application- and system-level in a distributed system be correlated to an individual tenant's perceived performance? In this paper we present CloudPerf, a performance test framework specifically designed for distributed and dynamic multi-tenant environments, capable of answering all of the above questions, and more. CloudPerf consists of a distributed harness, a protocol-independent load generator and workload modeling framework, an extensible statistics framework with live-monitoring and post-analysis tools, interfaces for cloud deployment operations, and a rich set of both low-level as well as high-level workloads from different domains.
Cloud systems offer a diversity of security mechanisms with potentially complex configuration options. So far, security engineering has focused on achievable security levels, but not on the costs associated with a specific security mechanism and its configuration. Through a series of experiments with a variety of cloud datastores conducted over the last few years, we gained substantial knowledge on how one desired quality like security can have a significant impact on other system qualities like performance. In this paper, we report on selected findings related to security-performance trade-offs for three prominent cloud datastores, focusing on data-in-transit encryption, and propose a simple, structured approach for making trade-off decisions based on factual evidence gained through experimentation. Our approach allows one to reason rationally about security trade-offs.
The privacy of information is an increasing concern for users of software applications. This concern has been fueled by attacks on cloud services over the last few years that have leaked confidential information such as passwords, emails, and even private pictures. Once the information is leaked, users and software applications are powerless to contain the spread of the information and its misuse. Since databases are a central component of applications and store almost all of their data, they are one of the most common targets of attacks. However, typical database deployments do not leverage security mechanisms to stop attacks and do not apply cryptographic schemes to protect data. This issue has been tackled by multiple secure databases that provide trade-offs between security, query capabilities, and performance. Despite providing stronger security guarantees, the proposed solutions still entrust their data to a single entity that can be corrupted or hacked. Secret sharing can solve this problem by dividing data into multiple secrets and storing each secret at a different location. The division is done in such a way that if one location is hacked, no information can be leaked. Depending on the protocols used to divide the data, functions can be computed over this data through secure protocols that neither disclose information nor learn which values are being calculated. We propose a SQL database prototype capable of offering a trade-off between security and query latency by using different secure protocols. An evaluation of the protocols is also performed, showing that our most relaxed protocol achieves an improvement of 5+ in query latency over the original protocol.
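The secret-sharing idea can be illustrated with simple additive sharing: a value is split into random shares held by different servers, no single share reveals anything, and sums can still be computed share-wise. The three-server setup and modulus below are assumptions; the paper's protocols and their latency trade-offs are more involved.

```python
# Minimal additive secret-sharing sketch: split a value into shares, reconstruct
# by summing, and compute a sum of two secrets share-wise across servers.
import random

PRIME = 2**61 - 1          # field modulus for additive sharing

def share(secret, n_servers=3):
    shares = [random.randrange(PRIME) for _ in range(n_servers - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

salary_a, salary_b = 52_000, 61_000
shares_a, shares_b = share(salary_a), share(salary_b)

# Each server adds its own shares locally; only the combined result reveals the sum.
sum_shares = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]
print(reconstruct(sum_shares))   # 113000, computed without any server seeing a salary
```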
The ability to discover patterns of interest in criminal networks can support and ease the investigation tasks of security and law enforcement agencies. By considering criminal networks as a special case of social networks, we can properly reuse most of the state-of-the-art techniques to discover patterns of interest, i.e., hidden and potential links. Nevertheless, in time-sensitive scenarios, like those involving criminal actions, the ability to discover patterns in a (near) real-time manner can be of primary importance. In this paper, we investigate the identification of patterns for link detection and prediction on an evolving criminal network. To extract valuable information as soon as data is generated, we exploit a stream processing approach. To this end, we also propose three new social network similarity metrics, specifically tailored for criminal link detection and prediction. Then, we develop a flexible data stream processing application relying on the Apache Flink framework; this solution allows us to deploy and evaluate the newly proposed metrics as well as those existing in the literature. The experimental results show that the new metrics we propose can reach up to 83% accuracy in detection and 82% accuracy in prediction, proving competitive with state-of-the-art metrics.
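As a baseline for the kind of similarity metrics involved, the sketch below scores non-adjacent node pairs of a toy network with classic neighborhood measures (common neighbors, Jaccard) and ranks them as candidate hidden links; the paper's three new metrics and the Flink streaming deployment are not reproduced here.

```python
# Sketch of similarity-based link prediction: rank non-adjacent pairs by classic
# neighborhood metrics as candidate hidden or future links in a criminal network.
from itertools import combinations
import networkx as nx

g = nx.Graph()
g.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("B", "E")])

def jaccard(graph, u, v):
    nu, nv = set(graph[u]), set(graph[v])
    return len(nu & nv) / len(nu | nv) if nu | nv else 0.0

candidates = []
for u, v in combinations(g.nodes, 2):
    if not g.has_edge(u, v):
        common = len(set(g[u]) & set(g[v]))
        candidates.append((u, v, common, jaccard(g, u, v)))

for u, v, cn, jac in sorted(candidates, key=lambda c: c[3], reverse=True):
    print(f"{u}-{v}: common neighbours={cn}, Jaccard={jac:.2f}")
```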
True random numbers play a key role in modern digital transactions. In order to achieve secure authentication, true random numbers are generated as security keys which are highly unpredictable and non-repetitive. True random number generators are used mainly in the field of cryptography to generate random cryptographic keys for secure data transmission. The proposed work aims at the generation of true random numbers based on a CMOS Boolean chaotic oscillator. As part of this work, an ASIC implementation of the CMOS Boolean chaotic oscillator is modelled and simulated using the Cadence Virtuoso tool based on 45 nm CMOS technology. In addition, a prototype model has been implemented with circuit components and analysed using the NI ELVIS platform. The strength of the generated random numbers was verified with the NIST (National Institute of Standards and Technology) Test Suite, and the ASIC approach was validated by performing various analyses such as frequency, delay, and power.
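One of the NIST SP 800-22 checks, the monobit frequency test, is easy to sketch: it asks whether the proportion of ones in the bitstream is consistent with a fair coin. The bits below come from Python's PRNG purely for illustration; the paper applies the full suite to the chaotic-oscillator output.

```python
# Monobit frequency test (NIST SP 800-22): the sum of +/-1-mapped bits, normalized
# by sqrt(n), should be small for a uniform random bitstream.
import math
import random

random.seed(42)
bits = [random.randint(0, 1) for _ in range(100_000)]   # stand-in for TRNG output

s = sum(2 * b - 1 for b in bits)                 # map bits to +/-1 and sum
s_obs = abs(s) / math.sqrt(len(bits))
p_value = math.erfc(s_obs / math.sqrt(2))
print(f"monobit p-value = {p_value:.3f} ->", "pass" if p_value >= 0.01 else "fail")
```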
Robust Trojans inserted in outsourced products result in security vulnerabilities. Post-silicon testing is done mandatorily to detect such malicious inclusions. Logic testing becomes impractical for larger circuits with sequential Trojans. For such cases, side channel analysis is an effective approach. The major challenge with side channel analysis is the reduction in hardware Trojan detection sensitivity due to process variation (process variation can lead to false positives and false negatives and is unavoidable during the manufacturing stage). In this paper, a self-referencing method is proposed that measures the leakage power of the circuit at four different time windows, which drives the Trojan toward triggering and also helps to identify/eliminate false positives/false negatives due to process variation.
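The self-referencing intuition is that process variation scales all leakage measurements of the same die, so comparing time windows of one chip against each other (rather than against a golden chip) largely cancels it, leaving the Trojan's contribution visible in the window where it switches. The toy multiplicative-variation model below is an assumption used only to illustrate that cancellation.

```python
# Sketch of the self-referencing idea: ratios between a chip's own time-window
# leakage measurements cancel die-level process variation, exposing the Trojan.
import random

random.seed(3)

def leakage_windows(has_trojan):
    process_scale = random.uniform(0.8, 1.2)      # die-to-die process variation
    base = [10.0, 10.2, 9.9, 10.1]                # nominal leakage per window (mW)
    trojan = [0.0, 0.0, 0.6, 0.0]                 # Trojan switches only in window 3
    return [process_scale * (b + (t if has_trojan else 0.0))
            for b, t in zip(base, trojan)]

def self_reference_score(windows):
    # Ratio of each window to the chip's own first window removes the common scale.
    return [w / windows[0] for w in windows]

for label, chip in [("genuine", leakage_windows(False)),
                    ("trojaned", leakage_windows(True))]:
    print(label, [round(r, 3) for r in self_reference_score(chip)])
```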