Biblio

Found 15086 results

Filters: Keyword is pubcrawl
2017-05-17
Dutt, Nikil, Jantsch, Axel, Sarma, Santanu.  2016.  Toward Smart Embedded Systems: A Self-aware System-on-Chip (SoC) Perspective. ACM Trans. Embed. Comput. Syst. 15:22:1–22:27.

Embedded systems must address a multitude of potentially conflicting design constraints such as resiliency, energy, heat, cost, performance, security, etc., all in the face of highly dynamic operational behaviors and environmental conditions. By incorporating elements of intelligence, the hope is that the resulting “smart” embedded systems will function correctly and within desired constraints in spite of highly dynamic changes in the applications and the environment, as well as in the underlying software/hardware platforms. Since terms related to “smartness” (e.g., self-awareness, self-adaptivity, and autonomy) have been used loosely in many software and hardware computing contexts, we first present a taxonomy of “self-x” terms and use this taxonomy to relate major “smart” software and hardware computing efforts. A major attribute for smart embedded systems is the notion of self-awareness that enables an embedded system to monitor its own state and behavior, as well as the external environment, so as to adapt intelligently. Toward this end, we use a System-on-Chip perspective to show how the CyberPhysical System-on-Chip (CPSoC) exemplar platform achieves self-awareness through a combination of cross-layer sensing, actuation, self-aware adaptations, and online learning. We conclude with some thoughts on open challenges and research directions.
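
The abstract is mostly positioning, but the cross-layer sense-decide-act-learn loop it describes can be made concrete. Below is a deliberately toy Python sketch of such a self-aware loop (a DVFS-style frequency knob with an online thermal-sensitivity estimate); none of it comes from the CPSoC platform itself, and all constants are invented.

```python
# Toy self-aware control loop in the CPSoC spirit: sense on-chip state, adapt
# an actuation knob (clock frequency), and refine a thermal model online.
temps, freqs = [60.0], [1.0]   # hypothetical starting temperature (C) and GHz
model_slope = 5.0              # learned deg-C per GHz, updated from observations
TARGET = 70.0                  # desired operating temperature

for step in range(5):
    temp = temps[-1]
    # Decide: pick the frequency the current model predicts will hit the target.
    freq = max(0.5, min(2.0, freqs[-1] + (TARGET - temp) / model_slope))
    # Act + sense: stand-in thermal response of the chip (pure fiction).
    new_temp = 40.0 + 15.0 * freq
    # Learn: update the sensitivity estimate from the observed change.
    if freq != freqs[-1]:
        model_slope = (new_temp - temp) / (freq - freqs[-1])
    temps.append(new_temp); freqs.append(freq)
    print(f"step {step}: freq={freq:.2f} GHz, temp={new_temp:.1f} C")
```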

Huang, Zhenqi, Wang, Yu, Mitra, Sayan, Dullerud, Geir.  2016.  Controller Synthesis for Linear Dynamical Systems with Adversaries. Proceedings of the Symposium and Bootcamp on the Science of Security. :53–62.

We present a controller synthesis algorithm for a reach-avoid problem in the presence of adversaries. Our model of the adversary abstractly captures typical malicious attacks envisioned on cyber-physical systems such as sensor spoofing, controller corruption, and actuator intrusion. After formulating the problem in a general setting, we present a sound and complete algorithm for the case with linear dynamics and an adversary with a budget on the total L2-norm of its actions. The algorithm relies on a result from linear control theory that enables us to decompose and compute the reachable states of the system in terms of a symbolic simulation of the adversary-free dynamics and the total uncertainty induced by the adversary. With this decomposition, the synthesis problem eliminates the universal quantifier on the adversary's choices, and the symbolic controller actions can be effectively solved for using an SMT solver. The constraints induced by the adversary are computed by solving second-order cone programs. The algorithm is later extended to synthesize state-dependent controllers and to generate attacks for the adversary. We present preliminary experimental results that show the effectiveness of this approach on several example problems.
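
The decomposition the abstract describes, a symbolic adversary-free simulation plus a bound on adversary-induced uncertainty, can be illustrated numerically. The sketch below (Python; invented 2-D dynamics and numbers) uses a conservative Cauchy-Schwarz bound in place of the paper's exact SMT and cone-program machinery:

```python
import numpy as np

# Toy linear system x_{k+1} = A x_k + B u_k + w_k, with adversary input w_k.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])

def nominal_trajectory(x0, controls):
    """Adversary-free rollout (the 'symbolic simulation' part)."""
    xs = [x0]
    for u in controls:
        xs.append(A @ xs[-1] + B @ np.array([u]))
    return xs

def deviation_bound(k, budget):
    """Cauchy-Schwarz bound on ||sum_j A^(k-1-j) w_j||_2
    when the adversary satisfies sqrt(sum_j ||w_j||^2) <= budget."""
    norms = [np.linalg.norm(np.linalg.matrix_power(A, k - 1 - j), 2)
             for j in range(k)]
    return budget * np.sqrt(sum(n * n for n in norms))

x0 = np.array([0.0, 0.0])
controls = [1.0] * 10                 # candidate open-loop control sequence
budget = 0.05                         # total L2 budget of the adversary
goal_center, goal_radius = np.array([0.3, 1.0]), 0.5

xs = nominal_trajectory(x0, controls)
r = deviation_bound(len(controls), budget)
# Reach check: every final state the adversary can force lies in the goal ball.
reaches = np.linalg.norm(xs[-1] - goal_center) + r <= goal_radius
print(f"final nominal state {xs[-1]}, uncertainty radius {r:.4f}, reach ok: {reaches}")
```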

Kang, Eunsuk, Adepu, Sridhar, Jackson, Daniel, Mathur, Aditya P..  2016.  Model-based Security Analysis of a Water Treatment System. Proceedings of the 2nd International Workshop on Software Engineering for Smart Cyber-Physical Systems. :22–28.

An approach to analyzing the security of a cyber-physical system (CPS) is proposed, where the behavior of a physical plant and its controller are captured in approximate models, and their interaction is rigorously checked to discover potential attacks that involve a varying number of compromised sensors and actuators. As a preliminary study, this approach has been applied to a fully functional water treatment testbed constructed at the Singapore University of Technology and Design. The analysis revealed previously unknown attacks that were confirmed to pose serious threats to the safety of the testbed, and suggests a number of research challenges and opportunities for applying a similar type of formal analysis to cyber-physical security.
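
A minimal flavour of this style of analysis, assuming a hypothetical one-tank model rather than the actual SWaT testbed: enumerate subsets of compromised sensors and check whether the attacker can drive the plant past a safety limit.

```python
from itertools import combinations

# Toy one-tank plant: the controller closes the inlet valve when the *reported*
# level crosses HIGH; an attacker may freeze compromised sensor readings.
HIGH, OVERFLOW, INFLOW, STEPS = 80.0, 100.0, 5.0, 40
SENSORS = ["level_main", "level_backup"]

def run(compromised):
    level, frozen = 50.0, 50.0
    for _ in range(STEPS):
        readings = {s: (frozen if s in compromised else level) for s in SENSORS}
        reported = min(readings.values())   # controller trusts the lower reading
        if reported < HIGH:                 # valve stays open
            level += INFLOW
        if level >= OVERFLOW:
            return False                    # safety invariant violated
    return True

# Vary the number of compromised sensors, as in the paper's analysis.
for k in range(len(SENSORS) + 1):
    for attack in combinations(SENSORS, k):
        if not run(set(attack)):
            print("attack found: compromise", attack)
```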

Wang, Kun, Du, Miao, Yang, Dejun, Zhu, Chunsheng, Shen, Jian, Zhang, Yan.  2016.  Game-Theory-Based Active Defense for Intrusion Detection in Cyber-Physical Embedded Systems. ACM Trans. Embed. Comput. Syst. 16:18:1–18:21.

Cyber-Physical Embedded Systems (CPESs) are distributed embedded systems integrated with various actuators and sensors. When it comes to the issue of CPES security, the most significant problem is the security of Embedded Sensor Networks (ESNs). With the continuous growth of ESNs, the security of transferring data from sensors to their destinations has become an important research area. Due to the limitations in power, storage, and processing capabilities, existing security mechanisms for wired or wireless networks cannot be applied directly to ESNs. Meanwhile, ESNs are exposed to many kinds of attacks in industrial scenarios. Therefore, there is a need to develop new techniques, or modify current security mechanisms, to overcome these problems. In this article, we focus on Intrusion Detection (ID) techniques and propose a new attack-defense game model to detect malicious nodes using a repeated game approach. As a direct consequence of the game model, attackers and defenders make different strategies to achieve optimal payoffs. Importantly, false detections and missed detections are taken into consideration in Intrusion Detection Systems (IDSs), where a game tree model is introduced to solve this problem. In addition, we analyze and prove the existence of pure Nash equilibria and mixed Nash equilibria. Simulations show that the proposed model can both reduce energy consumption by up to 50% compared with the existing All Monitor (AM) model and improve the detection rate by 10% to 15% compared with the existing Cluster Head (CH) monitor model.
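
The core game-theoretic step, computing a mixed Nash equilibrium where attacker and defender randomize over their strategies, can be shown on a 2x2 game. The payoff numbers below are purely illustrative and not taken from the paper:

```python
# 2x2 attack-defense game. Defender rows: Monitor/Idle; attacker cols: Attack/Refrain.
def_pay = {('M','A'):  3, ('M','R'): -1, ('I','A'): -5, ('I','R'): 0}
atk_pay = {('M','A'): -4, ('M','R'):  0, ('I','A'):  5, ('I','R'): 0}

# p = P(defender monitors), chosen to make the attacker indifferent:
# p*atk(M,A) + (1-p)*atk(I,A) = p*atk(M,R) + (1-p)*atk(I,R)
num = atk_pay[('I','R')] - atk_pay[('I','A')]
den = (atk_pay[('M','A')] - atk_pay[('I','A')]) - (atk_pay[('M','R')] - atk_pay[('I','R')])
p = num / den

# q = P(attacker attacks), chosen to make the defender indifferent:
num = def_pay[('I','R')] - def_pay[('M','R')]
den = (def_pay[('M','A')] - def_pay[('M','R')]) - (def_pay[('I','A')] - def_pay[('I','R')])
q = num / den
print(f"mixed Nash equilibrium: P(monitor)={p:.3f}, P(attack)={q:.3f}")  # 0.556, 0.111
```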

Ali, Sk Subidh, Ibrahim, Mohamed, Sinanoglu, Ozgur, Chakrabarty, Krishnendu, Karri, Ramesh.  2016.  Security Assessment of Cyberphysical Digital Microfluidic Biochips. IEEE/ACM Trans. Comput. Biol. Bioinformatics. 13:445–458.

A digital microfluidic biochip (DMFB) is an emerging technology that enables miniaturized analysis systems for point-of-care clinical diagnostics, DNA sequencing, and environmental monitoring. A DMFB reduces the rate of sample and reagent consumption, and automates the analysis of assays. In this paper, we provide the first assessment of the security vulnerabilities of DMFBs. We identify result-manipulation attacks on a DMFB that maliciously alter the assay outcomes. Two practical result-manipulation attacks are shown on a DMFB platform performing enzymatic glucose assay on serum. In the first attack, the attacker adjusts the concentration of the glucose sample and thereby modifies the final result. In the second attack, the attacker tampers with the calibration curve of the assay operation. We then identify denial-of-service attacks, where the attacker can disrupt the assay operation by tampering either with the droplet-routing algorithm or with the actuation sequence. We demonstrate these attacks using a digital microfluidic synthesis simulator. The results show that the attacks are easy to implement and hard to detect. Therefore, this work highlights the need for effective protections against malicious modifications in DMFBs.
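
The second attack is easy to picture with a toy linear calibration curve (all numbers hypothetical): tampering with the curve's parameters silently biases every reported concentration.

```python
# Enzymatic assays map a measured optical signal to a concentration through a
# calibration curve; tampering with its parameters skews every result.
def concentration(signal, slope, intercept):
    """Linear calibration: concentration = slope * signal + intercept."""
    return slope * signal + intercept

signal = 0.42                                                   # hypothetical absorbance
honest = concentration(signal, slope=250.0, intercept=2.0)      # ~107 mg/dL
tampered = concentration(signal, slope=250.0, intercept=42.0)   # shifted curve
print(f"honest: {honest:.1f} mg/dL, after calibration tampering: {tampered:.1f} mg/dL")
```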

Bertino, Elisa, Choo, Kim-Kwang Raymond, Georgakopolous, Dimitrios, Nepal, Surya.  2016.  Internet of Things (IoT): Smart and Secure Service Delivery. ACM Trans. Internet Technol. 16:22:1–22:7.

The Internet of Things (IoT) is the latest Internet evolution that incorporates a diverse range of things such as sensors, actuators, and services deployed by different organizations and individuals to support a variety of applications. The information captured by the IoT presents an unprecedented opportunity to solve large-scale problems in those application domains and to deliver services; example applications include precision agriculture, environment monitoring, smart health, smart manufacturing, and smart cities. Like other Internet-based services in the past, IoT-based services are being developed and deployed without consideration of security. By nature, IoT devices and services are vulnerable to malicious cyber threats, as they cannot be given the same protection that enterprise services receive within an enterprise perimeter. While IoT services will play an important role in our daily life, resulting in improved productivity and quality of life, the trend has also “encouraged” cyber-exploitation and the evolution and diversification of malicious cyber threats. Hence, there is a need for coordinated efforts from the research community to address the resulting concerns, such as those presented in this special section. Several potential research topics are also identified in this special section.

Fremantle, Paul.  2016.  Privacy-enhancing Federated Middleware for the Internet of Things. Proceedings of the Doctoral Symposium of the 17th International Middleware Conference. :4:1–4:4.

The Internet of Things (IoT) offers new opportunities, but alongside those come many challenges for security and privacy. Most IoT devices offer no choice to users of where data is published, which data is made available and what identities are used for both devices and users. The aim of this work is to explore new middleware models and techniques that can provide users with more choice as well as enhance privacy and security. This paper outlines a new model and a prototype of a middleware system that implements this model.

2017-05-16
Fiore, Dario, Fournet, Cédric, Ghosh, Esha, Kohlweiss, Markulf, Ohrimenko, Olga, Parno, Bryan.  2016.  Hash First, Argue Later: Adaptive Verifiable Computations on Outsourced Data. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1304–1316.

Proof systems for verifiable computation (VC) have the potential to make cloud outsourcing more trustworthy. Recent schemes enable a verifier with limited resources to delegate large computations and verify their outcome based on succinct arguments: verification complexity is linear in the size of the inputs and outputs (not the size of the computation). However, cloud computing also often involves large amounts of data, which may exceed the local storage and I/O capabilities of the verifier, and thus limit the use of VC. In this paper, we investigate multi-relation hash & prove schemes for verifiable computations that operate on succinct data hashes. Hence, the verifier delegates both storage and computation to an untrusted worker. She uploads data and keeps hashes; exchanges hashes with other parties; verifies arguments that consume and produce hashes; and selectively downloads the actual data she needs to access. Existing instantiations that fit our definition either target restricted classes of computations or employ relatively inefficient techniques. Instead, we propose efficient constructions that lift classes of existing argument schemes for fixed relations to multi-relation hash & prove schemes. Our schemes (1) rely on hash algorithms that run linearly in the size of the input; (2) enable constant-time verification of arguments on hashed inputs; (3) incur minimal overhead for the prover. Their main benefit is to amortize the linear cost for the verifier across all relations with shared I/O. Concretely, compared to solutions that can be obtained from prior work, our new hash & prove constructions yield a 1,400x speed-up for provers. We also explain how to further reduce the linear verification costs by partially outsourcing the hash computation itself, obtaining a 480x speed-up when applied to existing VC schemes, even on single-relation executions.
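
The "hash first" part of the workflow, the verifier keeping only a digest while the worker keeps the data, is easy to sketch; a real hash & prove system would verify a succinct argument against the stored digest instead of re-hashing returned data, as noted in the comments.

```python
import hashlib

# The verifier keeps only a digest of the uploaded data; later she can check
# anything the worker returns against that digest before trusting it.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

dataset = b"outsourced sensor readings ..."   # large data held by the worker
local_hash = digest(dataset)                  # verifier stores 32 bytes, not the data

returned = dataset                            # data the worker later sends back
assert digest(returned) == local_hash, "worker returned tampered data"
# In a hash & prove scheme the worker instead sends a succinct argument that is
# verified directly against local_hash, so the verifier never downloads the
# data at all; a real instantiation would call a SNARK verifier here.
```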

Rieser, Denise Christine, Bernhard, Orlando.  2016.  Measuring Trust: The Simpler the Better? Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :2940–2946.

To date, the majority of existing instruments for measuring trustworthiness in an online context are based on Likert scaling [1,3,11]. These, however, are somewhat restricted in applicability. Statements formed in Likert scaling typically address one specific website; adjusting these statements for other websites can therefore be accompanied by a loss of validity. To address these limitations, we propose using a semantic differential. Research has shown that semantic differentials are appropriate for measuring multidimensional constructs [8,12] such as trust. Our novel approach to measuring trustworthiness surpasses Likert-based scaling in its ease of application across different online contexts and in its better translatability. After one pre-study and two online studies with a total of 554 participants, we developed a nine-item questionnaire that is comparable to other existing questionnaires in terms of reliability and internal consistency, while overcoming the limitations of Likert-scale-based questionnaires.
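
Since the comparison hinges on reliability and internal consistency, here is a small sketch of how internal consistency is typically computed (Cronbach's alpha) over hypothetical semantic-differential responses; the data and item count are placeholders, not the paper's dataset.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses: 6 participants x 9 semantic-differential items (1-7).
rng = np.random.default_rng(0)
base = rng.integers(2, 7, size=(6, 1))                       # per-person tendency
responses = np.clip(base + rng.integers(-1, 2, size=(6, 9)), 1, 7)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```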

Wan, Mengting, Chen, Xiangyu, Kaplan, Lance, Han, Jiawei, Gao, Jing, Zhao, Bo.  2016.  From Truth Discovery to Trustworthy Opinion Discovery: An Uncertainty-Aware Quantitative Modeling Approach. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1885–1894.

In this era of information explosion, conflicts are often encountered when information is provided by multiple sources. The traditional truth discovery task aims to identify the truth, i.e., the most trustworthy information, from conflicting sources in different scenarios. In such tasks, the truth is regarded as a fixed value or a set of fixed values. However, in a number of real-world cases, the existence of an objective truth cannot be ensured, and we can only identify single or multiple reliable facts from opinions. Departing from the traditional truth discovery task, we address this uncertainty by introducing the concept of the trustworthy opinion of an entity: we treat it as a random variable and use its distribution to describe consistency or controversy, which is particularly difficult for data that can be numerically measured, i.e., quantitative information. In this study, we focus on quantitative opinions, propose an uncertainty-aware approach called Kernel Density Estimation from Multiple Sources (KDEm) to estimate their probability distributions, and summarize trustworthy information based on these distributions. Experiments indicate that KDEm not only has outstanding performance on the classical numeric truth discovery task, but also shows good performance on multi-modality detection and anomaly detection in the uncertain-opinion setting.
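
A stripped-down illustration of the kernel-density idea (without KDEm's source-reliability weighting): estimate the opinion distribution over conflicting quantitative claims and read a trustworthy value off its mode.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Quantitative claims about one entity from many sources (e.g., a city's
# population in millions); two sources are off.
claims = np.array([8.4, 8.5, 8.4, 8.6, 8.5, 12.0, 3.1])

kde = gaussian_kde(claims)                   # estimated opinion distribution
grid = np.linspace(claims.min(), claims.max(), 1000)
density = kde(grid)

trustworthy = grid[np.argmax(density)]       # mode = consensus opinion
peakedness = density.max() / density.mean()  # crude consistency-vs-controversy cue
print(f"trustworthy value ~ {trustworthy:.2f}, peakedness {peakedness:.1f}")
```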

Mirzamohammadi, Saeed, Amiri Sani, Ardalan.  2016.  Viola: Trustworthy Sensor Notifications for Enhanced Privacy on Mobile Systems. Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services. :263–276.

Modern mobile systems such as smartphones, tablets, and wearables contain a plethora of sensors such as camera, microphone, GPS, and accelerometer. Moreover, being mobile, these systems are with the user all the time, e.g., in the user's purse or pocket. Therefore, mobile sensors can capture extremely sensitive and private information about the user including daily conversations, photos, videos, and visited locations. Such a powerful sensing capability raises important privacy concerns. To address these concerns, we believe that mobile systems must be equipped with trustworthy sensor notifications, which use indicators such as LEDs to inform the user unconditionally when the sensors are on. We present Viola, our design and implementation of trustworthy sensor notifications, in which we leverage two novel solutions. First, we deploy a runtime monitor in low-level system software, e.g., in the operating system kernel or in the hypervisor. The monitor intercepts writes to the registers of sensors and indicators, evaluates them against checks on sensor notification invariants, and rejects those that fail the checks. Second, we use formal verification methods to prove the functional correctness of the compilation of our invariant checks from a high-level language. We demonstrate the effectiveness of Viola on different mobile systems, such as Nexus 5, Galaxy Nexus, and ODROID XU4, and for various sensors and indicators, such as camera, microphone, LED, and vibrator. We demonstrate that Viola adds almost no overhead to the sensors' performance and incurs only a small power consumption overhead.
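
The invariant-checking monitor can be sketched abstractly: intercept register writes, evaluate the pending state against a notification invariant, and reject violating writes. The register names and the invariant below are invented for illustration, not Viola's actual checks.

```python
# Toy register-level monitor: intercept writes and reject any write that would
# violate the invariant "camera powered => notification LED on".
class MonitoredRegisters:
    def __init__(self):
        self.regs = {"CAMERA_PWR": 0, "LED_CTRL": 0}

    def _invariant_holds(self, pending):
        state = {**self.regs, **pending}     # state as it would be after the write
        return not (state["CAMERA_PWR"] == 1 and state["LED_CTRL"] == 0)

    def write(self, reg, value):
        if not self._invariant_holds({reg: value}):
            raise PermissionError(f"write {reg}={value} rejected by monitor")
        self.regs[reg] = value

hw = MonitoredRegisters()
hw.write("LED_CTRL", 1)       # turn the indicator on first
hw.write("CAMERA_PWR", 1)     # now the camera may start
try:
    hw.write("LED_CTRL", 0)   # attempt to hide the recording
except PermissionError as e:
    print(e)
```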

Torii, Naoya, Yamamoto, Dai, Matsumoto, Tsutomu.  2016.  Evaluation of Latch-based Physical Random Number Generator Implementation on 40 nm ASICs. Proceedings of the 6th International Workshop on Trustworthy Embedded Devices. :23–30.

In the age of the IoT (Internet of Things), random number generators play an important role in generating encryption keys and authenticating embedded equipment. The random numbers are required to be statistically uniformly distributed and unpredictable. To satisfy these requirements, a physical true random number generator (TRNG) is used. In this paper, we implement a TRNG using SR latches on a 40 nm CMOS ASIC. This TRNG generates random numbers by exclusive-ORing (XORing) the outputs of 256 SR latches. We evaluate the generated random numbers using statistical tests in accordance with BSI AIS 20/31, an IID (Independent and Identically Distributed) test, and entropy estimation in accordance with NIST SP 800-90B, while varying the supply voltage and environmental temperature within the rated values. As a result, the TRNG passed all the tests except in a few cases. From this experiment, we found that the TRNG is robust against environmental change. The power consumption is 18.8 µW at 2.5 MHz. This TRNG is suitable for embedded systems to improve security in IoT systems.
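
The XOR-combining construction is simple to simulate in software (this is a model of the idea, not the 40 nm hardware): many individually biased latch outputs are XORed, and the result is screened with a monobit-style frequency statistic.

```python
import math, random

def latch_bit(bias):
    """One metastable SR latch resolving to 1 with some bias away from 0.5."""
    return 1 if random.random() < bias else 0

def trng_bit(n_latches=256):
    """XOR of many weakly random latches pushes the combined bias toward 0.5."""
    bit = 0
    for _ in range(n_latches):
        bit ^= latch_bit(bias=0.6)   # each latch alone is noticeably biased
    return bit

bits = [trng_bit() for _ in range(10000)]
ones = sum(bits)
# Monobit (frequency) test statistic, as in NIST SP 800-22: should be small.
s = abs(2 * ones - len(bits)) / math.sqrt(len(bits))
print(f"ones fraction {ones/len(bits):.4f}, monobit statistic {s:.2f}")
```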

Matsui, Tetsuya, Yamada, Seiji.  2016.  Building Trust in PRVAs by User Inner State Transition Through Agent State Transition. Proceedings of the Fourth International Conference on Human Agent Interaction. :111–114.

In this research, we aim to suggest a method for designing trustworthy PRVAs (product recommendation virtual agents). We define an agent's trustworthiness as driven by the user's emotion and by the knowledgeableness that humans perceive in the agent. We also suggest a user inner state transition model for increasing trust. To increase trust, we aim to shift user emotion toward the positive by using emotional contagion, and to raise perceived knowledgeableness by increasing an agent's knowledge. We carried out two experiments to test this model. In experiment 1, the PRVAs recommended package tours and became highly knowledgeable in the latter half of ten recommendations. In experiment 2, the PRVAs recommended the same package tours and expressed a positive emotion in the latter half. As a result, participants' inner states transitioned as we expected, and the model proved valuable for PRVA recommendation.

Stephen, Julian James, Savvides, Savvas, Sundaram, Vinaitheerthan, Ardekani, Masoud Saeida, Eugster, Patrick.  2016.  STYX: Stream Processing with Trustworthy Cloud-based Execution. Proceedings of the Seventh ACM Symposium on Cloud Computing. :348–360.

With the advent of the Internet of Things (IoT), billions of devices are expected to continuously collect and process sensitive data (e.g., location, personal health). Due to the limited computational capacity available on IoT devices, the current de facto model for building IoT applications is to send the gathered data to the cloud for computation. While private cloud infrastructures for handling large amounts of data streams are expensive to build, using low-cost public (untrusted) cloud infrastructures for processing continuous queries, including on sensitive data, leads to concerns over data confidentiality. This paper presents STYX, a novel programming abstraction and managed runtime system that ensures confidentiality of IoT applications whilst leveraging the public cloud for continuous query processing. The key idea is to intelligently utilize partially homomorphic encryption to perform as many computationally intensive operations as possible in the untrusted cloud. STYX provides a simple abstraction to the IoT developer to hide the complexities of (1) applying complex cryptographic primitives, (2) reasoning about performance of such primitives, (3) deciding which computations can be executed in an untrusted tier, and (4) optimizing cloud resource usage. An empirical evaluation with benchmarks and case studies shows the feasibility of our approach.
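 
A minimal sketch of the partially homomorphic building block such systems rely on: with an additively homomorphic scheme (textbook Paillier below, toy key sizes), the untrusted cloud can aggregate encrypted stream values without ever seeing plaintexts. This shows the general technique, not STYX's actual runtime or key management.

```python
import math, random

# Minimal Paillier (additively homomorphic) with toy primes; a real deployment
# would use 2048-bit keys and a vetted library.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = lambda u: (u - 1) // n
    return (L(pow(c, lam, n2)) * pow(L(pow(g, lam, n2)), -1, n)) % n

# Untrusted cloud sums encrypted stream readings by multiplying ciphertexts.
readings = [72, 68, 75, 71]                 # e.g., heart-rate samples
ciphertexts = [encrypt(m) for m in readings]
enc_sum = math.prod(ciphertexts) % n2       # homomorphic addition
assert decrypt(enc_sum) == sum(readings)
print("cloud-computed encrypted sum decrypts to", decrypt(enc_sum))
```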

Oswald, David F..  2016.  Wireless Attacks on Automotive Remote Keyless Entry Systems. Proceedings of the 6th International Workshop on Trustworthy Embedded Devices. :43–44.

Modern vehicles rely on a variety of electronic systems and components. One of those components is the vehicle key. Today, a key typically implements at least three functions: mechanical locking with a key blade, the electronic immobilizer to authorise the start of the engine, and the remote keyless entry (RKE) system that allows the doors to be wirelessly (un)locked and the alarm system to be disabled. These main components of a vehicle key are shown in Figure 1. For the mechanical part of the vehicle key, it is well known that the key blade can be easily copied and that the locking cylinder can be bypassed by other means (using so-called "decoders" or simply a screwdriver). In contrast, immobilizer and RKE rely on wireless protocols to cryptographically authenticate the vehicle key to the car. Immobilizers employ radio frequency identification (RFID) transponders to carry out a challenge-response protocol over a low-range bidirectional link at a frequency of 125 kHz. In the past, researchers have revealed severe flaws in the cryptography and protocols used by immobilizers, leading to the break of the major systems Megamos, Hitag2, and DST40 [7, 6, 1]. In contrast to the immobilizer, the RKE part uses unidirectional communication (the vehicle only receives, the key fob only transmits) over a high-range wireless link with operating distances of tens to one hundred meters. These systems are based on rolling codes, which essentially transmit a counter (incremented on each button press) in a cryptographically authenticated manner. Until recently, the security of automotive RKE had been scrutinized to a lesser degree than that of immobilizers, even though vulnerabilities in similar systems have been known since 2008 with the attacks on KeeLoq [3]. Other results reported in the literature include an analytical attack on a single, outdated vehicle [2] and the so-called "RollJam" technique [5], which is based on a combination of replay and selective jamming. In 2016, it was shown that severe flaws exist in the RKE systems of major automotive manufacturers [4]. On the one hand, the VW group (Volkswagen, Seat, Skoda, Audi) based the security of their RKE system on a few global cryptographic keys, potentially affecting hundreds of millions of vehicles world-wide. By extracting these global keys from the firmware of electronic control units (ECUs) once, an adversary is able to create a duplicate of the owner's RKE fob by eavesdropping on a single rolling code. The second case study in [4] exposes new cryptographic weaknesses in the Hitag2 cipher when used for RKE. Applying a correlation-based attack, an adversary can recover the 48-bit cryptographic key by eavesdropping on four to eight rolling codes and performing a one-minute computation on a standard laptop. Again, this attack affects millions of vehicles world-wide. Manufacturers that used Hitag2 in their RKE systems include Alfa Romeo, Peugeot, Lancia, Opel, Renault, and Ford, among others. In this keynote talk, we will present the results of [4] and put them into a broader context by revisiting the history of attacks on RKE systems and automotive electronics.
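
A rolling-code scheme can be sketched generically, with HMAC standing in for the proprietary ciphers (KeeLoq, Hitag2) the talk discusses; the VW-group flaw corresponds to the per-vehicle key below being shared across an entire fleet.

```python
import hmac, hashlib

# Rolling-code sketch: the fob authenticates an incrementing counter with a key
# shared with the vehicle; the vehicle accepts a small look-ahead window.
KEY = b"per-vehicle-secret"    # the flaw in [4]: one such key reused globally
WINDOW = 16

def fob_press(counter):
    tag = hmac.new(KEY, counter.to_bytes(4, "big"), hashlib.sha256).digest()[:8]
    return counter, tag

def vehicle_accept(last_seen, counter, tag):
    expected = hmac.new(KEY, counter.to_bytes(4, "big"), hashlib.sha256).digest()[:8]
    fresh = last_seen < counter <= last_seen + WINDOW   # replay protection
    return fresh and hmac.compare_digest(tag, expected)

last = 41
print(vehicle_accept(last, *fob_press(42)))   # True: next code unlocks
print(vehicle_accept(last, *fob_press(41)))   # False: eavesdropped replay
```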

Pandey, Shishir, Vaze, Rahul.  2016.  Trustworthiness of t-Distributed Stochastic Neighbour Embedding. Proceedings of the 3rd IKDD Conference on Data Science, 2016. :17:1–17:2.

A well-known technique for embedding high-dimensional objects in two- or three-dimensional space is the t-distributed stochastic neighbour embedding (t-SNE). The t-SNE minimizes the Kullback-Leibler (KL) divergence between two probability distributions, one induced on points in the high-dimensional space and the other induced on points in the low-dimensional embedding space. In this work, we consider a more general framework using the Rényi divergence, which is parametrized by its order α; the KL divergence is the special case α → 1. We study how various Rényi divergences perform when compared to the KL divergence. We show that in terms of the metrics of trustworthiness and neighbourhood preservation, the embedding becomes better as the Rényi divergence approaches the KL divergence.
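
The quantity being varied is the Rényi divergence D_α(P||Q) = (α-1)^(-1) log Σ_i p_i^α q_i^(1-α); a few lines of numpy show it approaching the KL divergence as α → 1.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P||Q) = 1/(alpha-1) * log(sum p^alpha * q^(1-alpha))."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q))

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.4, 0.4, 0.2])
for a in (0.5, 0.9, 0.99, 0.999):
    print(f"alpha={a}: {renyi_divergence(p, q, a):.6f}")
print(f"KL limit:   {kl_divergence(p, q):.6f}")
```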

Ren, Kun, Diamond, Thaddeus, Abadi, Daniel J., Thomson, Alexander.  2016.  Low-Overhead Asynchronous Checkpointing in Main-Memory Database Systems. Proceedings of the 2016 International Conference on Management of Data. :1539–1551.

As it becomes increasingly common for transaction processing systems to operate on datasets that fit within the main memory of a single machine or a cluster of commodity machines, traditional mechanisms for guaranteeing transaction durability, which typically involve synchronous log flushes, incur increasingly unappealing costs to otherwise lightweight transactions. Many applications have turned to periodically checkpointing full database state. However, existing checkpointing methods, even those which avoid freezing the storage layer, often come with significant costs to operation throughput, end-to-end latency, and total memory usage. This paper presents Checkpointing Asynchronously using Logical Consistency (CALC), a lightweight, asynchronous technique for capturing database snapshots that does not require a physical point of consistency to create a checkpoint, and avoids conspicuous latency spikes incurred by other database snapshotting schemes. Our experiments show that CALC can capture frequent checkpoints across a variety of transactional workloads with extremely small cost to transactional throughput and low additional memory usage compared to other state-of-the-art checkpointing systems.
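
CALC's full protocol is more subtle (it establishes a virtual point of consistency rather than a physical one), but the basic trick of snapshotting without freezing writes can be sketched with copy-on-update pre-images:

```python
import copy

# Copy-on-update snapshotting: once a checkpoint begins, the first write to
# each record saves its pre-image, so the checkpointer reads a consistent
# snapshot while transactions keep committing (a simplification of CALC).
class Store:
    def __init__(self):
        self.live, self.preimages, self.snapshotting = {}, {}, False

    def begin_checkpoint(self):
        self.snapshotting, self.preimages = True, {}

    def write(self, key, value):
        if self.snapshotting and key not in self.preimages:
            self.preimages[key] = copy.deepcopy(self.live.get(key))
        self.live[key] = value

    def checkpoint_read(self, key):
        # Snapshot value = pre-image if the key changed after the checkpoint began.
        return self.preimages.get(key, self.live.get(key))

db = Store()
db.write("x", 1); db.write("y", 2)
db.begin_checkpoint()
db.write("x", 99)                    # concurrent update during checkpointing
print(db.checkpoint_read("x"), db.checkpoint_read("y"))   # -> 1 2
```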

Mendizabal, Odorico M., Dotti, Fernando Luís, Pedone, Fernando.  2016.  Analysis of Checkpointing Overhead in Parallel State Machine Replication. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :534–537.

State machine replication (SMR) is a well-established technique for building fault-tolerant systems. In part, this is explained by the simplicity of the approach and its strong consistency guarantees. Recently, several proposals have suggested parallelizing the execution of state machine replicas to achieve high throughput. Concurrent execution of commands has many implications, including for the recovery of replicas from failures. Conventional checkpointing techniques, for example, must be revisited in parallelized models. In this paper, we review parallel variations of state machine replication and discuss how checkpointing procedures apply to these models. Moreover, we evaluate through simulations the impact that checkpointing techniques have on recovery.

Wu, Hao, Mao, Jiangyun, Sun, Weiwei, Zheng, Baihua, Zhang, Hanyuan, Chen, Ziyang, Wang, Wei.  2016.  Probabilistic Robust Route Recovery with Spatio-Temporal Dynamics. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1915–1924.

Vehicle trajectories are among the most important data in location-based services, and the quality of trajectories directly affects those services. However, in real applications, trajectory data are not always sampled densely. In this paper, we study the problem of recovering the entire route between two distant consecutive locations in a trajectory. Most existing works solve the problem without using the informative historical data, or solve it in an empirical way. We claim that a data-driven and probabilistic approach is actually more suitable, as long as data sparsity can be well handled. We propose a novel route recovery system, formulated in a fully probabilistic way, which incorporates both temporal and spatial dynamics and addresses the data sparsity problems introduced by the probabilistic method. It outperforms the existing works with high accuracy (over 80%) and shows strong robustness even when the routes to be recovered are very long (about 30 road segments) or the data is very sparse.
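
One standard probabilistic formulation of route recovery treats road segments as hidden states and sparse samples as observations, then takes the most probable path. The toy Viterbi sketch below (invented map, scores, and probabilities) shows that shape, though the paper's system is considerably richer in how it learns spatial and temporal dynamics.

```python
import math

# Toy map: candidate road segments with transition ("spatial") scores and
# per-observation emission ("GPS fit") scores; Viterbi recovers the most
# probable segment sequence between two sparse trajectory samples.
states = ["s1", "s2", "s3"]
trans = {("s1","s2"): .7, ("s1","s3"): .3, ("s2","s3"): .9, ("s2","s2"): .1,
         ("s3","s3"): 1.0}
emissions = [{"s1": .9,  "s2": .05, "s3": .05},   # near first GPS fix
             {"s1": .1,  "s2": .8,  "s3": .1},
             {"s1": .05, "s2": .05, "s3": .9}]    # near second GPS fix

def viterbi():
    best = {s: (math.log(emissions[0][s]), [s]) for s in states}
    for obs in emissions[1:]:
        nxt = {}
        for s in states:
            cands = [(lp + math.log(trans.get((u, s), 1e-9)) + math.log(obs[s]),
                      path + [s]) for u, (lp, path) in best.items()]
            nxt[s] = max(cands)
        best = nxt
    return max(best.values())[1]

print("recovered route:", viterbi())   # -> ['s1', 's2', 's3']
```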

Najafi, Ali, Rudell, Jacques C., Sathe, Visvesh.  2016.  Regenerative Breaking: Recovering Stored Energy from Inactive Voltage Domains for Energy-efficient Systems-on-Chip. Proceedings of the 2016 International Symposium on Low Power Electronics and Design. :94–99.

Modern Systems-on-Chip (SoCs) frequently power off individual voltage domains to save leakage power across a variety of applications, from large-scale heterogeneous computing to ultra-low-power systems in IoT applications. However, the considerable energy stored within the capacitance of a powered-off domain is lost through leakage. In this paper, we present an approach that leverages existing voltage regulators to recover this energy from the disabled voltage domain back into the supply, using a low-overhead, all-digital runtime control system. Simulation experiments conducted in an industrial 65 nm CMOS process indicate that over 90% of the stored energy can be recovered across a range of operating voltages from 0.4 V to 1 V.
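
The energy at stake is just the capacitive storage E = ½CV². A back-of-envelope check with hypothetical numbers shows what a 90%-efficient recovery path returns:

```python
# Energy parked on a powered-off domain's capacitance, E = 1/2 * C * V^2,
# and what recovering 90% of it (the paper's figure) is worth.
C = 20e-9    # hypothetical domain capacitance: 20 nF (not from the paper)
V = 1.0      # supply voltage at power-off: 1.0 V

stored = 0.5 * C * V**2
recovered = 0.9 * stored
print(f"stored: {stored*1e9:.1f} nJ, recovered at 90%: {recovered*1e9:.1f} nJ")
# stored: 10.0 nJ, recovered: 9.0 nJ -- energy otherwise lost to leakage
```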

Yao, Chang, Agrawal, Divyakant, Chen, Gang, Ooi, Beng Chin, Wu, Sai.  2016.  Adaptive Logging: Optimizing Logging and Recovery Costs in Distributed In-memory Databases. Proceedings of the 2016 International Conference on Management of Data. :1119–1134.

By maintaining the data in main memory, in-memory databases dramatically reduce the I/O cost of transaction processing. However, for recovery purposes, in-memory systems still need to flush the log to disk, which incurs a substantial number of I/Os. Recently, command logging has been proposed to replace the traditional data log (e.g., ARIES logging) in in-memory databases. Instead of recording how the tuples are updated, command logging only tracks the transactions that are being executed, thereby effectively reducing the size of the log and improving the performance. However, when a failure occurs, all the transactions in the log after the last checkpoint must be redone sequentially and this significantly increases the cost of recovery. In this paper, we first extend the command logging technique to a distributed system, where all the nodes can perform their recovery in parallel. We show that in a distributed system, the only bottleneck of recovery caused by command logging is the synchronization process that attempts to resolve the data dependency among the transactions. We then propose an adaptive logging approach by combining data logging and command logging. The percentage of data logging versus command logging becomes a tuning knob between the performance of transaction processing and recovery to meet different OLTP requirements, and a model is proposed to guide such tuning. Our experimental study compares the performance of our proposed adaptive logging, ARIES-style data logging and command logging on top of H-Store. The results show that adaptive logging can achieve a 10x boost for recovery and a transaction throughput that is comparable to that of command logging.
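
The adaptive knob can be caricatured as a per-transaction choice driven by a cost model; the weighting rule and all fields below are invented for illustration and are not the paper's tuning model.

```python
# Per-transaction choice between a compact command-log record and a data-log
# record, steered by a knob trading run-time cost against recovery cost.
def log_record(txn, alpha=0.5):
    """alpha in [0,1]: 0 = always command logging, 1 = always data logging."""
    data_size = sum(len(k) + 8 for k in txn["writes"])   # tuple after-images
    # Data-log a transaction when its redo work dominates, weighted by alpha:
    if alpha * txn["redo_cost"] > (1 - alpha) * data_size:
        return ("DATA", dict(txn["writes"]))             # fast to recover
    return ("COMMAND", txn["name"], txn["args"])         # small, fast to write

txn = {"name": "transfer", "args": [101, 202, 50.0],
       "writes": {"acct:101": -50.0, "acct:202": +50.0}, "redo_cost": 120}
print(log_record(txn, alpha=0.2))   # cheap-to-redo regime -> command logging
print(log_record(txn, alpha=0.9))   # recovery-sensitive regime -> data logging
```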

Lee, Sungwon, Moon, Eunbae, Kim, Dongkyun.  2016.  Consistency of Path Based Upward Path Recovery Method to Reduce Path Recovery Delay for RPL. Proceedings of the International Conference on Research in Adaptive and Convergent Systems. :117–120.

In IoT (Internet of Things) networks, RPL (IPv6 Routing Protocol for Low-Power and Lossy Networks) is preferred for reducing routing overhead. In RPL, a node selects as its parent the neighbor with the lowest routing metric, and the other neighbors are stored as immediate successors. If the selected parent node is lost, the node selects a new parent among the immediate successors. However, if the new path also includes the same intermediate node that was lost in the previous path, it too fails to deliver upward packets. This procedure may repeat until a path is selected that does not include the lost intermediate node. In this paper, we therefore propose a new path recovery method that reduces this unnecessary repetition during upward path recovery. When a node receives a routing message, it calculates a hash value and sets the corresponding bit in a new field of the routing message. Based on this field, the node estimates the approximate number of ancestors shared between paths. When loss of the upward path is detected, the node selects a new path according to both this approximate number and the routing metric. As a result, a new path that does not share ancestors with the previous path is selected, and upward data transmission can resume immediately.
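
One plausible reading of the hash-and-set-bit field (an assumption on our part; the paper's exact encoding may differ) is a Bloom-filter-like bitmap of path ancestors, whose intersection popcount approximates the number of shared ancestors:

```python
import hashlib

K = 64  # bits in the routing message's ancestor field (hypothetical size)

def ancestor_field(path):
    """Each node on the upward path sets bit hash(node_id) % K."""
    field = 0
    for node in path:
        h = int(hashlib.sha256(node.encode()).hexdigest(), 16)
        field |= 1 << (h % K)
    return field

def shared_ancestors_estimate(field_a, field_b):
    """Popcount of the AND approximates the number of common ancestors."""
    return bin(field_a & field_b).count("1")

lost_path = ["root", "n4", "n7", "n9"]
alt1 = ["root", "n4", "n7", "n12"]   # still routes through the failed region
alt2 = ["root", "n2", "n5", "n11"]   # mostly disjoint alternative
f = ancestor_field(lost_path)
for name, alt in (("alt1", alt1), ("alt2", alt2)):
    print(name, "~ shared ancestors:",
          shared_ancestors_estimate(f, ancestor_field(alt)))
```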

Mokhtar, Maizura, Hunt, Ian, Burns, Stephen, Ross, Dave.  2016.  Optimising a Waste Heat Recovery System Using Multi-Objective Evolutionary Algorithm. Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion. :913–920.

A waste heat recovery system (WHRS) on a process with variable output is an example of an intermittent renewable process. A WHRS recycles waste heat into usable energy; for example, waste heat produced by refrigeration can be used to provide hot water. However, as with most intermittent renewable energy systems, the likelihood of waste heat being available at times of demand is low. For this reason, the WHRS may be coupled with a hot water reservoir (HWR) acting as an energy storage system that aims to maintain the desired hot water temperature Td (and therefore energy) at times of demand. The coupling of the WHRS and the HWR must be optimised to ensure higher efficiency given the intermittent mismatch between demand and heat availability. Efficiency of a WHRS can be defined as achieving multiple objectives, including minimising the need for back-up energy to achieve Td and minimising the waste heat not captured (when the reservoir volume Vres is too small). This paper investigates the application of a Multi-Objective Evolutionary Algorithm (MOEA) to optimise the parameters of the WHRS, including Vres and the depth of discharge (DoD), that affect WHRS efficiency. Results show that one of the optimum solutions obtained requires the combination of a high Vres, a high DoD, a low water feed-in rate, a low-power external back-up heater, and a high excess temperature for the HWR to ensure the efficiency of the WHRS.
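
The multi-objective selection step at the heart of any MOEA is Pareto nondomination. The sketch below applies it over random WHRS parameter samples with invented stand-in objective functions (a real MOEA would evolve the candidates rather than merely sample them):

```python
import random

# Random search over WHRS parameters with Pareto filtering; the two objectives
# stand in for "backup energy needed" and "waste heat not captured".
def objectives(v_res, dod):
    backup = 1.0 / (v_res * dod) + 0.02 * v_res      # toy: small/shallow needs backup
    uncaptured = max(0.0, 5.0 - v_res) + (1 - dod)   # toy: small tanks spill heat
    return backup, uncaptured

def dominates(a, b):
    """a Pareto-dominates b if it is no worse everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and a != b

candidates = [(random.uniform(0.5, 8.0), random.uniform(0.2, 0.95))
              for _ in range(500)]
scored = [(objectives(*c), c) for c in candidates]
pareto = [(f, c) for f, c in scored
          if not any(dominates(g, f) for g, _ in scored)]
for (backup, spill), (v_res, dod) in sorted(pareto)[:5]:
    print(f"Vres={v_res:.2f}, DoD={dod:.2f} -> backup={backup:.2f}, uncaptured={spill:.2f}")
```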

Koskinen, Eric, Yang, Junfeng.  2016.  Reducing Crash Recoverability to Reachability. Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. :97–108.

Software applications run on a variety of platforms (filesystems, virtual slices, mobile hardware, etc.) that do not provide 100% uptime. As such, these applications may crash at any unfortunate moment losing volatile data and, when re-launched, they must be able to correctly recover from potentially inconsistent states left on persistent storage. From a verification perspective, crash recovery bugs can be particularly frustrating because, even when it has been formally proved for a program that it satisfies a property, the proof is foiled by these external events that crash and restart the program. In this paper we first provide a hierarchical formal model of what it means for a program to be crash recoverable. Our model captures the recoverability of many real world programs, including those in our evaluation which use sophisticated recovery algorithms such as shadow paging and write-ahead logging. Next, we introduce a novel technique capable of automatically proving that a program correctly recovers from a crash via a reduction to reachability. Our technique takes an input control-flow automaton and transforms it into an encoding that blends the capture of snapshots of pre-crash states into a symbolic search for a proof that recovery terminates and every recovered execution simulates some crash-free execution. Our encoding is designed to enable one to apply existing abstraction techniques in order to do the work that is necessary to prove recoverability. We have implemented our technique in a tool called Eleven82, capable of analyzing C programs to detect recoverability bugs or prove their absence. We have applied our tool to benchmark examples drawn from industrial file systems and databases, including GDBM, LevelDB, LMDB, PostgreSQL, SQLite, VMware and ZooKeeper. Within minutes, our tool is able to discover bugs or prove that these fragments are crash recoverable.
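
The property being proved can be exercised by brute force on a small model: enumerate every crash point of a write-ahead-logging update and check that recovery always lands in a state some crash-free execution could have produced. (Eleven82 proves this via a reachability encoding instead of enumeration; the model below is ours, not the paper's.)

```python
# Enumerate every crash point of a write-ahead-logging update and check that
# recovery always yields a state reachable by some crash-free execution.
def wal_update(disk, value, crash_at):
    steps = [
        lambda d: d.__setitem__("log", value),    # 1. write intent to the log
        lambda d: d.__setitem__("data", value),   # 2. apply it to the data
        lambda d: d.__setitem__("log", None),     # 3. clear the log
    ]
    for i, step in enumerate(steps):
        if i == crash_at:
            return            # crash: the remaining steps are lost
        step(disk)

def recover(disk):
    if disk["log"] is not None:     # redo any logged-but-unapplied intent
        disk["data"] = disk["log"]
        disk["log"] = None

for crash_at in range(4):           # crash before step 0, 1, 2, or never
    disk = {"log": None, "data": "old"}
    wal_update(disk, "new", crash_at)
    recover(disk)
    assert disk == {"log": None, "data": "old"} or \
           disk == {"log": None, "data": "new"}, disk
print("every crash point recovers to a crash-free-reachable state")
```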

Gu, Tianxiao, Sun, Chengnian, Ma, Xiaoxing, Lü, Jian, Su, Zhendong.  2016.  Automatic Runtime Recovery via Error Handler Synthesis. Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering. :684–695.

Software systems are often subject to unexpected runtime errors. Automatic runtime recovery (ARR) techniques aim at recovering them from erroneous states and maintaining them functional in the field. This paper proposes Ares, a novel, practical approach to performing ARR. Our key insight is to leverage a system's already built-in error handling support to recover from unexpected errors. To this end, we synthesize error handlers via two methods: error transformation and early return. We also equip Ares with a lightweight in-vivo testing infrastructure to select the right synthesis methods and avoid potentially dangerous error handlers. Unlike existing ARR techniques based on heavyweight mechanisms (e.g., checkpoint-restart and runtime monitoring), our approach expands the intrinsic capability of runtime error resilience already existing in software systems to handle unexpected errors. Ares's lightweight mechanism makes it practical and easy to be integrated into production environments. We have implemented Ares on top of both the Java HotSpot VM and Android ART, and applied it to 52 real-world bugs. The results are promising — Ares successfully recovers from 39 of them and incurs low overhead.
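
Of the two synthesis methods, "early return" is the easier to picture: wrap the failing computation and return a type-appropriate default instead of propagating the error. The Python decorator below is a toy analogue of the idea only (Ares itself works inside the JVM/ART and vets handlers with in-vivo testing):

```python
import functools

# Sketch of the "early return" strategy: a synthesized handler returns a
# type-appropriate default instead of letting an unexpected error kill the run.
DEFAULTS = {int: 0, float: 0.0, str: "", bool: False, tuple: ()}

def synthesize_early_return(return_type):
    def wrap(fn):
        @functools.wraps(fn)
        def handler(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as e:                 # unexpected runtime error
                print(f"[recovered] {e!r}")
                return DEFAULTS[return_type]       # early return, keep running
        return handler
    return wrap

@synthesize_early_return(int)
def parse_quantity(text):
    return int(text.strip())

print(parse_quantity(" 42 "))    # 42
print(parse_quantity("oops"))    # recovered: returns 0 instead of crashing
```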