Biblio

Found 2859 results

Filters: First Letter of Last Name is H
2017-05-19
Ho, Grant, Leung, Derek, Mishra, Pratyush, Hosseini, Ashkan, Song, Dawn, Wagner, David.  2016.  Smart Locks: Lessons for Securing Commodity Internet of Things Devices. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :461–472.

We examine the security of home smart locks: cyber-physical devices that replace traditional door locks with deadbolts that can be electronically controlled by mobile devices or the lock manufacturer's remote servers. We present two categories of attacks against smart locks and analyze the security of five commercially-available locks with respect to these attacks. Our security analysis reveals that flaws in the design, implementation, and interaction models of existing locks can be exploited by several classes of adversaries, allowing them to learn private information about users and gain unauthorized home access. To guide future development of smart locks and similar Internet of Things devices, we propose several defenses that mitigate the attacks we present. One of these defenses is a novel approach to securely and usably communicate a user's intended actions to smart locks, which we prototype and evaluate. Ultimately, our work takes a first step towards illuminating security challenges in the system design and novel functionality introduced by emerging IoT systems.

Peng, Qiuyu, Walid, Anwar, Hwang, Jaehyun, Low, Steven H..  2016.  Multipath TCP: Analysis, Design, and Implementation. IEEE/ACM Trans. Netw.. 24:596–609.

Multipath TCP (MP-TCP) has the potential to greatly improve application performance by using multiple paths transparently. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties on the behavior of existing algorithms and motivate our algorithm Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel. We use our prototype to compare the new algorithm to existing MP-TCP algorithms.

Xia, Lixue, Tang, Tianqi, Huangfu, Wenqin, Cheng, Ming, Yin, Xiling, Li, Boxun, Wang, Yu, Yang, Huazhong.  2016.  Switched by Input: Power Efficient Structure for RRAM-based Convolutional Neural Network. Proceedings of the 53rd Annual Design Automation Conference. :125:1–125:6.

Convolutional Neural Network (CNN) is a powerful technique widely used in the computer vision area, but it also demands much more computation and memory resources than traditional solutions. The emerging metal-oxide resistive random-access memory (RRAM) and RRAM crossbar have shown great potential for neuromorphic applications with high energy efficiency. However, the interfaces between analog RRAM crossbars and digital peripheral functions, namely Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs), consume most of the area and energy of an RRAM-based CNN design due to the large amount of intermediate data in CNN. In this paper, we propose an energy-efficient structure for RRAM-based CNN. Based on an analysis of the data distribution, a quantization method is proposed to reduce the intermediate data to 1 bit and eliminate DACs. An energy-efficient structure using input data as selection signals is proposed to reduce the ADC cost of merging results from multiple crossbars. The experimental results show that the proposed method and structure can save 80% of the area and more than 95% of the energy while maintaining the same or comparable classification accuracy of the CNN on MNIST.
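For illustration, the quantization step described above amounts to a threshold comparison on each intermediate value. The sketch below is a minimal, generic version of 1-bit quantization of activations; the class name, the mean-based threshold, and the data are illustrative assumptions, not the distribution-driven method of the paper.

```java
import java.util.Arrays;

// Minimal sketch of threshold-based 1-bit quantization of intermediate CNN
// activations. The mean-based threshold and the sample data are assumptions
// for illustration, not the paper's quantization scheme.
public class OneBitQuantizer {
    // Quantize each activation to a single bit by comparing it to a threshold.
    static boolean[] quantize(double[] activations, double threshold) {
        boolean[] bits = new boolean[activations.length];
        for (int i = 0; i < activations.length; i++) {
            bits[i] = activations[i] >= threshold;
        }
        return bits;
    }

    public static void main(String[] args) {
        double[] activations = {0.05, 0.42, 0.91, 0.13, 0.67};
        // One plausible data-driven threshold: the mean of the activations.
        double threshold = Arrays.stream(activations).average().orElse(0.0);
        System.out.println(Arrays.toString(quantize(activations, threshold)));
    }
}
```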

He, Zhezhi, Fan, Deliang.  2016.  A Low Power Current-Mode Flash ADC with Spin Hall Effect Based Multi-Threshold Comparator. Proceedings of the 2016 International Symposium on Low Power Electronics and Design. :314–319.

Current-mode Analog-to-Digital Converters (ADCs) have drawn much attention due to their high operating speed and their immunity to power and ground noise. However, 2^n − 1 comparators are required in a traditional n-bit current-mode ADC design, leading to inevitably high power consumption and a large chip area. In this work, we propose a low-power and compact current-mode Multi-Threshold Comparator (MTC) based on the giant Spin Hall Effect (SHE). The two threshold currents of the proposed SHE-MTC are 200μA and 250μA, respectively, with a 1ns switching time. The proposed current-mode hybrid spin-CMOS flash ADC based on the SHE-MTC cuts the number of comparators almost in half (to 2^(n-1)), correspondingly reducing the required current-mirror branches, total power consumption, and chip area. Moreover, due to the non-volatility of the SHE-MTC, the front-end analog circuits can be switched off when not required, further increasing power efficiency. The device dynamics of the SHE-MTC are simulated using a numerical device model based on the Landau-Lifshitz-Gilbert (LLG) equation with Spin-Transfer Torque (STT) and SHE terms. Device-circuit co-simulation in SPICE (45nm CMOS technology) has shown that the average power dissipation of the proposed ADC is 1.9mW, operating at 500MS/s with a 1.2 V power supply. The INL and DNL are within 0.23LSB and 0.32LSB, respectively.

Hoque, Enamul.  2016.  Visual Text Analytics for Online Conversations: Design, Evaluation, and Applications. Companion Publication of the 21st International Conference on Intelligent User Interfaces. :122–125.

Analyzing and gaining insights from a large number of textual conversations can be quite challenging for a user, especially when the discussions become very long. During my doctoral research, I have focused on integrating Information Visualization (InfoVis) with Natural Language Processing (NLP) techniques to better support the user's task of exploring and analyzing conversations. For this purpose, I have designed a visual text analytics system that supports the user's exploration, starting from a possibly large set of conversations, then narrowing down to a subset of conversations, and eventually drilling down to the set of comments of one conversation. While our approach has so far been evaluated mainly through lab studies, in my ongoing and future work I plan to evaluate it via online longitudinal studies.

Hoque, Enamul, Carenini, Giuseppe.  2016.  MultiConVis: A Visual Text Analytics System for Exploring a Collection of Online Conversations. Proceedings of the 21st International Conference on Intelligent User Interfaces. :96–107.

Online conversations, such as blogs, provide a rich amount of information and opinions about popular queries. Given a query, traditional blog sites return a set of conversations often consisting of thousands of comments with a complex thread structure. Since the interfaces of these blog sites do not provide any overview of the data, it becomes very difficult for the user to explore and analyze such a large amount of conversational data. In this paper, we present MultiConVis, a visual text analytics system designed to support the exploration of a collection of online conversations. Our system tightly integrates NLP techniques for topic modeling and sentiment analysis with information visualizations, by considering the unique characteristics of online conversations. The resulting interface supports the user's exploration, starting from a possibly large set of conversations, then narrowing down to a subset of conversations, and eventually drilling down to the set of comments of one conversation. Our evaluations through case studies with domain experts and a formal user study with regular blog readers illustrate the potential benefits of our approach, when compared to a traditional blog reading interface.

Ben-Adar Bessos, Mai, Birnbach, Simon, Herzberg, Amir, Martinovic, Ivan.  2016.  Exposing Transmitters in Mobile Multi-Agent Games. Proceedings of the 2nd ACM Workshop on Cyber-Physical Systems Security and Privacy. :125–136.

We study the trade-off between the benefits obtained by communication, vs. the risks due to exposure of the location of the transmitter. To study this problem, we introduce a game between two teams of mobile agents, the P-bots team and the E-bots team. The E-bots attempt to eavesdrop and collect information, while evading the P-bots; the P-bots attempt to prevent this by performing patrol and pursuit. The game models a typical use-case of micro-robots, i.e., their use for (industrial) espionage. We evaluate strategies for both teams, using analysis and simulations.

Liu, Xiaomei, Sun, Yong, Huang, Caiyun, Zou, Xueqiang, Qin, Zhiguang.  2016.  Fast and Accurate Identification of Active Recursive Domain Name Servers in High-speed Network. Proceedings of the 2016 ACM International on Workshop on Traffic Measurements for Cybersecurity. :40–49.

Fast and accurate identification of active recursive domain name servers (RDNS) is a fundamental step in evaluating the security risk of DNS systems. Much identification work has been proposed based on network traffic measurement. Although these approaches identify RDNS accurately, they waste huge network resources and fail to capture host activity or distinguish between direct and indirect RDNS. In this paper, we propose an approach to identify direct and forward RDNS based on three key insights into their request-response behaviors, and an approach to identify indirect RDNS based on CNAME redirect behaviors. To work in high-speed backbone networks, we further propose an online connectivity estimation algorithm to obtain the estimated values used in our identification approaches. According to our experiments, we can identify RDNS with high accuracy by selecting reasonable thresholds: the accuracy of identifying direct and forward RDNS reaches 89%, and the accuracy of identifying indirect RDNS reaches 90%. Moreover, our work is capable of analyzing high-speed backbone traffic in real time.

Bhatia, Jaspreet, Breaux, Travis D., Friedberg, Liora, Hibshi, Hanan, Smullen, Daniel.  2016.  Privacy Risk in Cybersecurity Data Sharing. Proceedings of the 2016 ACM on Workshop on Information Sharing and Collaborative Security. :57–64.

As information systems become increasingly interdependent, there is an increased need to share cybersecurity data across government agencies and companies, and within and across industrial sectors. This sharing includes threat, vulnerability and incident reporting data, among other data. For cyberattacks that include sociotechnical vectors, such as phishing or watering hole attacks, this increased sharing could expose customer and employee personal data to increased privacy risk. In the US, privacy risk arises when the government voluntarily receives data from companies without meaningful consent from individuals, or without a lawful procedure that protects an individual's right to due process. In this paper, we describe a study to examine the trade-off between the need for potentially sensitive data, which we call incident data usage, and the perceived privacy risk of sharing that data with the government. The study comprises two parts: a data usage estimate built from a survey of 76 security professionals with a mean of eight years' experience; and a privacy risk estimate that measures privacy risk using an ordinal likelihood scale and nominal data types in factorial vignettes. The privacy risk estimate also factors in data purposes with different levels of societal benefit, including terrorism, imminent threat of death, economic harm, and loss of intellectual property. The results show which data types are high-usage, low-risk versus those that are low-usage, high-risk. We discuss the implications of these results and recommend future work to improve privacy when data must be shared despite the increased risk to privacy.

Hellman, Martin E..  2016.  Cybersecurity, Nuclear Security, Alan Turing, and Illogical Logic. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1–2.

My work that is being recognized by the 2015 ACM A. M. Turing Award is in cybersecurity, while my primary interest for the last thirty-five years is concerned with reducing the risk that nuclear deterrence will fail and destroy civilization. This Turing Lecture draws connections between those seemingly disparate areas as well as Alan Turing's elegant proof that the computable real numbers, while denumerable, are not effectively denumerable.

2017-05-18
Halderman, J. Alex, Schoen, Seth D., Heninger, Nadia, Clarkson, William, Paul, William, Calandrino, Joseph A., Feldman, Ariel J., Appelbaum, Jacob, Felten, Edward W..  2009.  Lest We Remember: Cold-boot Attacks on Encryption Keys. Commun. ACM. 52:91–98.

Contrary to widespread assumption, dynamic RAM (DRAM), the main memory in most modern computers, retains its contents for several seconds after power is lost, even at room temperature and even if removed from a motherboard. Although DRAM becomes less reliable when it is not refreshed, it is not immediately erased, and its contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access to a machine. It poses a particular threat to laptop users who rely on disk encryption: we demonstrate that it could be used to compromise several popular disk encryption products without the need for any special devices or materials. We experimentally characterize the extent and predictability of memory retention and report that remanence times can be increased dramatically with simple cooling techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for mitigating these risks, we know of no simple remedy that would eliminate them.

Awad, Amro, Manadhata, Pratyusa, Haber, Stuart, Solihin, Yan, Horne, William.  2016.  Silent Shredder: Zero-Cost Shredding for Secure Non-Volatile Main Memory Controllers. Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems. :263–276.

As non-volatile memory (NVM) technologies are expected to replace DRAM in the near future, new challenges have emerged. For example, NVMs have slow and power-consuming writes, and limited write endurance. In addition, NVMs have a data remanence vulnerability, i.e., they retain data for a long time after being powered off. NVM encryption alleviates the vulnerability, but exacerbates the limited endurance by increasing the number of writes to memory. We observe that, in current systems, a large percentage of main memory writes result from data shredding in operating systems, a process of zeroing out physical pages before mapping them to new processes, in order to protect previous processes' data. In this paper, we propose Silent Shredder, which repurposes initialization vectors used in standard counter mode encryption to completely eliminate the data shredding writes. Silent Shredder also speeds up reading shredded cache lines, and hence reduces power consumption and improves overall performance. To evaluate our design, we run three PowerGraph applications and 26 multi-programmed workloads from the SPEC 2006 suite, on a gem5-based full system simulator. Silent Shredder eliminates an average of 48.6% of the writes in the initialization and graph construction phases. It speeds up main memory reads by 3.3 times, and improves the number of instructions per cycle (IPC) by 6.4% on average. Finally, we discuss several use cases, including virtual machines' data isolation and user-level large data initialization, where Silent Shredder can be used effectively at no extra cost.

Hester, Josiah, Tobias, Nicole, Rahmati, Amir, Sitanayah, Lanny, Holcomb, Daniel, Fu, Kevin, Burleson, Wayne P., Sorber, Jacob.  2016.  Persistent Clocks for Batteryless Sensing Devices. ACM Trans. Embed. Comput. Syst.. 15:77:1–77:28.

Sensing platforms are becoming batteryless to enable the vision of the Internet of Things, where trillions of devices collect data, interact with each other, and interact with people. However, these batteryless sensing platforms—that rely purely on energy harvesting—are rarely able to maintain a sense of time after a power failure. This makes working with sensor data that is time sensitive especially difficult. We propose two novel, zero-power timekeepers that use remanence decay to measure the time elapsed between power failures. Our approaches compute the elapsed time from the amount of decay of a capacitive device, either on-chip Static Random-Access Memory (SRAM) or a dedicated capacitor. This enables hourglass-like timers that give intermittently powered sensing devices a persistent sense of time. Our evaluation shows that applications using either timekeeper can keep time accurately through power failures as long as 45s with low overhead.
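As a rough illustration of the hourglass idea, the elapsed off-time can be recovered by inverting a decay model of the capacitive element. The sketch below assumes a simple RC discharge curve V(t) = V0·exp(−t/RC) with made-up constants; the paper's actual timekeepers and their calibration of SRAM or capacitor decay are more involved.

```java
// Minimal sketch of an hourglass-style timekeeper: infer the time spent
// powered off from how far a capacitor has discharged, assuming an RC decay
// model V(t) = V0 * exp(-t / RC). The constants below are illustrative
// assumptions, not the calibration used in the paper.
public class RemanenceTimer {
    static final double V0 = 3.0;   // assumed initial voltage (volts)
    static final double RC = 12.0;  // assumed time constant (seconds)

    // Elapsed time recovered by inverting the decay curve: t = -RC * ln(V/V0).
    static double elapsedSeconds(double measuredVolts) {
        return -RC * Math.log(measuredVolts / V0);
    }

    public static void main(String[] args) {
        // e.g., waking up after a power failure and reading 0.9 V
        System.out.printf("~%.1f s since power loss%n", elapsedSeconds(0.9));
    }
}
```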

Hosseinzadeh, Shohreh, Virtanen, Seppo, Díaz-Rodríguez, Natalia, Lilius, Johan.  2016.  A Semantic Security Framework and Context-aware Role-based Access Control Ontology for Smart Spaces. Proceedings of the International Workshop on Semantic Big Data. :8:1–8:6.

Smart Spaces are composed of heterogeneous sensors and devices that collect and share information. This information may contain personal information of the users. Thus, securing the data and preserving privacy are of paramount importance. In this paper, we propose techniques for information security and privacy protection for Smart Spaces based on the Smart-M3 platform. We propose a) a security framework, and b) a context-aware role-based access control scheme. We model our access control scheme using ontological techniques and the Web Ontology Language (OWL), and implement it via CLIPS rules. To evaluate the efficiency of our access control scheme, we measure the time it takes to check the access rights of the access requests. The results demonstrate that the highest response time is approximately 0.2 seconds in a set of 100,000 triples. We conclude that the proposed access control scheme produces low overhead and is therefore an efficient approach for Smart Spaces.
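For readers unfamiliar with context-aware RBAC, the sketch below shows the general shape of such a check: access is granted only when the role permits the action and the current context satisfies the rule. The roles, actions, and location-based context condition are illustrative assumptions; the paper's scheme is expressed in OWL and enforced via CLIPS rules rather than in Java.

```java
import java.util.Map;
import java.util.Set;

// Generic sketch of a context-aware role-based access check: a request is
// granted only if the role permits the action AND the context condition
// (here, a simple location rule) holds. Not the paper's OWL/CLIPS scheme.
public class ContextAwareRbac {
    record Request(String role, String action, String location) {}

    static final Map<String, Set<String>> ROLE_PERMISSIONS = Map.of(
        "resident", Set.of("read_sensor", "control_light"),
        "guest",    Set.of("read_sensor"));

    static boolean isGranted(Request r) {
        boolean rolePermits = ROLE_PERMISSIONS
            .getOrDefault(r.role(), Set.of()).contains(r.action());
        boolean contextOk = r.location().equals("inside_home"); // context rule
        return rolePermits && contextOk;
    }

    public static void main(String[] args) {
        System.out.println(isGranted(new Request("guest", "read_sensor", "inside_home")));   // true
        System.out.println(isGranted(new Request("guest", "control_light", "inside_home"))); // false
    }
}
```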

Hsu, Daniel, Sabato, Sivan.  2016.  Loss Minimization and Parameter Estimation with Heavy Tails. J. Mach. Learn. Res.. 17:543–582.

This work studies applications and generalizations of a simple estimation technique that provides exponential concentration under heavy-tailed distributions, assuming only bounded low-order moments. We show that the technique can be used for approximate minimization of smooth and strongly convex losses, and specifically for least squares linear regression. For instance, our d-dimensional estimator requires just O(d log(1/δ)) random samples to obtain a constant factor approximation to the optimal least squares loss with probability 1-δ, without requiring the covariates or noise to be bounded or subgaussian. We provide further applications to sparse linear regression and low-rank covariance matrix estimation with similar allowances on the noise and covariate distributions. The core technique is a generalization of the median-of-means estimator to arbitrary metric spaces.
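The core technique named in the abstract, the median-of-means estimator, is simple to state: split the samples into groups, average each group, and return the median of those averages. The sketch below is a minimal illustration with made-up data; it is not the generalization to arbitrary metric spaces developed in the paper.

```java
import java.util.Arrays;

// Minimal sketch of the classic median-of-means estimator: split the samples
// into k groups, average each group, and return the median of the group means.
// Grouping strategy and data are illustrative.
public class MedianOfMeans {
    static double estimate(double[] samples, int groups) {
        int n = samples.length;
        double[] means = new double[groups];
        for (int g = 0; g < groups; g++) {
            int start = g * n / groups, end = (g + 1) * n / groups;
            double sum = 0;
            for (int i = start; i < end; i++) sum += samples[i];
            means[g] = sum / (end - start);
        }
        Arrays.sort(means);
        return means[groups / 2]; // median of the group means (odd group count)
    }

    public static void main(String[] args) {
        // Heavy-tailed sample: one large outlier barely moves the estimate.
        double[] samples = {1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 25.0};
        System.out.println(estimate(samples, 3)); // ~1.0 despite the outlier
    }
}
```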

Honig, William L., Noda, Natsuko, Takada, Shingo.  2016.  Lack of Attention to Singular (or Atomic) Requirements Despite Benefits for Quality, Metrics and Management. SIGSOFT Softw. Eng. Notes. 41:1–5.

There are seemingly many advantages to being able to identify, document, test, and trace single or "atomic" requirements. Why then has there been little attention to the topic and no widely used definition or process on how to define atomic requirements? Definitions of requirements and standards focus on user needs, system capabilities or functions; some definitions include making individual requirements singular or without the use of conjunctions. In a few cases there has been a description of atomic system events or requirements. This work is surveyed here although there is no well accepted and used best practice for generating atomic requirements. Due to their importance in software engineering, quality and metrics for requirements have received considerable attention. In the seminal paper on software requirements quality, Davis et al. proposed specific metrics including the "unambiguous quality factor" and the "verifiable quality factor"; these and other metrics work best with a clearly enumerable list of single requirements. Atomic requirements are defined here as a natural language statement that completely describes a single system function, feature, need, or capability, including all information, details, limits, and characteristics. A typical user login screen is used as an example of an atomic requirement which can include both functional and nonfunctional requirements. Individual atomic requirements are supported by a system glossary, references to applicable industry standards, mock ups of the user interface, etc. One way to identify such atomic requirements is from use case or system event analysis. This definition of atomic requirements is still a work in progress and offered to prompt discussion. Atomic requirements allow clear naming or numbering of requirements for traceability, change management, and importance ranking. Further, atomic requirements defined in this manner are suitable for rapid implementation approaches (implementing one requirement at a time), enable good test planning (testing can clearly indicate pass or fail of the whole requirement), and offer other management advantages in project control.

Flores, Huber, Sharma, Rajesh, Ferreira, Denzil, Luo, Chu, Kostakos, Vassilis, Tarkoma, Sasu, Hui, Pan, Li, Yong.  2016.  Social-aware Device-to-device Communication: A Contribution for Edge and Fog Computing? Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. :1466–1471.

The exploitation of the opportunistic infrastructure via Device-to-Device (D2D) communication is a critical component towards the adoption of new paradigms such as edge and fog computing. While a lot of work has demonstrated the great potential of D2D communication, it is still unclear whether the benefits of the D2D approach can really be leveraged in practice. In this paper, we develop a software sensor, namely Detector, which senses the infrastructure in proximity of a mobile user. We analyze and evaluate D2D in the wild, i.e., not in simulations. We found that in a realistic environment, a mobile device is always co-located in proximity to at least one other mobile device throughout the day. This suggests that a device can schedule task processing in coordination with other devices, potentially more powerful ones, instead of handling the processing of the tasks by itself.

Saurez, Enrique, Hong, Kirak, Lillethun, Dave, Ramachandran, Umakishore, Ottenwälder, Beate.  2016.  Incremental Deployment and Migration of Geo-distributed Situation Awareness Applications in the Fog. Proceedings of the 10th ACM International Conference on Distributed and Event-based Systems. :258–269.

Geo-distributed Situation Awareness applications are large in scale and are characterized by 24/7 data generation from mobile and stationary sensors (such as cameras and GPS devices); latency-sensitivity for converting sensed data to actionable knowledge; and elastic and bursty needs for computational resources. Fog computing [7] envisions providing computational resources close to the edge of the network, consequently reducing the latency for the sense-process-actuate cycle that exists in these applications. We propose Foglets, a programming infrastructure for the geo-distributed computational continuum represented by fog nodes and the cloud. Foglets provides APIs for a spatio-temporal data abstraction for storing and retrieving application generated data on the local nodes, and primitives for communication among the resources in the computational continuum. Foglets manages the application components on the Fog nodes. Algorithms are presented for launching application components and handling the migration of these components between Fog nodes, based on the mobility pattern of the sensors and the dynamic computational needs of the application. Evaluation results are presented for a Fog network consisting of 16 nodes using a simulated vehicular network as the workload. We show that the discovery and deployment protocol can be executed in 0.93 secs, and joining an already deployed application can be as quick as 65 ms. Also, QoS-sensitive proactive migration can be accomplished in 6 ms.

Hawkins, Byron, Demsky, Brian, Taylor, Michael B..  2016.  BlackBox: Lightweight Security Monitoring for COTS Binaries. Proceedings of the 2016 International Symposium on Code Generation and Optimization. :261–272.

After a software system is compromised, it can be difficult to understand what vulnerabilities attackers exploited. Any information residing on that machine cannot be trusted as attackers may have tampered with it to cover their tracks. Moreover, even after an exploit is known, it can be difficult to determine whether it has been used to compromise a given machine. Aviation has long used black boxes to better understand the causes of accidents, enabling improvements that reduce the likelihood of future accidents. Many attacks introduce abnormal control flows to compromise systems. In this paper, we present BlackBox, a monitoring system for COTS software. Our techniques enable BlackBox to efficiently monitor unexpected and potentially harmful control flow in COTS binaries. BlackBox constructs dynamic profiles of an application's typical control flows to filter the vast majority of expected control flow behavior, leaving us with a manageable amount of data that can be logged across the network to remote devices. Modern applications make extensive use of dynamically generated code, some of which varies greatly between executions. We introduce support for code generators that can detect security-sensitive behaviors while allowing BlackBox to avoid logging the majority of ordinary behaviors. We have implemented BlackBox in DynamoRIO. We evaluate the runtime overhead of BlackBox, and show that it can effectively monitor recent versions of Microsoft Office and Google Chrome. We show that in ROP, COOP, and state-of-the-art JIT injection attacks, BlackBox logs the pivotal actions by which the attacker takes control, and can also blacklist those actions to prevent repeated exploits.
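The profile-based filtering idea can be pictured with a toy example: record the control-flow edges observed during normal runs and log only edges that fall outside that profile. The sketch below is a deliberately simplified assumption; BlackBox operates on binary code inside DynamoRIO, not on string-labeled edges.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of profile-based control-flow filtering: keep a set of edges
// seen during normal runs and flag only edges outside the profile for logging.
// Edge representation and API are assumptions for illustration.
public class ControlFlowFilter {
    private final Set<String> profile = new HashSet<>();

    void train(String fromBlock, String toBlock) {
        profile.add(fromBlock + "->" + toBlock);
    }

    // Returns true if the edge is unexpected and should be logged remotely.
    boolean shouldLog(String fromBlock, String toBlock) {
        return !profile.contains(fromBlock + "->" + toBlock);
    }

    public static void main(String[] args) {
        ControlFlowFilter f = new ControlFlowFilter();
        f.train("parseInput", "render");
        System.out.println(f.shouldLog("parseInput", "render"));     // false: expected
        System.out.println(f.shouldLog("parseInput", "spawnShell")); // true: suspicious
    }
}
```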

Hamlet, Jason R., Lamb, Christopher C..  2016.  Dependency Graph Analysis and Moving Target Defense Selection. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :105–116.

Moving target defense (MTD) is an emerging paradigm in which system defenses dynamically mutate in order to decrease the overall system attack surface. Though the concept is promising, implementations have not been widely adopted. The field has been actively researched for over ten years, and has produced only a small number of widely adopted defenses, most notably address space layout randomization (ASLR). This is despite the fact that there currently exist a variety of moving target implementations and proofs-of-concept. We suspect that this results from the moving target controls breaking critical system dependencies from the perspectives of users and administrators, as well as making things more difficult for attackers. As a result, the impact of the controls on overall system security is not sufficient to overcome the inconvenience imposed on legitimate system users. In this paper, we analyze a successful MTD approach. We study the control's dependency graphs, showing how we use graph theoretic and network properties to predict the effectiveness of the selected control.

Huang, Waylon.  2016.  Discovering Additional Violations of Java API Invariants. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :1145–1147.

In the absence of formal specifications or test oracles, automating testing is made possible by the fact that a program must satisfy certain requirements set down by the programming language. This work describes Randoop, an automatic unit test generator which checks for invariants specified by the Java API. Randoop is able to detect violations to invariants as specified by the Java API and create error tests that reveal related bugs. Randoop is also able to produce regression tests, meant to be added to regression test suites, that capture expected behavior. We discuss additional extensions that we have made to Randoop which expands its capability for the detection of violation of specified invariants. We also examine an optimization and a heuristic for making the invariant checking process more efficient.
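A concrete example of the kind of Java API invariant such a tool checks is the equals/hashCode contract of java.lang.Object. The buggy class and the check below are a hand-written illustration, not output from Randoop or one of the violations it reports.

```java
// Illustration of a Java API invariant an automated test generator can expose:
// the equals/hashCode contract from java.lang.Object. The broken class below
// is a made-up example, not a real violation found by Randoop.
public class InvariantCheckDemo {
    // Overrides equals but not hashCode, so two equal points can end up with
    // different hash codes -- a contract violation a generated test can reveal.
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
        }
        // BUG: hashCode() is not overridden to be consistent with equals().
    }

    public static void main(String[] args) {
        Point a = new Point(1, 2), b = new Point(1, 2);
        if (a.equals(b) && a.hashCode() != b.hashCode()) {
            System.out.println("Invariant violated: equal objects, unequal hash codes");
        }
    }
}
```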

Nguyen, Anh Tuan, Hilton, Michael, Codoban, Mihai, Nguyen, Hoan Anh, Mast, Lily, Rademacher, Eli, Nguyen, Tien N., Dig, Danny.  2016.  API Code Recommendation Using Statistical Learning from Fine-grained Changes. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :511–522.

Learning and remembering how to use APIs is difficult. While code-completion tools can recommend API methods, browsing a long list of API method names and their documentation is tedious. Moreover, users can easily be overwhelmed with too much information. We present a novel API recommendation approach that taps into the predictive power of repetitive code changes to provide relevant API recommendations for developers. Our approach and tool, APIREC, is based on statistical learning from fine-grained code changes and from the context in which those changes were made. Our empirical evaluation shows that APIREC correctly recommends an API call in the first position 59% of the time, and it recommends the correct API call in the top five positions 77% of the time. This is a significant improvement over the state-of-the-art approaches by 30-160% for top-1 accuracy, and 10-30% for top-5 accuracy, respectively. Our result shows that APIREC performs well even with a one-time, minimal training dataset of 50 publicly available projects.
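To make the co-change intuition concrete, the sketch below ranks candidate API calls by how often they appeared together with the current edit in past change transactions. The data and scoring are illustrative assumptions; APIREC's statistical model over fine-grained changes and their context is considerably richer.

```java
import java.util.*;

// Crude sketch of recommending API calls from co-change history: score each
// candidate by how often it co-occurred with the edits already made, then
// rank. Transactions and API names below are made-up illustrations.
public class CoChangeRecommender {
    public static void main(String[] args) {
        // Past change transactions (each = edits that occurred together).
        List<Set<String>> history = List.of(
            Set.of("new FileReader", "new BufferedReader", "readLine"),
            Set.of("new BufferedReader", "readLine", "close"),
            Set.of("new FileReader", "close"));
        Set<String> currentEdit = Set.of("new BufferedReader");

        Map<String, Integer> scores = new HashMap<>();
        for (Set<String> tx : history) {
            if (!Collections.disjoint(tx, currentEdit)) {
                for (String api : tx) {
                    if (!currentEdit.contains(api)) scores.merge(api, 1, Integer::sum);
                }
            }
        }
        scores.entrySet().stream()
              .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
              .forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue()));
    }
}
```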

Hasan, Samir, King, Zachary, Hafiz, Munawar, Sayagh, Mohammed, Adams, Bram, Hindle, Abram.  2016.  Energy Profiles of Java Collections Classes. Proceedings of the 38th International Conference on Software Engineering. :225–236.

We created detailed profiles of the energy consumed by common operations done on Java List, Map, and Set abstractions. The results show that the alternative data types for these abstractions differ significantly in terms of energy consumption depending on the operations. For example, an ArrayList consumes less energy than a LinkedList if items are inserted at the middle or at the end, but consumes more energy than a LinkedList if items are inserted at the start of the list. To explain the results, we explored the memory usage and the bytecode executed during an operation. Expensive computation tasks in the analyzed bytecode traces appeared to have an energy impact, but memory usage did not contribute. We evaluated our profiles by using them to selectively replace Collections types used in six applications and libraries. We found that choosing the wrong Collections type, as indicated by our profiles, can cost up to 300% more energy than the most efficient choice. Our work shows that the usage context of a data structure and our measured energy profiles can be used to decide between alternative Collections implementations.
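The asymmetry described above can be reproduced qualitatively with a toy micro-benchmark: front insertions shift every element of an ArrayList but are cheap for a LinkedList. The sketch below measures wall-clock time as a crude stand-in for the paper's hardware energy measurements.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Minimal sketch of the operation-dependent asymmetry the energy profiles
// capture: inserting at the front favors LinkedList, while appending favors
// ArrayList. Wall-clock time is used here as a rough stand-in for energy.
public class CollectionsInsertDemo {
    static long timeInsertAtFront(List<Integer> list, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) list.add(0, i); // shift-heavy for ArrayList
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 30_000;
        System.out.println("ArrayList  front-insert ns: " + timeInsertAtFront(new ArrayList<>(), n));
        System.out.println("LinkedList front-insert ns: " + timeInsertAtFront(new LinkedList<>(), n));
    }
}
```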

Lin, Jerry Chun-Wei, Liu, Qiankun, Fournier-Viger, Philippe, Hong, Tzung-Pei, Zhan, Justin, Voznak, Miroslav.  2016.  An Efficient Anonymous System for Transaction Data. Proceedings of the The 3rd Multidisciplinary International Social Networks Conference on SocialInformatics 2016, Data Science 2016. :28:1–28:6.

k-anonymity is an efficient way to anonymize relational data to protect privacy against re-identification attacks. When k-anonymity is applied to transaction data, each item is treated as a quasi-identifier attribute, which creates a high-dimensionality problem and increases both the computational complexity and the information loss of anonymization. In this paper, an efficient anonymity system is designed to not only anonymize transaction data with lower information loss but also reduce the computational complexity of anonymization. An extensive experiment is carried out to show the efficiency of the designed approach compared to state-of-the-art anonymization algorithms in terms of runtime and information loss. The experimental results indicate that the proposed anonymous system outperforms the compared algorithms in all respects.
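As background, the k-anonymity property itself is easy to state: every combination of quasi-identifier values must be shared by at least k records. The check below illustrates only that property with made-up records; it is not the anonymization algorithm proposed in the paper.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the k-anonymity property: every quasi-identifier
// combination must be shared by at least k records. This only checks the
// property; it is not the anonymization system proposed in the paper.
public class KAnonymityCheck {
    static boolean isKAnonymous(List<String> quasiIdentifiers, int k) {
        Map<String, Integer> counts = new HashMap<>();
        for (String qi : quasiIdentifiers) counts.merge(qi, 1, Integer::sum);
        return counts.values().stream().allMatch(c -> c >= k);
    }

    public static void main(String[] args) {
        // Each string stands for one record's generalized quasi-identifier values.
        List<String> records = List.of("30s|F|NYC", "30s|F|NYC", "40s|M|LA", "40s|M|LA");
        System.out.println(isKAnonymous(records, 2)); // true: each group has >= 2 records
    }
}
```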

Haitzer, Thomas, Navarro, Elena, Zdun, Uwe.  2015.  Architecting for Decision Making About Code Evolution. Proceedings of the 2015 European Conference on Software Architecture Workshops. :52:1–52:7.

During software evolution, it is important to evolve not only the source code, but also its architecture to prevent architecture drift and architecture erosion. This is a complex activity, especially for large software projects with multiple development teams that might be located in different countries or on different continents. To ease this kind of evolution, we have developed a domain-specific language for making decisions about the evolution. It supports the definition of architectural changes based on multiple implementation tasks that can have temporal dependencies among each other. Then, by means of a model-to-model transformation, we automatically create a constraint model that we use, via the Alloy model analyzer, to generate the possible alternative decisions for executing the implementation tasks. The tight integration with architecture abstractions enables architects to automatically check the changes related to an implementation task against the architecture description. This helps keep the architecture and code in sync, avoiding drift and erosion.