Das, Subhasis, Aamodt, Tor M., Dally, William J.  2015.  Reuse Distance-Based Probabilistic Cache Replacement. ACM Trans. Archit. Code Optim. 12:33:1–33:22.

This article proposes the Probabilistic Replacement Policy (PRP), a novel replacement policy that evicts the line with the minimum estimated hit probability under optimal replacement, instead of the line with the maximum expected reuse distance. The latter is optimal under the independent reference model of programs, which does not hold for last-level caches (LLC). PRP requires 7% and 2% metadata overheads in the cache and DRAM, respectively. A sampling scheme makes the DRAM overhead negligible, with minimal performance impact. With detailed overhead modeling and equal cache areas, PRP outperforms SHiP, a state-of-the-art LLC replacement algorithm, by 4% for memory-intensive SPEC-CPU2006 benchmarks.
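
To make the eviction rule concrete, here is a minimal Java sketch of PRP-style victim selection: evict the line whose estimated hit probability is lowest. The CacheLine fields and the hitProbability() estimator are illustrative stand-ins; the paper derives the estimate from reuse-distance metadata, not from this formula.

```java
import java.util.List;

// Minimal sketch of the eviction rule described above: evict the line whose
// estimated probability of a future hit (under optimal replacement) is lowest.
// Names and the estimator are illustrative, not taken from the paper.
class PrpSketch {
    static class CacheLine {
        long tag;
        int observedReuseDistance; // fed into the probability model
    }

    // Placeholder estimator: in PRP this would come from sampled
    // reuse-distance distributions kept as cache/DRAM metadata.
    static double hitProbability(CacheLine line) {
        return 1.0 / (1 + line.observedReuseDistance);
    }

    static CacheLine selectVictim(List<CacheLine> set) {
        CacheLine victim = set.get(0);
        for (CacheLine line : set) {
            if (hitProbability(line) < hitProbability(victim)) {
                victim = line; // minimum estimated hit probability is evicted
            }
        }
        return victim;
    }
}
```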

Huang, Waylon.  2016.  Discovering Additional Violations of Java API Invariants. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :1145–1147.

In the absence of formal specifications or test oracles, automated testing is made possible by the fact that a program must satisfy certain requirements set down by the programming language. This work describes Randoop, an automatic unit test generator that checks for invariants specified by the Java API. Randoop detects violations of invariants specified by the Java API and creates error tests that reveal the related bugs. Randoop also produces regression tests, meant to be added to regression test suites, that capture expected behavior. We discuss additional extensions that we have made to Randoop which expand its capability to detect violations of specified invariants. We also examine an optimization and a heuristic for making the invariant checking process more efficient.
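
As an illustration of the kind of Java API invariants such a generator can check, the following sketch spells out three standard Object contracts; the helper methods are hypothetical and not Randoop's actual internals.

```java
// Illustrative versions of the kinds of Java API contracts a tool like
// Randoop checks during test generation (the Object.equals/hashCode
// invariants). The checker methods here are hypothetical.
final class ContractChecks {
    // equals must be reflexive: o.equals(o) is true
    static boolean equalsReflexive(Object o) {
        return o.equals(o);
    }

    // equal objects must have equal hash codes
    static boolean hashCodeConsistent(Object a, Object b) {
        return !a.equals(b) || a.hashCode() == b.hashCode();
    }

    // o.equals(null) must return false, per the Object.equals specification
    static boolean equalsNullIsFalse(Object o) {
        return !o.equals(null);
    }
}
```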

Nadi, Sarah, Krüger, Stefan, Mezini, Mira, Bodden, Eric.  2016.  Jumping Through Hoops: Why Do Java Developers Struggle with Cryptography APIs? Proceedings of the 38th International Conference on Software Engineering. :935–946.

To protect sensitive data processed by current applications, developers, whether security experts or not, have to rely on cryptography. While cryptography algorithms have become increasingly advanced, many data breaches occur because developers do not correctly use the corresponding APIs. To guide future research into practical solutions to this problem, we perform an empirical investigation into the obstacles developers face while using the Java cryptography APIs, the tasks they use the APIs for, and the kind of (tool) support they desire. We triangulate data from four separate studies that include the analysis of 100 StackOverflow posts, 100 GitHub repositories, and survey input from 48 developers. We find that while developers find it difficult to use certain cryptographic algorithms correctly, they feel surprisingly confident in selecting the right cryptography concepts (e.g., encryption vs. signatures). We also find that the APIs are generally perceived to be too low-level and that developers prefer more task-based solutions.
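
A typical instance of the misuse problem the study documents: the Java crypto API lets a developer request "AES" and silently fall back to provider defaults (ECB mode on the default JDK provider), whereas safe use spells out an authenticated mode. A hedged sketch:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Sketch of a common misuse and its fix. "AES" alone lets the provider pick
// defaults (ECB/PKCS5Padding on the default JDK provider), which leaks
// plaintext patterns; an explicit authenticated mode avoids this.
public class AesExample {
    public static byte[] encrypt(byte[] plaintext) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        // Misuse: Cipher.getInstance("AES") -> "AES/ECB/PKCS5Padding";
        // deterministic and unauthenticated.
        // Cipher bad = Cipher.getInstance("AES");

        // Safer: explicit AES-GCM with a fresh random 96-bit IV.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(plaintext);
    }
}
```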

Amann, Sven, Nadi, Sarah, Nguyen, Hoan A., Nguyen, Tien N., Mezini, Mira.  2016.  MUBench: A Benchmark for API-misuse Detectors. Proceedings of the 13th International Conference on Mining Software Repositories. :464–467.

Over the last few years, researchers proposed a multitude of automated bug-detection approaches that mine a class of bugs that we call API misuses. Evaluations on a variety of software products show both the omnipresence of such misuses and the ability of the approaches to detect them. This work presents MuBench, a dataset of 89 API misuses that we collected from 33 real-world projects and a survey. With the dataset we empirically analyze the prevalence of API misuses compared to other types of bugs, finding that they are rare, but almost always cause crashes. Furthermore, we discuss how to use it to benchmark and compare API-misuse detectors.
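
For a flavor of what an API misuse looks like, here is an illustrative example of a classic pattern violation (ours, not taken from the MUBench dataset): calling Iterator.next() without a preceding hasNext() check crashes on empty input, matching the observation that misuses are rare but crash-prone.

```java
import java.util.Iterator;
import java.util.List;

// An illustrative API misuse: Iterator.next() called without checking
// hasNext() first throws NoSuchElementException on an empty collection.
class IteratorMisuse {
    static String firstElement(List<String> items) {
        Iterator<String> it = items.iterator();
        // Misuse: return it.next();  // crashes if the list is empty
        return it.hasNext() ? it.next() : null; // correct usage pattern
    }
}
```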

Nguyen, Trong Duc, Nguyen, Anh Tuan, Nguyen, Tien N..  2016.  Mapping API Elements for Code Migration with Vector Representations. Proceedings of the 38th International Conference on Software Engineering Companion. :756–758.

Problem. Code migration between languages is challenging partly because different languages require developers to use different software libraries and frameworks. For example, in Java the Java Development Kit (JDK) is a popular toolkit, while .NET is the main framework used in C# software development. Code migration requires not only mappings between language constructs (e.g., statements, expressions) but also mappings among the APIs of the libraries/frameworks used in the two languages. For example, to write to a file in Java one can use FileWriter.write, and in C# one can achieve the same function with StreamWriter.Write. Such a mapping is called an API mapping.
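
Writing the abstract's own example out as code makes the mapping concrete: the Java method below corresponds to the C# fragment in the comment, and FileWriter.write / StreamWriter.Write is exactly the pair a migration tool must learn to align.

```java
import java.io.FileWriter;
import java.io.IOException;

// The API mapping from the abstract, written out. A migration tool must
// learn that Java's FileWriter.write maps to C#'s StreamWriter.Write.
class FileWriteExample {
    static void save(String path, String text) throws IOException {
        try (FileWriter writer = new FileWriter(path)) {
            writer.write(text);
        }
        // C# equivalent (the mapped API):
        // using (var writer = new StreamWriter(path)) {
        //     writer.Write(text);
        // }
    }
}
```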

Indela, Soumya, Kulkarni, Mukul, Nayak, Kartik, Dumitras, Tudor.  2016.  Helping Johnny Encrypt: Toward Semantic Interfaces for Cryptographic Frameworks. Proceedings of the 2016 ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software. :180–196.

Several mature cryptographic frameworks are available, and they have been utilized for building complex applications. However, developers often use these frameworks incorrectly and introduce security vulnerabilities. This is because current cryptographic frameworks erode abstraction boundaries, as they do not encapsulate all the framework-specific knowledge and expect developers to understand security attacks and defenses. Starting from the documented misuse cases of cryptographic APIs, we infer five developer needs and we show that a good API design would address these needs only partially. Building on this observation, we propose APIs that are semantically meaningful for developers, we show how these interfaces can be implemented consistently on top of existing frameworks using novel and known design patterns, and we propose build management hooks for isolating security workarounds needed during the development and test phases. Through two case studies, we show that our APIs can be utilized to implement non-trivial client-server protocols and that they provide a better separation of concerns than existing frameworks. We also discuss the challenges and potential approaches for evaluating our solution. Our semantic interfaces represent a first step toward preventing misuses of cryptographic APIs.
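
As a sketch of what a semantically meaningful, task-based interface might look like (the names here are our illustration, not the paper's actual API), the developer states the goal rather than choosing ciphers, modes, and IVs:

```java
// Hypothetical sketch, in the spirit of the paper: a goal-oriented surface
// ("protect data for storage") instead of cipher-and-mode plumbing. The
// interface and method names are illustrative assumptions.
interface ProtectedStorage {
    byte[] seal(byte[] plaintext); // encrypt-then-authenticate internally
    byte[] unseal(byte[] sealed);  // verify, then decrypt
}
```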

Nguyen, Anh Tuan, Hilton, Michael, Codoban, Mihai, Nguyen, Hoan Anh, Mast, Lily, Rademacher, Eli, Nguyen, Tien N., Dig, Danny.  2016.  API Code Recommendation Using Statistical Learning from Fine-grained Changes. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :511–522.

Learning and remembering how to use APIs is difficult. While code-completion tools can recommend API methods, browsing a long list of API method names and their documentation is tedious. Moreover, users can easily be overwhelmed with too much information. We present a novel API recommendation approach that taps into the predictive power of repetitive code changes to provide relevant API recommendations for developers. Our approach and tool, APIREC, is based on statistical learning from fine-grained code changes and from the context in which those changes were made. Our empirical evaluation shows that APIREC correctly recommends an API call in the first position 59% of the time, and it recommends the correct API call in the top five positions 77% of the time. This is a significant improvement over state-of-the-art approaches: 30-160% better top-1 accuracy and 10-30% better top-5 accuracy. Our results show that APIREC performs well even with a one-time, minimal training dataset of 50 publicly available projects.

Tan, Antoine Tran, Kaiser, Hartmut.  2016.  Extending C++ with Co-array Semantics. Proceedings of the 3rd ACM SIGPLAN International Workshop on Libraries, Languages, and Compilers for Array Programming. :63–68.

The current trend for large scientific computing problems is to align as much as possible with a Single Program, Multiple Data (SPMD) scheme when the application algorithms are conducive to parallelization and vectorization. This reduces the complexity of the code because the processors (or computational nodes) perform the same instructions, which allows for better performance as algorithms work on local data sets instead of continuously transferring data from one locality to another. However, certain applications, such as stencil problems, demonstrate the need to move data to or from remote localities. This involves an additional degree of complexity, as one must know with which localities to exchange data. In order to solve this issue, Fortran has extended its scalar element indexing approach to distributed structures of elements. In this extension, a structure of scalar elements is attributed a "co-index" and lives in a specific locality. A co-index provides the application with enough information to retrieve the corresponding data reference. In C++, containers present themselves as a "smarter" alternative to Fortran arrays, but there are still no corresponding standardized features similar to the Fortran co-indexing approach. In this paper, we present an implementation of such features in HPX, a general purpose C++ runtime system for applications of any scale. We describe how the combination of the HPX features and the current C++ Standard makes it easy to define a high performance API similar to Co-Array Fortran.

Hasan, Samir, King, Zachary, Hafiz, Munawar, Sayagh, Mohammed, Adams, Bram, Hindle, Abram.  2016.  Energy Profiles of Java Collections Classes. Proceedings of the 38th International Conference on Software Engineering. :225–236.

We created detailed profiles of the energy consumed by common operations done on Java List, Map, and Set abstractions. The results show that the alternative data types for these abstractions differ significantly in energy consumption depending on the operations. For example, an ArrayList consumes less energy than a LinkedList if items are inserted in the middle or at the end, but more energy than a LinkedList if items are inserted at the start of the list. To explain the results, we explored the memory usage and the bytecode executed during each operation. Expensive computation tasks in the analyzed bytecode traces appeared to have an energy impact, whereas memory usage did not contribute. We evaluated our profiles by using them to selectively replace Collections types used in six applications and libraries. We found that choosing the wrong Collections type, as indicated by our profiles, can cost up to 300% more energy than the most efficient choice. Our work shows that the usage context of a data structure and our measured energy profiles can be used to decide between alternative Collections implementations.
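
The usage contexts the profiles distinguish are easy to reproduce; the sketch below contrasts the two insertion patterns mentioned above, whose asymptotic costs explain why the energy rankings flip.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Inserting at index 0 forces an ArrayList to shift every element (O(n)
// per insert), while a LinkedList just relinks its head (O(1)); at the end
// the situation reverses in ArrayList's favor. Which variant costs less
// energy in each context is what the measured profiles quantify.
class InsertionContexts {
    static List<Integer> frontInserts(int n) {
        List<Integer> list = new LinkedList<>(); // cheap add(0, e)
        for (int i = 0; i < n; i++) list.add(0, i);
        return list;
    }

    static List<Integer> backInserts(int n) {
        List<Integer> list = new ArrayList<>(); // cheap add(e) at the end
        for (int i = 0; i < n; i++) list.add(i);
        return list;
    }
}
```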

Lin, Ziyi, Zhong, Hao, Chen, Yuting, Zhao, Jianjun.  2016.  LockPeeker: Detecting Latent Locks in Java APIs. Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering. :368–378.

Detecting lock-related defects has long been a hot research topic in software engineering. Many efforts have been spent on detecting such deadlocks in concurrent software systems. However, latent locks may be hidden in application programming interface (API) methods whose source code may not be accessible to developers. Many APIs have latent locks: for example, our study has shown that J2SE alone can have 2,000+ of them. As latent locks are less known to developers, they can cause deadlocks that are hard to perceive or diagnose. Meanwhile, state-of-the-art tools mostly handle API methods as black boxes and cannot detect deadlocks that involve such latent locks. In this paper, we propose a novel black-box testing approach, called LockPeeker, that reveals latent locks in Java APIs. The essential idea of LockPeeker is that the latent locks of a given API method can be revealed by testing the method and summarizing its locking effects during test execution. We have evaluated LockPeeker on ten real-world Java projects. Our evaluation results show that (1) LockPeeker detects 74.9% of latent locks in API methods, and (2) it enables state-of-the-art tools to detect deadlocks that otherwise could not be detected.
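
A minimal illustration of a latent lock, assuming nothing beyond the JDK: Hashtable's methods synchronize on the table's own monitor, so a caller holding another lock silently acquires two locks, and an opposite acquisition order elsewhere can deadlock.

```java
import java.util.Hashtable;

// Hashtable.put is synchronized internally, so update() acquires two locks
// even though its source mentions only one. If another thread takes the
// same two monitors in the opposite order, the program can deadlock.
class LatentLockExample {
    private final Object appLock = new Object();
    private final Hashtable<String, String> table = new Hashtable<>();

    void update(String k, String v) {
        synchronized (appLock) {
            table.put(k, v); // hidden second lock: the Hashtable's monitor
        }
    }

    void reverseOrder(String k, String v) {
        synchronized (table) {          // explicit lock on the same monitor...
            synchronized (appLock) {    // ...then appLock: opposite order
                table.put(k, v);
            }
        }
    }
}
```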

Gyori, Alex, Lambeth, Ben, Shi, August, Legunsen, Owolabi, Marinov, Darko.  2016.  NonDex: A Tool for Detecting and Debugging Wrong Assumptions on Java API Specifications. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :993–997.

We present NonDex, a tool for detecting and debugging wrong assumptions on Java APIs. Some APIs have underdetermined specifications to allow implementations to achieve different goals, e.g., to optimize performance. When clients of such APIs assume stronger-than-specified guarantees, the resulting client code can fail. For example, HashSet’s iteration order is underdetermined, and code assuming some implementation-specific iteration order can fail. NonDex helps to proactively detect and debug such wrong assumptions. NonDex performs detection by randomly exploring different behaviors of underdetermined APIs during test execution. When a test fails during exploration, NonDex searches for the invocation instance of the API that caused the failure. NonDex is open source, well-integrated with Maven, and also runs from the command line. During our experiments with the NonDex Maven plugin, we detected 21 new bugs in eight Java projects from GitHub, and, using the debugging feature of NonDex, we identified the underlying wrong assumptions for these 21 new bugs and 54 previously detected bugs. We opened 13 pull requests; developers already accepted 12, and one project changed the continuous-integration configuration to run NonDex on every push. The demo video is at: https://youtu.be/h3a9ONkC59c
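
The HashSet example from the abstract, written out: a test that bakes in one iteration order passes or fails depending on exactly the underdetermined behavior NonDex randomizes.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// HashSet's iteration order is underdetermined, so asserting a specific
// order is a wrong assumption. LinkedHashSet (insertion order) or TreeSet
// (sorted order) make the intended order explicit.
class IterationOrderAssumption {
    static String joined() {
        Set<String> set = new HashSet<>(Arrays.asList("a", "b", "c"));
        return String.join(",", set);
    }
    // Fragile: assertEquals("a,b,c", joined());  // order is not guaranteed
}
```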

Fowkes, Jaroslav, Sutton, Charles.  2016.  Parameter-free Probabilistic API Mining Across GitHub. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :254–265.

Existing API mining algorithms can be difficult to use, as they require expensive parameter tuning, and the returned set of API calls can be large, highly redundant, and difficult to understand. To address this, we present PAM (Probabilistic API Miner), a near parameter-free probabilistic algorithm for mining the most interesting API call patterns. We show that PAM significantly outperforms both MAPO and UPMiner, achieving 69% test-set precision at retrieving relevant API call sequences from GitHub. Moreover, we focus on libraries for which the developers have explicitly provided code examples, yielding over 300,000 LOC of hand-written API example code from the 967 client projects in the data set. This evaluation suggests that the hand-written examples actually have limited coverage of real API usages.

Gu, Xiaodong, Zhang, Hongyu, Zhang, Dongmei, Kim, Sunghun.  2016.  Deep API Learning. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :631–642.

Developers often wonder how to implement a certain functionality (e.g., how to parse XML files) using APIs. Obtaining an API usage sequence based on an API-related natural language query is very helpful in this regard. Given a query, existing approaches utilize information retrieval models to search for matching API sequences. These approaches treat queries and APIs as bags-of-words and lack a deep understanding of the semantics of the query. We propose DeepAPI, a deep learning based approach to generate API usage sequences for a given natural language query. Instead of a bag-of-words assumption, it learns the sequence of words in a query and the sequence of associated APIs. DeepAPI adapts a neural language model named RNN Encoder-Decoder. It encodes a word sequence (user query) into a fixed-length context vector, and generates an API sequence based on the context vector. We also augment the RNN Encoder-Decoder by considering the importance of individual APIs. We empirically evaluate our approach with more than 7 million annotated code snippets collected from GitHub. The results show that our approach generates largely accurate API sequences and outperforms the related approaches.

Venkat, Ashish, Shamasunder, Sriskanda, Shacham, Hovav, Tullsen, Dean M..  2016.  HIPStR: Heterogeneous-ISA Program State Relocation. Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems. :727–741.

Heterogeneous Chip Multiprocessors have been shown to provide significant performance and energy efficiency gains over homogeneous designs. Recent research has expanded the dimensions of heterogeneity to include diverse Instruction Set Architectures, called Heterogeneous-ISA Chip Multiprocessors. This work leverages such an architecture to realize substantial new security benefits, and in particular, to thwart Return-Oriented Programming. This paper proposes a novel security defense called HIPStR – Heterogeneous-ISA Program State Relocation – that performs dynamic randomization of run-time program state, both within and across ISAs. This technique outperforms the state-of-the-art just-in-time code reuse (JIT-ROP) defense by an average of 15.6%, while simultaneously providing greater security guarantees against classic return-into-libc, ROP, JOP, brute force, JIT-ROP, and several evasive variants.

Karimian, Nima, Wortman, Paul A., Tehranipoor, Fatemeh.  2016.  Evolving Authentication Design Considerations for the Internet of Biometric Things (IoBT). Proceedings of the Eleventh IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis. :10:1–10:10.

The Internet of Things (IoT) is an embedded-system design paradigm that connects a variety of devices, sensors, and physical objects to a larger network (e.g., the Internet), which requires human-to-human or human-to-computer interaction. While the IoT is expected to expand the user's connectivity and everyday convenience, there are serious security considerations that come into account when using the IoT for distributed authentication. Furthermore, the incorporation of biometrics into IoT design brings about concerns of cost and of implementing a 'user-friendly' design. In this paper, we focus on the use of electrocardiogram (ECG) signals to implement distributed biometric authentication within an IoT system model. Our observations show that ECG biometrics are highly reliable, more secure, and easier to implement than other biometrics.

Lee, Seung Ji.  2016.  Citywide Management of Media Facades: Case Study of Seoul City. Proceedings of the 3rd Conference on Media Architecture Biennale. :11:1–11:4.

Due to the evolution of LED lighting and information technology, the application of media facades has expanded rapidly. Despite their positive aspects, the growth of media facades can cause light pollution and add to the visual confusion of the city. This study analyzes the case of Seoul, which implements citywide management of media facades through a master plan. Through this, the study aims to investigate the meaning of citywide management of media facades installed on individual buildings. First, it investigates the conditions of media facades in Seoul. The identified problems prove the necessity of citywide management of media facades. Second, it analyzes the progress of media facade regulation in Seoul. The management target has shifted from restraining the indiscreet installation of individual media facades to encouraging attractive media facades for the city as a whole. To this end, the Seoul government drafted the 'Seoul Media Facade Management MasterPlan' to establish citywide management. Third, it analyzes the MasterPlan. The management tools in the MasterPlan are classified into regional management, elemental management, and specialization plans, each with detailed approaches. Finally, the study discusses the meaning of citywide management, given that media facades are a cultural asset of the city, that regional differentiation is adopted, and that continuous maintenance of both the hardware and the content is important. Media facades utilizing the facades of buildings are recognized as an element of the urban landscape that secures publicness, contributes to the vitalization of the area, and provides pleasure to the citizens.

Fedosov, Anton, Ojala, Jarno, Niforatos, Evangelos, Olsson, Thomas, Langheinrich, Marc.  2016.  Mobile First?: Understanding Device Usage Practices in Novel Content Sharing Services. Proceedings of the 20th International Academic Mindtrek Conference. :198–207.

Today's mobile app economy has greatly expanded the types of "things" people can share, spanning from new types of digital content like physiological data (e.g., workouts) to physical things like apartments and work tools (the "sharing economy"). To understand whether mobile platforms provide adequate support for such novel sharing services, we surveyed 200 participants about their experiences with six types of emergent sharing services. For each domain we elicited device usage practices and identified corresponding device selection criteria. Our analysis suggests that, despite contemporary mobile-first design efforts, the desktop interfaces of emergent content sharing services are often considered more efficient and easier to use, both for sharing and for access control tasks (i.e., privacy). Based on our findings, we outline device-related design and research opportunities in this space.

Schweitzer, Nadav, Stulman, Ariel, Shabtai, Asaf.  2016.  Neighbor Contamination to Achieve Complete Bottleneck Control. Proceedings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems. :247–253.

Black-holes, gray-holes, and wormholes are devastating to the correct operation of any network. These attacks (among others) are based on the premise that packets will travel through compromised nodes, and methods exist to coax routing into these traps. Detection of these attacks is mainly centered around finding the subversion in action. In networks, bottleneck nodes (those that sit on many potential routes between sender and receiver) are an optimal location for compromise. Finding naturally occurring path bottlenecks, however, does not entail network subversion, and as such they are more difficult to detect. The dynamic nature of mobile ad-hoc networks (MANETs) makes ubiquitous routing algorithms even more susceptible to this class of attacks. Exploiting perceived bottlenecks in an OLSR-based MANET captures between 50% and 75% of the data. In this paper we propose a method of subtly expanding perceived bottlenecks into complete bottlenecks, raising the capture rate up to 99%, albeit at high cost. We further tune the method to reduce cost, and measure the corresponding capture rate.

Dupuis, Marc, Khadeer, Samreen.  2016.  Curiosity Killed the Organization: A Psychological Comparison Between Malicious and Non-Malicious Insiders and the Insider Threat. Proceedings of the 5th Annual Conference on Research in Information Technology. :35–40.

Insider threats remain a significant problem within organizations, especially as industries that rely on technology continue to grow. Traditionally, research has focused on the malicious insider: someone who intentionally seeks to perform a malicious act against the organization that trusts him or her. While this research is important, organizations are more commonly the victims of non-malicious insiders. These are trusted employees who are not seeking to cause harm to their employer; rather, they misuse systems, either intentionally or unintentionally, in ways that result in some harm to the organization. In this paper, we look at both by developing and validating instruments to measure the behavior and circumstances of a malicious insider versus a non-malicious insider. We found that in many respects their psychological profiles are very similar. The results are also consistent with other research on the malicious insider from a personality standpoint. We expand on this and also find that trait negative affect, both its higher-order dimension and its lower-order dimensions, is highly correlated with insider threat behavior and circumstances. This paper makes five significant contributions: 1) development and validation of survey instruments designed to measure the insider threat; 2) comparison of the malicious insider with the non-malicious insider; 3) inclusion of trait affect as part of the psychological profile of an insider; 4) inclusion of a measure for financial well-being; and 5) the successful use of survey research to examine the insider threat problem.

Landwehr, Carl E.  2016.  How Can We Enable Privacy in an Age of Big Data Analytics? Proceedings of the 2016 ACM International Workshop on Security And Privacy Analytics. :47–47.

Even though some seem to think privacy is dead, we are all still wearing clothes, as Bruce Schneier observed at a recent conference on surveillance [1]. Yet big data and big data analytics are leaving some of us feeling a bit more naked than before. This talk will provide some personal observations on privacy today and then outline some research areas where progress is needed to enable society to gain the benefits of analyzing large datasets without giving up more privacy than necessary. Not since the early 1970s, when computing pioneer Willis Ware chaired the committee that produced the initial Fair Information Practice Principles [2], has privacy been so much in the U.S. public eye. Snowden's revelations, as well as a growing awareness that merely living our lives seems to generate an expanding "digital exhaust," have triggered many workshops and meetings. A national strategy for privacy research is in preparation by a Federal interagency group. The ability to analyze large datasets rapidly and to extract commercially useful insights from them is spawning new industries. Must this industrial growth come at the cost of substantial privacy intrusions?

Boehm, Hans-J., Chakrabarti, Dhruva R..  2016.  Persistence Programming Models for Non-volatile Memory. Proceedings of the 2016 ACM SIGPLAN International Symposium on Memory Management. :55–67.

It is expected that DRAM memory will be augmented, and perhaps eventually replaced, by one of several up-and-coming memory technologies. These are all non-volatile, in that they retain their contents without power. This allows primary memory to be used as a fast disk replacement. It also enables more aggressive programming models that directly leverage the persistence of primary memory. However, it is challenging to maintain consistency of memory in such an environment. There is no consensus on the right programming model for doing so, and subtle differences can have large, and sometimes surprising, effects on the implementation and its performance. The existing literature describes multiple programming systems that provide point solutions for selective persistence of user data structures. Real progress in this area requires a choice of programming model, which we cannot reasonably make without a real understanding of the design space. Point solutions are insufficient. We systematically explore what we consider to be the most promising part of the space, precisely defining semantics and identifying implementation costs. This allows us to be much more explicit and precise about semantic and implementation trade-offs that were usually glossed over in prior work. It also exposes some promising new design alternatives.

Lin, Jerry Chun-Wei, Liu, Qiankun, Fournier-Viger, Philippe, Hong, Tzung-Pei, Zhan, Justin, Voznak, Miroslav.  2016.  An Efficient Anonymous System for Transaction Data. Proceedings of the 3rd Multidisciplinary International Social Networks Conference on SocialInformatics 2016, Data Science 2016. :28:1–28:6.

k-anonymity is an efficient way to anonymize relational data to protect privacy against re-identification attacks. When applying k-anonymity to transaction data, each item is considered a quasi-identifier attribute, which raises a high-dimensionality problem as well as increased computational complexity and information loss for anonymity. In this paper, an efficient anonymity system is designed to not only anonymize transaction data with lower information loss but also reduce the computational complexity of anonymization. An extensive experiment is carried out to show the efficiency of the designed approach compared to state-of-the-art anonymity algorithms in terms of runtime and information loss. Experimental results indicate that the proposed anonymous system outperforms the compared algorithms in all respects.
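
For reference, here is a minimal Java sketch of the k-anonymity property itself (a checker only; the paper's contribution is the anonymization algorithm): every combination of quasi-identifier values must occur in at least k records, so no record can be singled out by those attributes alone.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Checks the standard k-anonymity condition: each quasi-identifier tuple
// must appear at least k times in the released table.
class KAnonymityCheck {
    static boolean isKAnonymous(List<List<String>> quasiIdentifierRows, int k) {
        Map<List<String>, Integer> counts = new HashMap<>();
        for (List<String> row : quasiIdentifierRows) {
            counts.merge(row, 1, Integer::sum);
        }
        return counts.values().stream().allMatch(c -> c >= k);
    }
}
```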

Casillo, Mario, Colace, Francesco, De Santo, Massimo, Lemma, Saverio, Lombardi, Marco, Pietrosanto, Antonio.  2016.  An Ontological Approach to Digital Storytelling. Proceedings of the 3rd Multidisciplinary International Social Networks Conference on SocialInformatics 2016, Data Science 2016. :27:1–27:8.

In order to identify a personalized story suitable for the needs of large numbers of visitors and tourists, our work has been aimed at defining appropriate models and modes of fruition that make the visit experience more appealing and immersive. This paper proposes characteristic functionalities of narratology and of storytelling techniques for the dynamic creation of experiential stories on a semantic basis. It therefore represents a report on scenarios, implementation models, and the architectural and functional specifications of storytelling for the dynamic creation of functional contents for the visit. Our purpose is to indicate an approach for the realization of a dynamic storytelling engine that can allow the dynamic supply of narrative contents, not necessarily predetermined, pertinent to the needs and dynamic behaviors of the users. In particular, we have chosen an adaptive, social, and mobile approach, using an ontological model to realize a dynamic digital storytelling system able to collect and elaborate social information and contents about the users, giving them a personalized story based on the place they are visiting. A case study and some experimental results are presented and discussed.

Musto, Cataldo, Lops, Pasquale, Basile, Pierpaolo, de Gemmis, Marco, Semeraro, Giovanni.  2016.  Semantics-aware Graph-based Recommender Systems Exploiting Linked Open Data. Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization. :229–237.

The ever-increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud, a huge amount of machine-readable knowledge encoded as RDF statements, with the aim of improving recommender systems' effectiveness. To exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of feature selection technique influences its performance in terms of accuracy and diversity. The experimental evaluation on two state-of-the-art datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features outperforms several state-of-the-art baselines, such as collaborative filtering and matrix factorization, thus confirming the effectiveness of the proposed approach.

[Anonymous].  2016.  Heterogeneous Computing: Hardware and Software Perspectives. Applicative 2016.

In the beginning was the single core ... Then we moved to multicore, before we were fully ready for it! Then GPUs appeared on the scene, giving us very high performance for some types of applications ... What is next? How can we get more performance? The very near future will be the era of heterogeneous computing. We already have a glimpse of it now; you write code for multicore and GPUs together, right? As computer systems become more and more heterogeneous (cores of different capabilities, GPUs, application-specific hardware, ...), writing efficient code for them becomes more and more challenging. What type of heterogeneity are we talking about? Why do we need this heterogeneity? How can we write software that makes the best use of it? ... These are the topics we will discuss in this talk.