Biblio

Filters: Keyword is Metrics

Saurez, Enrique, Hong, Kirak, Lillethun, Dave, Ramachandran, Umakishore, Ottenwälder, Beate.  2016.  Incremental Deployment and Migration of Geo-distributed Situation Awareness Applications in the Fog. Proceedings of the 10th ACM International Conference on Distributed and Event-based Systems. :258–269.

Geo-distributed Situation Awareness applications are large in scale and are characterized by 24/7 data generation from mobile and stationary sensors (such as cameras and GPS devices); latency-sensitivity for converting sensed data to actionable knowledge; and elastic and bursty needs for computational resources. Fog computing [7] envisions providing computational resources close to the edge of the network, consequently reducing the latency for the sense-process-actuate cycle that exists in these applications. We propose Foglets, a programming infrastructure for the geo-distributed computational continuum represented by fog nodes and the cloud. Foglets provides APIs for a spatio-temporal data abstraction for storing and retrieving application generated data on the local nodes, and primitives for communication among the resources in the computational continuum. Foglets manages the application components on the Fog nodes. Algorithms are presented for launching application components and handling the migration of these components between Fog nodes, based on the mobility pattern of the sensors and the dynamic computational needs of the application. Evaluation results are presented for a Fog network consisting of 16 nodes using a simulated vehicular network as the workload. We show that the discovery and deployment protocol can be executed in 0.93 secs, and joining an already deployed application can be as quick as 65 ms. Also, QoS-sensitive proactive migration can be accomplished in 6 ms.
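
To make the spatio-temporal data abstraction concrete, here is a minimal sketch of what such a put/get interface could look like; the interface and all names are hypothetical, not the actual Foglets API.

```python
import time
from collections import defaultdict

class SpatioTemporalStore:
    """Toy local store keyed by location and timestamp; illustrative only,
    not the Foglets API."""
    def __init__(self):
        self._data = defaultdict(list)  # location bucket -> [(timestamp, value)]

    def put(self, location, value, timestamp=None):
        self._data[location].append((timestamp or time.time(), value))

    def get(self, location, t_start, t_end):
        # Retrieve values generated at `location` within [t_start, t_end].
        return [v for (t, v) in self._data[location] if t_start <= t <= t_end]

store = SpatioTemporalStore()
store.put("cell-42", {"speed_kmh": 63})
recent = store.get("cell-42", time.time() - 60, time.time())
```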

Cozzolino, Vittorio.  2016.  Exploiting Scattered Data in Smart Systems. Proceedings of the MobiSys 2016 PhD Forum. :19–20.

The Internet of Things (IoT) is slowly, but steadily, changing the way we interact with our surroundings. Smart cities, smart environments and smart buildings are just a few macroscopic examples of how smart ecosystems are increasingly involved in our daily life, each one offering a different set of information. The decentralization and scattering of this information can be exploited to optimize the on-demand information retrieval process of mobile nodes. We propose an approach focused on defining competence domains in smart systems, where the responsibility for providing a specific piece of information to a mobile node is defined by spatial constraints. By exploiting the interplay and duality of Cloud Computing and Fog Computing, we introduce an approach that exploits the spatial allocation of data in smart systems to optimize information retrieval by mobile nodes.

Giang, Nam Ky, Leung, Victor C.M., Lea, Rodger.  2016.  On Developing Smart Transportation Applications in Fog Computing Paradigm. Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications. :91–98.

Smart Transportation applications are by nature examples of Vehicular Ad-hoc Network (VANET) applications, in which mobile vehicles, roadside units and transportation infrastructure interplay with one another to provide value-added services. While there is abundant research focused on the communication aspect of such Mobile Ad-hoc Networks, few research efforts target the development of VANET applications. Among the popular VANET applications, a dominant direction is to leverage Cloud infrastructure to execute and deliver applications and services. Recent studies have shown that Cloud Computing is not sufficient for many VANET applications due to the mobility of vehicles and the latency-sensitive requirements they impose. To this end, Fog Computing has been proposed to leverage computation infrastructure closer to the network edge to complement Cloud Computing in providing latency-sensitive applications and services. However, application development in the Fog environment is much more challenging than in the Cloud due to the distributed nature of Fog systems. In this paper, we investigate how Smart Transportation applications are developed following the Fog Computing approach, their challenges, and possible mitigations drawn from the state of the art.

Banerjee, Suman.  2016.  Edge Computing in the Extreme and Its Applications. Proceedings of the Eighth Wireless of the Students, by the Students, and for the Students Workshop. :2–2.

The notion of edge computing introduces new computing functions away from centralized locations and closer to the network edge, thus facilitating new applications and services. This enhanced computing paradigm provides new opportunities to application developers that are not available otherwise. In this talk, I will discuss why placing computation functions at the extreme edge of our network infrastructure, i.e., in wireless Access Points and home set-top boxes, is particularly beneficial for a large class of emerging applications. I will discuss a specific approach, called ParaDrop, to implement such edge computing functionalities, and use examples from different domains – smarter homes, sustainability, and intelligent transportation – to illustrate the new opportunities around this concept.

Lin, Hsin-Peng, Shih, Yuan-Yao, Pang, Ai-Chun, Lou, Yuan-Yao.  2016.  A Virtual Local-hub Solution with Function Module Sharing for Wearable Devices. Proceedings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems. :278–286.

Wearable devices, which are small electronic devices worn on the human body, are equipped with low levels of processing and storage capacity and offer some types of integrated functionality. Wearable devices have recently become increasingly popular, and various kinds are being launched in the market; however, wearable devices require a powerful local-hub, most commonly a smartphone, to replenish their processing and storage capacities for advanced functionalities. Sometimes it may be inconvenient to carry the local-hub (smartphone); thus, many wearable devices are equipped with a Wi-Fi interface, enabling them to exchange data with the local-hub through the Internet when the local-hub is not nearby. However, this results in long response times and restricted functionality. In this paper, we present a virtual local-hub solution, which utilizes nearby network equipment (e.g., Wi-Fi APs) as the local-hub. Since migrating all applications serving the wearable devices would take too many networking and storage resources, the proposed solution deploys function modules to multiple network nodes and enables remote function module sharing among different users and applications. To reduce the impact of the solution on network bandwidth, we propose a heuristic algorithm for function module allocation with the objective of minimizing total bandwidth consumption. We conduct a series of experiments, and the results show that the proposed solution can reduce bandwidth consumption by up to half while still serving all requests, even given a large number of service requests.
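
The paper's allocation heuristic is not reproduced in the abstract; as a hedged sketch of the general idea, a greedy placement that puts each shared function module on the node minimizing the bandwidth it adds might look like the following (all names, rates and costs are invented for illustration).

```python
# Illustrative greedy allocation of shared function modules to network nodes,
# minimizing the bandwidth each module adds. All inputs are made up; the
# paper's heuristic and cost model may differ.
nodes = ["ap1", "ap2"]                            # candidate Wi-Fi APs
cost = {"u1": {"ap1": 2, "ap2": 5},               # per-request bandwidth cost
        "u2": {"ap1": 4, "ap2": 1}}               # between user u and node n
requests = {"fall_detect": {"u1": 10, "u2": 3},   # module -> {user: request rate}
            "gesture": {"u2": 8}}

placement, total_bw = {}, 0
for module, users in requests.items():
    def bw(n, users=users):                       # bandwidth if module sits on n
        return sum(rate * cost[u][n] for u, rate in users.items())
    best = min(nodes, key=bw)
    placement[module] = best
    total_bw += bw(best)

print(placement, total_bw)   # one shared instance per module serves all users
```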

Kattepur, Ajay, Dohare, Harshit, Mushunuri, Visali, Rath, Hemant Kumar, Simha, Anantha.  2016.  Resource Constrained Offloading in Fog Computing. Proceedings of the 1st Workshop on Middleware for Edge Clouds & Cloudlets. :1:1–1:6.

When focusing on the Internet of Things (IoT), communicating and coordinating sensor–actuator data via the cloud involves inefficient overheads and reduces autonomous behavior. The Fog Computing paradigm essentially moves the compute nodes closer to sensing entities by exploiting peers and intermediary network devices. This reduces centralized communication with the cloud and entails increased coordination between sensing entities and (possibly available) smart network gateway devices. In this paper, we analyze the utility of offloading computation among peers when working in fog-based deployments. It is important to study the trade-offs involved in such computation offloading, as we deal with resource-limited (energy, computation capacity) devices. Devices computing in a distributed environment may choose to locally compute part of their data and communicate the remainder to their peers. An optimization formulation is presented and applied to various deployment scenarios, taking the computation and communication overheads into account. Our technique is demonstrated on a network of robotic sensor–actuators, developed on the ROS (Robot Operating System) platform, that coordinate over the fog to complete a task. By applying this optimal offloading, we demonstrate 77.8% latency and 54% battery usage improvements on large computation tasks.
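
As a rough illustration of the kind of trade-off such an optimization formulation captures (a toy model, not the paper's formulation), one can split a task between local execution and a peer and pick the split minimizing completion time.

```python
# Toy offloading split: run alpha*W cycles locally, ship the rest to a peer,
# and minimize the makespan. All parameters are illustrative.
W = 2e9          # task size, CPU cycles
f_local = 0.5e9  # local CPU speed, cycles/s
f_peer = 2e9     # peer CPU speed, cycles/s
data = 8e6       # bits that must be sent if any work is offloaded
bw = 10e6        # link bandwidth, bits/s

def makespan(alpha):
    local = alpha * W / f_local
    remote = 0 if alpha == 1 else data / bw + (1 - alpha) * W / f_peer
    return max(local, remote)   # local and remote parts run in parallel

best = min((i / 1000 for i in range(1001)), key=makespan)
print(f"offload {(1 - best) * 100:.1f}% of the work, "
      f"finish in {makespan(best):.2f}s")
```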

Corsaro, Angelo.  2016.  Cloudy, Foggy and Misty Internet of Things. Proceedings of the 7th ACM/SPEC on International Conference on Performance Engineering. :261–261.

Early Internet of Things (IoT) applications have been built around cloud-centric architectures, where information generated at the edge by the "things" is conveyed to and processed in a cloud infrastructure. These architectures centralise processing and decision-making in the data centre, assuming sufficient connectivity, bandwidth and latency. As applications of the Internet of Things extend to industrial and more demanding consumer applications, the assumptions underlying cloud-centric architectures start to be violated, as for several of these applications connectivity, bandwidth and latency to the data centre are a challenge. Fog and Mist computing have emerged as forms of "Cloud Computing" closer to the "Edge" and to the "Things" that should alleviate the connectivity, bandwidth and latency challenges faced by industrial and extremely demanding consumer Internet of Things applications. This keynote will (1) introduce Cloud, Fog and Mist Computing architectures for the Internet of Things, (2) motivate their need and explain their applicability with real-world use cases, and (3) assess their technological maturity and highlight the areas that require further academic and industrial research.

Dey, Swarnava, Mukherjee, Arijit.  2016.  Robotic SLAM: A Review from Fog Computing and Mobile Edge Computing Perspective. Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services. :153–158.

Offloading the computationally expensive Simultaneous Localization and Mapping (SLAM) task of mobile robots has attracted significant attention during the last few years. The lack of powerful on-board compute capability in these energy-constrained mobile robots and the rapid advancement of compute-cloud access technologies laid the foundation for the development of several Cloud Robotics platforms that enable parallel execution of computationally expensive robotic algorithms, especially those involving multiple robots. In this work, the Cloud Robotics concept is extended to include the current emphasis on computing at network edge nodes along with the Cloud. The requirements and advantages of using edge nodes for computation offloading over a remote cloud or local robot clusters are discussed with reference to the ETSI 'Mobile-Edge Computing' initiative and the OpenFog Consortium's 'OpenFog Architecture'. A Particle Filter algorithm for SLAM is modified and implemented for offloading in a multi-tier edge+cloud setup. Additionally, a model is proposed for the offloading decision in such a setup, with experiments and results demonstrating the efficacy of the proposed dynamic offloading scheme over static offloading strategies.

Hawkins, Byron, Demsky, Brian, Taylor, Michael B..  2016.  BlackBox: Lightweight Security Monitoring for COTS Binaries. Proceedings of the 2016 International Symposium on Code Generation and Optimization. :261–272.

After a software system is compromised, it can be difficult to understand what vulnerabilities attackers exploited. Any information residing on that machine cannot be trusted, as attackers may have tampered with it to cover their tracks. Moreover, even after an exploit is known, it can be difficult to determine whether it has been used to compromise a given machine. Aviation has long used black boxes to better understand the causes of accidents, enabling improvements that reduce the likelihood of future accidents. Many attacks introduce abnormal control flows to compromise systems. In this paper, we present BlackBox, a monitoring system for COTS software. Our techniques enable BlackBox to efficiently monitor unexpected and potentially harmful control flow in COTS binaries. BlackBox constructs dynamic profiles of an application's typical control flows to filter the vast majority of expected control flow behavior, leaving us with a manageable amount of data that can be logged across the network to remote devices. Modern applications make extensive use of dynamically generated code, some of which varies greatly between executions. We introduce support for code generators that can detect security-sensitive behaviors while allowing BlackBox to avoid logging the majority of ordinary behaviors. We have implemented BlackBox in DynamoRIO. We evaluate the runtime overhead of BlackBox, and show that it can effectively monitor recent versions of Microsoft Office and Google Chrome. We show that in ROP, COOP, and state-of-the-art JIT injection attacks, BlackBox logs the pivotal actions by which the attacker takes control, and can also blacklist those actions to prevent repeated exploits.
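
The profile-then-filter idea can be shown with a toy sketch: learn the set of control-flow edges seen in trusted runs, then log only edges outside that profile. BlackBox itself operates on binaries inside DynamoRIO; this Python analogue is illustrative only.

```python
# Toy control-flow profiling: edges observed in trusted runs form the profile;
# at monitoring time, only out-of-profile edges are logged.
trusted_runs = [[("main", "parse"), ("parse", "render")],
                [("main", "parse"), ("parse", "save")]]
profile = {edge for run in trusted_runs for edge in run}

def monitor(run):
    suspicious = [e for e in run if e not in profile]
    for edge in suspicious:
        print("log:", edge)          # would be shipped to a remote device
    return suspicious

monitor([("main", "parse"), ("parse", "system")])  # flags the unseen edge
```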

Dou, Yanzhi, Zeng, Kexiong(Curtis), Li, He, Yang, Yaling, Gao, Bo, Guan, Chaowen, Ren, Kui, Li, Shaoqian.  2016.  P2-SAS: Preserving Users' Privacy in Centralized Dynamic Spectrum Access Systems. Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing. :321–330.

Centralized spectrum management is one of the key dynamic spectrum access (DSA) mechanisms proposed to govern spectrum sharing between government incumbent users (IUs) and commercial secondary users (SUs). In current centralized DSA designs, the operation data of both government IUs and commercial SUs needs to be shared with a central server. However, the operation data of government IUs is often classified information, and the SU operation data may also be a commercial secret. The current system design fails to satisfy the privacy requirements of both IUs and SUs, since the central server is not necessarily trustworthy enough to hold such sensitive operation data. To address the privacy issue, this paper presents a privacy-preserving centralized DSA system (P2-SAS), which realizes the complex spectrum allocation process of DSA through efficient secure multi-party computation. In P2-SAS, none of the IU or SU operation data is exposed to any snooping party, including the central server itself. We formally prove the correctness and privacy-preserving property of P2-SAS and evaluate its scalability and practicality using experiments based on real-world data. Experiment results show that P2-SAS can respond to an SU's spectrum request in 6.96 seconds with a communication overhead of less than 4 MB.
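
P2-SAS's actual protocol is built on secure multi-party computation; as a minimal illustration of the underlying principle that no single party sees the plaintext operation data, here is textbook additive secret sharing (not the P2-SAS protocol itself).

```python
import secrets

P = 2**61 - 1  # public prime modulus

def share(x, n=3):
    """Split secret x into n additive shares mod P; any n-1 shares reveal nothing."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(parts):
    return sum(parts) % P

# Two users share their secret operation values; servers can add share-wise,
# so only the *sum* is ever reconstructed, never the individual inputs.
a, b = 41, 17
sa, sb = share(a), share(b)
sum_shares = [(x + y) % P for x, y in zip(sa, sb)]
assert reconstruct(sum_shares) == a + b
```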

Meinicke, Jens, Wong, Chu-Pan, Kästner, Christian, Thüm, Thomas, Saake, Gunter.  2016.  On Essential Configuration Complexity: Measuring Interactions in Highly-configurable Systems. Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering. :483–494.

Quality assurance for highly-configurable systems is challenging due to the exponentially growing configuration space. Interactions among multiple options can lead to surprising behaviors, bugs, and security vulnerabilities. Systematically analyzing all configurations might nonetheless be possible if most options do not interact, or if interactions follow specific patterns that can be exploited by analysis tools. To better understand interactions in practice, we analyze program traces to characterize and identify where interactions occur on control flow and data. To this end, we developed a dynamic analysis for Java based on variability-aware execution and monitor executions of multiple small to medium-sized programs. We find that the essential configuration complexity of these programs is indeed much lower than the combinatorial explosion of the configuration space indicates. However, we also discover that the interaction characteristics that allow scalable and complete analyses are more nuanced than what is exploited by existing state-of-the-art quality assurance strategies.
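
A brute-force version of interaction detection can be sketched on a toy "program" with three Boolean options: two options interact when the effect of toggling one depends on the other. This is only a conceptual illustration, far simpler than the paper's variability-aware execution.

```python
from itertools import product, combinations

def run(cfg):  # toy program: B and C interact, A is independent
    return 2 * cfg["A"] + (3 if (cfg["B"] and cfg["C"]) else 0)

opts = ["A", "B", "C"]
results = {bits: run(dict(zip(opts, bits)))
           for bits in product([False, True], repeat=len(opts))}

def interacts(i, j):
    # i and j interact if toggling i changes the effect of toggling j.
    for bits in results:
        if not bits[i] and not bits[j]:
            b_i = tuple(b or k == i for k, b in enumerate(bits))
            b_j = tuple(b or k == j for k, b in enumerate(bits))
            b_ij = tuple(b or k in (i, j) for k, b in enumerate(bits))
            if results[b_ij] - results[b_j] != results[b_i] - results[bits]:
                return True
    return False

pairs = [(opts[i], opts[j])
         for i, j in combinations(range(len(opts)), 2) if interacts(i, j)]
print(pairs)  # [('B', 'C')]
```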

Hamlet, Jason R., Lamb, Christopher C..  2016.  Dependency Graph Analysis and Moving Target Defense Selection. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :105–116.

Moving target defense (MTD) is an emerging paradigm in which system defenses dynamically mutate in order to decrease the overall system attack surface. Though the concept is promising, implementations have not been widely adopted. The field has been actively researched for over ten years and has produced only a small number of extensively adopted defenses, most notably address space layout randomization (ASLR). This is despite the fact that a variety of moving target implementations and proofs-of-concept currently exist. We suspect that this results from the moving target controls breaking critical system dependencies from the perspectives of users and administrators, as well as making things more difficult for attackers. As a result, the impact of the controls on overall system security is not sufficient to overcome the inconvenience imposed on legitimate system users. In this paper, we analyze a successful MTD approach. We study the control's dependency graphs, showing how we use graph-theoretic and network properties to predict the effectiveness of the selected control.
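
As a hedged illustration of this kind of dependency-graph screening (the paper's concrete metrics are its own), one can load a control's dependency graph and inspect graph-theoretic properties such as articulation points, whose disruption would disconnect users from functionality. The graph below is invented.

```python
import networkx as nx

# Toy dependency graph of an MTD control: users and services depend on
# shared components. Nodes and edges are illustrative only.
G = nx.Graph([
    ("browser", "ASLR"), ("ASLR", "loader"), ("loader", "libc"),
    ("admin_tools", "loader"), ("IDS", "libc"),
])

# Components whose disruption (e.g., by aggressive mutation) would
# disconnect parts of the system: one proxy for user-facing breakage.
print("articulation points:", list(nx.articulation_points(G)))
print("degree centrality:", nx.degree_centrality(G))
```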

Wang, Huangxin, Li, Fei, Chen, Songqing.  2016.  Towards Cost-Effective Moving Target Defense Against DDoS and Covert Channel Attacks. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :15–25.

Traditionally, network and system configurations are static. Attackers therefore have plenty of time to exploit a system's vulnerabilities, and they are able to choose when to launch attacks wisely so as to maximize the damage. An unpredictable system configuration can significantly raise the bar for attackers to conduct successful attacks. In recent years, moving target defense (MTD) has been advocated for this purpose. An MTD mechanism aims to introduce dynamics into the system by changing its configuration continuously over time, which we call adaptations. Though promising, dynamic system reconfiguration introduces overhead for the applications currently running on the system. It is critical to determine the right time to conduct adaptations and to balance the overhead incurred against the security level guaranteed. This is known as the MTD timing problem. Little prior work has investigated the right time to make adaptations. In this paper, we take the first step toward both theoretically and experimentally studying the timing problem in moving target defenses. For a broad family of attacks, including DDoS attacks and cloud covert channel attacks, we model this problem as a renewal reward process and propose an optimal algorithm for deciding the right time to make adaptations, with the objective of minimizing the long-term cost rate. In our experiments, both DDoS attacks and cloud covert channel attacks are studied. Simulations based on real network traffic traces are conducted, and we demonstrate that our proposed algorithm outperforms known adaptation schemes.
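
The renewal-reward framing can be made concrete with a toy cost model (illustrative numbers, not the paper's): if each adaptation costs c_a and an attacker succeeds after an exponentially distributed time, incurring damage at rate d until the next adaptation, the long-run cost rate for a fixed period T is (c_a + E[damage per cycle]) / T, which can be minimized numerically.

```python
import math

c_a = 5.0   # cost of one adaptation (illustrative)
lam = 0.2   # attacker success rate per unit time
d = 3.0     # damage rate once the attacker succeeds

def cost_rate(T):
    # Renewal-reward: expected cost per cycle divided by cycle length.
    # E[(T - X)^+] for X ~ Exp(lam) is T - (1 - exp(-lam*T)) / lam.
    expected_damage = d * (T - (1 - math.exp(-lam * T)) / lam)
    return (c_a + expected_damage) / T

T_best = min((i / 100 for i in range(1, 5000)), key=cost_rate)
print(f"adapt every {T_best:.2f} time units, cost rate {cost_rate(T_best):.3f}")
```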

Stanciu, Valeriu-Daniel, Spolaor, Riccardo, Conti, Mauro, Giuffrida, Cristiano.  2016.  On the Effectiveness of Sensor-enhanced Keystroke Dynamics Against Statistical Attacks. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :105–112.

In recent years, simple password-based authentication systems have increasingly proven ineffective for many classes of real-world devices. As a result, many researchers have concentrated their efforts on the design of new biometric authentication systems. This trend has been further accelerated by the advent of mobile devices, which offer numerous sensors and capabilities to implement a variety of mobile biometric authentication systems. Along with the advances in biometric authentication, however, attacks have also become much more sophisticated, and many biometric techniques have ultimately proven inadequate in the face of advanced attackers in practice. In this paper, we investigate the effectiveness of sensor-enhanced keystroke dynamics, a recent mobile biometric authentication mechanism that combines a particularly rich set of features. In our analysis, we consider different types of attacks, with a focus on advanced attacks that draw from general population statistics. Such attacks have already been proven effective in drastically reducing the accuracy of many state-of-the-art biometric authentication systems. We implemented a statistical attack against sensor-enhanced keystroke dynamics and evaluated its impact on detection accuracy. On one hand, our results show that sensor-enhanced keystroke dynamics are generally robust against statistical attacks, with a marginal equal-error rate impact (<0.14%). On the other hand, our results show that, surprisingly, keystroke timing features non-trivially weaken the security guarantees provided by sensor features alone. Our findings suggest that sensor dynamics may be a stronger biometric authentication mechanism against recently proposed practical attacks.
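
A minimal sketch of such a statistical attack, assuming per-feature Gaussian population statistics (the feature names and numbers below are invented), would generate forged authentication attempts as follows.

```python
import random

# Toy statistical forgery: model each feature (a key-hold time, a flight time,
# an accelerometer peak) as Gaussian over the general population, then submit
# samples drawn from those population statistics. All numbers are made up.
population_stats = {            # feature -> (mean, std) over many users
    "hold_t_a": (0.095, 0.020),
    "flight_t_ab": (0.140, 0.045),
    "accel_peak": (1.80, 0.60),
}

def forged_attempt():
    return {f: random.gauss(mu, sd) for f, (mu, sd) in population_stats.items()}

attempts = [forged_attempt() for _ in range(100)]
# Feed `attempts` to the authenticator under test and measure its
# false-accept rate, as the paper does for sensor-enhanced features.
```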

Korczyński, Maciej, Król, Michał, van Eeten, Michel.  2016.  Zone Poisoning: The How and Where of Non-Secure DNS Dynamic Updates. Proceedings of the 2016 Internet Measurement Conference. :271–278.

This paper illuminates the problem of non-secure DNS dynamic updates, which allow a miscreant to manipulate DNS entries in the zone files of authoritative name servers. We refer to this type of attack as zone poisoning. This paper presents the first measurement study of the vulnerability. We analyze a random sample of 2.9 million domains and the Alexa top 1 million domains and find that at least 1,877 (0.065%) and 587 (0.062%) of domains are vulnerable, respectively. Among the vulnerable domains are governments, health care providers and banks, demonstrating that the threat impacts important services. Via this study and subsequent notifications to affected parties, we aim to improve the security of the DNS ecosystem.
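
In the spirit of the paper's measurement, a non-secure dynamic update can be probed with dnspython as sketched below; the zone and server are placeholders, and such a probe should only ever be run against infrastructure you own.

```python
import dns.update, dns.query, dns.rcode  # requires the dnspython package

# Probe whether an authoritative server accepts UNAUTHENTICATED dynamic
# updates (RFC 2136) for a zone you control. Zone/server are placeholders.
zone, server = "example.com.", "192.0.2.53"

upd = dns.update.Update(zone)
upd.add("zone-poisoning-test", 300, "A", "192.0.2.1")  # harmless record
resp = dns.query.tcp(upd, server, timeout=5)

if resp.rcode() == dns.rcode.NOERROR:
    print("zone accepts non-secure dynamic updates (vulnerable)")
    cleanup = dns.update.Update(zone)
    cleanup.delete("zone-poisoning-test")
    dns.query.tcp(cleanup, server, timeout=5)
else:
    print("update refused:", dns.rcode.to_text(resp.rcode()))
```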

Das, Subhasis, Aamodt, Tor M., Dally, William J..  2015.  Reuse Distance-Based Probabilistic Cache Replacement. ACM Trans. Archit. Code Optim.. 12:33:1–33:22.

This article proposes Probabilistic Replacement Policy (PRP), a novel replacement policy that evicts the line with minimum estimated hit probability under optimal replacement instead of the line with maximum expected reuse distance. The latter is optimal under the independent reference model of programs, which does not hold for last-level caches (LLC). PRP requires 7% and 2% metadata overheads in the cache and DRAM respectively. Using a sampling scheme makes DRAM overhead negligible, with minimal performance impact. Including detailed overhead modeling and equal cache areas, PRP outperforms SHiP, a state-of-the-art LLC replacement algorithm, by 4% for memory-intensive SPEC-CPU2006 benchmarks.
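
A greatly simplified sketch of probability-based eviction (not PRP's actual estimator or metadata layout) is shown below: each line's hit probability is estimated from its observed reuse distances and its current age, and the minimum-probability line is evicted.

```python
from collections import defaultdict

class ProbabilisticCache:
    """Toy hit-probability-based eviction: evict the line judged least likely
    to be reused soon, based on its observed reuse distances. Illustrative
    only; PRP's estimator and overheads are described in the paper."""
    def __init__(self, capacity, horizon=64):
        self.capacity, self.horizon, self.clock = capacity, horizon, 0
        self.last_access = {}             # line -> time of last access
        self.samples = defaultdict(list)  # line -> observed reuse distances

    def _hit_prob(self, line):
        age = self.clock - self.last_access[line]
        alive = [d for d in self.samples[line] if d > age]  # reuses still possible
        if not alive:
            return 0.5 if not self.samples[line] else 0.0   # no history vs. likely dead
        return sum(d <= age + self.horizon for d in alive) / len(alive)

    def access(self, line):
        self.clock += 1
        if line in self.last_access:                  # hit: record reuse distance
            self.samples[line].append(self.clock - self.last_access[line])
        elif len(self.last_access) >= self.capacity:  # miss: evict min-probability line
            victim = min(self.last_access, key=self._hit_prob)
            del self.last_access[victim]
        self.last_access[line] = self.clock
```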

Schweitzer, Nadav, Stulman, Ariel, Shabtai, Asaf.  2016.  Neighbor Contamination to Achieve Complete Bottleneck Control. Proceedings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems. :247–253.

Black-holes, gray-holes, and wormholes are devastating to the correct operation of any network. These attacks (among others) are based on the premise that packets will travel through compromised nodes, and methods exist to coax routing into these traps. Detection of these attacks is mainly centered around finding the subversion in action. In networks, bottleneck nodes -- those that sit on many potential routes between sender and receiver -- are an optimal location for compromise. Naturally occurring path bottlenecks, however, do not entail network subversion, and as such are more difficult to detect. The dynamic nature of mobile ad-hoc networks (MANETs) causes ubiquitous routing algorithms to be even more susceptible to this class of attacks. Finding perceived bottlenecks in an OLSR-based MANET is able to capture between 50% and 75% of the data. In this paper we propose a method of subtly expanding perceived bottlenecks into complete bottlenecks, raising the capture rate up to 99%, albeit at high cost. We further tune the method to reduce the cost and measure the corresponding capture rate.

Boehm, Hans-J., Chakrabarti, Dhruva R..  2016.  Persistence Programming Models for Non-volatile Memory. Proceedings of the 2016 ACM SIGPLAN International Symposium on Memory Management. :55–67.

It is expected that DRAM memory will be augmented, and perhaps eventually replaced, by one of several up-and-coming memory technologies. These are all non-volatile, in that they retain their contents without power. This allows primary memory to be used as a fast disk replacement. It also enables more aggressive programming models that directly leverage persistence of primary memory. However, it is challenging to maintain consistency of memory in such an environment. There is no consensus on the right programming model for doing so, and subtle differences can have large, and sometimes surprising, effects on the implementation and its performance. The existing literature describes multiple programming systems that provide point solutions to selective persistence for user data structures. Real progress in this area requires a choice of programming model, which we cannot reasonably make without a real understanding of the design space. Point solutions are insufficient. We systematically explore what we consider to be the most promising part of the space, precisely defining semantics and identifying implementation costs. This allows us to be much more explicit and precise about semantic and implementation trade-offs that were usually glossed over in prior work. It also exposes some promising new design alternatives.
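
The ordering-and-atomicity problem the paper explores can be illustrated by its disk-era analogue (a loose analogy only, assuming a POSIX filesystem rather than the paper's NVM model): persist new data first, then commit it with a single atomic step.

```python
import os, json

# Crash-safe update via write-then-rename: data must reach stable storage
# *before* the commit point, mirroring the ordering constraints that
# persistent-memory programming models must express.
def atomic_update(path, new_state):
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(new_state, f)
        f.flush()
        os.fsync(f.fileno())   # force the data down first
    os.replace(tmp, path)      # atomic rename is the commit point
                               # (directory fsync omitted for brevity)
```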

Lin, Jerry Chun-Wei, Liu, Qiankun, Fournier-Viger, Philippe, Hong, Tzung-Pei, Zhan, Justin, Voznak, Miroslav.  2016.  An Efficient Anonymous System for Transaction Data. Proceedings of the 3rd Multidisciplinary International Social Networks Conference on SocialInformatics 2016, Data Science 2016. :28:1–28:6.

k-anonymity is an efficient way to anonymize relational data to protect privacy against re-identification attacks. When k-anonymity is applied to transaction data, each item is considered a quasi-identifier attribute, which raises a high-dimensionality problem as well as increased computational complexity and information loss for anonymization. In this paper, an efficient anonymity system is designed to not only anonymize transaction data with lower information loss but also reduce the computational complexity of anonymization. An extensive experiment is carried out to show the efficiency of the designed approach compared to state-of-the-art anonymization algorithms in terms of runtime and information loss. Experimental results indicate that the proposed anonymous system outperforms the compared algorithms in all respects.
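
For reference, the k-anonymity property itself is easy to state in code; the sketch below checks it for transaction records where every item is treated as a quasi-identifier (a check of the property, not the paper's anonymization algorithm).

```python
from collections import Counter

def is_k_anonymous(transactions, k):
    """Each transaction (a set of items, all treated as quasi-identifiers)
    must be shared by at least k records."""
    counts = Counter(frozenset(t) for t in transactions)
    return all(c >= k for c in counts.values())

data = [{"bread", "milk"}, {"bread", "milk"}, {"beer"}]
print(is_k_anonymous(data, 2))  # False: {"beer"} appears only once
```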

Casillo, Mario, Colace, Francesco, De Santo, Massimo, Lemma, Saverio, Lombardi, Marco, Pietrosanto, Antonio.  2016.  An Ontological Approach to Digital Storytelling. Proceedings of the 3rd Multidisciplinary International Social Networks Conference on SocialInformatics 2016, Data Science 2016. :27:1–27:8.

In order to identify a personalized story suitable for the needs of large masses of visitors and tourists, our work has been aimed at the definition of appropriate models and solutions of fruition that make the visit experience more appealing and immersive. This paper proposes the characteristic functionalities of narratology and of the techniques of storytelling for the dynamic creation of experiential stories on a semantic basis. Therefore, it represents a report about scenarios, implementation models, and architectural and functional specifications of storytelling for the dynamic creation of functional contents for the visit. Our purpose is to indicate an approach for the realization of a dynamic storytelling engine that can allow the dynamic supply of narrative contents, not necessarily predetermined and pertinent to the needs and the dynamic behaviors of the users. In particular, we have chosen to employ an adaptive, social and mobile approach, using an ontological model in order to realize a dynamic digital storytelling system, able to collect and elaborate social information and contents about the users, giving them a personalized story on the basis of the place they are visiting. A case study and some experimental results are presented and discussed.

Musto, Cataldo, Lops, Pasquale, Basile, Pierpaolo, de Gemmis, Marco, Semeraro, Giovanni.  2016.  Semantics-aware Graph-based Recommender Systems Exploiting Linked Open Data. Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization. :229–237.

The ever-increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud - a huge amount of machine-readable knowledge encoded as RDF statements - with the aim of improving recommender systems effectiveness. In order to exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of different feature selection techniques influences its performance in terms of accuracy and diversity. The experimental evaluation on two state-of-the-art datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features is able to overcome several state-of-the-art baselines, such as collaborative filtering and matrix factorization, thus confirming the effectiveness of the proposed approach.
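
A toy version of graph-based recommendation over LOD-style features can be sketched with networkx's personalized PageRank; the graph, node names, and feature labels below are invented for illustration.

```python
import networkx as nx

# Users, items, and LOD-style feature nodes (e.g., a shared director) in one
# graph; feature nodes connect items that have no users in common.
G = nx.Graph([
    ("user:ann", "item:Inception"), ("user:ann", "item:Interstellar"),
    ("item:Inception", "feat:Nolan"), ("item:Interstellar", "feat:Nolan"),
    ("item:Memento", "feat:Nolan"), ("item:Titanic", "feat:Cameron"),
])

# Personalized PageRank from the user; rank unseen items by their score.
pr = nx.pagerank(G, personalization={"user:ann": 1.0})
seen = set(G["user:ann"])
recs = sorted((n for n in G if n.startswith("item:") and n not in seen),
              key=pr.get, reverse=True)
print(recs[:1])  # item:Memento, reached only through the feat:Nolan node
```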

[Anonymous].  2016.  Heterogeneous Computing: Hardware and Software Perspectives. Applicative 2016.

In the beginning was the single core ... Then we moved to multicore, before we were fully ready for it! Then GPUs appeared on the scene, giving us very high performance for some types of applications ... What is next? How can we get more performance? The very near future will be the era of heterogeneous computing. We already have a glimpse of it now; you write code for multicore and GPUs together, right? As computer systems become more and more heterogeneous (cores of different capabilities, GPUs, application-specific hardware, ...), writing efficient code for them becomes more and more challenging. What type of heterogeneity are we talking about? Why do we need this heterogeneity? How can we write software that makes the best use of it? ... These are the topics we will discuss in this talk.

Haitzer, Thomas, Navarro, Elena, Zdun, Uwe.  2015.  Architecting for Decision Making About Code Evolution. Proceedings of the 2015 European Conference on Software Architecture Workshops. :52:1–52:7.

During software evolution, it is important to evolve not only the source code, but also its architecture, to prevent architecture drift and architecture erosion. This is a complex activity, especially for large software projects with multiple development teams that might be located in different countries or on different continents. To ease this kind of evolution, we have developed a domain-specific language for making decisions about the evolution. It supports the definition of architectural changes based on multiple implementation tasks that can have temporal dependencies among each other. Then, by means of a model-to-model transformation, we automatically create a constraint model that we use to generate, by means of the Alloy model analyzer, the possible alternative decisions for executing the implementation tasks. The tight integration with architecture abstractions enables architects to automatically check the changes related to an implementation task in relation to the architecture description. This helps keep architecture and code in sync, avoiding drift and erosion.

Ananth, Prabhanjan, Gupta, Divya, Ishai, Yuval, Sahai, Amit.  2014.  Optimizing Obfuscation: Avoiding Barrington's Theorem. Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. :646–658.

In this work, we seek to optimize the efficiency of secure general-purpose obfuscation schemes. We focus on the problem of optimizing the obfuscation of Boolean formulas and branching programs – this corresponds to optimizing the "core obfuscator" from the work of Garg, Gentry, Halevi, Raykova, Sahai, and Waters (FOCS 2013), and all subsequent works constructing general-purpose obfuscators. This core obfuscator builds upon approximate multilinear maps, where efficiency in proposed instantiations is closely tied to the maximum number of "levels" of multilinearity required. The most efficient previous construction of a core obfuscator, due to Barak, Garg, Kalai, Paneth, and Sahai (Eurocrypt 2014), required the maximum number of levels of multilinearity to be O(ℓs^3.64), where s is the size of the Boolean formula to be obfuscated, and ℓ is the number of input bits to the formula. In contrast, our construction only requires the maximum number of levels of multilinearity to be roughly ℓs, or only s when considering a keyed family of formulas, namely a class of functions of the form f_z(x) = φ(z, x), where φ is a formula of size s. This results in significant improvements in both the total size of the obfuscation and the running time of evaluating an obfuscated formula. Our efficiency improvement is obtained by generalizing the class of branching programs that can be directly obfuscated. This generalization allows us to achieve a simple simulation of formulas by branching programs while avoiding the use of Barrington's theorem, on which all previous constructions relied. Furthermore, the ability to directly obfuscate general branching programs (without bootstrapping) allows us to efficiently apply our construction to natural function classes that are not known to have polynomial-size formulas.

Bhandari, Akshita, Gupta, Ashutosh, Das, Debasis.  2017.  Betweenness Centrality Updation and Community Detection in Streaming Graphs Using Incremental Algorithm. Proceedings of the 6th International Conference on Software and Computer Applications. :159–164.

Centrality measures have always been helpful for finding the most central or most powerful node within a network. There are numerous strategies to compute the centrality of a node; however, in social networks betweenness centrality is the most widely used approach to bifurcate communities within the network, to find the susceptibility within complex networks, and to generate scale-free networks whose degree distribution follows a power law. In this paper, we compute betweenness centrality by identifying communities lying within the network. Our algorithm efficiently updates the centrality of the nodes whenever any edge or vertex is added to or deleted from the dynamic network, by modifying only a subset of vertices. For vertex addition, an incremental algorithm is used that also handles streaming graphs. Brandes' approach is the most widely used approach for computing betweenness centrality, but it is still expensive for growing networks, since it takes O(mn + n² log n) time and O(n + m) space; our approach efficiently updates the centrality of the nodes in O(|S|n + |S|n log n) time, where |S| is the subset of vertices, m is the number of edges, n is the number of vertices, and |S| ≤ n holds true.
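
For comparison, the Brandes baseline is what off-the-shelf tools implement; the sketch below uses networkx to fully recompute betweenness after a streamed edge insertion, which is exactly the work the paper's incremental algorithm avoids by updating only the affected subset S.

```python
import networkx as nx

# Baseline (Brandes) recomputation with networkx; the paper's incremental
# algorithm would instead update only the affected vertex subset S.
G = nx.karate_club_graph()
bc_before = nx.betweenness_centrality(G)

G.add_edge(0, 26)  # a streamed edge insertion
bc_after = nx.betweenness_centrality(G)  # full recompute from scratch

changed = {v for v in G if abs(bc_after[v] - bc_before[v]) > 1e-12}
print(f"{len(changed)} of {G.number_of_nodes()} vertices changed")
```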