Bibliography
The Internet of Things (IoT) is slowly but steadily changing the way we interact with our surroundings. Smart cities, smart environments, and smart buildings are just a few macroscopic examples of how smart ecosystems are increasingly involved in our daily life, each one offering a different set of information. The decentralization and scattering of this information can be exploited to optimize the on-demand information retrieval process of mobile nodes. We propose an approach focused on defining competence domains in smart systems, where the responsibility of providing a specific piece of information to a mobile node is defined by spatial constraints. By exploiting the interplay and duality of Cloud Computing and Fog Computing, we introduce an approach that leverages the spatial allocation of data in smart systems to optimize the information retrieval of mobile nodes.
Smart Transportation applications are by nature examples of Vehicular Ad-hoc Network (VANET) applications, where mobile vehicles, roadside units, and transportation infrastructure interplay with one another to provide value-added services. While there is abundant research focusing on the communication aspect of such Mobile Ad-hoc Networks, few bodies of research target the development of VANET applications. Among popular VANET applications, a dominant direction is to leverage Cloud infrastructure to execute and deliver applications and services. Recent studies have shown that Cloud Computing is not sufficient for many VANET applications due to the mobility of vehicles and the latency-sensitive requirements they impose. To this end, Fog Computing has been proposed to leverage computation infrastructure closer to the network edge to complement Cloud Computing in providing latency-sensitive applications and services. However, application development in Fog environments is much more challenging than in the Cloud due to the distributed nature of Fog systems. In this paper, we investigate how Smart Transportation applications are developed following the Fog Computing approach, their challenges, and possible mitigations from the state of the art.
The notion of edge computing introduces new computing functions away from centralized locations and closer to the network edge, thus facilitating new applications and services. This enhanced computing paradigm provides application developers with new opportunities not available otherwise. In this talk, I will discuss why placing computation functions at the extreme edge of our network infrastructure, i.e., in wireless Access Points and home set-top boxes, is particularly beneficial for a large class of emerging applications. I will discuss a specific approach, called ParaDrop, to implement such edge computing functionalities, and use examples from different domains – smarter homes, sustainability, and intelligent transportation – to illustrate the new opportunities around this concept.
Wearable devices, which are small electronic devices worn on the human body, are equipped with limited processing and storage capacity and offer certain types of integrated functionality. Wearable devices have recently become increasingly popular, and various kinds have been launched on the market; however, they require a powerful local hub, most often a smartphone, to supplement their processing and storage capacities for advanced functionality. It can sometimes be inconvenient to carry the local hub (smartphone); thus, many wearable devices are equipped with a Wi-Fi interface, enabling them to exchange data with the local hub through the Internet when it is not nearby. However, this results in long response times and restricted functionality. In this paper, we present a virtual local-hub solution, which utilizes nearby network equipment (e.g., Wi-Fi APs) as the local hub. Since migrating all applications serving each wearable device would consume too many networking and storage resources, the proposed solution deploys function modules to multiple network nodes and enables remote function module sharing among different users and applications. To reduce the impact of the solution on network bandwidth, we propose a heuristic algorithm for function module allocation with the objective of minimizing total bandwidth consumption. We conduct a series of experiments, and the results show that the proposed solution can reduce bandwidth consumption by up to half while still serving all requests, even given a large number of service requests.
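A minimal sketch of what a greedy bandwidth-minimizing module allocation of this kind could look like; the data model, node names, and cost table are illustrative assumptions, not the paper's algorithm:

```python
# Hypothetical greedy heuristic in the spirit of the paper's objective:
# deploy each function module once and share it, placing it on the node
# that is cheapest (in bandwidth) for the first requesting user.

def allocate_modules(requests, nodes, bw_cost):
    """requests: list of (user, module) pairs.
    nodes: candidate network nodes (e.g., Wi-Fi APs).
    bw_cost[(user, node)]: bandwidth consumed serving that user from that node."""
    placement = {}   # module -> node hosting it
    total_bw = 0
    for user, module in requests:
        if module in placement:                      # share a deployed module
            total_bw += bw_cost[(user, placement[module])]
            continue
        best = min(nodes, key=lambda n: bw_cost[(user, n)])
        placement[module] = best                     # first deployment
        total_bw += bw_cost[(user, best)]
    return placement, total_bw

# Toy example: two users sharing one "notify" module across two APs.
cost = {("alice", "ap1"): 1, ("alice", "ap2"): 3,
        ("bob", "ap1"): 2, ("bob", "ap2"): 1}
print(allocate_modules([("alice", "notify"), ("bob", "notify")],
                       ["ap1", "ap2"], cost))
```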
When focusing on the Internet of Things (IoT), communicating and coordinating sensor–actuator data via the cloud involves inefficient overheads and reduces autonomous behavior. The Fog Computing paradigm essentially moves the compute nodes closer to sensing entities by exploiting peers and intermediary network devices. This reduces centralized communication with the cloud and entails increased coordination between sensing entities and (possibly available) smart network gateway devices. In this paper, we analyze the utility of offloading computation among peers in fog-based deployments. It is important to study the trade-offs involved in such computation offloading, as we deal with resource-limited (energy, computation capacity) devices. Devices computing in a distributed environment may choose to locally compute part of their data and communicate the remainder to their peers. An optimization formulation is presented and applied to various deployment scenarios, taking the computation and communication overheads into account. Our technique is demonstrated on a network of robotic sensor–actuators built on the ROS (Robot Operating System) platform that coordinate over the fog to complete a task. By applying this optimal offloading, we demonstrate latency improvements of 77.8% and battery usage improvements of 54% on large computation tasks.
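To make the shape of such a formulation concrete, here is a hedged sketch (not the paper's exact model) of picking the fraction of data a node computes locally versus offloads to a peer; all rates and the parallel-execution assumption are invented for illustration:

```python
# Scan local fractions f in [0, 1]; the remaining (1 - f) of the data is
# sent over the link and processed by the peer. Local and offloaded work
# proceed in parallel, so completion latency is the max of the two paths.

def best_local_fraction(data, local_rate, peer_rate, link_bw, steps=1000):
    best = None
    for i in range(steps + 1):
        f = i / steps
        t_local = f * data / local_rate
        t_offload = (1 - f) * data / link_bw + (1 - f) * data / peer_rate
        latency = max(t_local, t_offload)
        if best is None or latency < best[1]:
            best = (f, latency)
    return best

# e.g., 100 units of data, a slow node, a fast peer, a moderate link:
print(best_local_fraction(data=100, local_rate=5, peer_rate=20, link_bw=10))
```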
Early Internet of Things (IoT) applications have been built around cloud-centric architectures, where information generated at the edge by the "things" is conveyed to and processed in a cloud infrastructure. These architectures centralise processing and decision-making in the data-centre, assuming sufficient connectivity, bandwidth and latency. As applications of the Internet of Things extend to industrial and more demanding consumer applications, the assumptions underlying cloud-centric architectures start to be violated, as for several of these applications connectivity, bandwidth and latency to the data-centre are a challenge. Fog and Mist computing have emerged as forms of "Cloud Computing" closer to the "Edge" and to the "Things" that should alleviate the connectivity, bandwidth and latency challenges faced by Industrial and extremely demanding Consumer Internet of Things applications. This keynote will (1) introduce Cloud, Fog and Mist Computing architectures for the Internet of Things, (2) motivate their need and explain their applicability with real-world use cases, and (3) assess their technological maturity and highlight the areas that require further academic and industrial research.
Offloading the computationally expensive Simultaneous Localization and Mapping (SLAM) task for mobile robots has attracted significant attention during the last few years. The lack of powerful on-board compute capability in these energy-constrained mobile robots and the rapid advancement in cloud access technologies laid the foundation for the development of several Cloud Robotics platforms that enable parallel execution of computationally expensive robotic algorithms, especially those involving multiple robots. In this work, the Cloud Robotics concept is extended to include the current emphasis on computing at network edge nodes along with the Cloud. The requirements and advantages of using edge nodes for computation offloading over a remote cloud or local robot clusters are discussed with reference to the ETSI 'Mobile-Edge Computing' initiative and the OpenFog Consortium's 'OpenFog Architecture'. A Particle Filter algorithm for SLAM is modified and implemented for offloading in a multi-tier edge+cloud setup. Additionally, a model is proposed for the offloading decision in such a setup, with experiments and results demonstrating the efficacy of the proposed dynamic offloading scheme over static offloading strategies.
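A common textbook form of such an offloading decision rule, sketched here under invented parameters (the paper's model and numbers may differ): offload when transfer time plus remote compute time beats local compute time.

```python
# Hedged sketch of a generic offloading decision, not the paper's exact
# model: compare local execution time against RTT + transfer + remote
# execution. All parameter values below are assumptions.

def should_offload(cycles, data_bytes, local_hz, remote_hz, bw_bytes_s, rtt_s=0.0):
    t_local = cycles / local_hz
    t_remote = rtt_s + data_bytes / bw_bytes_s + cycles / remote_hz
    return t_remote < t_local

# A SLAM particle-filter update: heavy compute, modest map/scan payload.
print(should_offload(cycles=5e9, data_bytes=2e6,
                     local_hz=1e9, remote_hz=3e10,
                     bw_bytes_s=5e6, rtt_s=0.02))
```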
Black-holes, gray-holes, and wormholes are devastating to the correct operation of any network. These attacks (among others) are based on the premise that packets will travel through compromised nodes, and methods exist to coax routing into these traps. Detection of these attacks is mainly centered around finding the subversion in action. In networks, bottleneck nodes -- those that sit on many potential routes between sender and receiver -- are an optimal location for compromise. Finding naturally occurring path bottlenecks, however, does not entail network subversion, and such bottlenecks are therefore more difficult to detect. The dynamic nature of mobile ad-hoc networks (MANETs) makes ubiquitous routing algorithms even more susceptible to this class of attacks. Finding perceived bottlenecks in an OLSR-based MANET can capture between 50% and 75% of the data. In this paper we propose a method of subtly expanding perceived bottlenecks into complete bottlenecks, raising the capture rate up to 99%, albeit at high cost. We further tune the method to reduce cost, and measure the corresponding capture rate.
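As a rough illustration of the underlying notion of a path bottleneck (OLSR routing and the paper's expansion technique are out of scope here; the topology and library choice are assumptions), one can rank nodes by betweenness centrality and estimate how much sender-receiver traffic a compromised node would see:

```python
# Rank nodes by betweenness to find a natural bottleneck, then count the
# fraction of node pairs whose (one representative) shortest path crosses
# it -- a crude stand-in for the "capture rate" discussed above.
import itertools
import networkx as nx

G = nx.barbell_graph(5, 1)            # two cliques joined by a bridge node
bc = nx.betweenness_centrality(G)
bottleneck = max(bc, key=bc.get)

pairs = [(s, t) for s, t in itertools.combinations(G.nodes, 2)
         if bottleneck not in (s, t)]
captured = sum(bottleneck in nx.shortest_path(G, s, t) for s, t in pairs)
print(bottleneck, captured / len(pairs))
```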
It is expected that DRAM memory will be augmented, and perhaps eventually replaced, by one of several up-and-coming memory technologies. These are all non-volatile, in that they retain their contents without power. This allows primary memory to be used as a fast disk replacement. It also enables more aggressive programming models that directly leverage the persistence of primary memory. However, it is challenging to maintain consistency of memory in such an environment. There is no consensus on the right programming model for doing so, and subtle differences can have large, and sometimes surprising, effects on the implementation and its performance. The existing literature describes multiple programming systems that provide point solutions to selective persistence of user data structures. Real progress in this area requires a choice of programming model, which we cannot reasonably make without a real understanding of the design space. Point solutions are insufficient. We systematically explore what we consider to be the most promising part of the space, precisely defining semantics and identifying implementation costs. This allows us to be much more explicit and precise about semantic and implementation trade-offs that were usually glossed over in prior work. It also exposes some promising new design alternatives.
k-anonymity is an efficient way to anonymize relational data to protect privacy against re-identification attacks. For the purpose of k-anonymity on transaction data, each item is considered a quasi-identifier attribute, which gives rise to a high-dimensionality problem and increases both the computational complexity and the information loss of anonymization. In this paper, an efficient anonymity system is designed that not only anonymizes transaction data with lower information loss but also reduces the computational complexity of anonymization. An extensive experiment is carried out to show the efficiency of the designed approach compared to state-of-the-art anonymization algorithms in terms of runtime and information loss. Experimental results indicate that the proposed anonymity system outperforms the compared algorithms in all respects.
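For orientation, here is a minimal sketch of the property being enforced, with a deliberately crude suppression-based anonymizer; the paper's actual algorithm is more sophisticated, and the sample data is invented:

```python
# k-anonymity on transaction data: every record (set of items) must be
# indistinguishable from at least k-1 others. Items suppressed by the
# naive loop below are the "information loss" the paper minimizes.
from collections import Counter

def is_k_anonymous(transactions, k):
    counts = Counter(frozenset(t) for t in transactions)
    return all(c >= k for c in counts.values())

def suppress_to_k(transactions, k):
    """Naively drop the rarest items until every record is k-anonymous."""
    txs = [set(t) for t in transactions]
    item_freq = Counter(i for t in txs for i in t)
    for item, _ in sorted(item_freq.items(), key=lambda kv: kv[1]):
        if is_k_anonymous(txs, k):
            break
        for t in txs:
            t.discard(item)
    return txs

data = [{"milk", "beer"}, {"milk", "beer"}, {"milk", "diapers"}]
print(is_k_anonymous(data, 2), suppress_to_k(data, 2))
```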
In order to identify a personalized story suitable for the needs of large masses of visitors and tourists, our work has aimed at the definition of appropriate models and content-fruition solutions that make the visit experience more appealing and immersive. This paper proposes the characteristic functionalities of narratology and of storytelling techniques for the dynamic creation of experiential stories on a semantic basis. It therefore represents a report on scenarios, implementation models, and architectural and functional specifications of storytelling for the dynamic creation of functional contents for the visit. Our purpose is to indicate an approach for the realization of a dynamic storytelling engine that allows the dynamic supply of narrative contents, not necessarily predetermined, pertinent to the needs and the dynamic behaviors of the users. In particular, we have chosen to employ an adaptive, social and mobile approach, using an ontological model in order to realize a dynamic digital storytelling system able to collect and elaborate social information and contents about the users, giving them a personalized story on the basis of the place they are visiting. A case study and some experimental results are presented and discussed.
The ever-increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud - a huge amount of machine-readable knowledge encoded as RDF statements - with the aim of improving their effectiveness. In order to exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of different feature selection techniques influences its performance in terms of accuracy and diversity. The experimental evaluation on two state-of-the-art datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features is able to outperform several state-of-the-art baselines, such as collaborative filtering and matrix factorization, thus confirming the effectiveness of the proposed approach.
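A rough sketch of the graph-based idea (not the authors' exact algorithm): items and their LOD features form one graph, and a personalized random walk from a user's liked items ranks candidates. The tiny graph and feature URIs below are invented:

```python
# User feedback edges plus item-to-LOD-feature edges in a single graph;
# personalized PageRank from the user scores unseen items, so candidates
# sharing RDF features with liked items rank higher.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("u1", "MatrixMovie"), ("u1", "BladeRunner")])   # feedback
G.add_edges_from([("MatrixMovie", "dbo:ScienceFiction"),           # LOD features
                  ("BladeRunner", "dbo:ScienceFiction"),
                  ("Alien", "dbo:ScienceFiction"),
                  ("NottingHill", "dbo:RomanticComedy")])

rank = nx.pagerank(G, personalization={"u1": 1.0})
candidates = ["Alien", "NottingHill"]
print(max(candidates, key=rank.get))   # "Alien", via the shared feature
```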
In the beginning was the single core ... Then we moved to multicore, before we were fully ready for it! Then GPUs appeared on the scene, giving us very high performance for some types of applications ... What is next? How can we get more performance? The very near future will be the era of heterogeneous computing. We already have a glimpse of it now; you write code for multicore and GPUs together, right? As computer systems become more and more heterogeneous (cores of different capabilities, GPUs, application-specific hardware, ...), writing efficient code for them becomes more and more challenging. What type of heterogeneity are we talking about? Why do we need this heterogeneity? How can we write software that makes the best use of it? ... These are the topics we will discuss in this talk.
During software evolution, it is important to evolve not only the source code, but also its architecture, to prevent architecture drift and architecture erosion. This is a complex activity, especially for large software projects with multiple development teams that might be located in different countries or on different continents. To ease this kind of evolution, we have developed a domain-specific language for making decisions about the evolution. It supports the definition of architectural changes based on multiple implementation tasks that can have temporal dependencies among each other. Then, by means of a model-to-model transformation, we automatically create a constraint model that we use to generate, by means of the Alloy model analyzer, the possible alternative decisions for executing the implementation tasks. The tight integration with architecture abstractions enables architects to automatically check the changes related to an implementation task against the architecture description. This helps keep architecture and code in sync, avoiding drift and erosion.
In this work, we seek to optimize the efficiency of secure general-purpose obfuscation schemes. We focus on the problem of optimizing the obfuscation of Boolean formulas and branching programs – this corresponds to optimizing the "core obfuscator" from the work of Garg, Gentry, Halevi, Raykova, Sahai, and Waters (FOCS 2013), and all subsequent works constructing general-purpose obfuscators. This core obfuscator builds upon approximate multilinear maps, where efficiency in proposed instantiations is closely tied to the maximum number of "levels" of multilinearity required. The most efficient previous construction of a core obfuscator, due to Barak, Garg, Kalai, Paneth, and Sahai (Eurocrypt 2014), required the maximum number of levels of multilinearity to be O(ℓ·s^3.64), where s is the size of the Boolean formula to be obfuscated, and ℓ is the number of input bits to the formula. In contrast, our construction only requires the maximum number of levels of multilinearity to be roughly ℓ·s, or only s when considering a keyed family of formulas, namely a class of functions of the form f_z(x) = φ(z, x), where φ is a formula of size s. This results in significant improvements in both the total size of the obfuscation and the running time of evaluating an obfuscated formula. Our efficiency improvement is obtained by generalizing the class of branching programs that can be directly obfuscated. This generalization allows us to achieve a simple simulation of formulas by branching programs while avoiding the use of Barrington's theorem, on which all previous constructions relied. Furthermore, the ability to directly obfuscate general branching programs (without bootstrapping) allows us to efficiently apply our construction to natural function classes that are not known to have polynomial-size formulas.
Centrality measures have long been helpful for finding the most central or influential node within a network. There are numerous strategies to compute the centrality of a node; however, in social networks betweenness centrality is the most widely used approach to separate communities within the network, to find the susceptibility within complex networks, and to generate scale-free networks whose degree distribution follows the power law. In this paper, we compute betweenness centrality by identifying the communities lying within the network. Our algorithm efficiently updates the centrality of the nodes whenever any edge or vertex addition or deletion takes place within the dynamic network, by modifying only a subset of vertices. For vertex addition, an incremental algorithm has been used, in which streaming graphs have also been considered. Brandes' approach is the most widely used approach for computing betweenness centrality; however, it is still expensive for growing networks, since it takes O(mn + n^2 log n) time and O(n + m) space, whereas our approach efficiently updates the centrality of the nodes in O(|S|n + |S|n log n) time, where |S| is the size of the subset of vertices, m is the number of edges, n is the number of vertices, and |S| ≤ n holds.
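For context, a small illustration of the quantities involved (using networkx's implementation of Brandes' algorithm, i.e., the O(mn + n^2 log n) baseline the abstract cites, rather than the paper's incremental method): recomputing after an edge insertion shows that only a subset of vertices actually changes, which is what the paper exploits.

```python
# Exact betweenness before and after a dynamic-network update; the graph
# choice is an illustrative assumption.
import networkx as nx

G = nx.karate_club_graph()
before = nx.betweenness_centrality(G)

G.add_edge(0, 26)                      # an edge-addition event
after = nx.betweenness_centrality(G)

changed = [v for v in G if abs(after[v] - before[v]) > 1e-12]
print(f"{len(changed)}/{len(G)} vertices affected; the subset S with |S| <= n")
```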
Security plays a very important and crucial role in the field of network communication systems and the Internet. The Kerberos authentication protocol was designed and developed by the Massachusetts Institute of Technology (MIT); it provides authentication by encrypting information and allows clients to access servers in a secure manner. This paper describes the design and implementation of Kerberos using the Data Encryption Standard (DES). DES is a private-key cryptography system that provides security in the communication system. The Java Development Kit is used as the front end and MS Access as the back end of the implementation.
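A hedged sketch of the DES building block such a Kerberos implementation relies on (shown with the PyCryptodome library rather than Java, purely for illustration; the key and ticket contents are made up, and DES itself is long deprecated in favor of AES):

```python
# The KDC and a principal share a secret key; tickets travel encrypted.
# Requires: pip install pycryptodome
from Crypto.Cipher import DES
from Crypto.Util.Padding import pad, unpad

key = b"8bytekey"                      # DES keys are 8 bytes (56 effective bits)
cipher = DES.new(key, DES.MODE_CBC)    # a fresh random IV per message
ticket = pad(b"client: alice, service: fileserver, expires: +8h",
             DES.block_size)
ct = cipher.encrypt(ticket)

# The receiver needs the IV alongside the ciphertext to decrypt.
plain = unpad(DES.new(key, DES.MODE_CBC, cipher.iv).decrypt(ct),
              DES.block_size)
print(plain)
```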
TLS and SSH are two of the most commonly used protocols for securing Internet traffic. Many of the implementations of these protocols rely on the cryptographic primitives provided in the OpenSSL library. In this work we disclose a vulnerability in OpenSSL, affecting all versions and forks (e.g. LibreSSL and BoringSSL) since roughly October 2005, which renders the implementation of the DSA signature scheme vulnerable to cache-based side-channel attacks. Exploiting the software defect, we demonstrate the first published cache-based key-recovery attack on these protocols: 260 SSH-2 handshakes to extract a 1024/160-bit DSA host key from an OpenSSH server, and 580 TLS 1.2 handshakes to extract a 2048/256-bit DSA key from an stunnel server.
The Physical Web is a project announced by Google's Chrome team that essentially provides a framework to discover "smart" physical objects (e.g. vending machines, classrooms, conference rooms, cafeterias, etc.) and interact with specific, contextual content without having to resort to downloading a specific app. A common app, such as the open-source and freely available Physical Web app on the Google Play Store or the BKON Browser on the Apple App Store, can access nearby beacons. A current work-in-progress at the University of Maui College is developing a campus-wide prototype of beacon technology using the Eddystone-URL and EID protocols with beacons from various vendors.
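For reference, a sketch of the Eddystone-URL frame encoding that such beacons broadcast, following Google's published frame format (frame type 0x10, calibrated TX power, scheme prefix byte, then a compressed URL); the example URL and TX power are made up, and beacon hardware is out of scope:

```python
# Eddystone-URL compresses URLs into <= 17 bytes using 1-byte codes for
# common schemes and substrings.
SCHEMES = {"http://www.": 0x00, "https://www.": 0x01,
           "http://": 0x02, "https://": 0x03}
EXPANSIONS = {".com/": 0x00, ".org/": 0x01, ".edu/": 0x02, ".net/": 0x03,
              ".com": 0x07, ".org": 0x08, ".edu": 0x09, ".net": 0x0A}

def eddystone_url_frame(url: str, tx_power: int = -20) -> bytes:
    # Longest scheme first, so "https://www." wins over "https://".
    for scheme, code in sorted(SCHEMES.items(), key=lambda s: -len(s[0])):
        if url.startswith(scheme):
            prefix, rest = code, url[len(scheme):]
            break
    else:
        raise ValueError("unsupported URL scheme")
    for text, code in sorted(EXPANSIONS.items(), key=lambda e: -len(e[0])):
        rest = rest.replace(text, chr(code))       # 1-byte expansion codes
    return bytes([0x10, tx_power & 0xFF, prefix]) + rest.encode("latin-1")

print(eddystone_url_frame("https://example.com/").hex())
```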
Many technologies have been developed to provide effective opportunities to enhance road safety and improve transportation systems. In light of this, the concept of Vehicular Ad-Hoc Networks (VANET) was introduced to provide intelligent transportation systems. In this work, we propose the use of an OBD Bluetooth adapter and a smartphone to gather data from two cars; we then analyze the relationship between RPM and speed data to identify whether it reflects the vehicle's current gear. As a result, we found a coefficient that indicates the behavior of each gear over time in a trace. We conclude that this analysis, although preliminary, suggests a way to determine the gear state. Therefore, many services can be developed using this information, such as recommendation of gear shift time, eco-driving support, security patterns, and entertainment applications.
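An illustrative sketch of the relationship being exploited: within a fixed gear, the ratio RPM/speed is roughly constant, so that coefficient computed over an OBD trace separates the gears. The sample readings below are invented:

```python
# samples: (rpm, speed_kmh) pairs as read from an OBD-II adapter.
def gear_coefficients(samples):
    return [rpm / speed for rpm, speed in samples if speed > 0]

trace = [(1500, 20), (3000, 40), (2250, 30),   # ~75 rpm per km/h: one gear
         (1500, 40), (2250, 60),               # ~37.5: a higher gear
         (1500, 60), (2000, 80)]               # ~25: higher still
for c in gear_coefficients(trace):
    print(round(c, 1))
```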
Secure location sensing has the potential to improve healthcare processes regarding security, efficiency, and safety. For example, enforcing close physical proximity to a patient when using a barcode medication administration system (BCMA) can mitigate the consequences of unsafe barcode scanning workarounds. We present Beacon+, a Bluetooth Low Energy (BLE) device that extends the design of Apple's popular iBeacon specification with unspoofable, temporal, and authenticated advertisements. Our prototype Beacon+ design enables secure location sensing applications such as real-time tracking of hospital assets (e.g., infusion pumps). We implement this exact real-time tracking system and use it as a foundation for a novel application that applies location-based restrictions on access control.
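One way a beacon can make its advertisements unspoofable and temporal, sketched here in the spirit of Beacon+ (the paper's actual construction may differ; the key, beacon ID, and rotation interval are assumptions): broadcast a truncated HMAC over a coarse timestamp, verifiable by anyone holding the shared key.

```python
# A stale or replayed advertisement fails verification once its epoch
# rotates; key provisioning to verifiers is assumed out of band.
import hmac, hashlib, time

KEY = b"secret shared with the verifier"

def advertisement(beacon_id: bytes, interval: int = 30) -> bytes:
    epoch = int(time.time()) // interval          # rotates every `interval` s
    tag = hmac.new(KEY, beacon_id + epoch.to_bytes(8, "big"),
                   hashlib.sha256).digest()[:8]
    return beacon_id + tag                        # fits in a BLE advertisement

def verify(adv: bytes, beacon_id: bytes, interval: int = 30) -> bool:
    epoch = int(time.time()) // interval
    expected = hmac.new(KEY, beacon_id + epoch.to_bytes(8, "big"),
                        hashlib.sha256).digest()[:8]
    return hmac.compare_digest(adv[len(beacon_id):], expected)

adv = advertisement(b"pump-42")
print(verify(adv, b"pump-42"))   # True now, False after the epoch rotates
```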
In this paper, we present an architecture and implementation of a secure, automated, proximity-based access control system that we refer to as Context-Aware System to Secure Enterprise Content (CASSEC). Using pervasive WiFi and Bluetooth wireless devices as components in our underlying positioning infrastructure, CASSEC addresses two proximity-based scenarios often encountered in enterprise environments: Separation of Duty and Absence of Other Users. The first scenario is handled by using the Bluetooth MAC addresses of nearby occupants as authentication tokens. The second scenario exploits the interference in WiFi received signal strength that occurs when an occupant crosses the line of sight (LOS). Regardless of the scenario, information about the occupancy of a particular location is periodically extracted to support continuous authentication. To the best of our knowledge, our approach is the first to incorporate WiFi signal interference caused by occupants as part of a proximity-based access control system. Our results demonstrate that it is feasible to achieve high accuracy in localizing occupants in a monitored room.
Bluetooth-reliant devices are increasingly proliferating into various industry and consumer sectors as part of a burgeoning wearable market that adds convenience and awareness to everyday life. Relying primarily on a constantly changing hop pattern to reduce data sniffing during transmission, wearable devices routinely disconnect and reconnect with their base station (typically a cell phone), causing a connection repair each time. These connection repairs allow an adversary to determine which local wearable devices are communicating with which base stations. In addition, data transmitted to a base station as part of a wearable app may be forwarded onward to an awaiting web API even if the base station is in an insecure environment (e.g. on public Wi-Fi). In this paper, we introduce an approach to increase the security and privacy associated with using wearable devices by imposing transmission changes given situational awareness of the base station. These changes are asserted via policy rules based on the sensor information from the wearable devices, collected and aggregated by the base station. The rules are housed in an application on the base station that adapts the base station to a state in which it prevents data from being transmitted by the wearable devices without disconnecting them. The policies can be updated manually or through an over-the-air update, as determined by the user.
Wearable tracking devices have gained widespread usage and popularity because of the valuable services they offer, monitoring a person's health parameters and, in general, helping people take better care of themselves. Nevertheless, the security risks associated with such devices can be a concern among consumers, because of the sensitive information these devices deal with, such as sleeping patterns, eating habits, heart rate, and so on. In this paper, we analyse the key security and privacy features of two entry-level health trackers from leading vendors (Jawbone and Fitbit), exploring possible attack vectors and vulnerabilities at several system levels. The results of the analysis show how these devices are vulnerable to several attacks (perpetrated with consumer-level devices equipped with just Bluetooth and Wi-Fi) that can compromise users' data privacy and security, and they ultimately call on the tracker vendors to raise the stakes against such attacks.
There has been a tremendous increase in the popularity and adoption of wearable fitness trackers. These fitness trackers predominantly use Bluetooth Low Energy (BLE) for communicating and syncing data with the user's smartphone. This paper presents a measurement-driven study of possible privacy leakage from BLE communication between the fitness tracker and the smartphone. Using real BLE traffic traces collected in the wild and in controlled experiments, we show that the majority of fitness trackers use an unchanged BLE address while advertising, making it feasible to track them. The BLE traffic of the fitness trackers is found to be correlated with the intensity of the user's activity, making it possible for an eavesdropper to determine the user's current activity (walking, sitting, idle or running) through BLE traffic analysis. Furthermore, we also demonstrate that the BLE traffic can represent the user's gait, which is known to be distinct from user to user. This makes it possible to identify a person (from a small group of users) based on the BLE traffic of her fitness tracker. As BLE-based wearable fitness trackers become widely adopted, our aim is to identify important privacy implications of their usage and discuss prevention strategies.
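A toy illustration of the eavesdropping idea (the paper works from real BLE traces with richer features; the rate thresholds and sample windows below are entirely invented): map observed BLE packet-rate intensity to a coarse activity label.

```python
# Coarse activity inference from sniffed BLE traffic rate alone.
def classify_activity(packets_per_second: float) -> str:
    if packets_per_second < 0.5:
        return "idle"
    if packets_per_second < 2:
        return "sitting"
    if packets_per_second < 6:
        return "walking"
    return "running"

# Hypothetical per-10s windows of a fitness tracker's BLE traffic:
for rate in [0.1, 1.2, 4.0, 9.5]:
    print(rate, "->", classify_activity(rate))
```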