Biblio

Found 918 results

Filters: First Letter Of Title is T
2017-05-16
Worthy, Peter, Matthews, Ben, Viller, Stephen.  2016.  Trust Me: Doubts and Concerns Living with the Internet of Things. Proceedings of the 2016 ACM Conference on Designing Interactive Systems. :427–434.

An increasing number of everyday objects are now connected to the internet, collecting and sharing information about us: the "Internet of Things" (IoT). However, as the number of "social" objects increases, human concerns arising from this connected world are starting to become apparent. This paper presents the results of a preliminary qualitative study in which five participants lived for a week with an ambiguous IoT device that collected and shared data about their activities at home. In analyzing this data, we identify the nature of the human and socio-technical concerns that arise when living with IoT technologies. Trust is identified as a critical factor: as trust in the entity or entities able to use their collected information decreases, users are likely to demand greater control over information collection. Addressing these concerns may support greater engagement of users with IoT technology. The paper concludes with a discussion of how IoT systems might be designed to better foster trust with their owners.

Bandyopadhyay, Bortik, Fuhry, David, Chakrabarti, Aniket, Parthasarathy, Srinivasan.  2016.  Topological Graph Sketching for Incremental and Scalable Analytics. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :1231–1240.

We propose a novel, scalable, and principled graph sketching technique based on minwise hashing of local neighborhoods. For an n-node graph with e edges (e >> n), we incrementally maintain in real time a minwise neighbor-sampled subgraph using k hash functions in O(n x k) memory, with the limit user-configurable via the parameter k. Symmetrization and similarity-based techniques can recover from these data structures a significant portion of the original graph. We present a theoretical analysis of the minwise sampling strategy and also derive unbiased estimators for important graph properties such as triangle count and neighborhood overlap. We perform an extensive empirical evaluation of our graph sketch and its derivatives on a wide variety of real-world graph data sets drawn from different application domains, using important large-network analysis algorithms: local and global clustering coefficient, PageRank, and local graph sparsification. With bounded memory, the quality of results using the sketch representation is competitive against baselines which use the full graph, and the computational performance is often better. Our framework is flexible and configurable to be leveraged by numerous other graph analytics algorithms, potentially reducing the information mining time on large streamed graphs for a variety of applications.
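
To make the core data structure concrete, here is a minimal sketch (not the authors' code; all names are hypothetical) of minwise neighbor sampling: each node keeps, per hash function, the neighbor with the smallest hash value, so an incoming edge only touches its two endpoints.

```python
import random

class MinwiseGraphSketch:
    """Toy sketch: per node, keep the min-hash neighbor under k hash functions.
    A rough illustration of minwise neighbor sampling in O(n x k) memory."""

    def __init__(self, k, seed=42):
        rng = random.Random(seed)
        self.p = 2**61 - 1  # k universal hash functions h_i(x) = (a*x + b) mod p
        self.coeffs = [(rng.randrange(1, self.p), rng.randrange(self.p)) for _ in range(k)]
        self.sketch = {}  # node -> list of (min hash value, sampled neighbor)

    def _hash(self, i, x):
        a, b = self.coeffs[i]
        return (a * x + b) % self.p

    def add_edge(self, u, v):
        # Incremental update: an arriving edge only updates its two endpoints.
        for node, nbr in ((u, v), (v, u)):
            row = self.sketch.setdefault(node, [(float("inf"), None)] * len(self.coeffs))
            for i in range(len(self.coeffs)):
                h = self._hash(i, nbr)
                if h < row[i][0]:
                    row[i] = (h, nbr)

    def neighborhood_overlap(self, u, v):
        # Jaccard estimate of neighbor-set overlap from matching min-hash samples.
        ru, rv = self.sketch.get(u), self.sketch.get(v)
        if not ru or not rv:
            return 0.0
        return sum(a[1] == b[1] for a, b in zip(ru, rv)) / len(ru)

sk = MinwiseGraphSketch(k=64)
for u, v in [(1, 2), (1, 3), (2, 3), (3, 4)]:
    sk.add_edge(u, v)
print(sk.neighborhood_overlap(1, 2))  # ~1/3: Jaccard of neighbor sets {2,3} and {1,3}
```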

Shrivastava, Anshumali, Konig, Arnd Christian, Bilenko, Mikhail.  2016.  Time Adaptive Sketches (Ada-Sketches) for Summarizing Data Streams. Proceedings of the 2016 International Conference on Management of Data. :1417–1432.

Obtaining frequency information of data streams, in limited space, is a well-recognized problem in the literature. A number of recent practical applications (such as those in computational advertising) require temporally-aware solutions: obtaining historical count statistics for both time-points and time-ranges. In these scenarios, accuracy of estimates is typically more important for recent instances than for older ones; we call this desirable property Time Adaptiveness. With this observation, [20] introduced the Hokusai technique based on count-min sketches for estimating the frequency of any given item at any given time. The proposed approach is problematic in practice, as its memory requirements grow linearly with time, and it produces discontinuities in the estimation accuracy. In this work, we describe a new method, Time-adaptive Sketches (Ada-sketches), that overcomes these limitations while extending and providing a strict generalization of several popular sketching algorithms. The core idea of our method is inspired by the well-known digital Dolby noise reduction procedure that dates back to the 1960s. The theoretical analysis presented could be of independent interest in itself, as it provides clear results for the time-adaptive nature of the errors. An experimental evaluation on real streaming datasets demonstrates the superiority of the described method over Hokusai in estimating point and range queries over time. The method is simple to implement and offers a variety of design choices for future extensions. The simplicity of the procedure and the method's generalization of classic sketching techniques give hope for wide applicability of Ada-sketches in practice.
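
A rough illustration of the pre-emphasis idea behind time-adaptive sketching, assuming a count-min backend and an exponential weight schedule (both illustrative choices, not the paper's exact construction): updates are up-weighted by a factor that grows with time and the weight is divided back out at query time, so relative error shrinks for recent items.

```python
import hashlib

class AdaCountMin:
    """Toy time-adaptive count-min: pre-emphasize updates by a growing weight
    alpha**t, then de-emphasize at query time, so recent times see less error."""

    def __init__(self, depth=4, width=2048, alpha=1.01):
        self.depth, self.width, self.alpha = depth, width, alpha
        self.table = [[0.0] * width for _ in range(depth)]

    def _cols(self, item, t):
        key = f"{item}|{t}".encode()
        for row in range(self.depth):
            h = hashlib.blake2b(key, digest_size=8, salt=bytes([row])).digest()
            yield int.from_bytes(h, "big") % self.width

    def update(self, item, t, count=1):
        w = self.alpha ** t  # pre-emphasis: later timestamps weigh more
        for row, col in enumerate(self._cols(item, t)):
            self.table[row][col] += w * count

    def estimate(self, item, t):
        w = self.alpha ** t  # de-emphasis: collisions from old items shrink
        return min(self.table[row][col] for row, col in enumerate(self._cols(item, t))) / w

cm = AdaCountMin()
for t in range(100):
    cm.update("ad42", t, count=t % 5)
print(cm.estimate("ad42", 99))  # close to the true count 4 at the recent time
```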

Andersson, Björn.  2016.  Turning Compositionality into Composability. SIGBED Rev.. 13:25–30.

Compositional theories and technologies facilitate the decomposition of a complex system into components, as well as their integration via interfaces. Component interfaces hide the internal details of the components, thereby reducing integration complexity. A system is said to be composable if the properties established and validated for components in isolation hold once the components are integrated to form the system. This raises the question: "Is compositionality related to composability?" This paper answers this question in the affirmative; it considers a previously known interface for compositionality and shows that it can be used for composability. It also presents a run-time policing mechanism for this interface.

2017-04-24
Kuang, Liwei, Yang, Laurence T., Rho, Seungmin (Charlie), Yan, Zheng, Qiu, Kai.  2016.  A Tensor-Based Framework for Software-Defined Cloud Data Center. ACM Trans. Multimedia Comput. Commun. Appl.. 12:74:1–74:23.

Multimedia has been increasing exponentially as the biggest form of big data, consisting of video clips, images, and audio files. Processing and analyzing them on a cloud data center has become a preferred solution that can utilize the large pool of cloud resources to address the problems caused by the tremendous amount of unstructured multimedia data. However, there exist many challenges in processing multimedia big data on a cloud data center, such as the multimedia data representation approach, an efficient networking model, and an estimation method for traffic patterns. The primary purpose of this article is to develop a novel tensor-based software-defined networking model on a cloud data center for multimedia big-data computation and communication. First, an overview of the proposed framework is provided, in which the functions of the representative modules are briefly illustrated. Then, three models (forwarding tensor, control tensor, and transition tensor) are proposed for the management of networking devices and the prediction of network traffic patterns. Finally, two algorithms for single-mode and multimode tensor eigen-decomposition are developed, and the incremental method is employed for efficiently updating the generated eigen-vector and eigen-tensor. Experimental results reveal that the proposed framework is feasible and efficient in handling multimedia big data on a cloud data center.
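
For flavor, a tensor eigenpair of the kind such frameworks compute can be approximated with the generic higher-order power method; the numpy sketch below is an illustration under that assumption, not the paper's incremental algorithm.

```python
import numpy as np

def tensor_eigenpair(T, iters=200, tol=1e-10):
    """Symmetric higher-order power method for a cubic tensor T (n x n x n):
    find (lam, x) with T x x = lam * x (a Z-eigenpair). Generic illustration."""
    n = T.shape[0]
    x = np.random.default_rng(0).normal(size=n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum("ijk,j,k->i", T, x, x)  # contract two modes with x
        lam = x @ y
        x_new = y / np.linalg.norm(y)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return lam, x

# Tiny symmetric test tensor built from a known vector v: T = v (x) v (x) v.
v = np.array([3.0, 4.0]) / 5.0
T = np.einsum("i,j,k->ijk", v, v, v)
lam, x = tensor_eigenpair(T)
print(round(lam, 6), np.round(x, 3))  # lam ~ 1, x ~ v
```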

Sivakorn, Suphannee, Keromytis, Angelos D., Polakis, Jason.  2016.  That's the Way the Cookie Crumbles: Evaluating HTTPS Enforcing Mechanisms. Proceedings of the 2016 ACM on Workshop on Privacy in the Electronic Society. :71–81.

Recent incidents have once again brought the topic of encryption to public discourse, while researchers continue to demonstrate attacks that highlight the difficulty of implementing encryption even without the presence of "backdoors". However, apart from the threat of implementation flaws in encryption libraries, another significant threat arises when web services fail to enforce ubiquitous encryption. A recent study explored this phenomenon in popular services, and demonstrated how users are exposed to cookie hijacking attacks with severe privacy implications. Many security mechanisms purport to eliminate this problem, ranging from server-controlled options such as HSTS to user-controlled options such as HTTPS Everywhere and other browser extensions. In this paper, we create a taxonomy of available mechanisms and evaluate how they perform in practice. We design an automated testing framework for these mechanisms, and evaluate them using a dataset of 30 days of HTTP requests collected from the public wireless network of our university's campus. We find that all mechanisms suffer from implementation flaws or deployment issues and argue that, as long as servers continue to not support ubiquitous encryption across their entire domain (including all subdomains), no mechanism can effectively protect users from cookie hijacking and information leakage.
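
On the server-controlled side, note that an HSTS policy covers subdomains only when the includeSubDomains directive is present. A toy probe in the spirit of such a testing framework (a hypothetical helper, not the authors' tool) might look like:

```python
import requests

def hsts_status(domain):
    """Fetch a domain over HTTPS and report its Strict-Transport-Security policy.
    A toy probe, not the paper's automated framework."""
    resp = requests.get(f"https://{domain}/", timeout=10)
    hsts = resp.headers.get("Strict-Transport-Security")
    if hsts is None:
        return "no HSTS: cookies may be exposed on a first plain-HTTP request"
    covers_subs = "includesubdomains" in hsts.lower()
    return f"HSTS set ({hsts!r}); subdomains covered: {covers_subs}"

print(hsts_status("example.com"))
```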

Miao, Luwen, Liu, Kaikai.  2016.  Towards a Heterogeneous Internet-of-Things Testbed via Mesh Inside a Mesh: Poster Abstract. Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM. :368–369.

Connectivity is at the heart of the future Internet-of-Things (IoT) infrastructure, which must control and communicate with remote sensors and actuators serving as beacons, data-collection points, and forwarding nodes. Existing sensor network solutions cannot solve the bottleneck problems near the sink node, and the tree-based Internet architecture has a single point of failure. To solve current deficiencies in multi-hop mesh networks and cross-domain network design, we propose a mesh-inside-a-mesh IoT network architecture. Our designed "edge router" ties these two mesh networks together and performs seamless transmission of multi-standard packets. The proposed IoT testbed interoperates with existing standards (Wi-Fi, 6LoWPAN) and network segments, and provides both high-throughput Internet access and resilient sensor coverage throughout the community.

Hauweele, David.  2016.  Tracking Inefficient Power Usages in WSN by Monitoring the Network Firmware: Ph.D. Forum Abstract. Proceedings of the 15th International Conference on Information Processing in Sensor Networks. :32:1–32:2.

The specification and implementation of network protocols are difficult tasks. This is particularly the case for WSN nodes, which are typically implemented on low-power microcontrollers with limited processing capabilities. Those platforms do not have the resources to run a full-fledged operating system but instead are programmed using a Real Time Operating System (RTOS) specialized in low-power wireless communications. The most popular are Contiki and TinyOS. Those RTOSs support a fully-compliant IPv6 stack including the 6LoWPAN adaptation layer [1], several radio duty-cycling MAC protocols [2], [3], and multiple routing protocols [4].

2017-04-20
McCall, Roderick, McGee, Fintan, Meschtscherjakov, Alexander, Louveton, Nicolas, Engel, Thomas.  2016.  Towards A Taxonomy of Autonomous Vehicle Handover Situations. Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. :193–200.

This paper proposes a taxonomy of autonomous vehicle handover situations with a particular emphasis on situational awareness. It focuses on a number of research challenges, such as legal responsibility, the situational awareness level of the driver and the vehicle, and the knowledge the vehicle must have of the driver's driving skills as well as of the in-vehicle context. The taxonomy acts as a starting point for researchers and practitioners to frame the discussion on this complex problem.

Yang, Kai, Wang, Jing, Bao, Lixia, Ding, Mei, Wang, Jiangtao, Wang, Yasha.  2016.  Towards Future Situation-Awareness: A Conceptual Middleware Framework for Opportunistic Situation Identification. Proceedings of the 12th ACM Symposium on QoS and Security for Wireless and Mobile Networks. :95–101.

Opportunistic Situation Identification (OSI) is a new paradigm for situation-aware systems, in which the contexts used for situation identification are sensed through sensors that happen to be available rather than ones that are pre-deployed and application-specific. OSI extends the scale of application usage and reduces system costs. However, designing and implementing the OSI module of a situation-aware system encounters several challenges, including the uncertainty of context availability, vulnerable network connectivity, and privacy threats. This paper proposes a novel middleware framework to tackle these challenges; its intuition is to perform situation reasoning locally on a smartphone without relying on the cloud, thus reducing the dependency on the network and being more privacy-preserving. To realize this intuition, we propose a hybrid learning approach that maximizes reasoning accuracy under the phone's limited storage space by combining two state-of-the-art techniques. Specifically, this paper provides a genetic-algorithm-based optimization approach to determine which pre-computed models will be selected for storage under the storage constraints. Validation of the approach on an open dataset indicates that it achieves higher accuracy with comparatively small storage cost. Further, the proposed utility function for model selection performs better than three baseline utility functions.
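
As a sketch of the genetic-algorithm selection step under a storage budget (illustrative fitness and operators; not the paper's design), one might write:

```python
import random

def ga_select(models, budget, pop=40, gens=60, seed=1):
    """Toy GA: pick a subset of pre-computed models maximizing total accuracy
    gain under a storage budget. models: list of (storage_cost, accuracy_gain)."""
    rng = random.Random(seed)
    n = len(models)

    def fitness(bits):
        cost = sum(c for b, (c, _) in zip(bits, models) if b)
        gain = sum(g for b, (_, g) in zip(bits, models) if b)
        return gain if cost <= budget else -1.0  # infeasible individuals die out

    popn = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        survivors = popn[: pop // 2]
        children = []
        while len(children) < pop - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1  # point mutation
            children.append(child)
        popn = survivors + children
    best = max(popn, key=fitness)
    return [i for i, b in enumerate(best) if b], fitness(best)

models = [(5, 0.9), (3, 0.6), (4, 0.7), (2, 0.3), (6, 1.0)]
print(ga_select(models, budget=9))  # e.g. a subset with gain 1.6 within cost 9
```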

Gomes, T., Salgado, F., Pinto, S., Cabral, J., Tavares, A..  2016.  Towards an FPGA-based network layer filter for the Internet of Things edge devices. 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA). :1–4.

In the near future, billions of new smart devices will connect to the big network of the Internet of Things, playing a key role in our daily life. Allowing IPv6 on low-power, resource-constrained devices will lead research to focus on novel approaches that aim to improve the efficiency, security, and performance of the 6LoWPAN adaptation layer. This work-in-progress paper proposes a hardware-based Network Packet Filtering (NPF) unit and an IPv6 link-local address calculator which are able to filter received IPv6 packets, offering nearly 18% overhead reduction. The goal is to obtain a System-on-Chip implementation that can be deployed in future IEEE 802.15.4 radio modules.

Rao, K. S., Jain, N., Limaje, N., Gupta, A., Jain, M., Menezes, B..  2016.  Two for the price of one: A combined browser defense against XSS and clickjacking. 2016 International Conference on Computing, Networking and Communications (ICNC). :1–6.

Cross Site Scripting (XSS) and clickjacking have been ranked among the top web application threats in recent times. This paper introduces XBuster - our client-side defence against XSS, implemented as an extension to the Mozilla Firefox browser. XBuster splits each HTTP request parameter into HTML and JavaScript contexts and stores them separately. It searches for both contexts in the HTTP response and handles each context type differently. It defends against all XSS attack vectors including partial script injection, attribute injection and HTML injection. Also, existing XSS filters may inadvertently disable frame busting code used in web pages as a defence against clickjacking. However, XBuster has been designed to detect and neutralize such attempts.
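
To make the two-context idea concrete, here is a much-simplified sketch (hypothetical code, not XBuster itself) that splits a request parameter into HTML and JavaScript contexts and flags a response that reflects a script fragment verbatim:

```python
import re

SCRIPT_RE = re.compile(r"<script[^>]*>(.*?)</script>", re.I | re.S)

def split_contexts(param_value):
    """Split a request parameter into a JavaScript context (script bodies;
    event handlers would be handled similarly) and an HTML context (the rest)."""
    js_parts = SCRIPT_RE.findall(param_value)
    html_part = SCRIPT_RE.sub("", param_value)
    return html_part, js_parts

def reflected_script(param_value, response_body):
    """Flag partial script injection: a JS fragment from the request that
    reappears verbatim in the response is treated as reflected XSS."""
    _, js_parts = split_contexts(param_value)
    return any(js and js in response_body for js in js_parts)

payload = "hello<script>document.cookie</script>"
page = "<html><body>hello<script>document.cookie</script></body></html>"
print(reflected_script(payload, page))  # True -> would be neutralized
```
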
You, T..  2016.  Toward the future of internet architecture for IoE: Precedent research on evolving the identifier and locator separation schemes. 2016 International Conference on Information and Communication Technology Convergence (ICTC). :436–439.

The Internet has become the biggest and best-known communication network, serving as social, industrial, and public infrastructure since its invention in the late 1960s. In historical retrospect, the Internet architecture has evolved repeatedly in response to various technical challenges; for instance, in the early 1990s the Internet encountered a scalability crisis, which was shortly overcome through emerging techniques such as CIDR, NAT, and IPv6. This paper emphasizes scalability as the coming technical challenge, forecasting that the Internet of Things era has arrived. First, we describe the identifier and locator separation scheme, which can achieve dramatic architectural evolution in historical perspective. Additionally, we review various kinds of identifier and locator separation schemes, because such schemes have recently become a major design pillar of the future Internet architecture, in both clean-slate and evolving Internet architectures. Lastly, we present an analysis, in tabular form, of the future Internet of Everything, in which the number of Internet-connected devices is expected to grow to more than 20 billion by 2020.

2017-04-03
Savola, Reijo M., Savolainen, Pekka, Salonen, Jarno.  2016.  Towards Security Metrics-supported IP Traceback. Proceedings of the 10th European Conference on Software Architecture Workshops. :32:1–32:5.

The threat of DDoS and other cyberattacks has increased during the last decade. In addition to the radical increase in the number of attacks, they are also becoming more sophisticated, with the targets ranging from ordinary users to service providers and even critical infrastructure. According to some sources, the sophistication of attacks is increasing faster than the mitigating actions against them. For example, determining the location of the attack origin is becoming impossible, as cyber attackers employ specific means to evade detection of the attack origin by default, such as using proxy services and source address spoofing. The purpose of this paper is to initiate discussion about effective Internet Protocol traceback mechanisms that are needed to overcome this problem. We propose an approach to traceback that is based on extensive use of security metrics before (proactive) and during (reactive) the attacks.

2017-03-27
Batselier, Kim, Chen, Zhongming, Liu, Haotian, Wong, Ngai.  2016.  A Tensor-based Volterra Series Black-box Nonlinear System Identification and Simulation Framework. Proceedings of the 35th International Conference on Computer-Aided Design. :17:1–17:7.

Tensors are a multi-linear generalization of matrices to their d-way counterparts, and have recently received intense interest due to their natural representation of high-dimensional data and the availability of fast tensor decomposition algorithms. Given the input-output data of a nonlinear system/circuit, this paper presents a nonlinear model identification and simulation framework built on top of Volterra series and its seamless integration with tensor arithmetic. By exploiting partially-symmetric polyadic decompositions of sparse Toeplitz tensors, the proposed framework permits a pleasantly scalable way to incorporate high-order Volterra kernels. Such an approach largely eludes the curse of dimensionality and allows computationally fast modeling and simulation beyond weakly nonlinear systems. The black-box nature of the model also hides structural information of the system/circuit and encapsulates it in terms of compact tensors. Numerical examples are given to verify the efficacy, efficiency and generality of this tensor-based modeling and simulation framework.
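
As background, a discrete second-order Volterra model computes the output from first- and second-order kernels; the numpy snippet below evaluates such a truncated series directly (a generic illustration, not the tensor-based framework, which stores high-order kernels compactly).

```python
import numpy as np

def volterra2_output(u, h1, h2):
    """Evaluate a truncated discrete Volterra series:
    y[n] = sum_i h1[i] u[n-i] + sum_{i,j} h2[i,j] u[n-i] u[n-j].
    Generic illustration; the paper encodes high-order kernels as sparse tensors."""
    M = len(h1)
    y = np.zeros(len(u))
    for n in range(len(u)):
        # window of past inputs u[n], u[n-1], ..., u[n-M+1] (zero-padded)
        w = np.array([u[n - i] if n - i >= 0 else 0.0 for i in range(M)])
        y[n] = h1 @ w + w @ h2 @ w
    return y

h1 = np.array([0.5, 0.25])       # linear (first-order) kernel
h2 = 0.1 * np.outer(h1, h1)      # symmetric second-order kernel
u = np.sin(np.linspace(0, 2 * np.pi, 16))
print(np.round(volterra2_output(u, h1, h2), 3))
```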

2017-03-20
Malecha, Gregory, Ricketts, Daniel, Alvarez, Mario M., Lerner, Sorin.  2016.  Towards foundational verification of cyber-physical systems. :1–5.

The safety-critical aspects of cyber-physical systems motivate the need for rigorous analysis of these systems. In the literature this work is often done using idealized models of systems where the analysis can be carried out using high-level reasoning techniques such as Lyapunov functions and model checking. In this paper we present VERIDRONE, a foundational framework for reasoning about cyber-physical systems at all levels from high-level models to C code that implements the system. VERIDRONE is a library within the Coq proof assistant enabling us to build on its foundational implementation, its interactive development environments, and its wealth of libraries capturing interesting theories ranging from real numbers and differential equations to verified compilers and floating point numbers. These features make proof assistants in general, and Coq in particular, a powerful platform for unifying foundational results about safety-critical systems and ensuring interesting properties at all levels of the stack.

2017-03-08
Alotaibi, S., Furnell, S., Clarke, N..  2015.  Transparent authentication systems for mobile device security: A review. 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST). :406–413.

Sensitive data such as text messages, contact lists, and personal information are stored on mobile devices. This makes authentication of paramount importance. More security is needed on mobile devices since, after point-of-entry authentication, the user can perform almost all tasks without having to re-authenticate. For this reason, many authentication methods have been suggested to improve the security of mobile devices in a transparent and continuous manner, providing a basis for convenient and secure user re-authentication. This paper presents a comprehensive analysis and literature review of transparent authentication systems for mobile device security. The review indicates a need to investigate when to authenticate the mobile user, by focusing on the sensitivity level of the application and understanding whether a given application requires protection or not.

Song, D., Liu, W., Ji, R., Meyer, D. A., Smith, J. R..  2015.  Top Rank Supervised Binary Coding for Visual Search. 2015 IEEE International Conference on Computer Vision (ICCV). :1922–1930.

In recent years, binary coding techniques have become increasingly popular because of their high efficiency in handling large-scale computer vision applications. It has been demonstrated that supervised binary coding techniques that leverage supervised information can significantly enhance coding quality, and hence greatly benefit visual search tasks. Typically, a modern binary coding method seeks to learn a group of coding functions which compress data samples into binary codes. However, few methods pursue coding functions such that the precision at the top of a ranking list ordered by the Hamming distances of the generated binary codes is optimized. In this paper, we propose a novel supervised binary coding approach, namely Top Rank Supervised Binary Coding (Top-RSBC), which explicitly focuses on optimizing the precision of top positions in a Hamming-distance ranking list towards preserving the supervision information. The core idea is to train disciplined coding functions by which mistakes at the top of a Hamming-distance ranking list are penalized more than those at the bottom. To solve for such coding functions, we relax the original discrete optimization objective with a continuous surrogate, and derive a stochastic gradient descent method to optimize the surrogate objective. To further reduce the training time cost, we also design an online learning algorithm to optimize the surrogate objective more efficiently. Empirical studies based upon three benchmark image datasets demonstrate that the proposed binary coding approach achieves superior image search accuracy over the state of the art.
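
To ground the objective, retrieval by Hamming distance over binary codes, and the precision-at-top-k it induces, look like this tiny sketch (numpy, with illustrative random data); the paper's contribution is learning the codes so that exactly this precision is optimized.

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to a query's binary code."""
    dists = (db_codes != query_code).sum(axis=1)
    return np.argsort(dists, kind="stable"), dists

def precision_at_k(order, labels, query_label, k):
    """Fraction of the top-k retrieved items sharing the query's label:
    the quantity Top-RSBC aims to optimize directly."""
    topk = order[:k]
    return float((labels[topk] == query_label).mean())

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(1000, 64))   # 1000 items, 64-bit codes
labels = rng.integers(0, 10, size=1000)    # 10 classes
q = db[0].copy(); q[:3] ^= 1               # query: a slightly perturbed item 0
order, _ = hamming_rank(q, db)
print(precision_at_k(order, labels, labels[0], k=10))
```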

Reis, R..  2015.  Trends on EDA for low power. 2015 IEEE MTT-S International Conference on Numerical Electromagnetic and Multiphysics Modeling and Optimization (NEMO). :1–4.

One of the main issues in the design of modern integrated circuits is power reduction. In digital circuits, power consumption was for decades dominated by dynamic power consumption. But in the new nano-CMOS technologies, static power due to leakage current is becoming the main issue in power consumption. As leakage power is related to the number of components, it is becoming mandatory to reduce the number of transistors in any type of design to reduce power consumption. So, it is important to obtain new EDA algorithms and tools to optimize the number of components (transistors). Also needed are layout design automation tools able to lay out any network of components provided by an optimization tool that reduces the size of the network. An example is presented of a layout design automation tool that can lay out any network of transistors, using transistors of any size. Another issue for power optimization is the use of tools and algorithms for gate sizing. The designer can manage the sizing of transistors to reduce power consumption without compromising the clock frequency. There are two types of gate sizing: discrete and continuous. Discrete gate sizing tools are used when the cell library offers only a few available sizes for each cell. Continuous gate sizing assumes that the EDA tool can define any transistor size; in this case, the designer needs a layout design tool able to lay out transistors of any size. The winning tools of the ISPD Contests 2012 and 2013 are presented. Also discussed is the inclusion of our gate sizing algorithms in an industrial flow used to design state-of-the-art microprocessors. Another type of EDA tool that is becoming more and more useful is the visualization tool that provides an animated visual output of a running EDA tool. This kind of tool is very useful for showing tool developers how the tool is running, so that EDA developers can use this information to improve the algorithms used in an EDA tool.

2017-03-07
Krishnan, Sanjay, Haas, Daniel, Franklin, Michael J., Wu, Eugene.  2016.  Towards Reliable Interactive Data Cleaning: A User Survey and Recommendations. Proceedings of the Workshop on Human-In-the-Loop Data Analytics. :9:1–9:5.

Data cleaning is frequently an iterative process tailored to the requirements of a specific analysis task. The design and implementation of iterative data cleaning tools presents novel challenges, both technical and organizational, to the community. In this paper, we present results from a user survey (N = 29) of data analysts and infrastructure engineers from industry and academia. We highlight three important themes: (1) the iterative nature of data cleaning, (2) the lack of rigor in evaluating the correctness of data cleaning, and (3) the disconnect between the analysts who query the data and the infrastructure engineers who design the cleaning pipelines. We conclude by presenting a number of recommendations for future work in which we envision an interactive data cleaning system that accounts for the observed challenges.

Zhang, Jiao, Ren, Fengyuan, Shu, Ran, Cheng, Peng.  2016.  TFC: Token Flow Control in Data Center Networks. Proceedings of the Eleventh European Conference on Computer Systems. :23:1–23:14.

Services in modern data center networks pose growing performance demands. However, widely existing special traffic patterns, such as micro-bursts, highly concurrent flows, and on-off patterns of flow transmission, degrade the performance of transport protocols. In this work, a clean-slate explicit transport control mechanism, called Token Flow Control (TFC), is proposed for data center networks to achieve high link utilization, ultra-low latency, fast convergence, and rare packet drops. TFC uses tokens to represent the link bandwidth resource and defines the concept of effective flows to stand for consumers. The total tokens are explicitly allocated to each consumer every time slot. TFC excludes in-network buffer space from the flow pipeline and thus achieves zero queueing. Besides, a packet delay function is added at switches to prevent packet drops under highly concurrent flows. The performance of TFC is evaluated using both experiments on a small real testbed and large-scale simulations. The results show that TFC achieves high throughput, fast convergence, near-zero queuing, and rare packet loss in various scenarios.
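
The token abstraction can be pictured with a toy per-slot allocator (hypothetical names; TFC itself runs this logic at switches): the slot's bandwidth budget becomes packet tokens divided among the currently effective flows.

```python
def allocate_tokens(link_capacity_bps, slot_s, packet_bytes, effective_flows):
    """Toy per-time-slot token allocation in the spirit of TFC: tokens represent
    the link bandwidth of one slot and are split among effective flows (flows
    with packets to send), so in-network queues stay near zero."""
    tokens_per_slot = int(link_capacity_bps * slot_s / (8 * packet_bytes))
    if not effective_flows:
        return {}
    share, extra = divmod(tokens_per_slot, len(effective_flows))
    # Each token entitles a flow to inject one packet during this slot.
    return {f: share + (1 if i < extra else 0)
            for i, f in enumerate(sorted(effective_flows))}

# 10 Gbps link, 100 us slot, 1500 B packets, 3 concurrent flows
print(allocate_tokens(10e9, 100e-6, 1500, {"f1", "f2", "f3"}))
# {'f1': 28, 'f2': 28, 'f3': 27}: the slot's 83 packet tokens, split evenly
```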

Singh, Rishabh, Gulwani, Sumit.  2016.  Transforming Spreadsheet Data Types Using Examples. Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. :343–356.

Cleaning spreadsheet data types is a common problem faced by millions of spreadsheet users. Data types such as date, time, name, and units are ubiquitous in spreadsheets, and cleaning transformations on these data types involve parsing and pretty-printing their string representations. This presents many challenges to users because cleaning such data requires some background knowledge about the data itself, and moreover this data is typically non-uniform, unstructured, and ambiguous. Spreadsheet systems and programming languages provide some UI-based and programmatic solutions for this problem, but they are either insufficient for the user's needs or beyond their expertise. In this paper, we present a programming-by-example methodology for cleaning data types that learns the desired transformation from a few input-output examples. We propose a domain-specific language with probabilistic semantics that is parameterized with declarative data type definitions. The probabilistic semantics is based on three key aspects: (i) approximate predicate matching, (ii) joint learning of data type interpretation, and (iii) weighted branches. This probabilistic semantics enables the language to handle non-uniform, unstructured, and ambiguous data. We then present a synthesis algorithm that learns the desired program in this language from a set of input-output examples. We have implemented our algorithm as an Excel add-in and present its successful evaluation on 55 benchmark problems obtained from online help forums and the Excel product team.
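
As a tiny analogue of example-driven, approximate format learning (a stand-in, not the paper's synthesis algorithm), one can score candidate date formats by how many input-output examples they explain and keep the best:

```python
from datetime import datetime

IN_FORMATS = ["%d/%m/%Y", "%m/%d/%Y", "%Y-%m-%d", "%d %b %Y"]

def learn_date_transform(examples, out_format="%Y-%m-%d"):
    """Pick the input format consistent with the most input-output examples.
    A toy stand-in for the paper's probabilistic, example-driven learning."""
    def explains(fmt, inp, out):
        try:
            return datetime.strptime(inp, fmt).strftime(out_format) == out
        except ValueError:
            return False  # a failed parse simply scores 0, not an error
    best = max(IN_FORMATS, key=lambda f: sum(explains(f, i, o) for i, o in examples))
    return lambda s: datetime.strptime(s, best).strftime(out_format)

# Two examples disambiguate day-first vs month-first interpretations:
clean = learn_date_transform([("13/02/2016", "2016-02-13"), ("01/02/2016", "2016-02-01")])
print(clean("25/12/2015"))  # 2015-12-25
```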