Biblio

Found 473 results

Filters: First Letter Of Title is L
2017-10-19
Ko, Wilson K.H., Wu, Yan, Tee, Keng Peng.  2016.  LAP: A Human-in-the-loop Adaptation Approach for Industrial Robots. Proceedings of the Fourth International Conference on Human Agent Interaction. :313–319.

In the last few years, a shift from mass production to mass customisation has been observed in industry. Easily reprogrammable robots that can perform a wide variety of tasks are desired to keep up with the trend of mass customisation while saving costs and development time. Learning from Demonstration (LfD) is an easy way to program robots in an intuitive manner and provides a solution to this problem. In this work, we discuss and evaluate LAP, a three-stage LfD method that conforms to the criteria for high-mix-low-volume (HMLV) industrial settings. The algorithm learns a trajectory in the task space, after which small segments can be adapted on-the-fly by using a human-in-the-loop approach. The human operator acts as a high-level adaptation, correction and evaluation mechanism to guide the robot. This way, no sensors or complex feedback algorithms are needed to improve robot behaviour, so errors and inaccuracies induced by these subsystems are avoided. Once the system performs at a satisfactory level after adaptation, the operator is removed from the loop. The robot then proceeds in a feed-forward fashion to optimise for speed. We demonstrate this method by simulating an industrial painting application. A KUKA LBR iiwa is taught how to draw a figure eight, which is reshaped by the operator during adaptation.
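
The segment-level, human-guided correction described above can be illustrated with a toy 2-D trajectory: an operator-supplied offset is blended into one segment with a smooth weight so the path stays continuous. This is a simplified sketch under made-up names, not the LAP algorithm itself.

```python
def adapt_segment(trajectory, start, end, offset):
    """Blend an operator offset (dx, dy) into trajectory[start:end],
    ramping the correction in and out so the path stays smooth."""
    adapted = list(trajectory)
    n = end - start
    for i in range(n):
        # triangular weight: 0 at the segment edges, 1 in the middle
        w = 1.0 - abs(2.0 * i / (n - 1) - 1.0) if n > 1 else 1.0
        x, y = adapted[start + i]
        adapted[start + i] = (x + w * offset[0], y + w * offset[1])
    return adapted

# straight line; the operator pushes the middle of segment [1, 4) upward
path = [(float(i), 0.0) for i in range(5)]
new_path = adapt_segment(path, 1, 4, (0.0, 1.0))
```

The endpoints of the adapted segment stay put, so the correction joins the untouched parts of the trajectory without a jump.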

2017-05-22
Howe, J., Moore, C., O'Neill, M., Regazzoni, F., Güneysu, T., Beeden, K..  2016.  Lattice-based Encryption Over Standard Lattices In Hardware. Proceedings of the 53rd Annual Design Automation Conference. :162:1–162:6.

Lattice-based cryptography has gained credence recently as a replacement for current public-key cryptosystems, due to its quantum-resilience, versatility, and relatively low key sizes. To date, encryption based on the learning with errors (LWE) problem has only been investigated from an ideal lattice standpoint, due to its computation and size efficiencies. However, a thorough investigation of standard lattices in practice has yet to be considered. Standard lattices may be preferred to ideal lattices due to their stronger security assumptions and less restrictive parameter selection process. In this paper, an area-optimised hardware architecture of a standard lattice-based cryptographic scheme is proposed. The design is implemented on an FPGA, and it is found that both encryption and decryption fit comfortably on a Spartan-6 FPGA. This is the first hardware architecture for standard lattice-based cryptography reported in the literature to date, and thus is a benchmark for future implementations. Additionally, a revised discrete Gaussian sampler is proposed which is the fastest of its type to date, and is also the first to investigate the cost savings of implementing with λ/2 bits of precision. Performance results are promising compared to the hardware designs of the equivalent ring-LWE scheme: in addition to resting on stronger security assumptions, the proposed design generates 1272 encryptions per second and 4395 decryptions per second.
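
The standard-lattice LWE encryption such a design implements follows the textbook Regev construction. The sketch below uses tiny, insecure demo parameters and assumes nothing about the paper's actual architecture; it only shows why decryption recovers the bit when the accumulated error stays below q/4.

```python
import random

def keygen(n, q, m):
    # secret s in Z_q^n; public key is (A, b = A*s + e) with small error e
    s = [random.randrange(q) for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return (A, b), s

def encrypt(pk, bit, q):
    A, b = pk
    m, n = len(A), len(A[0])
    S = [i for i in range(m) if random.random() < 0.5]   # random row subset
    u = [sum(A[i][j] for i in S) % q for j in range(n)]
    v = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return u, v

def decrypt(sk, ct, q):
    u, v = ct
    d = (v - sum(uj * sj for uj, sj in zip(u, sk))) % q
    # noise stays within +/- m < q/4, so d is near 0 for bit 0, near q/2 for bit 1
    return 1 if q // 4 < d < 3 * q // 4 else 0

pk, sk = keygen(n=16, q=2048, m=32)
assert all(decrypt(sk, encrypt(pk, bit, 2048), 2048) == bit for bit in (0, 1))
```

With ±1 errors and at most m = 32 rows summed, the noise term never reaches q/4 = 512, so decryption here is always correct.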

2017-08-22
Esiner, Ertem, Datta, Anwitaman.  2016.  Layered Security for Storage at the Edge: On Decentralized Multi-factor Access Control. Proceedings of the 17th International Conference on Distributed Computing and Networking. :9:1–9:10.

In this paper we propose a protocol that allows end-users in a decentralized setup (without requiring any trusted third party) to protect data shipped to remote servers using two factors: knowledge (passwords) and possession (a portable, time-based one-time password generator used for authentication). The protocol also supports revocation and recreation of a new possession factor if the old possession factor is compromised, provided the legitimate owner still has a copy of it. Furthermore, akin to some other recent works, our approach naturally protects the outsourced data from the storage servers themselves, by application of encryption and dispersal of information across multiple servers. We also extend the basic protocol to demonstrate how collaboration can be supported even while the stored content is encrypted, and where each collaborator is still restrained from accessing the data through a multi-factor access mechanism. Such techniques achieving layered security are crucial to (opportunistically) harnessing storage resources from untrusted entities.
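
The possession factor described above, a time-based one-time password, can be sketched with the Python standard library following the RFC 6238 construction; the paper's exact scheme may differ.

```python
import hmac, hashlib, struct, time

def totp(secret, timestep=30, digits=6, now=None):
    """RFC 6238-style TOTP over HMAC-SHA1: hash a counter derived from
    Unix time in 30-second steps, then apply RFC 4226 dynamic truncation."""
    counter = int(time.time() if now is None else now) // timestep
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC test vector: ASCII key "12345678901234567890" at time 59 s
assert totp(b"12345678901234567890", now=59) == "287082"
```

Any time within the same 30-second step yields the same code, which is what lets the server verify possession without sharing state beyond the secret.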

2017-08-02
Zheng, Yuxin, Guo, Qi, Tung, Anthony K.H., Wu, Sai.  2016.  LazyLSH: Approximate Nearest Neighbor Search for Multiple Distance Functions with a Single Index. Proceedings of the 2016 International Conference on Management of Data. :2023–2037.

Due to the "curse of dimensionality" problem, it is very expensive to process the nearest neighbor (NN) query in high-dimensional spaces; hence, approximate approaches, such as Locality-Sensitive Hashing (LSH), are widely used for their theoretical guarantees and empirical performance. Current LSH-based approaches target the L1 and L2 spaces, while, as shown in previous work, the fractional distance metrics (Lp metrics with 0 < p < 1) can provide more insightful results than the usual L1 and L2 metrics for data mining and multimedia applications. However, none of the existing work can support multiple fractional distance metrics using one index. In this paper, we propose LazyLSH, which answers approximate nearest neighbor queries for multiple Lp metrics with theoretical guarantees. Different from previous LSH approaches, which need to build one dedicated index for every query space, LazyLSH uses a single base index to support computations in multiple Lp spaces, significantly reducing the maintenance overhead. Extensive experiments show that LazyLSH provides more accurate results for approximate kNN search under fractional distance metrics.
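
The fractional metrics in question are ordinary Minkowski distances with exponent 0 < p < 1 (not true metrics, since the triangle inequality fails there). A quick sketch of how the choice of p changes the distance between the same two points:

```python
def lp_dist(x, y, p):
    """Minkowski L_p distance; for 0 < p < 1 this is the 'fractional'
    distance the paper targets."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

a, b = [0.0, 0.0], [1.0, 1.0]
d_half = lp_dist(a, b, 0.5)   # (1 + 1)^(1/0.5) = 4.0
d_one  = lp_dist(a, b, 1.0)   # 2.0
d_two  = lp_dist(a, b, 2.0)   # sqrt(2)
```

Smaller p penalises having differences spread over many coordinates, which is why fractional metrics can discriminate better in high dimensions.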

2017-05-17
Kwon, Yonghwi, Kim, Dohyeong, Sumner, William Nick, Kim, Kyungtae, Saltaformaggio, Brendan, Zhang, Xiangyu, Xu, Dongyan.  2016.  LDX: Causality Inference by Lightweight Dual Execution. Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems. :503–515.

Causality inference, such as dynamic taint analysis, has many applications (e.g., information leak detection). It determines whether an event e is causally dependent on a preceding event c during execution. We develop a new causality inference engine LDX. Given an execution, it spawns a slave execution, in which it mutates c and observes whether any change is induced at e. To preclude non-determinism, LDX couples the executions by sharing syscall outcomes. To handle path differences induced by the perturbation, we develop a novel on-the-fly execution alignment scheme that maintains a counter to reflect the progress of execution. The scheme relies on program analysis and compiler transformation. LDX can effectively detect information leak and security attacks with an average overhead of 6.08% while running the master and the slave concurrently on separate CPUs, much lower than existing systems that require instruction level monitoring. Furthermore, it has much better accuracy in causality inference.
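
The core dual-execution idea, mutate the candidate cause c, re-run, and check whether the observed event e changes, can be shown with a deterministic toy stand-in. The program functions below are illustrative; LDX itself operates on binaries with syscall coupling and execution alignment.

```python
def causally_dependent(program, inputs, c_key, mutate):
    """Toy dual execution: run a master, then a slave with event c mutated,
    and report whether the observable event e differs between them."""
    master = program(inputs)                  # master execution
    slave_inputs = dict(inputs)
    slave_inputs[c_key] = mutate(inputs[c_key])
    slave = program(slave_inputs)             # slave with c perturbed
    return master != slave

def leaky(inp):
    # the output event depends on the secret: a causal (leaking) flow
    return ("sent", inp["secret"] % 7)

def benign(inp):
    # the output event ignores the secret entirely
    return ("sent", len(str(inp["public"])))

assert causally_dependent(leaky, {"secret": 41, "public": 5}, "secret", lambda v: v + 1)
assert not causally_dependent(benign, {"secret": 41, "public": 5}, "secret", lambda v: v + 1)
```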

2017-09-15
Song, Linhai, Huang, Heqing, Zhou, Wu, Wu, Wenfei, Zhang, Yiying.  2016.  Learning from Big Malwares. Proceedings of the 7th ACM SIGOPS Asia-Pacific Workshop on Systems. :12:1–12:8.

This paper calls attention to the need to investigate real-world malwares at large scale by examining the largest real malware repository, VirusTotal. As a first step, we analyzed two fundamental characteristics of Windows executable malwares from VirusTotal. We designed offline and online tools for this analysis. Our results show that malwares appear in bursts and that distributions of malwares are highly skewed.

2018-05-14
Georgios Giantamidis, Stavros Tripakis.  2016.  Learning Moore Machines from Input-Output Traces. FM 2016: Formal Methods - 21st International Symposium, Limassol, Cyprus, November 9-11, 2016, Proceedings. :291–309.
2017-03-07
Baba, Asif Iqbal, Jaeger, Manfred, Lu, Hua, Pedersen, Torben Bach, Ku, Wei-Shinn, Xie, Xike.  2016.  Learning-Based Cleansing for Indoor RFID Data. Proceedings of the 2016 International Conference on Management of Data. :925–936.

RFID is widely used for object tracking in indoor environments, e.g., airport baggage tracking. Analyzing RFID data offers insight into the underlying tracking systems as well as the associated business processes. However, the inherent uncertainty in RFID data, including noise (cross readings) and incompleteness (missing readings), poses challenges to high-level RFID data querying and analysis. In this paper, we address these challenges by proposing a learning-based data cleansing approach that, unlike existing approaches, requires no detailed prior knowledge about the spatio-temporal properties of the indoor space and the RFID reader deployment. Requiring only minimal information about RFID deployment, the approach learns relevant knowledge from raw RFID data and uses it to cleanse the data. In particular, we model raw RFID readings as time series that are sparse because the indoor space is only partly covered by a limited number of RFID readers. We propose the Indoor RFID Multi-variate Hidden Markov Model (IR-MHMM) to capture the uncertainties of indoor RFID data as well as the correlation of moving object locations and object RFID readings. We propose three state space design methods for IR-MHMM that enable the learning of parameters while contending with raw RFID data time series. We solely use raw uncleansed RFID data for the learning of model parameters, requiring no special labeled data or ground truth. The resulting IR-MHMM based RFID data cleansing approach is able to recover missing readings and reduce cross readings with high effectiveness and efficiency, as demonstrated by extensive experimental studies with both synthetic and real data. Given enough indoor RFID data for learning, the proposed approach achieves a data cleansing accuracy comparable to or even better than state-of-the-art techniques requiring very detailed prior knowledge, making our solution superior in terms of both effectiveness and employability.
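
At its core, HMM-based cleansing decodes the most likely location sequence behind noisy or missing readings. The sketch below runs standard Viterbi decoding on a two-room toy model where 'none' represents a missing reading; it is a drastically simplified stand-in for IR-MHMM, with made-up probabilities.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # standard Viterbi decoding: most likely hidden-state path for obs
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max((V[t - 1][r] * trans_p[r][s], r) for r in states)
            V[t][s] = prob * emit_p[s][obs[t]]
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

states = ("roomA", "roomB")
start = {"roomA": 0.5, "roomB": 0.5}
trans = {"roomA": {"roomA": 0.8, "roomB": 0.2},
         "roomB": {"roomA": 0.2, "roomB": 0.8}}
emit = {"roomA": {"readA": 0.6, "readB": 0.1, "none": 0.3},
        "roomB": {"readA": 0.1, "readB": 0.6, "none": 0.3}}
# the missing reading ('none') at t=1 is recovered as roomA
cleansed = viterbi(["readA", "none", "readA", "readB"], states, start, trans, emit)
```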

2017-05-19
Hojjati, Avesta, Adhikari, Anku, Struckmann, Katarina, Chou, Edward, Tho Nguyen, Thi Ngoc, Madan, Kushagra, Winslett, Marianne S., Gunter, Carl A., King, William P..  2016.  Leave Your Phone at the Door: Side Channels That Reveal Factory Floor Secrets. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :883–894.

From pencils to commercial aircraft, every man-made object must be designed and manufactured. When it is cheaper or easier to steal a design or a manufacturing process specification than to invent one's own, the incentive for theft is present. As more and more manufacturing data comes online, incidents of such theft are increasing. In this paper, we present a side-channel attack on manufacturing equipment that reveals both the form of a product and its manufacturing process, i.e., exactly how it is made. In the attack, a human deliberately or accidentally places an attack-enabled phone close to the equipment or makes or receives a phone call on any phone nearby. The phone executing the attack records audio and, optionally, magnetometer data. We present a method of reconstructing the product's form and manufacturing process from the captured data, based on machine learning, signal processing, and human assistance. We demonstrate the attack on a 3D printer and a CNC mill, each with its own acoustic signature, and discuss the commonalities in the sensor data captured for these two different machines. We compare the quality of the data captured with a variety of smartphone models. Capturing data from the 3D printer, we reproduce the form and process information of objects previously unknown to the reconstructors. On average, our accuracy is within 1 mm in reconstructing the length of a line segment in a fabricated object's shape and within 1 degree in determining an angle in a fabricated object's shape. We conclude with recommendations for defending against these attacks.

2017-03-07
Muthusamy, Vinod, Slominski, Aleksander, Ishakian, Vatche, Khalaf, Rania, Reason, Johnathan, Rozsnyai, Szabolcs.  2016.  Lessons Learned Using a Process Mining Approach to Analyze Events from Distributed Applications. Proceedings of the 10th ACM International Conference on Distributed and Event-based Systems. :199–204.

The execution of distributed applications is captured by the events generated by the individual components. However, understanding the behavior of these applications from their event logs can be a complex and error-prone task, compounded by the fact that applications change continuously, rendering any knowledge obsolete. We describe our experiences applying a suite of process-aware analytic tools to a number of real-world scenarios, and distill our lessons learned. For example, we have seen that these tools are used iteratively, where insights gained at one stage inform the configuration decisions made at an earlier stage. We have also observed that data onboarding, where the raw data is cleaned and transformed, is the most critical stage in the pipeline and requires the most manual effort and domain knowledge. In particular, missing, inconsistent, and low-resolution event timestamps are recurring problems that require better solutions. The experiences and insights presented here will assist practitioners applying process analytic tools to real scenarios, and reveal to researchers some of the more pressing challenges in this space.

2017-05-22
Castle, Sam, Pervaiz, Fahad, Weld, Galen, Roesner, Franziska, Anderson, Richard.  2016.  Let's Talk Money: Evaluating the Security Challenges of Mobile Money in the Developing World. Proceedings of the 7th Annual Symposium on Computing for Development. :4:1–4:10.

Digital money drives modern economies, and the global adoption of mobile phones has enabled a wide range of digital financial services in the developing world. Where there is money, there must be security, yet prior work on mobile money has identified discouraging vulnerabilities in the current ecosystem. We begin by arguing that the situation is not as dire as it may seem: many reported issues can be resolved by security best practices and updated mobile software. To support this argument, we diagnose the problems from two directions: (1) a large-scale analysis of existing financial service products and (2) a series of interviews with 7 developers and designers in Africa and South America. We frame this assessment within a novel, systematic threat model. In our large-scale analysis, we evaluate 197 Android apps and take a deeper look at 71 products to assess specific organizational practices. We conclude that although attack vectors are present in many apps, service providers are generally making intentional, security-conscious decisions. The developer interviews support these findings, as most participants demonstrated technical competency and experience, and all worked within established organizations with regimented code review processes and dedicated security teams.

2017-05-30
Kothari, Suresh, Tamrawi, Ahmed, Sauceda, Jeremías, Mathews, Jon.  2016.  Let's Verify Linux: Accelerated Learning of Analytical Reasoning Through Automation and Collaboration. Proceedings of the 38th International Conference on Software Engineering Companion. :394–403.

We describe our experiences in the classroom using the internet to collaboratively verify a significant safety and security property across the entire Linux kernel. With 66,609 instances to check across three versions of Linux, the naive approach of simply dividing up the code and assigning it to students does not scale, and does little to educate. However, by teaching and applying analytical reasoning, the instances can be categorized effectively, the problems of scale can be managed, and students can collaborate and compete with one another to achieve an unprecedented level of verification. We refer to our approach as Evidence-Enabled Collaborative Verification (EECV). A key aspect of this approach is the use of visual software models, which provide mathematically rigorous and critical evidence for verification. The visual models make analytical reasoning interactive, interesting and applicable to large software. Visual models are generated automatically using a tool we have developed called L-SAP [14]. This tool generates an Instance Verification Kit (IVK) for each instance, which contains all of the verification evidence for the instance. The L-SAP tool is implemented on a software graph database platform called Atlas [6]. This platform comes with a powerful query language and interactive visualization to build and apply visual models for software verification. The course project is based on three recent versions of the Linux operating system with altogether 37 MLOC and 66,609 verification instances. The instances are accessible through a website [2] for students to collaborate and compete. The Atlas platform, the L-SAP tool, the structured labs for the project, and the lecture slides are available upon request for academic use.

2018-02-02
Moyer, T., Chadha, K., Cunningham, R., Schear, N., Smith, W., Bates, A., Butler, K., Capobianco, F., Jaeger, T., Cable, P..  2016.  Leveraging Data Provenance to Enhance Cyber Resilience. 2016 IEEE Cybersecurity Development (SecDev). :107–114.

Building secure systems used to mean ensuring a secure perimeter, but that is no longer the case. Today's systems are ill-equipped to deal with attackers that are able to pierce perimeter defenses. Data provenance is a critical technology in building resilient systems that will allow systems to recover from attackers that manage to overcome the "hard-shell" defenses. In this paper, we provide background information on data provenance, details on provenance collection, analysis, and storage techniques and challenges. Data provenance is situated to address the challenging problem of allowing a system to "fight-through" an attack, and we help to identify necessary work to ensure that future systems are resilient.

2017-09-19
Amin, Syed Obaid, Zheng, Qingji, Ravindran, Ravishankar, Wang, GQ.  2016.  Leveraging ICN for Secure Content Distribution in IP Networks. Proceedings of the 2016 ACM on Multimedia Conference. :765–767.

Recent studies show that by the end of 2016 more than 60% of Internet traffic will be running over HTTPS. In the presence of secure tunnels such as HTTPS, transparent caching solutions become ineffective, as the application payload is encrypted by lower-level security protocols. This paper addresses this issue and provides an alternative approach for caching content without compromising its security. There are three parts to our proposal. First, we propose two new IP-layer primitives that allow routers to differentiate between IP and ICN flows. Second, we introduce DCAR (Dual-mode Content Aware Router), a traditional IP router enabled to understand the proposed IP primitives. Third, we propose the design of the DISCS (DCAR based Information centric Secure Content Sharing) framework, which leverages DCAR to allow content object caching along with security services comparable to HTTPS. Finally, we share details on realizing such a system.

2017-06-05
Cox, Jr., Jacob H., Clark, Russell J., Owen, III, Henry L..  2016.  Leveraging SDN to Improve the Security of DHCP. Proceedings of the 2016 ACM International Workshop on Security in Software Defined Networks & Network Function Virtualization. :35–38.

Current state-of-the-art technologies for detecting and neutralizing rogue DHCP servers are tediously complex and prone to error. Network operators can spend hours (even days) before realizing that a rogue server is affecting their network. Additionally, once network operators suspect that a rogue server is active on their network, even more hours can be spent finding the server's MAC address and preventing it from affecting other clients. Not only are such methods slow to eliminate rogue servers, they are also likely to affect other clients as network operators shut down services while attempting to locate the server. In this paper, we present Network Flow Guard (NFG), a simple security application that utilizes the software defined networking (SDN) paradigm of programmable networks to detect and disable rogue servers before they are able to affect network clients. Consequently, the key contributions of NFG are its modular approach and its automated detection/prevention of rogue DHCP servers, which is accomplished with little impact to network architecture, protocols, and network operators.
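
The detection logic such an SDN application performs reduces to a simple rule: a DHCP OFFER arriving from a server MAC not on the operator's allowlist triggers a drop rule. A toy sketch with a hypothetical controller-callback API, not NFG's actual code:

```python
AUTHORIZED_DHCP_SERVERS = {"00:11:22:33:44:55"}   # hypothetical operator allowlist

def handle_packet(src_mac, udp_src_port, dhcp_msg_type, install_drop_rule):
    # DHCP servers send from UDP port 67; message type 2 is DHCPOFFER.
    # An OFFER from a MAC outside the allowlist marks a rogue server.
    if udp_src_port == 67 and dhcp_msg_type == 2 and src_mac not in AUTHORIZED_DHCP_SERVERS:
        install_drop_rule(src_mac)
        return "blocked"
    return "allowed"

dropped = []
assert handle_packet("de:ad:be:ef:00:01", 67, 2, dropped.append) == "blocked"
assert handle_packet("00:11:22:33:44:55", 67, 2, dropped.append) == "allowed"
assert dropped == ["de:ad:be:ef:00:01"]
```

The point of doing this in the controller is that the drop rule takes effect network-wide before any client accepts the rogue lease, without taking services offline to hunt for the server.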

2017-09-19
Washha, Mahdi, Qaroush, Aziz, Sedes, Florence.  2016.  Leveraging Time for Spammers Detection on Twitter. Proceedings of the 8th International Conference on Management of Digital EcoSystems. :109–116.

Twitter is one of the most popular microblogging social systems, providing a set of distinctive posting services that operate in real time. The flexibility of these services has attracted unethical individuals, so-called "spammers", who aim at spreading malicious, phishing, and misleading information. Unfortunately, the existence of spam results in non-negligible problems related to search and user privacy. In the battle against spam, various detection methods have been designed, which work by automating the detection process using the "features" concept combined with machine learning methods. However, the existing features are not effective enough to adapt to spammers' tactics, due to the ease with which the features can be manipulated. Also, graph features are not suitable for Twitter-based applications, despite the high performance obtainable when applying such features. In this paper, going beyond simple statistical features such as the number of hashtags and number of URLs, we examine the time property by advancing the design of some features used in the literature and proposing new time-based features. The new design of features is divided between robust advanced statistical features incorporating the time attribute explicitly, and behavioral features identifying any posting behavior pattern. The experimental results show that the new form of features is able to correctly classify the majority of spammers with an accuracy higher than 93% when using the Random Forest learning algorithm, applied on a collected and annotated dataset. The results obtained outperform the accuracy of the state-of-the-art features by about 6%, proving the significance of leveraging time in detecting spam accounts.
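
Time-based features of the kind discussed can be as simple as statistics over inter-posting gaps: automated accounts often post on rigid schedules. The two features below are illustrative only, not the paper's feature set.

```python
import math

def time_features(timestamps):
    """Two illustrative time-based features of a posting history (in seconds):
    the mean inter-post gap and the standard deviation of the gaps.
    A gap_std near zero suggests machine-like, scheduled posting."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = sum(gaps) / len(gaps)
    std = math.sqrt(sum((g - mean) ** 2 for g in gaps) / len(gaps))
    return {"mean_gap": mean, "gap_std": std}

bot = time_features([0, 10, 20, 30, 40])            # rigid 10-second schedule
human = time_features([0, 300, 4000, 4100, 90000])  # bursty, irregular activity
```

In a full pipeline these values would join the other statistical and behavioral features as columns fed to a classifier such as Random Forest.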

2017-03-07
Wang, Ju, Jiang, Hongbo, Xiong, Jie, Jamieson, Kyle, Chen, Xiaojiang, Fang, Dingyi, Xie, Binbin.  2016.  LiFS: Low Human-effort, Device-free Localization with Fine-grained Subcarrier Information. Proceedings of the 22Nd Annual International Conference on Mobile Computing and Networking. :243–256.

Device-free localization of people and objects indoors not equipped with radios is playing a critical role in many emerging applications. This paper presents LiFS, an accurate model-based device-free localization system implemented on cheap commercial off-the-shelf (COTS) Wi-Fi devices. Unlike previous COTS device-based work, LiFS is able to localize a target accurately without offline training. The basic idea is simple: channel state information (CSI) is sensitive to a target's location, and by modelling the CSI measurements of multiple wireless links as a set of power-fading-based equations, the target location can be determined. However, due to rich multipath propagation indoors, the received signal strength (RSS) or even the fine-grained CSI cannot be easily modelled. We observe that even in a rich multipath environment, not all subcarriers are affected equally by multipath reflections. Our pre-processing scheme tries to identify the subcarriers not affected by multipath. Thus, CSIs on the "clean" subcarriers can be utilized for accurate localization. We design, implement and evaluate LiFS with extensive experiments in three different environments. Without knowing the locations of the majority of transceivers, LiFS achieves a median accuracy of 0.5 m and 1.1 m in line-of-sight (LoS) and non-line-of-sight (NLoS) scenarios respectively, outperforming the state-of-the-art systems. Besides single-target localization, LiFS is able to differentiate two sparsely-located targets and localize each of them at a high accuracy.
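
The subcarrier-selection step can be caricatured as a variance filter: subcarriers whose amplitude fluctuates heavily across packets are assumed multipath-affected and discarded. A toy sketch with made-up data and threshold; LiFS's actual filter is more sophisticated.

```python
def clean_subcarriers(csi, threshold):
    """csi: list of per-packet amplitude lists, one value per subcarrier.
    Flag as 'clean' the subcarriers whose amplitude variance across
    packets stays below `threshold`."""
    n_sub = len(csi[0])
    clean = []
    for j in range(n_sub):
        col = [pkt[j] for pkt in csi]
        mean = sum(col) / len(col)
        var = sum((v - mean) ** 2 for v in col) / len(col)
        if var < threshold:
            clean.append(j)
    return clean

# three packets, three subcarriers; subcarrier 1 fluctuates, 0 and 2 do not
csi = [[1.0, 5.0, 1.1], [1.0, 2.0, 1.0], [1.0, 7.0, 0.9]]
stable = clean_subcarriers(csi, 0.5)
```

Only the stable subcarriers would then feed the power-fading equations used to solve for the target location.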

2017-08-02
Sultana, Nik, Kohlweiss, Markulf, Moore, Andrew W..  2016.  Light at the Middle of the Tunnel: Middleboxes for Selective Disclosure of Network Monitoring to Distrusted Parties. Proceedings of the 2016 Workshop on Hot Topics in Middleboxes and Network Function Virtualization. :1–6.

Network monitoring is vital to the administration and operation of networks, but it requires privileged access that only highly trusted parties are granted. This severely limits the opportunity for external parties, such as service or equipment providers, auditors, or even clients, to measure the health or operation of a network in which they are stakeholders, but do not have access to its internal structure. In this position paper we propose the use of middleboxes to open up network monitoring to external parties using privacy-preserving technology. This will allow distrusted parties to make more inferences about the network state than currently possible, without learning any precise information about the network or the data that crosses it. Thus the state of the network will be more transparent to external stakeholders, who will be empowered to verify claims made by network operators. Network operators will be able to provide more information about their network without compromising security or privacy.

2017-09-26
Lavanya, Natarajan.  2016.  Lightweight Authentication for COAP Based IOT. Proceedings of the 6th International Conference on the Internet of Things. :167–168.

Security of the Constrained Application Protocol (CoAP), used instead of HTTP in the Internet of Things (IoT), is achieved using DTLS, which relies on the Internet Key Exchange (IKE) protocol for key exchange and management. In this work, a novel key exchange and authentication protocol is proposed. The CLIKEv2 protocol is a certificate-less and lightweight version of the existing protocol. The protocol design is tested with the formal protocol verification tool Scyther, where no named attacks are identified for the proposed protocol. Compared to the existing IKE protocol, CLIKEv2 reduces computation time and key sizes, and ultimately reduces energy consumption.
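
To make the key-exchange setting concrete, the sketch below shows a bare finite-field Diffie-Hellman handshake using the Python standard library. The parameters are tiny demo values and this is not CLIKEv2's construction; it only illustrates the handshake shape whose cost such lightweight protocols try to reduce.

```python
import hashlib, secrets

# Demo group: Mersenne prime 2**127 - 1 with generator 3 (toy values,
# far from production parameters; do not reuse for real security).
P = 2 ** 127 - 1
G = 3

def dh_keypair():
    priv = secrets.randbelow(P - 3) + 2          # private exponent in [2, P-2]
    return priv, pow(G, priv, P)

def dh_shared(priv, peer_pub):
    # both sides derive the same symmetric key from g^(a*b) mod p
    return hashlib.sha256(pow(peer_pub, priv, P).to_bytes(16, "big")).digest()

a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
assert dh_shared(a_priv, b_pub) == dh_shared(b_priv, a_pub)
```

A constrained device pays for the two modular exponentiations here; shrinking that cost (and the key material exchanged) is exactly where a lightweight IKE replacement saves energy.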

2017-04-03
Mousa, Ahmed Refaat, NourElDeen, Pakinam, Azer, Marianne, Allam, Mahmoud.  2016.  Lightweight Authentication Protocol Deployment over FlexRay. Proceedings of the 10th International Conference on Informatics and Systems. :233–239.

In-vehicle network security is becoming a major concern for the automotive industry. Although there is significant research in this area, there is still a significant gap between research and what is actually applied in practice. The Controller Area Network (CAN) has received most of the community's attention, while little attention has been given to FlexRay. Many signs indicate the approaching end of CAN usage and a transition to other promising technologies, and FlexRay is considered one of the main players in the near future. We believe the migration era is near enough that we should change our mindset in order to supply industry with complete and mature security proposals for FlexRay. This change of mindset is important to avoid repeating the lag between research and industry that appeared with CAN. We then provide a complete migration of a CAN authentication protocol to FlexRay, demonstrating the protocol's portability across different technologies.
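
A common shape for lightweight in-vehicle frame authentication, which a FlexRay migration would also need, is a truncated MAC over frame ID, freshness counter and payload. A sketch with Python's standard hmac module; this is illustrative, not the paper's protocol.

```python
import hmac, hashlib

def authenticate_frame(key, frame_id, counter, payload, mac_len=8):
    # truncated HMAC over (frame id, freshness counter, payload);
    # the counter defeats replay of previously authenticated frames
    msg = frame_id.to_bytes(2, "big") + counter.to_bytes(4, "big") + payload
    return hmac.new(key, msg, hashlib.sha256).digest()[:mac_len]

def verify_frame(key, frame_id, counter, payload, tag):
    expected = authenticate_frame(key, frame_id, counter, payload, len(tag))
    return hmac.compare_digest(expected, tag)

tag = authenticate_frame(b"session-key", 0x2A, 1, b"\x01\x02")
assert verify_frame(b"session-key", 0x2A, 1, b"\x01\x02", tag)
assert not verify_frame(b"session-key", 0x2A, 2, b"\x01\x02", tag)   # stale counter
```

The MAC is truncated because in-vehicle frames have very small payloads; the trade-off between tag length and forgery resistance is a central design choice in any such scheme.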