Biblio

Found 7524 results

Filters: Keyword is Metrics
2017-06-27
Liang, Kaitai, Su, Chunhua, Chen, Jiageng, Liu, Joseph K..  2016.  Efficient Multi-Function Data Sharing and Searching Mechanism for Cloud-Based Encrypted Data. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :83–94.

Outsourcing huge amounts of local data to remote cloud servers has become a significant trend for industries. Leveraging the considerable cloud storage space, industries can also put the outsourced data forward to cloud computing. How to collect the data for computing without loss of privacy and confidentiality is one of the crucial security problems. Searchable encryption techniques have been proposed to protect the confidentiality of the outsourced data and the privacy of the corresponding data queries. These techniques, however, support only search functionality and may not be fully applicable to real-world cloud computing scenarios in which secure data search, sharing, and computation are all needed. This work presents a novel encrypted cloud-based data sharing and searching system that preserves user privacy and data confidentiality. The new system not only enables users to make conjunctive keyword queries over encrypted data, but also allows encrypted data to be efficiently and repeatedly shared among different users without the "download-decrypt-then-encrypt" mode. Of independent interest, our system provides secure keyword update, so that users can freely and securely update a data item's keyword field. It is worth mentioning that none of the above functionalities incurs any expansion of the ciphertext size; that is, the size of the ciphertext remains constant while being searched, shared, and keyword-updated. The system is proven secure, and the efficiency analysis shows its great potential for use in large-scale databases.

2017-05-30
Karumanchi, Sushama, Li, Jingwei, Squicciarini, Anna.  2016.  Efficient Network Path Verification for Policy-routed Queries. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :319–328.

Resource discovery in unstructured peer-to-peer networks causes a search query to be flooded throughout the network via random nodes, leading to security and privacy issues. The owner of the search query does not have control over the transmission of its query through the network. Although algorithms have been proposed for policy-compliant query or data routing in a network, these algorithms mainly deal with authentic route computation and do not provide mechanisms to actually verify the network paths taken by the query. In this work, we propose an approach to verify the network paths taken by a search query during resource discovery and to detect malicious forwarding of search queries. Our approach aims at being secure and yet very scalable, even in the presence of a huge number of nodes in the network.

2017-08-22
Thao, Tran Phuong, Omote, Kazumasa.  2016.  ELAR: Extremely Lightweight Auditing and Repairing for Cloud Security. Proceedings of the 32nd Annual Conference on Computer Security Applications. :40–51.

Cloud storage has been gaining in popularity as an online service for archiving, backup, and even primary storage of files. However, due to the data outsourcing, cloud storage also introduces new security challenges, which require a data audit and data repair service to ensure data availability and data integrity in the cloud. In this paper, we present the design and implementation of a network-coding-based Proof of Retrievability scheme called ELAR, which achieves lightweight data auditing and data repairing. In particular, we support a direct repair mechanism in which the client can be free from the data repair process. Simultaneously, we also support the task of allowing a third-party auditor (TPA), on behalf of the client, to verify the availability and integrity of the data stored in the cloud servers without the need for an asymmetric-key setting. The client is thus also free from the data audit process. The TPA uses spot-checking, which is a very efficient probabilistic method for checking a large amount of data. Extensive security and performance analysis shows that the proposed scheme is highly efficient and provably secure.

2017-03-20
Helinski, Ryan L., Cole, Edward I., Robertson, Gideon, Woodbridge, Jonathan, Pierson, Lyndon G..  2016.  Electronic forensic techniques for manufacturer attribution. :139–144.

The microelectronics industry seeks screening tools that can be used to verify the origin of and track integrated circuits (ICs) throughout their lifecycle. Embedded circuits that measure process variation of an IC are well known. This paper adds to previous work using these circuits for studying manufacturer characteristics on final product ICs, particularly for the purpose of developing and verifying a signature for a microelectronics manufacturing facility (fab). We present the design, measurements and analysis of 159 silicon ICs which were built as a proof of concept for this purpose. 80 copies of our proof of concept IC were built at one fab, and 80 more copies were built across two lots at a second fab. Using these ICs, our prototype circuits allowed us to distinguish these two fabs with up to 98.7% accuracy and also distinguish the two lots from the second fab with up to 98.8% accuracy.

2017-05-30
Unger, Nik, Thandra, Sahithi, Goldberg, Ian.  2016.  Elxa: Scalable Privacy-Preserving Plagiarism Detection. Proceedings of the 2016 ACM on Workshop on Privacy in the Electronic Society. :153–164.

One of the most challenging issues facing academic conferences and educational institutions today is plagiarism detection. Typically, these entities wish to ensure that the work products submitted to them have not been plagiarized from another source (e.g., authors submitting identical papers to multiple journals). Assembling large centralized databases of documents dramatically improves the effectiveness of plagiarism detection techniques, but introduces a number of privacy and legal issues: all document contents must be completely revealed to the database operator, making it an attractive target for abuse or attack. Moreover, this content aggregation involves the disclosure of potentially sensitive private content, and in some cases this disclosure may be prohibited by law. In this work, we introduce Elxa, the first scalable centralized plagiarism detection system that protects the privacy of the submissions. Elxa incorporates techniques from the current state of the art in plagiarism detection, as evaluated by the information retrieval community. Our system is designed to be operated on existing cloud computing infrastructure, and to provide incentives for the untrusted database operator to maintain the availability of the network. Elxa can be used to detect plagiarism in student work, duplicate paper submissions (and their associated peer reviews), similarities between confidential reports (e.g., malware summaries), or any approximate text reuse within a network of private documents. We implement a prototype using the Hadoop MapReduce framework, and demonstrate that it is feasible to achieve competitive detection effectiveness in the private setting.

2017-08-02
Moratelli, Carlos, Johann, Sergio, Neves, Marcelo, Hessel, Fabiano.  2016.  Embedded Virtualization for the Design of Secure IoT Applications. Proceedings of the 27th International Symposium on Rapid System Prototyping: Shortening the Path from Specification to Prototype. :2–6.

Embedded virtualization has emerged as a valuable way to reduce costs, improve software quality, and decrease design time. Additionally, virtualization can enforce the overall system's security from several perspectives. One is security due to separation, where the hypervisor ensures that one domain does not compromise the execution of other domains. At the same time, the advances in the development of IoT applications have opened discussions about the security flaws introduced by IoT devices. In a few years, billions of these devices will be connected to the cloud, exchanging information. This is an opportunity for hackers to exploit their vulnerabilities, endangering applications connected to such devices. At this point, it becomes necessary to consider virtualization as a possible approach to IoT security. In this paper we discuss how embedded virtualization could take place on IoT devices as a sound solution for security.

Dolz, Manuel F., del Rio Astorga, David, Fernández, Javier, García, J. Daniel, García-Carballeira, Félix, Danelutto, Marco, Torquati, Massimo.  2016.  Embedding Semantics of the Single-Producer/Single-Consumer Lock-Free Queue into a Race Detection Tool. Proceedings of the 7th International Workshop on Programming Models and Applications for Multicores and Manycores. :20–29.

The rapid progress of multi-/many-core architectures has left data-intensive parallel applications unable to fully exploit the available performance. The advent of parallel programming frameworks offering structured patterns has alleviated developers' burden in adapting such applications to parallel platforms. For example, the use of synchronization mechanisms in multithreaded applications is essential on shared-cache multi-core architectures. However, ensuring an appropriate use of their interfaces can be challenging, since different memory models plus instruction reordering at the compiler/processor level may influence the occurrence of data races. Race detectors provide formidable benefits in this sense; nevertheless, if lock-free data structures with no high-level atomics are used, they may emit false positives. In this paper, we extend the ThreadSanitizer race detection tool to support the semantics of the general Single-Producer/Single-Consumer (SPSC) lock-free parallel queue and to detect benign data races where it was correctly used. To perform our analysis, we leverage the FastFlow SPSC bounded lock-free queue implementation to test our extensions over a set of μ-benchmarks and real applications on a dual-socket Intel Xeon CPU E5-2695 platform. We demonstrate that this approach can reduce the number of data race warning messages by 30% on average.

2017-03-29
Hasegawa, Toru, Tara, Yasutaka, Ryu, Kai, Koizumi, Yuki.  2016.  Emergency Message Delivery Mechanism in NDN Networks. Proceedings of the 3rd ACM Conference on Information-Centric Networking. :199–200.

Emergency message delivery in packet networks is promising in terms of resiliency to failures and service delivery to handicapped persons. In this paper, we propose an NDN(Named Data Networking)-based emergency message delivery mechanism by leveraging multicasting and ABE (Attribute-Based Encryption) functions.

2017-05-19
Kocabas, Ovunc, Soyata, Tolga, Aktas, Mehmet K..  2016.  Emerging Security Mechanisms for Medical Cyber Physical Systems. IEEE/ACM Trans. Comput. Biol. Bioinformatics. 13:401–416.

The following decade will witness a surge in remote health-monitoring systems that are based on body-worn monitoring devices. These Medical Cyber Physical Systems (MCPS) will be capable of transmitting the acquired data to a private or public cloud for storage and processing. Machine learning algorithms running in the cloud and processing this data can provide decision support to healthcare professionals. There is no doubt that the security and privacy of the medical data is one of the most important concerns in designing an MCPS. In this paper, we depict the general architecture of an MCPS consisting of four layers: data acquisition, data aggregation, cloud processing, and action. Due to the differences in hardware and communication capabilities of each layer, different encryption schemes must be used to guarantee data privacy within that layer. We survey conventional and emerging encryption schemes based on their ability to provide secure storage, data sharing, and secure computation. Our detailed experimental evaluation of each scheme shows that while the emerging encryption schemes enable exciting new features such as secure sharing and secure computation, they introduce several orders-of-magnitude computational and storage overhead. We conclude our paper by outlining future research directions to improve the usability of the emerging encryption schemes in an MCPS.

2017-10-03
Sekar, Vyas.  2016.  Enabling Software-Defined Network Security for Next-Generation Networks. Proceedings of the 12th International on Conference on Emerging Networking EXperiments and Technologies. :1–1.

The state of network security today is quite abysmal. Security breaches and downtime of critical infrastructures continue to be the norm rather than the exception, despite the dramatic rise in spending on network security. Attackers today can easily leverage a distributed and programmable infrastructure of compromised machines (or botnets) to launch large-scale and sophisticated attack campaigns. In contrast, the defenders of our critical infrastructures are fundamentally crippled as they rely on fixed capacity, inflexible, and expensive hardware appliances deployed at designated "chokepoints". These primitive defense capabilities force defenders into adopting weak and static security postures configured for simple and known attacks, or otherwise risk user revolt, as they face unpleasant tradeoffs between false positives and false negatives. Unfortunately, attacks can easily evade these defenses; e.g., piggybacking on popular services (e.g., drive-by-downloads) and by overloading the appliances. Continuing along this trajectory means that attackers will always hold the upper hand as defenders are stifled by the inflexible and impotent tools in their arsenal. An overarching goal of my work is to change the dynamics of this attack-defense equation. Instead of taking a conventional approach of developing attack-specific defenses, I argue that we can leverage recent trends in software-defined networking and network functions virtualization to better empower defenders with the right tools and abstractions to tackle the constantly evolving attack landscape. To this end, I envision a new software-defined approach to network security, where we can rapidly develop and deploy novel in-depth defenses and dynamically customize the network's security posture to the current operating context. In this talk, I will give an overview of our recent work on the basic building blocks to enable this vision as well as some early security capabilities we have developed. Using anecdotes from this specific exercise, I will also try to highlight lessons and experiences in the overall research process (e.g., how to pick and formulate problems, the role of serendipity, and the benefits of finding "bridges" to other subdomains).

2017-10-13
Agosta, Giovanni, Barenghi, Alessandro, Pelosi, Gerardo, Scandale, Michele.  2016.  Encasing Block Ciphers to Foil Key Recovery Attempts via Side Channel. Proceedings of the 35th International Conference on Computer-Aided Design. :96:1–96:8.

Providing efficient protection against energy-consumption-based side channel attacks (SCAs) for block ciphers is a relevant topic for the research community, as current overheads are in the 100x range. Unprofiled SCAs exploit information leakage from the outermost rounds of a cipher; we propose a solution that encases the cipher between keyed transformations amenable to efficient SCA protection. Our solution can be employed as a drop-in replacement for an unprotected implementation, or be retrofitted to an existing one, while retaining communication capabilities with legacy insecure endpoints. Experiments on a Cortex-M4 μC show performance improvements in the range of 60x compared with available solutions.

2017-08-02
Bremler-Barr, Anat, Harchol, Yotam, Hay, David, Hel-Or, Yacov.  2016.  Encoding Short Ranges in TCAM Without Expansion: Efficient Algorithm and Applications. Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures. :35–46.

We present RENE, a novel encoding scheme for short ranges in Ternary Content Addressable Memory (TCAM) which, unlike previous solutions, does not impose row expansion and uses a number of bits proportional to the maximal range length. We provide theoretical analysis to show that our encoding is the closest to the lower bound on the number of bits used. In addition, we show several applications of our technique in the field of packet classification, and also show how the same technique can be used to efficiently solve other hard problems such as the nearest-neighbor search problem and its variants. We show that, using TCAM, one can solve such problems at much higher rates than previously suggested solutions and outperform known lower bounds in traditional memory models. We show by experiments that the translation process of RENE on switch hardware induces only a negligible 2.5% latency overhead. Our nearest-neighbor implementation on a TCAM device provides search rates that are up to four orders of magnitude higher than the best prior-art solutions.
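
The row-expansion problem that RENE sidesteps is easy to see with the classic prefix-expansion baseline, sketched below: an arbitrary integer range is split into aligned power-of-two blocks, each of which fits in one ternary TCAM row. This is an illustrative sketch of the baseline only, not of the RENE encoding itself.

    # Baseline prefix expansion of an integer range into ternary TCAM entries.
    # This illustrates the row expansion that RENE eliminates; it is NOT the
    # RENE encoding, which packs a short range into a single entry using a
    # number of extra bits proportional to the maximal range length.
    def range_to_ternary(lo, hi, width):
        """Split [lo, hi] into maximal aligned power-of-two blocks and emit one
        ternary string per block ('0'/'1' fixed bits, '*' wildcard)."""
        entries = []
        while lo <= hi:
            # Largest aligned block starting at lo that still fits in [lo, hi].
            size = lo & -lo if lo else 1 << width
            while size > hi - lo + 1:
                size //= 2
            prefix_bits = width - size.bit_length() + 1
            entries.append(format(lo >> (width - prefix_bits), '0%db' % prefix_bits)
                           + '*' * (width - prefix_bits))
            lo += size
        return entries

    # Example: the 6-bit range [3, 8] already needs three TCAM rows.
    print(range_to_ternary(3, 8, 6))   # ['000011', '0001**', '001000']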

2017-05-19
Francis, Leena Mary, Visalatchi, K. C., Sreenath, N..  2016.  End to End Text Recognition from Natural Scene. Proceedings of the International Conference on Informatics and Analytics. :44:1–44:5.

The web has been flooded with multimedia sources such as images, videos, animations, and audio, which has in turn led computer vision researchers to focus on extracting content from these sources. Scene text recognition basically involves two major steps, namely text localization and text recognition. This paper provides an end-to-end text recognition approach to extract the characters alone from a complex natural scene. Using Maximally Stable Extremal Regions (MSER) the various objects are localized, edges are identified using the Canny edge detection method, binary classification is then done using a connected-component method which segregates text and non-text objects, and finally stroke analysis is applied to analyse the style of each character, leading to character recognition. Experimental results were obtained by testing the approach on the ICDAR2015 dataset, where text was recognized from most of the scene images with good precision.
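
A minimal localization sketch in the spirit of the pipeline described above: MSER proposes candidate regions and a Canny edge map provides crude support for keeping or discarding them. It assumes the opencv-python package; the input file name and the edge-density threshold are arbitrary placeholders, and the connected-component classification and stroke-analysis stages of the paper are omitted.

    import cv2

    img = cv2.imread('scene.jpg')                    # placeholder input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    mser = cv2.MSER_create()                         # candidate text regions
    regions, bboxes = mser.detectRegions(gray)

    edges = cv2.Canny(gray, 100, 200)                # edge map

    # Keep candidates whose bounding box contains enough edge pixels: a crude
    # stand-in for the paper's connected-component text/non-text classification.
    kept = []
    for (x, y, w, h) in bboxes:
        if edges[y:y + h, x:x + w].mean() > 20:      # arbitrary threshold
            kept.append((x, y, w, h))

    print('kept %d of %d candidate regions' % (len(kept), len(bboxes)))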

2017-10-13
Costanzo, David, Shao, Zhong, Gu, Ronghui.  2016.  End-to-end Verification of Information-flow Security for C and Assembly Programs. Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation. :648–664.

Protecting the confidentiality of information manipulated by a computing system is one of the most important challenges facing today's cybersecurity community. A promising step toward conquering this challenge is to formally verify that the end-to-end behavior of the computing system really satisfies various information-flow policies. Unfortunately, because today's system software still consists of both C and assembly programs, the end-to-end verification necessarily requires that we not only prove the security properties of individual components, but also carefully preserve these properties through compilation and cross-language linking. In this paper, we present a novel methodology for formally verifying end-to-end security of a software system that consists of both C and assembly programs. We introduce a general definition of observation function that unifies the concepts of policy specification, state indistinguishability, and whole-execution behaviors. We show how to use different observation functions for different levels of abstraction, and how to link different security proofs across abstraction levels using a special kind of simulation that is guaranteed to preserve state indistinguishability. To demonstrate the effectiveness of our new methodology, we have successfully constructed an end-to-end security proof, fully formalized in the Coq proof assistant, of a nontrivial operating system kernel (running on an extended CompCert x86 assembly machine model). Some parts of the kernel are written in C and some are written in assembly; we verify all of the code, regardless of language.

2017-05-17
Hsu, Terry Ching-Hsiang, Hoffman, Kevin, Eugster, Patrick, Payer, Mathias.  2016.  Enforcing Least Privilege Memory Views for Multithreaded Applications. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :393–405.

Failing to properly isolate components in the same address space has resulted in a substantial amount of vulnerabilities. Enforcing the least privilege principle for memory accesses can selectively isolate software components to restrict attack surface and prevent unintended cross-component memory corruption. However, the boundaries and interactions between software components are hard to reason about and existing approaches have failed to stop attackers from exploiting vulnerabilities caused by poor isolation. We present the secure memory views (SMV) model: a practical and efficient model for secure and selective memory isolation in monolithic multithreaded applications. SMV is a third generation privilege separation technique that offers explicit access control of memory and allows concurrent threads within the same process to partially share or fully isolate their memory space in a controlled and parallel manner following application requirements. An evaluation of our prototype in the Linux kernel (TCB < 1,800 LOC) shows negligible runtime performance overhead in real-world applications including Cherokee web server (< 0.69%), Apache httpd web server (< 0.93%), and Mozilla Firefox web browser (< 1.89%) with at most 12 LOC changes.

2017-05-22
Anderson, Brian, Bergstrom, Lars, Goregaokar, Manish, Matthews, Josh, McAllister, Keegan, Moffitt, Jack, Sapin, Simon.  2016.  Engineering the Servo Web Browser Engine Using Rust. Proceedings of the 38th International Conference on Software Engineering Companion. :81–89.

All modern web browsers (Internet Explorer, Firefox, Chrome, Opera, and Safari) have a core rendering engine written in C++. This language choice was made because it affords the systems programmer complete control of the underlying hardware features and memory in use, and it provides a transparent compilation model. Unfortunately, this language is complex (especially to new contributors!), challenging to write correct parallel code in, and highly susceptible to memory safety issues that potentially lead to security holes. Servo is a project started at Mozilla Research to build a new web browser engine that preserves the capabilities of these other browser engines but also both takes advantage of the recent trends in parallel hardware and is more memory-safe. We use a new language, Rust, that provides us a similar level of control of the underlying system to C++ but which statically prevents many memory safety issues and provides direct support for parallelism and concurrency. In this paper, we show how a language with an advanced type system can address many of the most common security issues and software engineering challenges in other browser engines, while still producing code that has the same performance and memory profile. This language is also quite accessible to new open source contributors and employees, even those without a background in C++ or systems programming. We also outline several pitfalls encountered along the way and describe some potential areas for future improvement.

2017-11-20
Reddy, Alavalapati Goutham, Yoon, Eun-Jun, Das, Ashok Kumar, Yoo, Kee-Young.  2016.  An Enhanced Anonymous Two-factor Mutual Authentication with Key-agreement Scheme for Session Initiation Protocol. Proceedings of the 9th International Conference on Security of Information and Networks. :145–149.

A two-factor authenticated key-agreement scheme for the session initiation protocol has emerged as the best remedy to overcome the limitations ascribed to password-based authentication schemes. Recently, Lu et al. proposed an anonymous two-factor authenticated key-agreement scheme for SIP using elliptic curve cryptography. They claimed that their scheme is secure against attacks and achieves user anonymity. Conversely, this paper's keen analysis points out several severe security weaknesses of Lu et al.'s scheme. In addition, this paper puts forward an enhanced anonymous two-factor mutual authenticated key-agreement scheme for the session initiation protocol using elliptic curve cryptography. The security and performance analysis sections demonstrate that the proposed scheme is more robust and efficient than Lu et al.'s scheme.

2017-08-02
Moran, Sean, McCreadie, Richard, Macdonald, Craig, Ounis, Iadh.  2016.  Enhancing First Story Detection Using Word Embeddings. Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval. :821–824.

In this paper we show how word embeddings can be used to increase the effectiveness of a state-of-the-art Locality Sensitive Hashing (LSH) based first story detection (FSD) system over a standard tweet corpus. Vocabulary mismatch, in which related tweets use different words, is a serious hindrance to the effectiveness of a modern FSD system. In this case, a tweet could be flagged as a first story even if a related tweet, which uses different but synonymous words, was already returned as a first story. In this work, we propose a novel approach to mitigate this problem of lexical variation, based on tweet expansion. In particular, we propose to expand tweets with semantically related paraphrases identified via automatically mined word embeddings over a background tweet corpus. Through experimentation on a large data stream comprised of 50 million tweets, we show that FSD effectiveness can be improved by 9.5% over a state-of-the-art FSD system.
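
A toy illustration of the expansion step, assuming nothing beyond numpy: each tweet term is augmented with vocabulary words whose embedding cosine similarity exceeds a threshold. The tiny hand-made vectors and the threshold are placeholders; the paper mines embeddings from a background tweet corpus and feeds the expanded tweets to an LSH-based first story detector.

    import numpy as np

    embeddings = {                       # toy 3-d "embeddings" (placeholders)
        'quake':      np.array([0.9, 0.1, 0.0]),
        'earthquake': np.array([0.8, 0.2, 0.1]),
        'tremor':     np.array([0.7, 0.3, 0.0]),
        'football':   np.array([0.0, 0.1, 0.9]),
    }

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def expand(tweet, vocab=embeddings, threshold=0.95):
        """Append near-synonyms of each known tweet term to reduce vocabulary mismatch."""
        words = tweet.lower().split()
        extra = [cand for w in words if w in vocab
                 for cand, vec in vocab.items()
                 if cand != w and cosine(vocab[w], vec) >= threshold]
        return words + extra

    print(expand('quake hits city'))     # adds 'earthquake' and 'tremor'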

2017-04-03
Kang, Chanhyun, Park, Noseong, Prakash, B. Aditya, Serra, Edoardo, Subrahmanian, V. S..  2016.  Ensemble Models for Data-driven Prediction of Malware Infections. Proceedings of the Ninth ACM International Conference on Web Search and Data Mining. :583–592.

Given a history of detected malware attacks, can we predict the number of malware infections in a country? Can we do this for different malware and countries? This is an important question which has numerous implications for cyber security, right from designing better anti-virus software, to designing and implementing targeted patches to more accurately measuring the economic impact of breaches. This problem is compounded by the fact that, as externals, we can only detect a fraction of actual malware infections. In this paper we address this problem using data from Symantec covering more than 1.4 million hosts and 50 malware spread across 2 years and multiple countries. We first carefully design domain-based features from both the malware and machine-host perspectives. Secondly, inspired by epidemiological and information diffusion models, we design a novel temporal non-linear model for malware spread and detection. Finally we present ESM, an ensemble-based approach which combines both these methods to construct a more accurate algorithm. Using extensive experiments spanning multiple malware and countries, we show that ESM can effectively predict malware infection ratios over time (both the actual number and the trend) up to 4 times better than several baselines on various metrics. Furthermore, ESM's performance is stable and robust even when the number of detected infections is low.
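
The ensemble idea can be pictured with a deliberately simple stand-in, sketched below: two weak base predictors of a weekly infection-count series (a fitted linear trend and a recent-average persistence forecast) are blended with a fixed weight. The synthetic series and the weight are placeholders; ESM itself combines domain-based features with a non-linear epidemiological spread/detection model.

    import numpy as np

    weeks = np.arange(20)
    counts = 50 + 4 * weeks + 10 * np.sin(weeks / 3.0)    # synthetic infection counts

    def linear_trend(history, horizon):
        t = np.arange(len(history))
        slope, intercept = np.polyfit(t, history, 1)
        future = np.arange(len(history), len(history) + horizon)
        return slope * future + intercept

    def persistence(history, horizon, window=4):
        return np.full(horizon, history[-window:].mean())

    def ensemble(history, horizon, w=0.7):
        # Fixed-weight blend of the two base forecasts; ESM instead learns how
        # to combine its data-driven and epidemiology-inspired components.
        return w * linear_trend(history, horizon) + (1 - w) * persistence(history, horizon)

    print(ensemble(counts, horizon=4))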

2017-05-30
Xu, Guanshuo, Wu, Han-Zhou, Shi, Yun Q..  2016.  Ensemble of CNNs for Steganalysis: An Empirical Study. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :103–107.

There has been growing interest in using convolutional neural networks (CNNs) in the fields of image forensics and steganalysis, and some promising results have been reported recently. These works mainly focus on the architectural design of CNNs; usually, a single CNN model is trained and then tested in experiments. It is known that neural networks, including CNNs, are well suited to forming ensembles. From this perspective, in this paper we employ CNNs as base learners and test several different ensemble strategies. In our study, at first, a recently proposed CNN architecture is adopted to build a group of CNNs, each of which is trained on a random subsample of the training dataset. The output probabilities, or some intermediate feature representations, of each CNN are then extracted from the original data and pooled together to form new features ready for the second level of classification. To make best use of the trained CNN models, we manage to partially recover the information lost due to spatial subsampling in the pooling layers when forming feature vectors. Performance of the ensemble methods is evaluated on BOSSbase by detecting S-UNIWARD at a 0.4 bpp embedding rate. Results indicate that both the recovery of the lost information and learning from intermediate representations in CNNs instead of output probabilities lead to performance improvements.
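
The simplest of the ensemble strategies discussed, probability-level fusion, can be sketched framework-free as below. The random arrays stand in for the softmax outputs of CNNs trained on different subsamples of the training set; the paper additionally pools intermediate feature representations for a second-level classifier.

    import numpy as np

    rng = np.random.default_rng(0)
    n_models, n_images, n_classes = 5, 8, 2        # e.g. cover vs. stego

    # Placeholder for per-model softmax outputs, shape (models, images, classes).
    probs = rng.random((n_models, n_images, n_classes))
    probs /= probs.sum(axis=2, keepdims=True)

    fused = probs.mean(axis=0)                     # average the model probabilities
    decisions = fused.argmax(axis=1)               # 0 = cover, 1 = stego
    print(decisions)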

2017-09-19
Huo, Jing, Gao, Yang, Shi, Yinghuan, Yang, Wanqi, Yin, Hujun.  2016.  Ensemble of Sparse Cross-Modal Metrics for Heterogeneous Face Recognition. Proceedings of the 2016 ACM on Multimedia Conference. :1405–1414.

Heterogeneous face recognition aims to identify or verify a person's identity by matching facial images of different modalities. In practice, it is known that its performance is highly influenced by modality inconsistency, appearance occlusions, illumination variations and expressions. In this paper, a new method, named ensemble of sparse cross-modal metrics, is proposed for tackling these challenging issues. In particular, a weak sparse cross-modal metric learning method is first developed to measure distances between samples of two modalities. It learns to adjust rank-one cross-modal metrics to satisfy two sets of triplet-based cross-modal distance constraints in a compact form. Meanwhile, a group-based feature selection is performed to enforce that features in the same position of two modalities are selected simultaneously. By neglecting features that contribute to "noise" in the face regions (eyeglasses, expressions and so on), the performance of the learned weak metrics can be markedly improved. Finally, an ensemble framework is incorporated to combine the results of differently learned sparse metrics into a strong one. Extensive experiments on various face datasets demonstrate the benefit of such feature selection, especially when heavy occlusions exist. The proposed ensemble metric learning is shown to be superior to several state-of-the-art methods in heterogeneous face recognition.

2017-05-30
De Groef, Willem, Subramanian, Deepak, Johns, Martin, Piessens, Frank, Desmet, Lieven.  2016.  Ensuring Endpoint Authenticity in WebRTC Peer-to-peer Communication. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :2103–2110.

WebRTC is one of the latest additions to the ever growing repository of Web browser technologies, which push the envelope of native Web application capabilities. WebRTC allows real-time peer-to-peer audio and video chat that runs purely in the browser. Unlike existing video chat solutions, such as Skype, that operate in a closed identity ecosystem, WebRTC was designed to be highly flexible, especially in the domains of signaling and identity federation. This flexibility, however, opens avenues for identity fraud. In this paper, we explore the technical underpinnings of WebRTC's identity management architecture. Based on this analysis, we identify three novel attacks against endpoint authenticity. To answer the identified threats, we propose and discuss defensive strategies, including security improvements for the WebRTC specifications and mitigation techniques for the identity and service providers.

2017-09-05
Liberzon, Daniel, Mitra, Sayan.  2016.  Entropy and Minimal Data Rates for State Estimation and Model Detection. Proceedings of the 19th International Conference on Hybrid Systems: Computation and Control. :247–256.

We investigate the problem of constructing exponentially converging estimates of the state of a continuous-time system from state measurements transmitted via a limited-data-rate communication channel, so that only quantized and sampled measurements of continuous signals are available to the estimator. Following prior work on topological entropy of dynamical systems, we introduce a notion of estimation entropy which captures this data rate in terms of the number of system trajectories that approximate all other trajectories with desired accuracy. We also propose a novel alternative definition of estimation entropy which uses approximating functions that are not necessarily trajectories of the system. We show that the two entropy notions are actually equivalent. We establish an upper bound for the estimation entropy in terms of the sum of the system's Lipschitz constant and the desired convergence rate, multiplied by the system dimension. We propose an iterative procedure that uses quantized and sampled state measurements to generate state estimates that converge to the true state at the desired exponential rate. The average bit rate utilized by this procedure matches the derived upper bound on the estimation entropy. We also show that no other estimator (based on iterative quantized measurements) can perform the same estimation task with bit rates lower than the estimation entropy. Finally, we develop an application of the estimation procedure in determining, from the quantized state measurements, which of two competing models of a dynamical system is the true model. We show that under a mild assumption of exponential separation of the candidate models, detection is always possible in finite time. Our numerical experiments with randomly generated affine dynamical systems suggest that in practice the algorithm always works.
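
One plausible way to write the upper bound stated above in symbols, where n is the system dimension, L the Lipschitz constant of the dynamics, and \alpha the desired exponential convergence rate; the 1/\ln 2 factor is only an assumed normalization converting nats to bits:

    h_{\mathrm{est}}(\alpha) \;\le\; \frac{n\,(L + \alpha)}{\ln 2} \quad \text{bits per unit time.}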

2017-05-18
Foremski, Pawel, Plonka, David, Berger, Arthur.  2016.  Entropy/IP: Uncovering Structure in IPv6 Addresses. Proceedings of the 2016 Internet Measurement Conference. :167–181.

In this paper, we introduce Entropy/IP: a system that discovers Internet address structure based on analyses of a subset of IPv6 addresses known to be active, i.e., training data, gleaned by readily available passive and active means. The system is completely automated and employs a combination of information-theoretic and machine learning techniques to probabilistically model IPv6 addresses. We present results showing that our system is effective in exposing structural characteristics of portions of the active IPv6 Internet address space, populated by clients, services, and routers. In addition to visualizing the address structure for exploration, the system uses its models to generate candidate addresses for scanning. For each of 15 evaluated datasets, we train on 1K addresses and generate 1M candidates for scanning. We achieve some success in 14 datasets, finding up to 40% of the generated addresses to be active. In 11 of these datasets, we find active network identifiers (e.g., /64 prefixes or "subnets") not seen in training. Thus, we provide the first evidence that it is practical to discover subnets and hosts by scanning probabilistically selected areas of the IPv6 address space not known to contain active hosts a priori.
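
As a first-pass illustration of the information-theoretic side of such a system, the sketch below computes the per-nybble Shannon entropy of a handful of IPv6 addresses; Entropy/IP builds on statistics of this kind before its machine-learning modeling stage. The three documentation-prefix addresses are placeholders.

    import ipaddress
    from collections import Counter
    from math import log2

    addrs = ['2001:db8::1', '2001:db8::53', '2001:db8:0:1::80']   # placeholders
    nybbles = [format(int(ipaddress.IPv6Address(a)), '032x') for a in addrs]

    for pos in range(32):
        counts = Counter(n[pos] for n in nybbles)
        total = sum(counts.values())
        h = -sum(c / total * log2(c / total) for c in counts.values())
        if h > 0:                         # report only positions that vary
            print('nybble %2d: entropy %.2f bits' % (pos, h))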