Biblio

Filters: Author is Jiang, L.
2019-09-11
Duncan, A., Jiang, L., Swany, M.  2018.  Repurposing SoC Analog Circuitry for Additional COTS Hardware Security. 2018 IEEE International Symposium on Hardware Oriented Security and Trust (HOST). :201–204.

This paper introduces a new methodology to generate additional hardware security in commercial off-the-shelf (COTS) system-on-a-chip (SoC) integrated circuits (ICs) that have already been fabricated and packaged. On-chip analog hardware blocks such as analog-to-digital converters (ADCs), digital-to-analog converters (DACs), and comparators residing within an SoC are repurposed and connected to one another to generate unique physically unclonable function (PUF) responses. The PUF responses are digitized and processed on-chip to create keys for use in encryption and device authentication. Key generation and processing algorithms are presented that minimize the effects of voltage and temperature fluctuations to maximize the repeatability of a key within a device. Experimental results utilizing multiple on-chip analog blocks inside a common COTS microcontroller show reliable key generation with minimal overhead.
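The enrollment step described in the abstract (repeated sampling of analog-block responses, followed by processing that suppresses voltage and temperature noise) can be illustrated with a rough Python sketch. The `read_comparator` function, the trial counts, and the majority-vote/stability-mask scheme below are hypothetical stand-ins for illustration only, not the paper's actual key-generation algorithm.

```python
import numpy as np

def puf_key_bits(read_comparator, n_challenges=128, n_trials=15):
    """Derive a repeatable key from on-chip analog-block mismatch.

    read_comparator(i) is assumed to return the raw comparator/ADC
    decision (0 or 1) for challenge i on one measurement; it is a
    placeholder for the repurposed DAC-to-comparator loopback path.
    """
    # Sample each challenge several times so supply-voltage and
    # temperature noise can be averaged out.
    votes = np.array([[read_comparator(i) for _ in range(n_trials)]
                      for i in range(n_challenges)])
    ones = votes.sum(axis=1)
    # Majority vote per challenge gives the candidate key bit.
    key_bits = (ones > n_trials // 2).astype(int)
    # Bits that flip across trials are likely to flip again under
    # voltage/temperature drift; mark only near-unanimous bits stable.
    stable_mask = (ones <= 1) | (ones >= n_trials - 1)
    return key_bits, stable_mask
```

In a real enrollment flow, the unstable bits would typically be excluded or handled with error correction before the key is used for encryption or authentication.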

2019-09-04
Liang, J., Jiang, L., Cao, L., Li, L., Hauptmann, A.  2018.  Focal Visual-Text Attention for Visual Question Answering. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. :6135–6143.
Recent insights into language and vision with neural networks have been successfully applied to simple single-image visual question answering. However, to tackle real-life question answering over multimedia collections such as personal photos, we have to look at whole collections with sequences of photos or videos. When answering questions from a large collection, a natural problem is to identify snippets that support the answer. In this paper, we describe a novel neural network called the Focal Visual-Text Attention network (FVTA) for collective reasoning in visual question answering, where both visual and text sequence information, such as images and text metadata, are present. FVTA introduces an end-to-end approach that uses a hierarchical process to dynamically determine which media and which time steps to focus on in the sequential data to answer the question. FVTA not only answers the questions well but also provides the justifications on which its answers are based. FVTA achieves state-of-the-art performance on the MemexQA dataset and competitive results on the MovieQA dataset.
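The hierarchical focusing idea (deciding which media sequence and which time step to attend to, given a question) can be sketched in a few lines of NumPy. The fusion, scoring, and pooling choices below are simplified placeholders assumed for illustration and do not reproduce the FVTA architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def focal_attention(question, visual_feats, text_feats):
    """Toy hierarchical (sequence-then-time) attention.

    question:     (d,)           question embedding
    visual_feats: (n_seq, T, d)  per-sequence, per-time visual features
    text_feats:   (n_seq, T, d)  aligned text/metadata features
    Returns one context vector focused on the most relevant cells.
    """
    # Fuse the two modalities per (sequence, time) cell (simplified: sum).
    fused = visual_feats + text_feats                      # (n_seq, T)
    # Relevance of every cell to the question.
    scores = fused @ question                              # (n_seq, T)
    # Hierarchical weighting: when to look within each sequence,
    # then which media sequence to look at overall.
    time_attn = softmax(scores, axis=1)
    seq_attn = softmax(scores.max(axis=1), axis=0)
    weights = seq_attn[:, None] * time_attn                # joint focal weights
    return (weights[..., None] * fused).sum(axis=(0, 1))   # (d,)
```

The joint weights over (sequence, time) cells are also the kind of signal one could inspect to surface the supporting snippets the abstract mentions.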
2017-12-12
Jiang, L., Kuhn, W., Yue, P.  2017.  An interoperable approach for Sensor Web provenance. 2017 6th International Conference on Agro-Geoinformatics. :1–6.

The Sensor Web is evolving into a complex information space, where large volumes of sensor observation data are often consumed by complex applications. Provenance has become an important issue in the Sensor Web, since it allows applications to answer “what”, “when”, “where”, “who”, “why”, and “how” queries related to observations and consumption processes, which helps determine the usability and reliability of data products. This paper investigates characteristics and requirements of provenance in the Sensor Web and proposes an interoperable approach to building a provenance model for the Sensor Web. Our provenance model extends the W3C PROV Data Model with Sensor Web domain vocabularies. It is developed using Semantic Web technologies and thus allows provenance information of sensor observations to be exposed in the Web of Data using the Linked Data approach. A use case illustrates the applicability of the approach.
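A minimal rdflib sketch can illustrate how a sensor observation and a downstream processing step might be exposed as Linked Data with W3C PROV terms. The namespaces, resource names, and the pairing with the SOSA sensor vocabulary are illustrative assumptions; the paper's extended domain vocabulary is not reproduced here.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
SOSA = Namespace("http://www.w3.org/ns/sosa/")   # Sensor Web (SSN/SOSA) terms
EX = Namespace("http://example.org/sensorweb/")  # hypothetical resources

g = Graph()
g.bind("prov", PROV)
g.bind("sosa", SOSA)

obs = EX["observation/42"]        # a sensor observation (entity)
sensor = EX["sensor/thermo1"]     # the sensor that produced it (agent)
process = EX["activity/dailyAgg"] # a consumption/processing step (activity)
product = EX["product/dailyMean"] # the derived data product (entity)

# The observation is both a SOSA observation and a PROV entity,
# attributed to the sensor that made it.
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, RDF.type, PROV.Entity))
g.add((obs, PROV.wasAttributedTo, sensor))

# The processing activity used the observation and generated the product.
g.add((process, RDF.type, PROV.Activity))
g.add((process, PROV.used, obs))
g.add((product, RDF.type, PROV.Entity))
g.add((product, PROV.wasGeneratedBy, process))
g.add((product, PROV.wasDerivedFrom, obs))

print(g.serialize(format="turtle"))  # str in rdflib >= 6
```

Publishing such a graph (for example, as Turtle or via a SPARQL endpoint) is one way provenance of observations and data products could be made queryable in the Web of Data, in the spirit of the Linked Data approach the paper describes.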