Bibliography
Scan design is a universal design-for-test (DFT) technique that increases the observability and controllability of circuits under test through scan chains. However, it also creates a security problem: attackers can use the scan design as a backdoor to extract confidential information. Researchers have addressed this problem with secure scan structures that typically use keys to confirm user identities, but the traditional approach of storing intermediate data or keys in memory is itself at high risk of attack. In this paper, we propose a dynamic-key secure DFT structure that defends against both scan-based and memory attacks without decreasing system performance or testability. The main idea is to build a scan-design key generator that produces keys dynamically instead of storing and using static keys in the circuit. Only specific patterns derived from the original test patterns are valid for constructing the keys, so attackers cannot shift in other patterns to extract correct internal responses from the scan chains or retrieve the keys from memory. Analysis results show that the proposed method achieves a very high security level, and, due to the dynamic nature of our method, the security level does not decrease no matter how many rounds of guessing the attackers attempt.
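As a rough illustration of the dynamic-key idea, the minimal sketch below derives a fresh key from each test pattern and round counter rather than storing a static key, and enables scan-out only when the shifted-in pattern reproduces that key. The seed, function names, and HMAC construction are assumptions made for illustration, not the structure proposed in the paper.

```python
import hmac
import hashlib

# Hypothetical on-chip secret seed; assumed never to leave the chip.
SEED = b"per-chip secret seed"

def derive_key(test_pattern: bytes, round_counter: int) -> bytes:
    """Derive a fresh key per test round; no key is ever stored statically."""
    msg = test_pattern + round_counter.to_bytes(4, "big")
    return hmac.new(SEED, msg, hashlib.sha256).digest()

def authorize_scan(shifted_in: bytes, expected_pattern: bytes,
                   round_counter: int) -> bool:
    """Enable scan-out only if the shifted-in pattern yields the same
    dynamically derived key as the legitimate test pattern."""
    return hmac.compare_digest(
        derive_key(shifted_in, round_counter),
        derive_key(expected_pattern, round_counter),
    )

# A correct pattern is authorized; a guessed pattern is not, and because
# each round derives a new key, earlier observations do not help a guesser.
assert authorize_scan(b"\x0f\xa5", b"\x0f\xa5", round_counter=1)
assert not authorize_scan(b"\x00\x00", b"\x0f\xa5", round_counter=1)
```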
We present a framework for learning to describe fine-grained visual differences between instances using attribute phrases. Attribute phrases capture distinguishing aspects of an object (e.g., “propeller on the nose” or “door near the wing” for airplanes) in a compositional manner. Instances within a category can be described by a set of these phrases, and collectively the phrases span the space of semantic attributes for the category. We collect a large dataset of such phrases by asking annotators to describe several visual differences between pairs of instances within a category. We then learn to describe and ground these phrases in images in the context of a reference game between a speaker and a listener: the speaker's goal is to describe attributes of an image that allow the listener to correctly identify it within a pair. Data collected in this pairwise manner improves both the speaker's ability to generate and the listener's ability to interpret visual descriptions. Moreover, because attribute phrases are compositional, trained listeners can interpret descriptions not seen during training for image retrieval, and speakers can generate attribute-based explanations of differences between previously unseen categories. We also show that embedding an image into the semantic space of attribute phrases derived from listeners yields a 20% improvement in accuracy over existing attribute-based representations on the FGVC-aircraft dataset.
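To make the reference-game setup concrete, here is a toy sketch in which each image is stood in for by the set of attribute phrases that hold for it: the speaker picks a phrase that discriminates the target from the distractor, and the listener resolves the phrase within the pair. The data and scoring rule are illustrative stand-ins, not the paper's neural speaker and listener models.

```python
import random

# Each "image" is represented by its set of true attribute phrases
# (in the paper these come from annotators and learned models).
img_a = {"propeller on the nose", "single engine"}
img_b = {"jet engines under the wings", "door near the wing"}

def speaker(target: set, distractor: set):
    """Pick a phrase true of the target that discriminates it from the
    distractor (here: any phrase the two images do not share)."""
    discriminative = target - distractor
    return random.choice(sorted(discriminative)) if discriminative else None

def listener(phrase: str, pair: list) -> int:
    """Identify which image in the pair the phrase refers to."""
    scores = [phrase in img for img in pair]
    return scores.index(max(scores))

phrase = speaker(img_a, img_b)
guess = listener(phrase, [img_a, img_b])
print(phrase, "->", "correct" if guess == 0 else "wrong")
```

In the paper's setting, pairwise training plays this role end to end: the speaker is rewarded for phrases the listener can resolve, which is what makes the generated descriptions discriminative rather than merely true.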