Biblio
Infrared thermography has been recognized for its ability to investigate integrated circuits in a non-destructive way. Coupled with lock-in correlation, it has proven efficient at detecting thermal hot spots. Most state-of-the-art measurement systems are based on amplitude analysis. In this paper we propose to investigate weak thermal hot spots using the phase of infrared signals. We demonstrate that phase analysis is a powerful alternative to amplitude analysis for detecting small heat signatures. Finally, we apply our measurement platform and its detection method to the identification of stealthy hardware Trojans.
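As an illustration of the lock-in phase analysis this abstract refers to, here is a minimal four-point lock-in demodulation sketch, assuming the camera captures four equidistant frames per excitation period (the synthetic data and all names are hypothetical, not the paper's setup):

```python
import numpy as np

def lockin_demodulate(frames):
    """Four-point lock-in demodulation of an infrared frame stack.

    frames has shape (4 * n_periods, H, W): four equidistant samples per
    excitation period. Returns per-pixel amplitude and phase images.
    """
    s = frames.reshape(-1, 4, *frames.shape[1:]).mean(axis=0)  # S0..S3
    in_phase = s[0] - s[2]             # correlation with cosine reference
    quadrature = s[1] - s[3]           # correlation with sine reference
    amplitude = np.hypot(in_phase, quadrature)
    phase = np.arctan2(quadrature, in_phase)  # insensitive to signal strength
    return amplitude, phase

# Synthetic example: a weak hot spot buried in noise, 32 excitation periods
rng = np.random.default_rng(0)
t = np.arange(128)
frames = rng.normal(0.0, 0.005, (128, 64, 64))               # sensor noise
frames[:, 32, 32] += 0.01 * np.cos(2 * np.pi * t / 4 - 0.7)  # hot-spot signal
amp, ph = lockin_demodulate(frames)
print(f"recovered phase at hot spot: {ph[32, 32]:.2f} rad")  # close to 0.70
```

Because the phase depends only on the ratio of the two correlation sums, it stays stable even when the hot spot's amplitude is tiny, which is the intuition behind preferring phase for weak signatures.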
Pre-silicon hardware Trojan detection has been studied for years. The most popular benchmark circuits are from Trust-Hub. Their common feature is that the probability of activating the hardware Trojans is very low. This has led to a series of machine-learning-based hardware Trojan detection methods that try to find nets with a low probability of being 0 or 1. Conversely, it is assumed that if the probability of activating a hardware Trojan is high, the Trojan can easily be found through behaviour simulation or during functional test. This paper explores the "grey zone" between these two opposite scenarios: if the activation probability of a hardware Trojan is not low enough for machine learning to detect it, yet not high enough for behaviour simulation or functional test to find it, the Trojan can escape detection. Experiments show the existence of such hardware Trojans, and this paper proposes a new set of hardware Trojan benchmark circuits for future study.
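The low-signal-probability nets this abstract mentions can be estimated by random-pattern simulation. A minimal sketch over a hypothetical toy netlist (the structure and gate set are illustrative, not from the paper):

```python
import random

# Hypothetical gate-level netlist: net -> (gate_type, input_nets),
# listed in topological order. Primary inputs have no driving gate.
NETLIST = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("AND", ["n1", "c"]),
    "trigger": ("AND", ["n2", "d"]),   # rarely-1 net, Trojan-style trigger
}
PRIMARY_INPUTS = ["a", "b", "c", "d"]

def evaluate(gate_type, values):
    if gate_type == "AND":
        return all(values)
    if gate_type == "OR":
        return any(values)
    raise ValueError(gate_type)

def signal_probabilities(n_vectors=100_000):
    """Estimate P(net = 1) by applying uniform random input vectors."""
    ones = {net: 0 for net in NETLIST}
    for _ in range(n_vectors):
        values = {pi: random.random() < 0.5 for pi in PRIMARY_INPUTS}
        for net, (gtype, fanin) in NETLIST.items():
            values[net] = evaluate(gtype, [values[f] for f in fanin])
            ones[net] += values[net]
    return {net: count / n_vectors for net, count in ones.items()}

for net, p in signal_probabilities().items():
    print(f"{net}: P(1) = {p:.4f}")    # 'trigger' sits near 1/16
```

A Trojan in the paper's "grey zone" would have a trigger probability that is neither extreme enough to stand out in such statistics nor high enough to fire during functional test.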
In the security area, there has been an increasing tendency to apply deep learning, which is perceived as a black-box method because of the limited understanding of its internal functioning. Can we trust deep learning models when they achieve high test accuracy? Using a visual explanation method, we find that deep learning models used in security tasks can easily focus on semantically non-discriminative parts of the input data even while producing the right answers. Furthermore, when a model is retrained without any change to the learning procedure (i.e., no change in training/validation data, initialization/optimization methods, or hyperparameters), it can focus on significantly different parts of many samples while producing the same answers. For trustworthy deep learning in security, we therefore argue that it is necessary to verify the classification criteria of deep learning models before deploying them, even when they achieve high test accuracy.
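The abstract does not name the visual explanation method used; as one common choice, here is a Grad-CAM-style saliency sketch in PyTorch (the model, layer choice, and input are stand-ins, not the paper's setup):

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()    # stand-in classifier
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["feat"] = output

def bwd_hook(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0]

model.layer4.register_forward_hook(fwd_hook)          # last conv block
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                 # stand-in input image
logits = model(x)
logits[0, logits.argmax()].backward()           # gradient of the top class

# Channel weights from pooled gradients, then a weighted activation map
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear")
print(cam.shape)   # (1, 1, 224, 224): which regions the decision relied on
```

Comparing such maps across retrained models is one concrete way to check whether two equally accurate models actually rely on the same evidence.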
Industrial clusters are an important organizational form and development vehicle for small and medium-sized enterprises, and the information service platform is an important facility of an industrial cluster. Improving the credibility of the network platform helps eliminate the adverse effects of distrust and information asymmetry on industrial clusters. The decentralization, transparency, openness, and immutability of blockchain technology make it an inevitable choice for the trustworthiness optimization of industrial cluster network platforms. This paper first studies trusted standards for industrial cluster network platforms and constructs a new trusted framework for such platforms. The paper then focuses on trustworthiness optimization of the data layer and application layer of the platform. The aim is to build an industrial cluster network platform with controlled data access, trustworthy information, available functions, high speed, and low resource consumption, and thereby promote the sustainable and efficient development of industrial clusters.
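As a sketch of the data-layer trustworthiness idea (not the platform's actual design), a minimal hash-chained block structure shows why stored records become tamper-evident:

```python
import hashlib
import json
import time

def make_block(records, prev_hash):
    """Minimal hash-chained block for a platform's data layer (sketch)."""
    body = {"timestamp": time.time(), "records": records, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain):
    """Any tampering with a stored record breaks every later link."""
    for prev, block in zip(chain, chain[1:]):
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev["hash"] or block["hash"] != recomputed:
            return False
    return True

genesis = make_block(["cluster platform initialized"], prev_hash="0" * 64)
b1 = make_block(["supplier X order record"], prev_hash=genesis["hash"])
print(verify_chain([genesis, b1]))   # True until any record is altered
```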
Inference-based techniques are one of the major approaches to analyzing DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. An inference algorithm is then deployed to infer potentially malicious domains based on their direct or indirect associations with known malicious ones. How associations are defined is key to the effectiveness of an inference technique: associations should be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and have good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, designing an association scheme that achieves both high accuracy and good coverage is a challenge. In this paper, we propose a new approach to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on the weak "co-IP" relationship of domains (i.e., two domains resolving to the same IP), which results in low detection accuracy, and meanwhile identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed approach not only significantly improves domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithms are specifically designed for DNS data analysis; they are effective but computationally expensive. To further demonstrate the strength of our domain association scheme and improve inference efficiency, we construct a new domain-IP graph that works well with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvements with only a minor impact on detection accuracy, suggesting that such a combination offers a good tradeoff for malicious domain detection in practice.
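A heavily simplified sketch of the generic belief propagation the abstract mentions, run on a toy domain-IP graph (the domains, priors, and potentials are illustrative; the shared IP plays the role of a dedicated IP linking domains of one entity):

```python
import numpy as np

# Toy graph: two domains share a dedicated IP with a known-bad seed domain
edges = [("evil.example", "203.0.113.7"),
         ("evil2.example", "203.0.113.7"),
         ("good.example", "198.51.100.1")]
nodes = sorted({n for e in edges for n in e})
idx = {n: i for i, n in enumerate(nodes)}
neighbors = {n: [] for n in nodes}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

psi = np.array([[0.7, 0.3],           # edge potential: neighbors tend to
                [0.3, 0.7]])          # share the same label (homophily)
prior = np.full((len(nodes), 2), 0.5)          # states: [benign, malicious]
prior[idx["evil.example"]] = [0.05, 0.95]      # known malicious seed

msgs = {(u, v): np.ones(2) for u in nodes for v in neighbors[u]}
for _ in range(10):                            # message-passing rounds
    new = {}
    for (u, v) in msgs:
        incoming = (np.prod([msgs[(w, u)] for w in neighbors[u] if w != v],
                            axis=0)
                    if len(neighbors[u]) > 1 else np.ones(2))
        m = psi.T @ (prior[idx[u]] * incoming)
        new[(u, v)] = m / m.sum()
    msgs = new

for n in nodes:
    belief = prior[idx[n]] * np.prod([msgs[(w, n)] for w in neighbors[n]],
                                     axis=0)
    belief /= belief.sum()
    print(f"{n}: P(malicious) = {belief[1]:.2f}")  # evil2 pulled up via co-IP
```

The quality of the edges is exactly what the paper's public-vs-dedicated IP separation is meant to improve: with a shared public IP in place of a dedicated one, the same propagation would taint unrelated domains.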
3D modeling usually refers to the use of 3D software to construct a model in a virtual 3D space from 3D data. At present, most 3D modeling software, such as 3dmax, FLAC3D, and Midas, requires repeatedly adjusting models to obtain a satisfactory result, or precise modeling by coding. This entails complicated steps, strong professional skills, and high modeling costs. Aiming at this problem, the paper presents a new 3D modeling method based on Deep Belief Networks (DBN) and an Interactive Evolutionary Algorithm (IEA). In this method, characteristic vectors are first extracted from the vertices, normals, and surfaces of the imported model samples. Second, an evolution strategy evolves the extracted feature vectors stochastically, with manual grading controlling the direction of evolution and the user's preferred characteristics being captured in the process. Then, an evolution function matrix is used to establish a fitness-approximation evaluation model that simulates subjective evaluation. Lastly, the user can take control of the machine-simulated evaluation process at any time and obtain a satisfactory model. The experimental results show that the proposed method is feasible.
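A minimal sketch of the interactive evolutionary loop described, with the user's subjective grading replaced by a surrogate fitness function (the vector size, parameters, and surrogate are hypothetical, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: feature vectors extracted from vertices/normals/surfaces,
# and a surrogate that approximates the user's subjective grades.
target_preference = rng.normal(size=8)          # what the user "likes"

def surrogate_fitness(feature_vec):
    """Fitness approximation standing in for interactive user grading."""
    return -np.linalg.norm(feature_vec - target_preference)

def evolve(pop_size=20, generations=50, sigma=0.3):
    pop = rng.normal(size=(pop_size, 8))        # initial feature vectors
    for _ in range(generations):
        scores = np.array([surrogate_fitness(x) for x in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]         # select
        children = parents + rng.normal(0, sigma, parents.shape)   # mutate
        pop = np.vstack([parents, children])
    return max(pop, key=surrogate_fitness)

best = evolve()
print(f"distance to preferred features: "
      f"{np.linalg.norm(best - target_preference):.3f}")
```

In the interactive setting, the surrogate would be fitted to the user's manual grades so that the user only needs to intervene occasionally rather than scoring every candidate.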
Autonomous systems are gaining momentum in various application domains, such as autonomous vehicles, autonomous transport robotics, and self-adaptation in smart homes. Product liability regulations impose high standards on manufacturers of such systems with respect to dependability (safety, security, and privacy). Today's conventional engineering methods are not adequate for providing dependability guarantees in a cost-efficient manner; e.g., road tests in the automotive industry add up to millions of miles before a system can be considered sufficiently safe. System engineers will no longer be able to test or formally verify autonomous systems during development time in order to guarantee the dependability requirements in advance. In this vision paper, we introduce a new holistic software systems engineering approach for autonomous systems, which integrates development-time methods as well as operation-time techniques. With this approach, we aim to give users a transparent view of the confidence level of the autonomous system in use with respect to its dependability requirements. We present results already obtained and point out research goals to be addressed in the future.
Information and systems are among the most valuable assets of almost all global organizations, so sufficient security is key to protecting them. The reliability and security of a manufacturing company's supply chain are key concerns in managing assurance and quality of supply. Traditional concerns such as physical security, disasters, political issues, and counterfeiting remain, but cyber security is an area of growing interest. Statistics show that cyber-attacks continue with no signs of slowing down. Technical controls, no matter how good, will only take a company so far, since no usable system is 100 percent secure or impenetrable. Evaluating an organization's security vulnerabilities and acting to mitigate the risks strengthens the layers of protection in the manufacturing company's supply chain. In this paper, the researchers created an IT Security Assessment Tool to facilitate the evaluation of the sufficiency of the policies, procedures, and controls implemented by semiconductor companies. The proposed IT Security Assessment Tool was developed considering the factors that are critical to protecting the information and systems of various semiconductor companies. The tool was then used to evaluate existing semiconductor companies and identify their areas of security vulnerability. The results show that none of the suppliers visited has a cyber security program, and most dwell on physical and network security controls. Best practices were shared and action items were suggested to improve security controls and minimize the risks of service disruption for customers, theft of sensitive data, and reputation damage.
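As a sketch of how such an assessment tool might aggregate findings (the categories, weights, and ratings below are illustrative, not the paper's actual tool):

```python
# Weighted security-assessment score over illustrative control categories.
CATEGORIES = {
    "security_policy":   0.25,
    "physical_security": 0.20,
    "network_security":  0.25,
    "cyber_program":     0.30,
}

def assessment_score(ratings):
    """ratings: category -> compliance level in [0, 1]."""
    return sum(CATEGORIES[c] * ratings[c] for c in CATEGORIES)

supplier = {"security_policy": 0.8, "physical_security": 0.9,
            "network_security": 0.6, "cyber_program": 0.0}  # no cyber program
print(f"overall score: {assessment_score(supplier):.2f}")   # approx. 0.53
```

A per-category breakdown, rather than the single total, is what lets an assessor point a supplier at the specific gap (here, the missing cyber security program).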
While existing visual recognition approaches, which rely on 2D images to train their underlying models, work well for object classification, recognizing the changing state of a 3D object requires addressing several additional challenges. This paper proposes an active visual recognition approach to this problem, leveraging camera pose data available on mobile devices. With this approach, the state of a 3D object, which captures its appearance changes, can be recognized in real time. Our novel approach selects informative video frames filtered by 6-DOF camera poses to train a deep learning model to recognize object state. We validate our approach through a prototype for Augmented Reality-assisted hardware maintenance.
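A minimal sketch of pose-filtered frame selection consistent with the idea described, keeping a frame only when the 6-DOF camera pose has changed enough since the last kept frame (the thresholds and pose format are assumptions, not the paper's values):

```python
import numpy as np

def rotation_angle_deg(r1, r2):
    """Relative rotation angle between two 3x3 rotation matrices."""
    cos = (np.trace(r1.T @ r2) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def select_informative_frames(poses, min_trans=0.10, min_rot_deg=15.0):
    """poses: list of (R, t) pairs as a mobile AR framework might report
    them. A frame is informative when the camera has translated or
    rotated past a threshold relative to the last kept frame."""
    kept = [0]
    for i, (r, t) in enumerate(poses[1:], start=1):
        r_ref, t_ref = poses[kept[-1]]
        if (np.linalg.norm(t - t_ref) > min_trans
                or rotation_angle_deg(r_ref, r) > min_rot_deg):
            kept.append(i)
    return kept

# Toy trajectory: camera slides along x; roughly every 0.12 m a frame is kept
poses = [(np.eye(3), np.array([0.03 * i, 0.0, 0.0])) for i in range(20)]
print(select_informative_frames(poses))   # [0, 4, 8, 12, 16]
```

Filtering this way keeps the training set small while still covering distinct viewpoints of the object, which is what makes real-time state recognition on a mobile device practical.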
In rapid continuous software development, time- and cost-effective prototyping techniques are beneficial because they enable software designers to quickly explore and evaluate different design concepts. Regarding low-fidelity prototyping for augmented reality (AR) applications, software designers have so far been restricted to non-digital prototypes, which enable the visualization of first design concepts but can be laborious at capturing interactivity. The lack of empirical values and standards for designing user interactions in AR software leads to a particular need for applying end-user feedback to software refinement. In this paper we present the concept of a tool for rapid digital prototyping of augmented reality applications, enabling software designers to rapidly design AR prototypes without requiring programming skills. The prototyping tool focuses on modeling multimodal interactions, especially interaction with physical objects, as well as on performing user-based studies to integrate valuable end-user feedback into the refinement of the software.
This paper aims to explain static analysis techniques in detail and to highlight the weaknesses and challenges they face. To this end, more than 80 static-analysis-based frameworks have been studied, and in their light the process of detecting malicious applications has been divided into four phases, which are explained schematically. The features used in static analysis are also discussed in detail, divided into four categories: manifest-based features, code-based features, semantic features, and app-metadata-based features. The challenges facing static-analysis-based methods are likewise discussed in detail. Finally, a case study was conducted to test the strength of some well-known commercial antiviruses and one state-of-the-art academic static analysis framework against obfuscation techniques used by developers of malicious applications. The results showed a significant impact on the performance of most of the tested antiviruses and frameworks, reflecting the urgent need for more accurate tools.
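As an example of the manifest-based feature category, a small sketch that extracts requested permissions from a decoded AndroidManifest.xml (the path and the suspicious-permission set are illustrative; it assumes the binary manifest was already decoded, e.g. with apktool):

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def manifest_permissions(manifest_path):
    """Extract requested permissions, a typical manifest-based feature."""
    tree = ET.parse(manifest_path)
    return sorted(
        elem.attrib.get(f"{ANDROID_NS}name", "")
        for elem in tree.getroot().iter("uses-permission")
    )

# Example usage on a decoded manifest (path hypothetical):
# perms = manifest_permissions("decoded_apk/AndroidManifest.xml")
# suspicious = {"android.permission.SEND_SMS",
#               "android.permission.READ_CONTACTS"}
# print(suspicious & set(perms))   # overlap is a simple malware feature
```

Manifest-based features like these are cheap to extract but also among the easiest for obfuscation to leave intact or to game, which is one reason the case study probes robustness against obfuscation.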