Bibliography
The Internet of Things (IoT) has been widely deployed. Because many embedded devices are connected to the network and store large amounts of security-sensitive data, IoT devices have become attractive targets for attackers. Trusted computing is a key technology for guaranteeing the security and trustworthiness of a device's execution environment. This paper focuses on the security problems of IoT devices and proposes a security architecture for them based on trusted computing. It implements a security management system that performs integrity measurement, real-time monitoring, and security management of embedded applications, providing a safe and reliable execution environment and whitelist-based protection for IoT devices. The paper also designs and implements an embedded security protection system built on trusted computing technology, consisting of a measurement and control component in the kernel and a remote graphical management interface for administrators. The kernel component enforces integrity measurement and execution control of embedded applications on the device. The graphical management interface communicates with the remote embedded device over TCP/IP and provides a feature-rich, user-friendly interface, implementing functions such as knowledge base scanning, whitelist management, log management, security policy management, and cryptographic algorithm performance testing.
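The abstract does not give implementation details, but the core idea of whitelist-based integrity measurement can be illustrated with a minimal sketch: hash an application binary and allow it to run only if the digest matches a trusted whitelist entry. The whitelist contents, file paths, and function names below are hypothetical, and the actual enforcement described in the paper happens in the kernel rather than in user space.

import hashlib

# Hypothetical whitelist mapping application paths to trusted SHA-256 digests.
# In the described architecture, this check is enforced by a kernel component;
# this user-space sketch only shows the measure-and-compare logic.
WHITELIST = {
    "/usr/bin/sensor-daemon": "<trusted sha256 digest>",
}

def measure(path: str) -> str:
    """Compute the integrity measurement (SHA-256 digest) of a binary."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_trusted(path: str) -> bool:
    """Permit execution only if the measured digest matches the whitelist entry."""
    expected = WHITELIST.get(path)
    return expected is not None and measure(path) == expected

A real deployment would also need the management-interface side (whitelist updates, logging, policy distribution over TCP/IP), which is omitted here.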
Recent research has shown the potential of feed-forward convolutional neural networks for fast image style transfer. In this work, we go one step further and explore the use of a feed-forward network to perform style transfer for videos while maintaining temporal consistency among the stylized frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed that combines the content information of the input frames, the style information of a given style image, and the temporal information of consecutive frames. To compute the temporal loss during training, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our method uses the trained network to produce temporally consistent stylized videos that are visually much more pleasing. In contrast to prior video style transfer methods that rely on time-consuming on-the-fly optimization, our method runs in real time while generating competitive visual results.
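The abstract names three loss terms but not their exact form. The sketch below shows one common way such a hybrid loss could be assembled for two consecutive stylized frames: perceptual content and Gram-matrix style terms plus a flow-warped temporal term. The weights, the warping step, and the occlusion mask are assumptions for illustration, not the paper's stated formulation.

import torch
import torch.nn.functional as F

def gram(feat):
    """Gram matrix of a (B, C, H, W) feature map, used for the style term."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def hybrid_loss(content_pairs, style_pairs, out_t1, warped_out_t, occ_mask,
                alpha=1.0, beta=10.0, gamma=100.0):
    """Illustrative hybrid loss over two consecutive stylized frames.

    content_pairs : (stylized_feat, input_feat) tuples from a fixed perceptual network
    style_pairs   : (stylized_feat, style_gram) tuples for the given style image
    out_t1        : stylized frame t+1
    warped_out_t  : stylized frame t warped to t+1 by optical flow (assumed formulation)
    occ_mask      : mask discounting occluded or unreliable flow regions (assumption)
    """
    content = sum(F.mse_loss(s, c) for s, c in content_pairs)
    style = sum(F.mse_loss(gram(s), g) for s, g in style_pairs)
    temporal = F.mse_loss(occ_mask * out_t1, occ_mask * warped_out_t)
    return alpha * content + beta * style + gamma * temporal

In a two-frame training step, the network stylizes frames t and t+1 in the same forward pass, and the temporal term ties the two outputs together so that consistency is learned rather than enforced by per-frame optimization at test time.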