Biblio
Re-drawing an image in a particular artistic style is a difficult task for a computer, whereas humans can readily perceive and describe the stylistic differences between images. Prior work on deep neural networks has found effective representations of artistic style using perceptual and style reconstruction losses. Gatys et al. proposed an artificial system based on convolutional neural networks that creates artistic images of high perceptual quality; however, it is slow to run and therefore cannot be applied to video style transfer. Recently, a feed-forward CNN approach has shown the potential of fast style transfer, providing an end-to-end system that avoids hundreds of optimization iterations at transfer time. We combine the benefits of both approaches, optimizing the feed-forward network and defining a temporal loss function so that style transfer can be performed on video in real time. In contrast to previous methods, ours runs in real time at higher resolution while producing competitive, visually pleasing, and temporally consistent results.
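As a point of reference for the style representation mentioned above, the sketch below shows the standard Gram-matrix style reconstruction loss in PyTorch; the feature extractor and layer selection are assumptions for illustration, not the exact setup of the cited work.

    # Hedged sketch of the Gram-matrix style reconstruction loss (Gatys et al.).
    # The choice of feature layers and extractor is an assumption, not the
    # cited paper's exact configuration.
    import torch
    import torch.nn.functional as F

    def gram_matrix(feat):
        """Style representation: channel correlations of a (B, C, H, W) map."""
        b, c, h, w = feat.shape
        f = feat.view(b, c, h * w)
        return f.bmm(f.transpose(1, 2)) / (c * h * w)

    def style_loss(output_feats, style_feats):
        """Sum of squared Gram-matrix differences over the chosen layers."""
        return sum(F.mse_loss(gram_matrix(o), gram_matrix(s))
                   for o, s in zip(output_feats, style_feats))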
Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images. In this work, we take one step further to explore the possibility of exploiting a feed-forward network to perform style transfer for videos and simultaneously maintain temporal consistency among stylized video frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames. To calculate the temporal loss during the training stage, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually pleasing. In contrast to the prior video style transfer method, which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.
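To make the temporal term and the two-frame synergic training concrete, a minimal sketch follows; the names net, warp, and occlusion_mask as well as the loss weights are illustrative assumptions, and the optical flow is assumed to be supplied externally rather than computed here.

    # Hedged sketch of the two-frame synergic training objective described above;
    # names and weights are assumptions, not the authors' implementation.
    import torch

    def temporal_loss(stylized_t1, stylized_t_warped, occlusion_mask):
        """Penalize flicker: compare the stylized frame at t+1 with the
        flow-warped stylized frame from t, only where the flow is reliable
        (occlusion_mask == 1 in non-occluded regions)."""
        return (occlusion_mask * (stylized_t1 - stylized_t_warped) ** 2).mean()

    def two_frame_step(net, frame_t, frame_t1, flow, warp, occlusion_mask,
                       content_style_loss, w_temporal=100.0):
        """One synergic step: stylize both consecutive frames, apply the
        content/style loss to each, and couple them through the temporal term."""
        out_t, out_t1 = net(frame_t), net(frame_t1)
        return (content_style_loss(out_t, frame_t)
                + content_style_loss(out_t1, frame_t1)
                + w_temporal * temporal_loss(out_t1, warp(out_t, flow),
                                             occlusion_mask))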
A semi-analytical model is developed for internal optical losses at high power in a 1.5 μm laser diode with strong n-doping on the n-side of the optical confinement layer. The model includes intervalence band absorption by holes supplied both by current flow and by two-photon absorption. The resulting losses are shown to be substantially lower than those in a similar but weakly doped structure; thus, a significant improvement in output power and efficiency from strong n-doping is predicted.
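One hedged way to read the loss structure summarized above (our notation, not the paper's) is that the internal loss scales with the hole density overlapping the optical mode,
\[
\alpha_{\mathrm{int}} \approx \sigma_{\mathrm{IVBA}}\, p, \qquad p = p_{\mathrm{curr}} + p_{\mathrm{TPA}},
\]
where $\sigma_{\mathrm{IVBA}}$ is the intervalence band absorption cross-section, $p_{\mathrm{curr}}$ is the hole density supplied by current flow, and $p_{\mathrm{TPA}}$ is the hole density generated by two-photon absorption of the intense intracavity field; strong n-doping suppresses this hole accumulation in the confinement layer and hence the loss.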
We demonstrate high-confinement, thermally tunable, low-loss (0.27 ± 0.04 dB/cm) Si3N4 waveguides that are 42 cm long, and show that this platform can enable the miniaturization of traditionally bulky active OCT components.
We demonstrate high-speed operation of ultracompact electroabsorption modulators based on epsilon-near-zero confinement in indium oxide (In$_2$O$_3$) on silicon using field-effect carrier density tuning. Additionally, we discuss strategies to enhance modulator performance and reduce confinement-related losses by introducing high-mobility conducting oxides such as cadmium oxide (CdO).
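To illustrate the epsilon-near-zero (ENZ) mechanism behind such modulators, the Drude-model sketch below estimates the carrier density at which the real part of the conducting-oxide permittivity crosses zero; the material parameters (eps_inf, effective mass, damping rate) are illustrative assumptions, not values from the paper.

    # Hedged Drude-model sketch of the ENZ condition; parameter values are
    # illustrative assumptions for a generic transparent conducting oxide.
    import numpy as np

    e = 1.602e-19      # elementary charge [C]
    eps0 = 8.854e-12   # vacuum permittivity [F/m]
    m_e = 9.109e-31    # free-electron mass [kg]
    c = 3.0e8          # speed of light [m/s]

    def drude_eps(wavelength, N, eps_inf=4.0, m_eff=0.3 * m_e, gamma=1e14):
        """Complex Drude permittivity at free-carrier density N [m^-3]."""
        omega = 2 * np.pi * c / wavelength
        omega_p2 = N * e**2 / (eps0 * m_eff)
        return eps_inf - omega_p2 / (omega**2 + 1j * gamma * omega)

    def enz_density(wavelength, eps_inf=4.0, m_eff=0.3 * m_e, gamma=1e14):
        """Carrier density where Re(eps) crosses zero, i.e. the ENZ point that
        field-effect gating tunes the accumulation layer toward."""
        omega = 2 * np.pi * c / wavelength
        return eps_inf * eps0 * m_eff * (omega**2 + gamma**2) / e**2

    print(enz_density(1.55e-6))   # roughly 6e26 m^-3 (~6e20 cm^-3) at 1550 nm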