Bibliography
False data injection is an ongoing concern in power system state estimation. In this work, a neural network is trained to detect the presence of false data in measurements. The proposed approach can exploit historical data, when available, by including them in the training sets of the neural network model. The inputs of the perceptron model in this work, however, are the residual elements from state estimation, which are highly correlated; their dimensionality can therefore be reduced while preserving the most informative features of the inputs. To this end, principal component analysis, a data preprocessing technique, is used. This technique is especially efficient for highly correlated data sets, which is the case for power system measurements. The results of the different perceptron models proposed for detection are compared against a simple perceptron that produces results identical to the outlier detection scheme. To generate the training sets, state estimation is run with different false data injected into different measurements of the IEEE 13-bus test system, and the residuals are saved as training inputs. The testing results of the trained network show that it performs well in detecting false data in measurements.
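A minimal sketch of the pipeline this abstract describes, using scikit-learn's PCA and Perceptron: the correlated residuals are compressed with principal component analysis and a perceptron is trained on the reduced features to flag falsified measurements. The synthetic residuals, the measurement count, and the injection model below are illustrative stand-ins, not the paper's actual state-estimation data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic, highly correlated "residuals": a few latent factors mixed
# into many measurement channels (stand-in for state-estimation output).
n_samples, n_meas = 2000, 40
latent = rng.normal(size=(n_samples, 5))
mixing = rng.normal(size=(5, n_meas))
residuals = latent @ mixing + 0.1 * rng.normal(size=(n_samples, n_meas))

# Inject false data into half the samples: a random bias on one channel.
labels = rng.integers(0, 2, size=n_samples)
rows = np.where(labels == 1)[0]
cols = rng.integers(0, n_meas, size=rows.size)
residuals[rows, cols] += rng.normal(3.0, 1.0, size=rows.size)

X_train, X_test, y_train, y_test = train_test_split(
    residuals, labels, test_size=0.25, random_state=0)

# Keep the principal components explaining 95% of the variance.
pca = PCA(n_components=0.95).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

clf = Perceptron(max_iter=1000, random_state=0).fit(Z_train, y_train)
print(f"components kept: {pca.n_components_}")
print(f"test accuracy:   {clf.score(Z_test, y_test):.3f}")
```

Because the residual channels are strongly correlated, PCA typically retains far fewer components than measurement channels, which is what makes the preprocessing step worthwhile here.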
Recent research has shown the potential of feed-forward convolutional neural networks for fast image style transfer. In this work, we go one step further and explore the possibility of exploiting a feed-forward network to perform style transfer for videos while simultaneously maintaining temporal consistency among the stylized frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed that capitalizes on the content information of the input frames, the style information of a given style image, and the temporal information of consecutive frames. To compute the temporal loss during training, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our method employs the trained network to yield temporally consistent stylized videos that are much more visually pleasing. In contrast to the prior video style transfer method, which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.
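A hedged sketch of a hybrid loss of this kind, written in PyTorch. The content and style terms follow the common Gram-matrix formulation of neural style transfer; the temporal term penalizes differences between the stylized frame at time t and the stylized frame at time t-1 after warping it toward frame t, masked where the optical flow is reliable. The feature shapes, the occlusion mask, and the loss weights are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """Channel-wise Gram matrix of a (B, C, H, W) feature map."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def hybrid_loss(stylized_t, stylized_prev_warped, occlusion_mask,
                feat_content, feat_stylized, grams_style, grams_stylized,
                w_content=1.0, w_style=10.0, w_temporal=100.0):
    # Content: match high-level features of the input frame.
    loss_c = F.mse_loss(feat_stylized, feat_content)
    # Style: match Gram matrices of the style image across layers.
    loss_s = sum(F.mse_loss(g_out, g_sty)
                 for g_out, g_sty in zip(grams_stylized, grams_style))
    # Temporal: stylized frame t should agree with the warped stylized
    # frame t-1 wherever the mask marks the flow as reliable.
    loss_t = F.mse_loss(occlusion_mask * stylized_t,
                        occlusion_mask * stylized_prev_warped)
    return w_content * loss_c + w_style * loss_s + w_temporal * loss_t

# Toy tensors only, to show how the pieces fit together; in practice the
# features would come from a pretrained network such as VGG, and the
# warped frame from an optical-flow estimate.
stylized_t = torch.rand(1, 3, 64, 64)
stylized_prev_warped = torch.rand(1, 3, 64, 64)
mask = torch.ones(1, 1, 64, 64)
feat_c = torch.rand(1, 128, 16, 16)
feat_o = torch.rand(1, 128, 16, 16)
grams_style = [gram_matrix(torch.rand(1, 64, 32, 32))]
grams_out = [gram_matrix(torch.rand(1, 64, 32, 32))]
print(hybrid_loss(stylized_t, stylized_prev_warped, mask,
                  feat_c, feat_o, grams_style, grams_out))
```

The two-frame training described in the abstract would evaluate this loss on pairs of consecutive frames, so the temporal term directly couples the outputs the network produces for frame t-1 and frame t.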