Biblio

Filters: Author is Lu, Aidong
Yuan, Shuhan, Wu, Xintao, Li, Jun, Lu, Aidong.  2017.  Spectrum-based Deep Neural Networks for Fraud Detection. Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. pp. 2419–2422.
In this paper, we focus on fraud detection on a signed graph with only a small set of labeled training data. We propose a novel framework that combines deep neural networks and spectral graph analysis. In particular, we use the node projection (called the spectral coordinate) in the low-dimensional spectral space of the graph's adjacency matrix as the input to deep neural networks. Spectral coordinates in the spectral space capture the most useful topology information of the network. Because spectral coordinates have a much smaller dimension than the adjacency matrix derived from the graph, training deep neural networks becomes feasible. We develop and evaluate two neural networks, a deep autoencoder and a convolutional neural network, in our fraud detection framework. Experimental results on a real signed graph show that our spectrum-based deep neural networks are effective in fraud detection.
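
To make the framework's input concrete, here is a minimal sketch of computing spectral coordinates from a signed adjacency matrix and feeding them to a classifier. The toy graph, the choice of two spectral dimensions, and the MLP standing in for the paper's deep autoencoder/CNN are all illustrative assumptions, not the authors' implementation.

    # Minimal sketch: spectral coordinates of a signed graph as classifier input.
    # Assumptions (not from the paper): a tiny dense adjacency matrix, k = 2
    # spectral dimensions, and a generic MLP in place of the paper's models.
    import numpy as np
    from numpy.linalg import eigh
    from sklearn.neural_network import MLPClassifier

    def spectral_coordinates(A, k=2):
        # Eigendecompose the symmetric (signed) adjacency matrix and keep the
        # k eigenvectors with the largest-magnitude eigenvalues; row i of the
        # result is node i's spectral coordinate.
        eigvals, eigvecs = eigh(A)
        order = np.argsort(-np.abs(eigvals))
        return eigvecs[:, order[:k]]

    # Toy signed graph: +1 = positive edge, -1 = negative edge.
    A = np.array([[ 0,  1,  1, -1],
                  [ 1,  0,  1, -1],
                  [ 1,  1,  0, -1],
                  [-1, -1, -1,  0]], dtype=float)

    X = spectral_coordinates(A, k=2)   # 4 nodes -> 4 x 2 coordinates
    y = np.array([0, 0, 0, 1])         # hypothetical fraud labels

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    print(clf.predict(X))

The key point is the dimensionality: the classifier sees k-dimensional coordinates rather than full n-dimensional adjacency rows, which is what makes training deep models feasible with a small labeled set.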
Mahfoud, Eli, Lu, Aidong.  2016.  Gaze-directed Immersive Visualization of Scientific Ensembles. Proceedings of the 2016 ACM Companion on Interactive Surfaces and Spaces. pp. 77–82.
The latest advances in head-mounted displays (HMDs) for augmented reality (AR) and mixed reality (MR) have produced commercialized devices that are gradually being accepted by the public. These HMDs are generally equipped with head tracking, which provides an excellent input for exploring immersive visualization and interaction techniques in various AR/MR applications. This paper explores the head-tracking function of the latest Microsoft HoloLens, where gaze is defined as the ray starting at the head location and pointing forward. We present a gaze-directed visualization approach to study ensembles of 2D oil spill simulations in mixed reality. Our approach allows users to place an ensemble as an image stack in a real environment and explore the ensemble with gaze tracking. The prototype system demonstrates the challenges and promising effects of gaze-based interaction in state-of-the-art mixed reality.
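
To make the gaze definition concrete, the sketch below builds the gaze ray from a head position and forward vector and finds the first image-stack layer it crosses. The head pose values and the axis-aligned plane layout of the stack are illustrative assumptions, not HoloLens API code or the paper's system.

    # Minimal sketch of gaze-directed selection in an image stack.
    # Assumptions (illustrative only): the head pose is given as a position
    # and forward vector, and the ensemble is a stack of planes z = z0 + i*spacing.
    import numpy as np

    def gaze_ray(head_position, head_forward):
        # Gaze as defined in the paper: origin at the head location,
        # direction along the head's (normalized) forward vector.
        origin = np.asarray(head_position, dtype=float)
        direction = np.asarray(head_forward, dtype=float)
        return origin, direction / np.linalg.norm(direction)

    def hit_stack_layer(origin, direction, z0=1.0, spacing=0.1, n_layers=10):
        # Return the index of the nearest stack plane the gaze ray crosses,
        # or None if the ray misses the stack entirely.
        if abs(direction[2]) < 1e-9:
            return None                       # ray parallel to the planes
        hits = []
        for i in range(n_layers):
            t = (z0 + i * spacing - origin[2]) / direction[2]
            if t > 0:                         # plane lies in front of the viewer
                hits.append((t, i))
        return min(hits)[1] if hits else None

    origin, direction = gaze_ray([0.0, 1.6, 0.0], [0.0, -0.1, 1.0])
    print(hit_stack_layer(origin, direction))  # nearest layer for this head pose

Selecting a layer by gaze rather than by hand keeps the interaction hands-free, which is the property the paper explores for browsing ensembles in mixed reality.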