Biblio

Filters: Author is McAuley, Julian
Hussain, Shehzeen, Neekhara, Paarth, Jere, Malhar, Koushanfar, Farinaz, McAuley, Julian.  2021.  Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples. 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). :3347–3356.
Recent advances in video manipulation techniques have made the generation of fake videos more accessible than ever before. Manipulated videos can fuel disinformation and reduce trust in media. Therefore, detection of fake videos has garnered immense interest in academia and industry. Recently developed Deepfake detection methods rely on Deep Neural Networks (DNNs) to distinguish AI-generated fake videos from real videos. In this work, we demonstrate that it is possible to bypass such detectors by adversarially modifying fake videos synthesized using existing Deepfake generation methods. We further demonstrate that our adversarial perturbations are robust to image and video compression codecs, making them a real-world threat. We present pipelines in both white-box and black-box attack scenarios that can fool DNN-based Deepfake detectors into classifying fake videos as real.
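The abstract describes white-box and black-box pipelines without giving details, so the following is only a minimal white-box sketch in the spirit of a gradient-sign (FGSM-style) perturbation, not the authors' actual attack. The detector, the class ordering [real, fake], the epsilon budget, and the function name are all illustrative assumptions.

```python
# Hypothetical white-box sketch: nudge a fake frame so a binary Deepfake
# detector (logits over [real, fake]) classifies it as real.
import torch
import torch.nn.functional as F

def fgsm_toward_real(detector, frame, eps=2.0 / 255, real_label=0):
    """frame: (1, C, H, W) tensor with pixel values in [0, 1]."""
    frame = frame.clone().detach().requires_grad_(True)
    # Loss measured against the "real" class we want the detector to predict.
    loss = F.cross_entropy(detector(frame), torch.tensor([real_label]))
    loss.backward()
    # Step against the loss gradient so the prediction moves toward "real",
    # then clamp back to the valid pixel range.
    adv = (frame - eps * frame.grad.sign()).clamp(0.0, 1.0)
    return adv.detach()
```

In practice, the abstract's claim of robustness to compression codecs implies a stronger, iterative or expectation-over-transforms procedure; the single-step sketch above only illustrates the basic white-box mechanism.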
Pasricha, Rajiv, McAuley, Julian.  2018.  Translation-based Factorization Machines for Sequential Recommendation. Proceedings of the 12th ACM Conference on Recommender Systems. :63–71.

Sequential recommendation algorithms aim to predict users' future behavior given their historical interactions. A recent line of work has achieved state-of-the-art performance on sequential recommendation tasks by adapting ideas from metric learning and knowledge-graph completion. These algorithms replace inner products with low-dimensional embeddings and distance functions, employing a simple translation dynamic to model user behavior over time. In this paper, we propose TransFM, a model that combines translation and metric-based approaches for sequential recommendation with Factorization Machines (FMs). Doing so allows us to reap the benefits of FMs (in particular, the ability to straightforwardly incorporate content-based features), while enhancing the state-of-the-art performance of translation-based models in sequential settings. Specifically, we learn an embedding and translation space for each feature dimension, replacing the inner product with the squared Euclidean distance to measure the interaction strength between features. Like FMs, we show that the model equation for TransFM can be computed in linear time and optimized using classical techniques. As TransFM operates on arbitrary feature vectors, additional content information can be easily incorporated without significant changes to the model itself. Empirically, the performance of TransFM significantly increases when taking content features into account, outperforming state-of-the-art models on sequential recommendation tasks for a wide variety of datasets.
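As a reading aid, here is a minimal NumPy sketch of the scoring rule described in the abstract: each feature dimension gets an embedding and a translation vector, and pairwise interactions are scored by the squared Euclidean distance between the translated embedding of one feature and the embedding of the other, weighted by the feature values. The variable names and the direct quadratic loop are illustrative assumptions; the paper shows the model equation can be computed in linear time, which this sketch does not reproduce.

```python
# Hypothetical TransFM-style scorer (O(n^2) for clarity, not the paper's
# linear-time form).
import numpy as np

def transfm_score(x, w0, w, V, V_trans):
    """x: (n,) feature vector; w0: scalar bias; w: (n,) linear weights;
    V, V_trans: (n, k) embedding and translation matrices."""
    score = w0 + w @ x
    n = x.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            # Translated embedding of feature i vs. embedding of feature j.
            diff = (V[i] + V_trans[i]) - V[j]
            # Squared Euclidean distance replaces the FM inner product,
            # weighted by the interacting feature values.
            score += np.dot(diff, diff) * x[i] * x[j]
    return score
```

Because the interaction term operates on arbitrary feature vectors x, content features can be appended to x without changing the scoring function, which is the property the abstract highlights.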