Deep Cross-Modal Audio-Visual Generation

Title: Deep Cross-Modal Audio-Visual Generation
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Chen, Lele; Srivastava, Sudhanshu; Duan, Zhiyao; Xu, Chenliang
Conference Name: Proceedings of the Thematic Workshops of ACM Multimedia 2017
Publisher: ACM
Conference Location: New York, NY, USA
ISBN Number: 978-1-4503-5416-5
Keywords: audio-visual, cross-modal generation, generative adversarial learning, generative adversarial networks, metrics, pubcrawl, resilience, resiliency, scalability
Abstract

Cross-modal audio-visual perception has been a long-lasting topic in psychology and neurology, and various studies have discovered strong correlations in human perception of auditory and visual stimuli. Despite work on computational multimodal modeling, the problem of cross-modal audio-visual generation has not been systematically studied in the literature. In this paper, we make the first attempt to solve this cross-modal generation problem leveraging the power of deep generative adversarial training. Specifically, we use conditional generative adversarial networks to achieve cross-modal audio-visual generation of musical performances. We explore different encoding methods for audio and visual signals, and work on two scenarios: instrument-oriented generation and pose-oriented generation. Being the first to explore this new problem, we compose two new datasets with pairs of images and sounds of musical performances of different instruments. Our experiments using both classification and human evaluation demonstrate that our model has the ability to generate one modality, i.e., audio/visual, from the other modality, i.e., visual/audio, to a good extent. Our experiments on various design choices along with the datasets will facilitate future research in this new problem space.
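The abstract describes using conditional GANs to generate one modality from an encoding of the other. The following is a hypothetical minimal sketch of that setup in PyTorch; all dimensions, layer sizes, and names are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not from the paper): an audio feature
# vector conditions the generation of a flattened grayscale image.
AUDIO_DIM, NOISE_DIM, IMG_DIM = 128, 100, 64 * 64

class Generator(nn.Module):
    """Maps (noise, audio encoding) -> image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + AUDIO_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z, cond):
        # Conditioning is done by concatenating the audio encoding to the noise.
        return self.net(torch.cat([z, cond], dim=1))

class Discriminator(nn.Module):
    """Scores (image, audio encoding) pairs as real (1) vs. generated (0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + AUDIO_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=1))

# One forward pass on random stand-ins for a batch of audio/image pairs.
G, D = Generator(), Discriminator()
z = torch.randn(4, NOISE_DIM)
audio = torch.randn(4, AUDIO_DIM)
fake_images = G(z, audio)        # shape: (4, IMG_DIM)
realism = D(fake_images, audio)  # shape: (4, 1), values in (0, 1)
```

The same skeleton applies in the reverse direction (visual-to-audio) by swapping which modality is the condition and which is generated.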

URL: https://dl.acm.org/citation.cfm?doid=3126686.3126723
DOI: 10.1145/3126686.3126723
Citation Key: chen_deep_2017