Title | Image Modeling with Deep Convolutional Gaussian Mixture Models |
Publication Type | Conference Paper |
Year of Publication | 2021 |
Authors | Gepperth, Alexander, Pfülb, Benedikt |
Conference Name | 2021 International Joint Conference on Neural Networks (IJCNN) |
Date Published | July
Keywords | anomaly detection, compositionality, convolution, Deep Convolutional Gaussian Mixture Models, Deep Learning, Gaussian mixture model, Memory management, Neural networks, stochastic gradient descent, Stochastic processes, Training
Abstract | In this conceptual work, we present Deep Convolutional Gaussian Mixture Models (DCGMMs): a new formulation of deep hierarchical Gaussian Mixture Models (GMMs) that is particularly suitable for describing and generating images. Vanilla (i.e., flat) GMMs require a very large number of components to describe images well, leading to long training times and memory issues. DCGMMs avoid this with a stacked architecture of multiple GMM layers, linked by convolution and pooling operations, which makes it possible to exploit the compositionality of images in a similar way as deep CNNs do. DCGMMs can be trained end-to-end by Stochastic Gradient Descent. This sets them apart from vanilla GMMs, which are trained by Expectation-Maximization and require a prior k-means initialization that is infeasible in a layered structure. For generating sharp images with DCGMMs, we introduce a new gradient-based technique for sampling through non-invertible operations like convolution and pooling. On the MNIST and FashionMNIST datasets, we validate the DCGMM model by demonstrating its superiority over flat GMMs for clustering, sampling, and outlier detection.
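The abstract's central technical claim is that GMM parameters can be fit end-to-end by Stochastic Gradient Descent instead of Expectation-Maximization. The sketch below illustrates that idea for a single GMM layer in PyTorch; it is a minimal assumption-laden example, not the authors' implementation. It assumes diagonal covariances, and all names (`GMMLayer`, `n_components`, the synthetic batch, the hyperparameters) are illustrative.

```python
# Minimal sketch (assumption, not the authors' code): one diagonal-covariance
# GMM layer whose log-likelihood is maximized directly by SGD, illustrating
# the gradient-based training the paper proposes in place of EM.
import math
import torch
import torch.nn as nn

class GMMLayer(nn.Module):
    def __init__(self, n_components: int, dim: int):
        super().__init__()
        self.mu = nn.Parameter(0.1 * torch.randn(n_components, dim))   # component means
        self.log_sigma = nn.Parameter(torch.zeros(n_components, dim))  # log std-devs
        self.logit_pi = nn.Parameter(torch.zeros(n_components))        # mixture weight logits

    def log_likelihood(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, dim) -> per-sample log p(x) under the mixture."""
        log_pi = torch.log_softmax(self.logit_pi, dim=0)               # (K,)
        diff = x.unsqueeze(1) - self.mu                                # (B, K, dim)
        var = torch.exp(2.0 * self.log_sigma)                          # (K, dim)
        log_comp = -0.5 * (diff.pow(2) / var + 2.0 * self.log_sigma
                           + math.log(2.0 * math.pi)).sum(dim=-1)      # (B, K)
        return torch.logsumexp(log_pi + log_comp, dim=1)               # (B,)

# End-to-end training: minimize the negative log-likelihood with SGD.
layer = GMMLayer(n_components=25, dim=28 * 28)
opt = torch.optim.SGD(layer.parameters(), lr=0.01)
for step in range(100):
    x = torch.rand(64, 28 * 28)  # stand-in for a batch of flattened MNIST images
    loss = -layer.log_likelihood(x).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a full DCGMM, per the abstract, several such layers would be stacked and linked by convolution and pooling operations, with the loss backpropagated through the whole stack; the paper's gradient-based sampling through those non-invertible operations is not shown here.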
DOI | 10.1109/IJCNN52387.2021.9533745 |
Citation Key | gepperth_image_2021 |