Representation learning with MMD-VAE
Like GANs, variational autoencoders (VAEs) are often used to generate images. However, VAEs add an additional promise: namely, to model an underlying latent space. Here, we first look at a typical implementation that maximizes the evidence lower bound. Then, we compare it to one of the more recent competitors, MMD-VAE, from the Info-VAE (information maximizing VAE) family.
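To make the comparison concrete: where the ELBO regularizes each posterior q(z|x) toward the prior via a KL term, MMD-VAE instead penalizes the maximum mean discrepancy (MMD) between samples from the prior and the encoded latent codes. The following is a minimal, framework-free sketch of that MMD statistic in plain R with a Gaussian kernel; the function names (compute_kernel, compute_mmd) and the toy data are illustrative only, not the typical implementation discussed in the post.

```r
# Plug-in estimator of MMD between two samples, using a Gaussian (RBF) kernel.
compute_kernel <- function(x, y, sigma = 1) {
  # x: n x d matrix, y: m x d matrix; returns the n x m kernel matrix
  sq_dist <- outer(rowSums(x^2), rowSums(y^2), "+") - 2 * x %*% t(y)
  exp(-sq_dist / (2 * sigma^2))
}

compute_mmd <- function(x, y, sigma = 1) {
  mean(compute_kernel(x, x, sigma)) +
    mean(compute_kernel(y, y, sigma)) -
    2 * mean(compute_kernel(x, y, sigma))
}

# Toy example: samples from a standard normal prior vs. (hypothetical) encoded codes
z_prior   <- matrix(rnorm(200), ncol = 2)
z_encoded <- matrix(rnorm(200, mean = 0.5), ncol = 2)
compute_mmd(z_prior, z_encoded)
```

The value is close to zero when the two samples come from the same distribution and grows as they diverge, which is what lets MMD serve as a regularizer in place of the KL term.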