# Variational Autoencoder (VAE) Variants

This repository experiments with the following VAE variants on MNIST & CelebA using PyTorch. Each method is explained in the notebook prior to modelling, and a minimal sketch of the basic VAE follows the list:

  • Basic (vanilla) VAE
  • Beta VAE
  • Dirichlet VAE
  • Vector Quantized VAE
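
For reference, here is a minimal sketch of the basic (vanilla) VAE in PyTorch. The fully connected encoder/decoder widths are illustrative assumptions, not necessarily the architecture used in the notebooks:

```python
import torch
import torch.nn as nn

class VanillaVAE(nn.Module):
    """Minimal fully connected VAE for flattened 28x28 MNIST images."""

    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder maps an image to the parameters of q(z|x)
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder maps a latent sample back to pixel space
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
            nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps the sampling step differentiable
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def forward(self, x):
        h = self.encoder(x.view(x.size(0), -1))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar
```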

From experimentation in basic_vae, we settled on the following hyperparameters (a training-setup sketch follows the list):

  1. num_epochs = 30
  2. learning_rate = 25e-4
  3. latent_dim = 20
  4. batch_size = 128
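
A sketch of how these settings plug into a standard training step, reusing the VanillaVAE sketch above. The loss is the usual reconstruction + KL decomposition of the negative ELBO; the MNIST data pipeline here is an assumption:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

num_epochs = 30
learning_rate = 25e-4  # 0.0025
latent_dim = 20
batch_size = 128

# Assumed data pipeline: MNIST digits as [0, 1] tensors, which suits the BCE term
train_loader = DataLoader(
    datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=batch_size,
    shuffle=True,
)

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term: binary cross-entropy summed over pixels and batch
    recon = F.binary_cross_entropy(recon_x, x.view(x.size(0), -1), reduction="sum")
    # KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I)
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld, recon, kld

model = VanillaVAE(latent_dim=latent_dim)  # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

model.train()
for epoch in range(num_epochs):
    for x, _ in train_loader:
        recon_x, mu, logvar = model(x)
        loss, _, _ = vae_loss(recon_x, x, mu, logvar)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```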

## Results

| VAE Variant | Parameters | Total Loss | Reconstruction Loss | KL-D / VQ Loss | Training Time |
|---|---|---|---|---|---|
| General VAE | 310,504 | 97.72 | 73.21 | 24.51 | 20 min 26s |
| Beta VAE | 236,740 | 131.97 | 94.51 | 12.49 | 16 min 49s |
| Dirichlet VAE | 284,405 | 0.92 | 0.77 | 0.15 | 21 min 20s |
| VQ-VAE | 299,985 | 0.9079 | 0.0012 | 0.9067 (VQ) | 61 min 44s |
| VQ-VAE (CelebA) | 303,187 | 0.0104 | 0.0079 | 0.0025 (VQ) | 80 min 41s (GPU) |

Note: the Beta VAE objective weights the KL term by β > 1, which is why its Total Loss exceeds the plain sum of the two loss columns.
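
For the VQ-VAE rows, the KL term is replaced by the vector-quantization objective, i.e., the codebook and commitment terms. A minimal sketch of that term, using the commitment weight β = 0.25 from the original VQ-VAE paper as an illustrative default:

```python
import torch.nn.functional as F

def vq_loss(z_e, z_q, beta=0.25):
    """Vector-quantization term: codebook loss + weighted commitment loss.

    z_e: continuous encoder output; z_q: nearest codebook embeddings.
    """
    # Codebook loss: move the embeddings toward the (frozen) encoder outputs
    codebook = F.mse_loss(z_q, z_e.detach())
    # Commitment loss: keep the encoder output close to its chosen (frozen) codes
    commitment = F.mse_loss(z_e, z_q.detach())
    return codebook + beta * commitment
```

In the full model the quantized codes are passed to the decoder through a straight-through estimator, `z_q = z_e + (z_q - z_e).detach()`, so reconstruction gradients flow back to the encoder despite the non-differentiable nearest-neighbour lookup.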
