Welcome to the Variational Autoencoder (VAE) implementation repository!
This repository contains a from-scratch implementation of a Variational Autoencoder (VAE), trained on the MNIST and CIFAR-10 datasets. A VAE is a generative model that learns to encode data into a latent space and decode it back into the original data space. The focus here is on understanding the core building blocks of VAEs (encoder, decoder, the reparameterization trick, and the ELBO loss) rather than relying on high-level model APIs.
VAEs are widely used in applications such as image generation, anomaly detection, and data compression. This repository provides a step-by-step implementation in Python.
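To make the core idea concrete, here is a minimal NumPy sketch of two VAE building blocks: the reparameterization trick and the closed-form KL term of the ELBO loss. The function names are illustrative and are not the repository's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    Writing the sample this way lets gradients flow through
    mu and log_var during training.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) in closed form, averaged over the batch."""
    return -0.5 * np.mean(np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1))

# Toy batch: 4 samples, latent dimension 2, with q(z|x) = N(0, I).
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))
z = reparameterize(mu, log_var)
print(z.shape)                      # (4, 2)
print(kl_divergence(mu, log_var))  # 0.0, since q already matches the prior
```

In a full VAE, this KL term is added to a reconstruction loss (e.g. binary cross-entropy for MNIST pixels) to form the negative ELBO that training minimizes.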
For additional background, see this link.
Before you begin, ensure you have met the following requirements:
- Python 3.9 or later
- NumPy
- TensorFlow 2.x
- Matplotlib
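The requirements above can be installed with pip; the commands below are a minimal sketch assuming a standard Python 3.9+ environment (pin exact versions as needed for your setup):

```shell
# Create an isolated environment and install the dependencies listed above.
python3 -m venv .venv
source .venv/bin/activate
pip install numpy "tensorflow>=2.0" matplotlib
```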
- `vae_mnist.ipynb`: Jupyter notebook for training the VAE model on the MNIST dataset.
- `vae_cifar10.ipynb`: Jupyter notebook for training the VAE model on the CIFAR-10 dataset.
- Kingma, D. P., & Welling, M. (2013). Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114.
- Doersch, C. (2016). Tutorial on Variational Autoencoders. arXiv preprint arXiv:1606.05908.
This project is implemented by Faezeh. For more information and updates, visit Curious Seekers Hub.