FaceMix

Interactive demo: click the button to blend two faces (Face 1 + Face 2 → Mixed).

Currently training a better model!

A VAE for Image Reconstruction and Generation

This project implements a Variational Autoencoder (VAE) for image reconstruction and generative modeling. A VAE learns a latent representation of images that can be sampled from to generate new outputs with characteristics similar to the training data. Built on a standard encoder-decoder architecture, the implementation includes the core VAE components: the reparameterization trick, KL-divergence regularization, and a reconstruction loss.
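Tying the pieces together, the full model encodes an image, samples a latent vector, and decodes it back; two faces can then be mixed by interpolating their latent codes. The sketch below assumes this interpolation scheme (the actual mixing mechanism isn't documented here) and uses the Encoder, Decoder, and reparameterize components sketched in the sections that follow:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Full model: encode, sample via reparameterization, decode."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = Encoder(latent_dim)  # sketched under "Encoder (Inference Model)"
        self.decoder = Decoder(latent_dim)  # sketched under "Decoder (Generative Model)"

    def forward(self, x):
        mu, logvar = self.encoder(x)
        z = reparameterize(mu, logvar)      # sketched under "Latent Space & Reparameterization"
        return self.decoder(z), mu, logvar

def mix_faces(model: VAE, x1: torch.Tensor, x2: torch.Tensor, alpha: float = 0.5):
    """Blend two faces by linearly interpolating their latent means (assumed mixing scheme)."""
    with torch.no_grad():
        mu1, _ = model.encoder(x1)
        mu2, _ = model.encoder(x2)
        z = alpha * mu1 + (1 - alpha) * mu2
        return model.decoder(z)
```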

Architecture Overview

Encoder (Inference Model)
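The encoder maps an input image to the parameters of a diagonal Gaussian posterior q(z|x): a mean vector μ and a log-variance vector log σ². A minimal convolutional sketch; the input resolution (64×64), channel counts, and latent dimension are illustrative assumptions, not the trained model's configuration:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB image to the mean and log-variance of q(z|x)."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        return self.fc_mu(h), self.fc_logvar(h)
```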

Latent Space & Reparameterization

Sampling z directly from N(μ, σ²I) would break backpropagation, so we instead sample ε ~ N(0, I) and construct z = μ + σ ⊙ ε (elementwise product). This reparameterization moves the stochasticity into ε, allowing gradients to flow through μ and σ.
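In code, the trick is only a few lines. This sketch assumes the encoder outputs log σ² rather than σ, the usual convention for numerical stability:

```python
import torch

def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Sample z = mu + sigma * eps with eps ~ N(0, I), keeping mu and sigma differentiable."""
    std = torch.exp(0.5 * logvar)  # sigma = exp(log(sigma^2) / 2)
    eps = torch.randn_like(std)    # noise is sampled outside the computation graph
    return mu + std * eps
```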

Decoder (Generative Model)
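The decoder mirrors the encoder, mapping a latent vector z back to image space with transposed convolutions. The layer sizes below match the illustrative encoder above and are likewise assumptions:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Maps a latent vector z back to a 64x64 RGB image."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.deconv(h)
```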

Implemented Solutions

Loss Function
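The training objective is the negative ELBO: a reconstruction term plus the KL divergence between the posterior N(μ, σ²I) and the prior N(0, I), which has the closed form −½ Σ(1 + log σ² − μ² − σ²). The sketch below assumes binary cross-entropy for the reconstruction term (mean squared error is the common alternative) and includes a β weight on the KL term as a common tuning knob:

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar, beta: float = 1.0):
    """Negative ELBO: reconstruction loss + beta * KL(q(z|x) || N(0, I))."""
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```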

Optimizer and Training Enhancements
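The specific enhancements are not spelled out here; below is a representative training loop, assuming Adam, a cosine learning-rate schedule, and gradient clipping, all common choices rather than confirmed details of this project:

```python
import torch

model = VAE(latent_dim=128)  # wraps the Encoder and Decoder sketched above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for epoch in range(100):
    for x, _ in train_loader:  # train_loader: a DataLoader of face images (assumed)
        recon_x, mu, logvar = model(x)
        loss = vae_loss(recon_x, x, mu, logvar)
        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
    scheduler.step()
```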

Visualization and Monitoring
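A typical monitoring setup logs the loss curve, reconstruction grids, and samples drawn from the prior. The sketch below assumes torchvision and TensorBoard; the project's actual tooling is not specified:

```python
import torch
from torchvision.utils import make_grid
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/facemix")  # log directory is an arbitrary choice

@torch.no_grad()
def log_epoch(model, x, loss, epoch, latent_dim: int = 128):
    """Log the training loss, a reconstruction grid, and samples from the prior."""
    writer.add_scalar("loss/train", loss, epoch)
    model.eval()
    recon_x, _, _ = model(x)
    grid = make_grid(torch.cat([x[:8], recon_x[:8]]), nrow=8)  # row 1: inputs, row 2: reconstructions
    writer.add_image("reconstructions", grid, epoch)
    z = torch.randn(8, latent_dim, device=x.device)  # z ~ N(0, I)
    writer.add_image("prior_samples", make_grid(model.decoder(z), nrow=8), epoch)
    model.train()
```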