
# AutoencoderCIFAR-10

Here are the results of the first 8 topologies. Two of them (#6 and #7) are invalid because of their latent space: at 8192 dimensions it is larger than the 32×32×3 = 3072-dimensional input, so it does not actually compress the image.

| Autoencoder id | #1 | #2 | #3 | #4 | #5 | #6 | #7 | #8 |
|---|---|---|---|---|---|---|---|---|
| Loss (×10⁻³) | 2.30 | 1.50 | 1.50 | 0.89 | 0.99 | 0.92 | 0.44 | 2.40 |
| Latent space | 2048 | 2048 | 2048 | 2048 | 2048 | 8192 | 8192 | 2048 |
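
For reference, a minimal PyTorch sketch of what one of these convolutional autoencoders might look like. The exact layer structure of topologies #1–#8 is defined in the repository code, so the layer counts, channel widths, and activations below are illustrative assumptions; only the configurable latent-space size mirrors the tables.

```python
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Toy CIFAR-10 autoencoder with a configurable latent-space (LS) size."""

    def __init__(self, latent_dim: int = 2048):
        super().__init__()
        # Encoder: 32x32x3 -> 16x16x32 -> 8x8x64 -> latent_dim
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        # Decoder mirrors the encoder back to a 32x32x3 image in [0, 1]
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```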

## Autoencoder topology comparison w.r.t. different latent spaces

The medal badges rank the five topologies within each latent-space (LS) size; `----` marks missing runs.

| Loss (×10⁻³) | #1 | #2 | #3 | #4 | #5 |
|---|---|---|---|---|---|
| 2048 LS | 2.30 5️⃣ | 1.50 4️⃣ | 1.50 3️⃣ | 0.89 🥇 | 0.99 2️⃣ |
| 1024 LS | 5.20 5️⃣ | 5.00 4️⃣ | 4.70 2️⃣ | 3.00 🥇 | 4.70 3️⃣ |
| 512 LS | 5.90 5️⃣ | 5.20 4️⃣ | 5.20 2️⃣ | 4.00 🥇 | 5.60 4️⃣ |
| 256 LS | ---- | 6.10 3️⃣ | 5.70 2️⃣ | 4.80 🥇 | 15.80 4️⃣ |
| 128 LS | 8.20 5️⃣ | 7.10 3️⃣ | 6.80 2️⃣ | 6.50 🥇 | 8.20 4️⃣ |
| 64 LS | 10.20 4️⃣ | 9.50 3️⃣ | 8.80 🥇 | 9.30 2️⃣ | 18.90 5️⃣ |
| 32 LS | ---- | 14.10 3️⃣ | 12.90 🥇 | 13.10 2️⃣ | 14.20 4️⃣ |
| 16 LS | ---- | 18.80 3️⃣ | 17.30 🥇 | 18.00 2️⃣ | 19.50 4️⃣ |
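
This comparison amounts to retraining each topology with a progressively smaller bottleneck. A hypothetical sweep reusing the `ConvAutoencoder` sketch above; `train_and_eval` is a stand-in for the repository's actual training routine, not a real entry point:

```python
# Sweep the latent-space sizes from the table above.
results = {}
for latent_dim in [2048, 1024, 512, 256, 128, 64, 32, 16]:
    model = ConvAutoencoder(latent_dim=latent_dim)
    results[latent_dim] = train_and_eval(model)  # hypothetical: returns final loss
```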

## Batch Normalized Topologies (50 epochs)

Obtained with learning rate 1e-04 and batch size 128

| Loss (×10⁻³) | #1 | #2 | #3 | #4 | #5 |
|---|---|---|---|---|---|
| 2048 LS | 2.9 | 2.5 | 24 | 1.2 🏆 | 1.6 |
| 1024 LS | 2.8 | 2.6 | 2.4 | 1.3 🏆 | 2.3 |
| 512 LS | 3.0 | 2.8 | 2.7 | 1.6 🏆 | 2.9 |
| 256 LS | 3.4 | 3.2 | 3.5 | 2.9 🏆 | 4.9 |
| 128 LS | 4.9 | 4.9 | 5.0 🏆 | 5.1 | 7.9 |
| 64 LS | 7.8 | 7.7 🏆 | 7.8 | 8.1 | 10.1 |
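
The batch-normalized runs add a normalization step after each convolution and use the hyperparameters stated above (learning rate 1e-04, batch size 128). A hedged sketch of such a setup, again reusing the `ConvAutoencoder` sketch; the MSE criterion and Adam optimizer are assumptions, as the README does not name them:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def bn_conv(c_in: int, c_out: int) -> nn.Sequential:
    """Batch-normalized encoder block: Conv -> BatchNorm -> ReLU."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
    )

model = ConvAutoencoder(latent_dim=2048)
model.encoder = nn.Sequential(      # swap in the batch-normalized blocks
    bn_conv(3, 32),
    bn_conv(32, 64),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 2048),
)

loader = DataLoader(
    datasets.CIFAR10("data", train=True, download=True,
                     transform=transforms.ToTensor()),
    batch_size=128,                 # batch size from the README
    shuffle=True,
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr from the README
criterion = nn.MSELoss()            # assumed reconstruction loss

for epoch in range(50):             # the next section extends these runs to 75 epochs
    for x, _ in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), x)
        loss.backward()
        optimizer.step()
```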

## Batch Normalized Topologies (75 epochs)

Obtained with learning rate 1e-04 and batch size 128. The loss gain column reports how the 🏆-marked loss changed with respect to the 50-epoch runs.

| Loss (×10⁻³) | #1 | #2 | #3 | #4 | #5 | Loss gain |
|---|---|---|---|---|---|---|
| 2048 LS | 2.6 | 2.5 | 11 | 0.9 🏆 | 2.8 | -0.3 :heavy_check_mark: |
| 1024 LS | 2.5 | 2.5 | 2.2 | 1.2 🏆 | 2.0 | -0.1 :heavy_check_mark: |
| 512 LS | 2.7 | 2.5 | 2.5 | 1.5 🏆 | 2.7 | -0.1 :heavy_check_mark: |
| 256 LS | 3.2 | 3.2 | 3.2 | 2.8 🏆 | 4.5 | -0.1 :heavy_check_mark: |
| 128 LS | 5.0 | 4.9 | 4.9 🏆 | 5.0 | 4.8 | -0.1 :heavy_check_mark: |
| 64 LS | 7.7 | 7.7 🏆 | 7.9 | 8.1 | 10.1 | +0.0 :heavy_minus_sign: |
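
The gains follow directly from the two tables, as this small check (values copied from the 🏆-marked cells) shows:

```python
# Loss gain per latent size: 🏆-marked loss at 75 epochs minus at 50 epochs (×10⁻³).
best_50 = {2048: 1.2, 1024: 1.3, 512: 1.6, 256: 2.9, 128: 5.0, 64: 7.7}
best_75 = {2048: 0.9, 1024: 1.2, 512: 1.5, 256: 2.8, 128: 4.9, 64: 7.7}
gains = {ls: round(best_75[ls] - best_50[ls], 1) for ls in best_50}
# {2048: -0.3, 1024: -0.1, 512: -0.1, 256: -0.1, 128: -0.1, 64: 0.0}
```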
