
Add autoencoders #22

@aromanro

Description


This could be done for EMNIST or something else.

Train an autoencoder (encoder + decoder) to learn an efficient encoding/feature representation in an unsupervised manner. Then throw away the decoder, fix the encoder weights, add some layers on top (perhaps a single softmax layer would suffice) and do supervised learning for those layers only (or maybe fine-tune the whole network) to classify digits/letters, perhaps using only a subset of the training data for the supervised stage.
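A minimal NumPy sketch of the two-stage idea, using a synthetic two-class dataset as a stand-in for EMNIST; all variable names, dimensions, and hyperparameters here are illustrative assumptions, not anything from the repository:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for EMNIST: two Gaussian clusters in 8-D, labels 0/1.
n, d, h = 200, 8, 3
X = np.vstack([rng.normal(-1.0, 0.5, size=(n // 2, d)),
               rng.normal(+1.0, 0.5, size=(n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Stage 1: unsupervised pretraining of the autoencoder ---
We = rng.normal(0, 0.1, (d, h)); be = np.zeros(h)   # encoder
Wd = rng.normal(0, 0.1, (h, d)); bd = np.zeros(d)   # decoder
lr = 0.05

def recon_loss():
    Z = sigmoid(X @ We + be)
    return np.mean((Z @ Wd + bd - X) ** 2)

loss_before = recon_loss()
for _ in range(500):
    Z = sigmoid(X @ We + be)        # encode
    Xr = Z @ Wd + bd                # decode (linear output)
    err = (Xr - X) / n              # MSE gradient (constant factors absorbed in lr)
    dZ = err @ Wd.T * Z * (1 - Z)   # backprop through the sigmoid
    Wd -= lr * (Z.T @ err); bd -= lr * err.sum(0)
    We -= lr * (X.T @ dZ);  be -= lr * dZ.sum(0)
loss_after = recon_loss()           # should be lower than loss_before

# --- Stage 2: discard the decoder, freeze the encoder, train a softmax head ---
Z = sigmoid(X @ We + be)            # fixed features from the frozen encoder
Wc = np.zeros((h, 2)); bc = np.zeros(2)
Y = np.eye(2)[y]                    # one-hot labels
for _ in range(500):
    P = np.exp(Z @ Wc + bc)
    P /= P.sum(1, keepdims=True)    # softmax
    g = (P - Y) / n                 # cross-entropy gradient
    Wc -= 0.5 * (Z.T @ g); bc -= 0.5 * g.sum(0)
acc = np.mean((Z @ Wc + bc).argmax(1) == y)
```

Only `Wc`/`bc` are updated in stage 2, which is the "train the new layers only" variant; fine-tuning the whole network would simply mean also applying gradient updates to `We`/`be` in that loop.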

This would be quite an easy and nice example of unsupervised learning, encoder/decoder architectures, transfer learning, unsupervised pretraining and some other buzzwords :)

Metadata

Labels: enhancement (New feature or request)
