The aim of this project is to compare traditional AI training with this data-parallel version. It borrows the Darwinian idea of natural selection: the "fittest" model survives each epoch and "carries" its beneficial "traits" to all the other cores, which makes the model learn faster. The algorithm trains the same neural network, one that predicts Apple stock prices from 1980-2019 (although any model can be used), on several child cores. After each epoch, the master core selects the model with the least error and sends it back to the child cores as the starting point for the next epoch, repeating until training is complete.
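As a rough illustration of that select-and-broadcast loop, here is a minimal sketch assuming mpi4py for the master/child communication. The tiny numpy linear model, the placeholder data, and all variable names are hypothetical stand-ins, not the project's actual stock-price network.

```python
# Hypothetical sketch of the "fittest model" loop using mpi4py (an assumption,
# not necessarily the library the project uses). A tiny numpy linear model
# stands in for the stock-price neural network so the example is self-contained.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

rng = np.random.default_rng(rank)           # different data ordering per core
X = np.linspace(0, 1, 200).reshape(-1, 1)   # placeholder for price features
y = 3.0 * X[:, 0] + 1.0                     # placeholder for target prices

weights = np.zeros(2)                        # shared starting model: [w, b]
EPOCHS, LR = 20, 0.1

for epoch in range(EPOCHS):
    # Each child core trains its own copy for one epoch.
    w = weights.copy()
    for i in rng.permutation(len(X)):
        pred = w[0] * X[i, 0] + w[1]
        err = pred - y[i]
        w -= LR * err * np.array([X[i, 0], 1.0])

    # Report this core's error and weights to the master (rank 0).
    loss = float(np.mean((w[0] * X[:, 0] + w[1] - y) ** 2))
    results = comm.gather((loss, w), root=0)

    # Master keeps the model with the least error ("fittest") ...
    best_w = None
    if rank == 0:
        best_loss, best_w = min(results, key=lambda r: r[0])
        print(f"epoch {epoch}: best MSE = {best_loss:.6f}")

    # ... and broadcasts it back to every child core for the next epoch.
    weights = comm.bcast(best_w, root=0)
```

Run with something like `mpirun -n 4 python sketch.py`; each rank plays a child core, and rank 0 doubles as the master that performs the selection.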
About
Parallelization of model training to end up with "fittest" model