
Using Evolutionary Algorithms for Neural Network weight optimisation. Research and implementations for the Evolutionary Computing course at Vrije Universiteit Amsterdam


Evolutionary Computing: Training Evoman

This repository contains the code and research papers of two group projects (tasks) that are part of the Master's course Evolutionary Computing at the Vrije Universiteit Amsterdam (2024-2025). Both tasks involved the implementation of Evolutionary Algorithms (EAs) to optimise the controller of an autonomous video game agent (player character) in the Evoman framework. Specifically, the EAs optimised the weights of the Neural Network controlling the player character. Performance of the algorithms was measured by pitting the player character against the AI-controlled enemies included in the framework. The EAs were constructed in line with Eiben and Smith (2015).
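To make the setup concrete, the sketch below shows how an EA individual (a flat vector of neural network weights) is typically turned into a fitness value by letting the agent play one match in the Evoman framework. The import paths, Environment arguments, and number of hidden neurons are assumptions based on the publicly available Evoman framework and may differ from the notebooks in this repository.

# Minimal sketch, assuming the public Evoman framework API; import paths,
# Environment arguments and the network size are assumptions, not this repo's exact setup.
import numpy as np
from evoman.environment import Environment
from demo_controller import player_controller

N_HIDDEN = 10  # assumed number of hidden neurons in the player's network

env = Environment(
    experiment_name="specialist_test",
    enemies=[2],                                   # a single training enemy
    playermode="ai",
    player_controller=player_controller(N_HIDDEN),
    enemymode="static",
    level=2,
    speed="fastest",
)

def evaluate(weights: np.ndarray) -> float:
    """Play one match with the given weight vector and return the framework's fitness."""
    fitness, player_life, enemy_life, play_time = env.play(pcont=weights)
    return fitness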

Task 1 - Specialist Agent: Comparing Steady State And Generational Survivor Selection

The first project involved training a "specialist" agent against a single enemy. Subsequently, the agent was tested against three enemies, including the one it had been trained on. This task required implementing two variations of an Evolutionary Algorithm from scratch. The base version of our implementation combined components of Genetic Algorithms and Evolution Strategies. Our research compared two approaches for the survivor selection component of the implemented EA, namely Steady State and Generational Selection.
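To illustrate the difference between the two schemes, the sketch below contrasts Generational and Steady State survivor selection on a population stored as (genome, fitness) pairs; the elitism and replacement details are illustrative assumptions, not the exact operators described in our paper.

# Minimal sketch of the two survivor-selection schemes compared in Task 1.
# A population is a list of (genome, fitness) tuples; details are illustrative.

def generational_selection(parents, offspring, n_elite=1):
    """Replace the whole population with offspring, keeping the n_elite best parents."""
    elite = sorted(parents, key=lambda ind: ind[1], reverse=True)[:n_elite]
    rest = sorted(offspring, key=lambda ind: ind[1], reverse=True)
    return (elite + rest)[:len(parents)]

def steady_state_selection(population, offspring):
    """Insert each child into the population by replacing the current worst individual."""
    population = list(population)
    for child in offspring:
        worst = min(range(len(population)), key=lambda i: population[i][1])
        if child[1] > population[worst][1]:
            population[worst] = child
    return population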

Task 2 - Generalist Agent: Comparing Scalarized And Pareto Approaches For Handling Multiple Objectives

The second project involved the weight optimisation of a "generalist" agent, with the goal of obtaining high performance against multiple enemies. The performance of the agent was evaluated against all eight enemies in the Evoman framework. The aim was to maximise the number of enemies beaten, making this an example of a Multi-Objective Optimisation (MOO) problem. As MOO algorithms can be quite sophisticated, it was most feasible to use the pymoo implementation of U-NSGA-III for this project. This allowed us to compare the performance differences between a Scalarization approach and a Pareto approach to MOO.
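As a rough illustration of the two approaches, the sketch below formulates the generalist problem for pymoo's U-NSGA-III once with one objective per enemy (Pareto) and once as a single averaged objective (Scalarization). The fitness call, number of weights, and hyperparameters are placeholders rather than the settings from our experiments; note that pymoo minimises, so fitness values are negated.

# Minimal sketch, assuming pymoo's U-NSGA-III API; evaluate_against() is a stand-in
# for an Evoman match, and all sizes/hyperparameters are illustrative only.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.unsga3 import UNSGA3
from pymoo.util.ref_dirs import get_reference_directions
from pymoo.optimize import minimize

ENEMIES = list(range(1, 9))   # all eight enemies
N_WEIGHTS = 265               # assumed size of the controller's weight vector

def evaluate_against(weights, enemy):
    """Dummy stand-in; a real run would play an Evoman match against `enemy`."""
    return float(np.random.rand())

class ParetoEvoman(ElementwiseProblem):
    """Pareto approach: one objective per enemy."""
    def __init__(self):
        super().__init__(n_var=N_WEIGHTS, n_obj=len(ENEMIES), xl=-1.0, xu=1.0)
    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = [-evaluate_against(x, e) for e in ENEMIES]

class ScalarizedEvoman(ElementwiseProblem):
    """Scalarization approach: all enemy fitnesses collapsed into one mean objective."""
    def __init__(self):
        super().__init__(n_var=N_WEIGHTS, n_obj=1, xl=-1.0, xu=1.0)
    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = [-np.mean([evaluate_against(x, e) for e in ENEMIES])]

ref_dirs = get_reference_directions("das-dennis", len(ENEMIES), n_partitions=2)
res = minimize(ParetoEvoman(), UNSGA3(ref_dirs=ref_dirs, pop_size=64), ("n_gen", 30))
# The scalarized problem is run the same way, with a single reference direction.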

Our research was awarded the grades 9.6 (Task 1) and 10 (Task 2). The generalist agent we created for Task 2 additionally took third place in the course-internal competition.

Installation and usage

The code can be executed in a conda environment. We recommend installing Miniconda; a guide for doing so can be found here.

To install the requirements for running the code, you can clone this repo and create a conda environment (which will be named vu_evocomp) with Python 3.11 and the necessary dependencies by running the following commands in your CLI:

git clone https://github.com/mklblm/VU-Evolutionary-Computing
cd VU-Evolutionary-Computing
conda env create -f environment.yml
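
After the environment has been created, activate it before running any of the code:

conda activate vu_evocomp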

Alternatively, you can create a Python 3.11 environment by other means and install the required packages by running:

git clone https://github.com/mklblm/VU-Evolutionary-Computing
cd VU-Evolutionary-Computing
pip install -r requirements.txt

Running the experiments

The Jupyter notebooks for all our implementations contain cells in which the hyperparameters of the experiments can be configured. Please refer to the relevant papers provided if you wish to replicate a specific experiment.
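
As an illustration, such a configuration cell might look like the sketch below; the parameter names and values are hypothetical and should be checked against the actual notebooks and papers.

# Hypothetical hyperparameter cell; names and values are illustrative only.
POPULATION_SIZE = 100
N_GENERATIONS = 50
MUTATION_RATE = 0.2
CROSSOVER_RATE = 0.9
N_HIDDEN_NEURONS = 10
ENEMIES = [2, 5, 8]
RANDOM_SEED = 42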

Authors

References

Eiben, A. E., & Smith, J. E. (2015). Introduction to Evolutionary Computing (2nd ed.). Springer.

License

This project is licensed under the MIT License - see the LICENSE.md file for details
