This repository contains the code and research papers of two group projects (tasks) that are part of the Master's course Evolutionary Computing (course description) at the Vrije Universiteit Amsterdam (2024-2025). Both tasks involved implementing Evolutionary Algorithms (EAs) to optimise the controller of an autonomous video game agent (player character) in the Evoman framework. Specifically, the EAs optimised the weights of the neural network controlling the player character. The performance of the algorithms was measured through competition between the player character and the AI-controlled enemies included in the framework. The EAs were constructed in line with Eiben and Smith (2015).
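In this setup, a candidate solution is simply the flat vector of neural-network weights. As a minimal illustration (the dimensions, weight range, and mutation step below are arbitrary, not the values used in the projects), a genotype and an ES-style mutation operator could look like this:

```python
import random

def make_genotype(n_weights: int) -> list[float]:
    """A candidate controller: one flat vector of neural-network weights."""
    return [random.uniform(-1.0, 1.0) for _ in range(n_weights)]

def gaussian_mutation(genotype: list[float], sigma: float = 0.1) -> list[float]:
    """ES-style mutation: perturb every weight with Gaussian noise."""
    return [w + random.gauss(0.0, sigma) for w in genotype]
```

The EA then evolves a population of such vectors, using each agent's performance against the enemies as its fitness.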
Task 1 - Specialist Agent: Comparing Steady State And Generational Survivor Selection
The first project involved training a "specialist" agent against a single enemy. The agent was subsequently tested against three enemies, including the one it had been trained on. This task required implementing two variations of an Evolutionary Algorithm from scratch. The base version of our implementation combined components of Genetic Algorithms and Evolution Strategies. Our research compared two approaches for the survivor-selection component of the implemented EA, namely Steady State and Generational Selection.
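The two survivor-selection schemes can be contrasted schematically as follows (an illustrative sketch assuming higher fitness is better, not the exact operators from our implementation):

```python
def generational_selection(parents: list, offspring: list) -> list:
    """Generational: the offspring replace the parent population wholesale."""
    return list(offspring)

def steady_state_selection(parents: list, offspring: list, fitness) -> list:
    """Steady state: parents and offspring compete in one pool; only the
    fittest survive, keeping the population size equal to len(parents)."""
    pool = parents + offspring
    pool.sort(key=fitness, reverse=True)  # best first
    return pool[: len(parents)]
```

The practical difference is turnover: generational selection discards all parents each cycle, while steady state lets strong parents persist across generations.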
Task 2 - Generalist Agent: Comparing Scalarized And Pareto Approaches For Handling Multiple Objectives
The second project involved optimising the weights of a "generalist" agent, with the goal of achieving high performance against multiple enemies. The agent's performance was evaluated against all eight enemies in the Evoman framework. Since the goal of the generalist agent was to maximise the number of enemies beaten, this is an example of a Multi-Objective Optimisation (MOO) problem. As MOO algorithms can be quite sophisticated, it was most feasible to use the pymoo implementation of U-NSGA-III for this project. This allowed us to compare the performance of a Scalarization approach and a Pareto approach to MOO.
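The project itself relied on pymoo's U-NSGA-III, but the conceptual difference between the two approaches is small enough to sketch directly (a minimal illustration assuming maximisation; the weights and objective vectors are made up):

```python
def weighted_sum(objectives: list[float], weights: list[float]) -> float:
    """Scalarization: collapse the objective vector into a single number,
    reducing the MOO problem to ordinary single-objective optimisation."""
    return sum(w * f for w, f in zip(weights, objectives))

def dominates(a: tuple, b: tuple) -> bool:
    """Pareto dominance (maximisation): a is at least as good as b in every
    objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population: list[tuple]) -> list[tuple]:
    """Keep only the non-dominated objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]
```

Scalarization imposes a single trade-off up front via the weights, whereas the Pareto approach keeps the whole set of non-dominated trade-offs and defers that choice.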
Our research was awarded the grades 9.6 (Task 1) and 10 (Task 2). The generalist agent we created for Task 2 additionally took third place in the course-internal competition.
The code can be executed in a conda environment. We recommend installing miniconda - a guide to do so can be found here.
To install the requirements for running the code, you can clone this repo and create a conda environment (which will be named vu_evocomp) with Python 3.11 and the necessary dependencies by running the following commands in your CLI:
```shell
git clone https://github.com/mklblm/VU-Evolutionary-Computing
cd VU-Evolutionary-Computing
conda env create -f environment.yml
```

Alternatively, you can create a Python 3.11 environment by other means and install the required packages by running:

```shell
git clone https://github.com/mklblm/VU-Evolutionary-Computing
cd VU-Evolutionary-Computing
pip install -r requirements.txt
```

The Jupyter notebooks for all our implementations contain cells in which the hyperparameters of the experiments can be configured. Please refer to the relevant papers provided if you wish to replicate a specific experiment.
- Eiben, A. E., & Smith, J. E. (2015). Introduction to evolutionary computing. Springer-Verlag Berlin Heidelberg.
- de Franca, F. O., Fantinato, D., Miras, K., Eiben, A. E., & Vargas, P. A. (2019). Evoman: Game-playing competition. arXiv preprint arXiv:1912.10445. github
- Blank, J., & Deb, K. (2020). pymoo: Multi-objective optimization in Python. IEEE Access, 8, 89497-89509. doi:10.1109/ACCESS.2020.2990567. official website
This project is licensed under the MIT License - see the LICENSE.md file for details