This repository is the submission template and starter kit for the Global Chess Challenge! Clone the repository to compete now!
This repository contains:
- Documentation on how to submit your agent to the leaderboard
- Best practices and information on how we evaluate your agent
- Starter code for you to get started!
- Competition Overview
- Challenge Description
- Tracks
- Evaluation Metrics
- Getting Started
- Frequently Asked Questions
- Important Links
Most chess players don't have regular access to a top coach. What they do have are their own games and a recurring question: "What should I have played here?" The Global Chess Challenge imagines a tool that looks at those positions, suggests a strong move, and explains the idea in simple language, so players can coach themselves using the games they already play.
This challenge asks you to build models that play legal chess moves and briefly explain their choices in natural language, while a world-class engine checks how well those moves hold up on the board. The challenge turns a familiar game into a testbed to see whether reasoning models can think clearly, play good moves, and talk about them in a way humans can follow.
The Global Chess Challenge asks participants to build a text-only chess agent that does two things at once: play a legal move and explain the idea behind it in simple language.
On each turn, your model receives a chess position as text and must respond with:
- A one-sentence rationale explaining the idea behind the move
- A legal move in UCI format
The environment verifies legality, evaluates move quality using Stockfish, and runs full games in tournaments to measure overall playing strength.
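For local development, the legality check and the UCI move list can be mirrored with the python-chess library (an assumption for illustration; the starter kit's evaluator may differ in its internals):

```python
# Illustrative legality check with python-chess.
import chess

board = chess.Board("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
move = chess.Move.from_uci("g1f3")
print(move in board.legal_moves)             # True: the move is legal here
print([m.uci() for m in board.legal_moves])  # All legal moves in UCI format
```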
For every turn, your agent receives:
- Position encoded as a FEN string
- Side to move (White or Black)
- List of legal moves in UCI format
Your agent must return:
- A one-sentence rationale: `<think>...</think>`
- Exactly one move in UCI format: `<uci_move>...</uci_move>`
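For concreteness, here is a hypothetical turn for the starting position (the exact prompt wording depends on your template):

```
Input:
  FEN: rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1
  Side to move: White
  Legal moves: e2e4, d2d4, g1f3, b1c3, ...

Expected response:
  <think>Developing the knight toward the center and preparing to castle.</think>
  <uci_move>g1f3</uci_move>
```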
- Sign up to join the competition on the AIcrowd website.
- Clone this repo and start developing your agent.
- Develop your agent(s) following the template in the "how to write your own agent" section.
- Submit your trained models via Hugging Face for evaluation.
See player_agents/README.md for instructions and examples on how to write your own chess agent for this competition.
Clone the repository recursively:
```bash
git clone --recursive [email protected]:aicrowd/global-chess-challenge-2025-starter-kit.git
cd global-chess-challenge-2025-starter-kit
```
If you didn't clone with `--recursive`, initialize the submodules afterwards:
```bash
cd global-chess-challenge-2025-starter-kit
git submodule update --init --recursive
```
Install competition-specific dependencies:
```bash
pip install -r requirements.txt
```
Before running local evaluation, you need to start either a vLLM server or a Flask server in a separate terminal from the player_agents directory.
Option 1: Using vLLM for LLM agents

```bash
cd player_agents
pip install vllm
bash run_vllm.sh
```

Option 2: Using Flask for rule-based agents (note that rule-based agents cannot be submitted; they are for local testing only)
```bash
cd player_agents
# For random agent
python random_agent_flask_server.py
# OR for Stockfish agent
python stockfish_agent_flask_server.py
```

Keep this server running in the background while you run local evaluation.
Test your agent locally by running `python local_evaluation.py`.
Note: Make sure you have started either the vLLM server or Flask server (see Running LLM locally) in a separate terminal before running local evaluation.
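While iterating, it can be handy to extract the two tags from your model's raw output. A minimal sketch (this helper is our own illustration and is not part of the starter kit):

```python
# Illustrative tag extraction from a raw model response.
import re

def parse_response(text: str):
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    move = re.search(r"<uci_move>(.*?)</uci_move>", text, re.DOTALL)
    rationale = think.group(1).strip() if think else None
    uci_move = move.group(1).strip() if move else None
    return rationale, uci_move

print(parse_response("<think>Controls the center.</think><uci_move>e2e4</uci_move>"))
# -> ('Controls the center.', 'e2e4')
```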
Accept the Challenge Rules on the main challenge page by clicking on the Participate button.
This guide walks you through the process of submitting your chess agent to the Global Chess Challenge 2025.
Before making a submission, ensure you have:
- ✅ Accepted the Challenge Rules on the challenge page by clicking the Participate button
- ✅ Installed the AIcrowd CLI (included in `requirements.txt`)
- ✅ Logged in to AIcrowd via the CLI
- ✅ Prepared your model on Hugging Face
- ✅ Created a prompt template for your agent
First, authenticate with AIcrowd:
```bash
aicrowd login
```

You'll be prompted to enter your AIcrowd API key. You can find your API key at: https://www.aicrowd.com/participants/me
Your model must be hosted on Hugging Face. You can use:
- A public model (e.g., `Qwen/Qwen3-0.6B`)
- Your own fine-tuned model
- A private/gated model (requires additional setup; see below)
If your model is private or gated, you need to grant AIcrowd access. See docs/huggingface-gated-models.md for detailed instructions.
Your prompt template should be a Jinja file that formats the chess position and legal moves for your model. Examples are available in the player_agents/ directory, and a minimal sketch follows the list below:
- `llm_agent_prompt_template.jinja` - For general LLM agents
- `sft_agent_prompt_template.jinja` - For supervised fine-tuned agents
- `random_agent_prompt_template.jinja` - Minimal template example
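As a point of reference, here is a minimal hypothetical template. The variable names (`fen`, `side_to_move`, `legal_moves`) are assumptions for illustration; consult the templates above for the variables the evaluator actually provides:

```jinja
You are playing chess as {{ side_to_move }}.
Position (FEN): {{ fen }}
Legal moves (UCI): {{ legal_moves | join(", ") }}

Reply with a one-sentence rationale inside <think>...</think>,
followed by exactly one legal move inside <uci_move>...</uci_move>.
```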
Edit the aicrowd_submit.sh file with your submission details:
```bash
# Configuration variables
CHALLENGE="global-chess-challenge-2025"
HF_REPO="YOUR_HF_USERNAME/YOUR_MODEL_NAME"   # e.g., "Qwen/Qwen3-0.6B"
HF_REPO_TAG="main"                           # or a specific branch/tag
PROMPT_TEMPLATE="player_agents/YOUR_PROMPT_TEMPLATE.jinja"
```

- `CHALLENGE`: The challenge identifier (keep as `global-chess-challenge-2025`)
- `HF_REPO`: Your Hugging Face model repository (format: `username/model-name`)
- `HF_REPO_TAG`: The branch or tag to use (typically `main`)
- `PROMPT_TEMPLATE`: Path to your prompt template file
Once configured, run the submission script:
```bash
bash aicrowd_submit.sh
```

Or submit directly using the AIcrowd CLI:

```bash
aicrowd submit-model \
    --challenge "global-chess-challenge-2025" \
    --hf-repo "YOUR_HF_USERNAME/YOUR_MODEL_NAME" \
    --hf-repo-tag "main" \
    --prompt-template-path "player_agents/YOUR_PROMPT_TEMPLATE.jinja"
```

Games are played in round-robin tournaments, with ACPL (average centipawn loss) ratings determining the final rankings. Each move is also checked for legality and compared against Stockfish to calculate its centipawn loss (CPL).
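ACPL is the per-game average of each move's centipawn loss. As a rough illustration of how a single move's CPL can be computed locally, here is a sketch assuming the python-chess package and a Stockfish binary on your PATH (not the official scorer, whose depth and mate handling are unspecified here):

```python
# Illustrative per-move centipawn loss (CPL) with python-chess + Stockfish.
import chess
import chess.engine

def centipawn_loss(fen: str, played_uci: str, depth: int = 12) -> int:
    board = chess.Board(fen)
    mover = board.turn
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    try:
        # Evaluation before the move, from the mover's point of view.
        best = engine.analyse(board, chess.engine.Limit(depth=depth))
        best_cp = best["score"].pov(mover).score(mate_score=100000)
        # Evaluation after the played move, still from the mover's point of view.
        board.push(chess.Move.from_uci(played_uci))
        played = engine.analyse(board, chess.engine.Limit(depth=depth))
        played_cp = played["score"].pov(mover).score(mate_score=100000)
    finally:
        engine.quit()
    return max(0, best_cp - played_cp)

print(centipawn_loss(chess.STARTING_FEN, "g1f3"))  # small CPL for a solid move
```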
Your agent must be self-contained and run without network access during evaluation. You can use Stockfish locally during training. During inference, we only run the LLM.
Best of Luck!
