
quad_rl

Quadrotor Gymnasium environments and baseline RL implementations.


Features

  • Quadrotor environments with predefined trajectories that follow Lissajous curves (see the usage sketch after this list).
  • Baseline RL training code (PPO and SAC) built on Stable-Baselines3.
  • Integration with Weights & Biases for experiment tracking.
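
A minimal usage sketch of the environments with Stable-Baselines3. This assumes that importing the quad_rl package registers the custom environments with Gymnasium (the import side effect and module name are assumptions; Quadrotor-Fixed-v0 is the environment id used in the commands below):

import gymnasium as gym
from stable_baselines3 import PPO

import quad_rl  # assumed: importing the package registers the custom environments

# Create the fixed-trajectory quadrotor environment.
env = gym.make("Quadrotor-Fixed-v0")

# Train a small PPO baseline (hyperparameters here are illustrative only).
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)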

Getting Started

Clone the repository and change into its directory. The following uses cloning via SSH:

git clone git@github.com:mht3/quad_rl.git
cd quad_rl

Environment Setup

Create a new conda environment with Python 3.11.

conda create -n quad python=3.11

Activate the environment.

conda activate quad

Install PyTorch

PyTorch on GPU

Install a CUDA-enabled PyTorch build that matches your system (the command below uses the CUDA 12.8 wheels).

pip install -U torch==2.7.0 torchvision==0.22.0 --index-url https://download.pytorch.org/whl/cu128

PyTorch on CPU Only

Alternatively, install the CPU-only build of PyTorch.

pip install torch==2.7.0 torchvision==0.22.0 --index-url https://download.pytorch.org/whl/cpu

Install the remaining required packages.

pip install -r requirements.txt

Model Training

python main.py --env_id Quadrotor-Fixed-v0 --algorithm PPO --seed 42 -t 10000000 --n_steps 3072 --batch_size 256 --lr 0.00005 --policy_net 512 256 128 --value_net 512 256 128
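
The flags above map onto Stable-Baselines3 PPO hyperparameters (rollout length, batch size, learning rate, and separate actor/critic network sizes). A rough sketch of the equivalent direct SB3 call, assuming main.py forwards the flags this way (the exact wiring lives in main.py):

from stable_baselines3 import PPO

import quad_rl  # assumed: registers the custom environments

# --policy_net / --value_net map to actor (pi) and critic (vf) layer sizes.
policy_kwargs = dict(net_arch=dict(pi=[512, 256, 128], vf=[512, 256, 128]))

model = PPO(
    "MlpPolicy",
    "Quadrotor-Fixed-v0",   # assumes the environment is registered with Gymnasium
    n_steps=3072,           # --n_steps
    batch_size=256,         # --batch_size
    learning_rate=5e-5,     # --lr 0.00005
    seed=42,                # --seed
    policy_kwargs=policy_kwargs,
)
model.learn(total_timesteps=10_000_000)  # -t 10000000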

Model Playback

python main.py --env_id Quadrotor-Fixed-v0 --algorithm PPO --seed 42 -t 10000000 --n_steps 3072 --batch_size 256 --lr 0.00005 --policy_net 512 256 128 --value_net 512 256 128 --test --render
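
Playback can also be done programmatically with Stable-Baselines3. A minimal sketch, assuming a saved checkpoint and that the environment supports human rendering (the checkpoint path below is hypothetical):

import gymnasium as gym
from stable_baselines3 import PPO

import quad_rl  # assumed: registers the custom environments

env = gym.make("Quadrotor-Fixed-v0", render_mode="human")  # assumes human render support
model = PPO.load("models/ppo_quadrotor_fixed.zip")          # hypothetical checkpoint path

obs, _ = env.reset(seed=42)
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()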
