Plasticine is a library that provides high-quality implementations of plasticity loss mitigation methods in deep reinforcement learning. We highlight the features of Plasticine as follows:
- 📜 Strike a balance between "single-file" and "modularized" implementation;
- 🏞️ Support for comprehensive continual RL scenarios;
- 📊 Benchmarked implementation (13+ algorithms and 6+ plasticity metrics);
- 🧱 Easy combination of different plasticity enhancement strategies;
- ⚙️ Local reproducibility via seeding;
- 🧫 Experiment management with Weights and Biases.
Plasticine is built on top of CleanRL; many thanks to that excellent project!
- Create an environment and install the dependencies:

```bash
conda create -n plasticine python=3.10
pip install -r requirements/requirements-ale.txt      # for ALE
pip install -r requirements/requirements-procgen.txt  # for Procgen
pip install -r requirements/requirements-dmc.txt      # for DeepMind Control Suite
```

- Clone the repository and run the training script:
```bash
git clone https://github.com/RLE-Foundation/Plasticine
cd Plasticine
# Train the PPO agent with the Plasticine methods on the continual Procgen benchmark
CUDA_VISIBLE_DEVICES=0 python plasticine/ppo_continual_procgen_plasticine.py \
    --env_id starpilot \
    --seed 1 \
    # --use_shrink_and_perturb \
    # --use_normalize_and_project \
    # --use_layer_resetting \
    # --use_trac_optimizer \
    # --use_kron_optimizer \
    # --use_parseval_regularization \
    # --use_regenerative_regularization \
    # --use_crelu_activation \
    # --use_dff_activation \
    # --use_redo \
    # --use_plasticity_injection \
    # --use_l2_norm \
    # --use_layer_norm
```
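For example, several mitigation strategies can be combined in a single run. Assuming the options are plain boolean switches, as the commented template above suggests, Shrink-and-Perturb and layer normalization are enabled together like this:

```bash
CUDA_VISIBLE_DEVICES=0 python plasticine/ppo_continual_procgen_plasticine.py \
    --env_id starpilot \
    --seed 1 \
    --use_shrink_and_perturb \
    --use_layer_norm
```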
The architecture of Plasticine organizes the implemented methods into the following categories:

- Reset-based Intervention (see the sketch after this list)
- Normalization Techniques
- Regularization Techniques
- Activation Functions
- Optimizer
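As an illustration of the reset-based category, here is a minimal sketch of Shrink-and-Perturb, one of the strategies exposed by the training script above (`--use_shrink_and_perturb`). The function name and hyperparameter values are illustrative assumptions, not Plasticine's actual API:

```python
import torch

@torch.no_grad()
def shrink_and_perturb(model: torch.nn.Module, shrink: float = 0.8, noise_scale: float = 0.01) -> None:
    """Shrink every parameter toward zero and perturb it with Gaussian noise:
    theta <- shrink * theta + noise_scale * epsilon, with epsilon ~ N(0, I).
    Illustrative sketch; the noise is often drawn from the weight initializer instead.
    """
    for param in model.parameters():
        param.mul_(shrink).add_(noise_scale * torch.randn_like(param))

# Example usage: apply periodically during training, e.g., at every task switch.
# shrink_and_perturb(agent.network)
```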
Plasticine tracks the following plasticity metrics:

- Ratio of Dormant Units
- Fraction of Active Units
- Stable Rank
- Effective Rank
- Gradient Norm
- Weight Difference
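For intuition about what two of these metrics capture, here is a minimal, self-contained sketch. The threshold value and function names are assumptions for illustration and may differ from Plasticine's implementation:

```python
import torch

def dormant_unit_ratio(activations: torch.Tensor, tau: float = 0.025) -> float:
    """Fraction of units whose mean absolute activation, normalized by the
    layer-wide average, falls below the threshold tau (illustrative sketch).
    activations: (batch, num_units) post-activation outputs of one layer."""
    scores = activations.abs().mean(dim=0)
    scores = scores / (scores.mean() + 1e-8)
    return (scores <= tau).float().mean().item()

def stable_rank(weight: torch.Tensor) -> float:
    """Stable rank ||W||_F^2 / ||W||_2^2 of a 2-D weight matrix (illustrative sketch)."""
    singular_values = torch.linalg.svdvals(weight)
    return (singular_values.pow(2).sum() / singular_values[0].pow(2)).item()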
The supported benchmarks are:

- ALE
- Continual Procgen (Intra-task Switch)
- Continual DMC (Inter-task Switch)
Please refer to Plasticine's W&B Space for a collection of Weights and Biases reports showcasing the benchmark experiments.
If you use Plasticine in your work, please cite our paper:

```bibtex
@article{yuan2026plasticine,
  title={Plasticine: Accelerating Research in Plasticity-Motivated Deep Reinforcement Learning},
  author={Yuan, Mingqi and Wang, Qi and Ma, Guozheng and Sun, Caihao and Li, Bo and Jin, Xin and Wang, Yunbo and Yang, Xiaokang and Zeng, Wenjun and Tao, Dacheng and Chen, Jiayu},
  journal={arXiv preprint arXiv:2504.17490},
  year={2026}
}
```

We thank the high-performance computing center at INFIFORCE Intelligent Technology Co., Ltd., Eastern Institute of Technology, and Ningbo Institute of Digital Twin for providing the computing resources. Some code in this project is borrowed from or inspired by several excellent projects, and we highly appreciate them.




