This is the official repository for the paper "DMWM: Dual-Mind World Model with Long-Term Imagination", accepted at NeurIPS 2025.
You can create and activate the environment as follows:
```
conda create -n dmwm python==3.7
conda activate dmwm
pip install -r requirements.txt
```

Suggested GPU: All experiments in the paper were conducted on a single NVIDIA RTX 3090 GPU. We also tried an NVIDIA RTX 3080 GPU, which works as well.
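If you want to confirm that your GPU is visible before training, the short check below is our own suggestion; it assumes that requirements.txt installs PyTorch with CUDA support, which we have not pinned to the repository's exact dependency list:

```python
# Optional GPU sanity check (assumes PyTorch with CUDA support is installed
# via requirements.txt; this script is not part of the repository).
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # Should report your RTX 3090/3080 (or whichever GPU you are using).
    print("Device:", torch.cuda.get_device_name(0))
```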
Training environment: the DeepMind Control Suite (dm_control), Google DeepMind's infrastructure for physics-based simulation.
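As a quick sanity check that dm_control is set up correctly, the following minimal sketch (illustrative only, not part of the DMWM training pipeline) loads the walker-walk task used in the example commands below and steps it with random actions:

```python
# Minimal dm_control sanity check for the walker-walk task (illustrative only).
import numpy as np
from dm_control import suite

env = suite.load(domain_name="walker", task_name="walk")
action_spec = env.action_spec()

time_step = env.reset()
for _ in range(10):
    # Sample a uniformly random action within the task's action bounds.
    action = np.random.uniform(action_spec.minimum, action_spec.maximum,
                               size=action_spec.shape)
    time_step = env.step(action)
    print("reward:", time_step.reward)
```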
To train the model(s) in the paper, run the following command, taking the "walker-walk" task as an example:
```
python main.py --algo dreamer --env walker-walk --action-repeat 2 --id your_named-experiment
```

Some useful commands:
```
python main.py --algo dreamer --env walker-walk --action-repeat 2 --logic-overshooting-distance 10 --id your_named-experiment
python main.py --algo dreamer --env walker-walk --action-repeat 2 --planning-horizon 50 --id your_named-experiment
python main.py --algo dreamer --env walker-walk --action-repeat 2 --planning-horizon 50 --logic-overshooting-distance 50 --id your_named-experiment
```
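If you want to run several of these configurations back to back, the sketch below is a hypothetical helper of ours (not part of the repository) that shells out to main.py over a few placeholder values of `--planning-horizon` and `--logic-overshooting-distance`; adjust the value lists and experiment ids to your needs:

```python
# Hypothetical sweep launcher over --planning-horizon and
# --logic-overshooting-distance (illustrative; not part of the repository).
import itertools
import subprocess

planning_horizons = [15, 50]        # placeholder values
overshooting_distances = [10, 50]   # placeholder values

for horizon, distance in itertools.product(planning_horizons, overshooting_distances):
    subprocess.run(
        [
            "python", "main.py",
            "--algo", "dreamer",
            "--env", "walker-walk",
            "--action-repeat", "2",
            "--planning-horizon", str(horizon),
            "--logic-overshooting-distance", str(distance),
            "--id", f"walker-walk-h{horizon}-d{distance}",
        ],
        check=True,
    )
```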
To evaluate a trained model on control tasks, run:

```
python main.py --models saved_path --test
```

Our implementation is based on Dreamer (for System 1) and the Logic-Integrated Neural Network (LINN), which serves as the basic framework for System 2 together with the proposed deep logical inference and automatic logic learning from environment dynamics. Thanks for their great open-source work!
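As a final note on evaluation: if you are unsure what a saved checkpoint contains before passing its path to `--models`, the snippet below is a purely hypothetical inspection helper; it assumes the checkpoint is a standard PyTorch file written with torch.save, which we have not verified against the repository's saving code:

```python
# Hypothetical checkpoint inspection (assumes a standard torch.save file;
# replace saved_path with the path you would pass to --models).
import torch

saved_path = "path/to/your/checkpoint.pth"  # placeholder path
checkpoint = torch.load(saved_path, map_location="cpu")

# Print the top-level structure so you know which components were saved.
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))
else:
    print(type(checkpoint))
```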
All content in this repository is under the MIT license.
If any part of our paper or code helps your research, please consider citing us and giving our repository a star.
@article{wang2025dmwm,
title={DMWM: Dual-Mind World Model with Long-Term Imagination},
author={Wang, Lingyi and Shelim, Rashed and Saad, Walid and Ramakrishnan, Naren},
journal={arXiv preprint arXiv:2502.07591},
year={2025}
}

High Data Efficiency and Robust Planning Over Extended Horizon Size

