This repository contains the code developed for “Coordinated Humanoid Manipulation with Choice Policies.” In this codebase, we teleoperate functional modules to enable coordinated humanoid manipulation.
We currently support two full-sized humanoid robots: Fourier GR-1 and RobotEra Star-1. Depending on which robot you are using, please refer to the setup instructions in GR1 or Star1.
Below is an overview of the important files:
```
agents/
  diffusion/
    diffusion_agent_client.py  # policy inference client (asynchronous)
    diffusion_agent_server.py  # policy inference server (asynchronous)
    diffusion_agent_sync.py    # policy inference (synchronous)
  quest_agent.py               # read Quest data and compute the target joint positions
assets/                        # assets used to compute IK
  gr1/                         # Fourier GR1 xml
  star1/                       # RobotEra Star1 xml
envs/
  gr1.py                       # interface for GR1
tools/
  launch/
    launch_teleop_server.py    # create GR1 robot server
  run_sim_teleop_gr1.py        # simulation teleoperation for GR1
  run_teleop.py                # main interface for teleoperation
```
First, verify that the library is set up correctly. The following commands let you control a robot in MuJoCo using data read from the Quest controllers.
```bash
# cd /path/to/lilith/hato
python tools/run_sim_teleop_gr1.py    # for GR1
python tools/run_sim_teleop_star1.py  # for Star1
```
Some additional setup details: during development, we use a foot pedal to mark the start and end of an episode. You can run `python tools/pedal_test.py` to verify that the pedal is working.
In our configuration, the operator stands side-by-side with the robot, facing the same direction. The Quest headset is positioned on the opposite side of the operator’s face. The left controller maps to the robot’s left arm, and the right controller maps to the right arm.
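The side-by-side, same-direction configuration means controller poses can be mapped to arm targets without mirroring, only a fixed frame offset. A minimal sketch of this idea, assuming a 4x4 homogeneous pose for each controller; the names `HEADSET_T_ROBOT` and `controller_to_arm_target` are illustrative, not the repo's actual API:

```python
import numpy as np

# Hypothetical fixed transform from the Quest (headset) frame to the robot
# base frame. Because the operator and robot face the same direction, no
# left/right mirroring is needed; identity here is an illustrative assumption.
HEADSET_T_ROBOT = np.eye(4)

def controller_to_arm_target(controller_pose: np.ndarray) -> np.ndarray:
    """Express a 4x4 controller pose (headset frame) in the robot base frame."""
    return np.linalg.inv(HEADSET_T_ROBOT) @ controller_pose
```

In the real system, the resulting end-effector target would then be passed to an IK solver (using the xml models under `assets/`) to obtain joint positions.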
SSH into your GR-1 robot and run `python tools/launch/launch_teleop_server.py`.
Then, on your desktop machine, run `python tools/run_teleop.py`. This slowly commands the GR-1 to its initial position and lets you teleoperate the robot with the Quest controllers. If you want to save data, run `python tools/run_teleop.py --save-data` and use the corresponding pedal trigger stage to mark episodes.
Please refer to star1.
Please refer to the MinBC repo.
For safety and debugging, we recommend the following deployment workflow:
- Open-loop replay of dataset actions: verifies that the dataset format and robot interface match correctly.
- Open-loop policy evaluation using dataset observations: checks whether the trained policy behaves correctly when conditioned on recorded images/proprioception, ensuring no collection–deployment gap.
- Closed-loop testing: final on-robot evaluation.
```bash
python tools/deploy/replay.py
```
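The core of open-loop replay can be sketched as below; `DummyRobot` and `command_joint_positions` are placeholder names for illustration, not the repo's actual interface:

```python
import time

class DummyRobot:
    """Stand-in for the real robot interface; records commanded actions."""
    def __init__(self):
        self.history = []

    def command_joint_positions(self, q):
        self.history.append(list(q))

def replay(robot, actions, hz=10.0):
    """Send recorded dataset actions back to the robot at a fixed rate.

    If the robot reproduces the demonstrated motion, the dataset format
    and the robot interface are consistent.
    """
    for q in actions:
        robot.command_joint_positions(q)
        time.sleep(1.0 / hz)

robot = DummyRobot()
replay(robot, [[0.0, 0.1], [0.0, 0.2]], hz=100.0)
```

Because no policy is involved, any mismatch observed here points to the data pipeline or robot interface rather than the learned model.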
This script loads images and proprioception from the dataset and infers actions using a trained policy. The goal is to verify that the policy behaves as expected on the training set and to rule out obvious bugs.
```bash
# use checkpoint path above
python tools/deploy/openloop.py \
    --ckpt_path outputs/bottle_hand_over/full_base_seed0/model_best.ckpt \
    --data_dir data/bottle_hand_over/
```
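Conceptually, this stage feeds recorded observations to the policy and compares its predictions against the logged actions. A minimal sketch, with `openloop_eval` and the placeholder `policy` callable as illustrative assumptions rather than the repo's API:

```python
import numpy as np

def openloop_eval(policy, observations, recorded_actions):
    """Mean absolute error between policy predictions and recorded actions."""
    errs = []
    for obs, act in zip(observations, recorded_actions):
        pred = policy(obs)  # policy conditioned on a recorded observation
        errs.append(np.abs(np.asarray(pred) - np.asarray(act)).mean())
    return float(np.mean(errs))

# A policy that has fit the training data well should score near zero.
identity_policy = lambda obs: obs  # placeholder: echoes the observation
print(openloop_eval(identity_policy, [[1.0], [2.0]], [[1.0], [2.0]]))  # 0.0
```

A low error here confirms the policy behaves sensibly on its own training distribution before any closed-loop trial on hardware.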
For hardware deployment, please refer to star1.
If you find Humanoid Teleop or this codebase helpful, please consider citing:
```bibtex
@article{qi2025coordinated,
  title={Coordinated Humanoid Manipulation with Choice Policies},
  author={Qi, Haozhi and Wang, Yen-Jen and Lin, Toru and Yi, Brent and Ma, Yi and Sreenath, Koushil and Malik, Jitendra},
  journal={arXiv:2512.25072},
  year={2025}
}
```