Merged
Conversation
Signed-off-by: Clemens Volk <cvolk@nvidia.com>
Cherry-picks the fix that defers isaaclab.utils imports until after AppLauncher starts, required because the recent 'Remove mesh import *' commit added pxr (OpenUSD) as a transitive import in utils/__init__.py. Signed-off-by: Clemens Volk <cvolk@nvidia.com>
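The deferred-import pattern this fix applies can be sketched roughly like this (stand-in function name, not IsaacLab's actual code):

```python
def convert_mesh_units(prim_path: str):
    """Stand-in for an isaaclab.utils helper that needs OpenUSD.

    `pxr` is only on sys.path once AppLauncher has started Isaac Sim,
    so the import happens inside the function body rather than at
    module scope. Importing the utils package therefore stays safe
    even before the app is running.
    """
    from importlib import import_module
    Gf = import_module("pxr").Gf  # deferred: resolved on first call
    return Gf.Vec3d(0.0, 0.0, 0.0), prim_path
```

Only code that actually calls the helper pays the import cost; merely importing the module no longer pulls in `pxr`.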
Remove the separate JSON agent config file. RslRlActionPolicy now auto-detects params/agent.yaml saved alongside the checkpoint by IsaacLab's train.py, making the checkpoint the single source of truth. Signed-off-by: Clemens Volk <cvolk@nvidia.com>
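The auto-detection can be sketched as follows (hypothetical helper; the real logic lives in `RslRlActionPolicy`). IsaacLab's `train.py` writes `params/agent.yaml` into the same log directory that holds the `model_*.pt` checkpoints:

```python
from pathlib import Path

def find_agent_cfg(checkpoint_path: str) -> Path:
    """Locate params/agent.yaml next to a checkpoint (hypothetical helper).

    train.py lays out a run as <log_dir>/model_<iter>.pt plus
    <log_dir>/params/agent.yaml, so the config sits beside the
    checkpoint file, one directory level down.
    """
    cfg = Path(checkpoint_path).parent / "params" / "agent.yaml"
    if not cfg.is_file():
        raise FileNotFoundError(f"No agent config found at {cfg}")
    return cfg
```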
- Delete rigid_object_variant.py (prototype scratch file)
- Move base_rsl_rl_policy.py to isaaclab_arena_examples/policy/
- Update lift_object_environment.py import for new module path
- Replace WIP comment in cameras.py with clean TODO(cvolk)
- Add TODO(cvolk) to RL workflow docs for follow-up rewrite

Signed-off-by: Clemens Volk <cvolk@nvidia.com>
Replace the outdated Docker-based instructions with the correct host workflow using a Python 3.11 venv. Signed-off-by: Clemens Volk <cvolk@nvidia.com>
…back

- Replace Arena's removed train.py with IsaacLab's train.py + --external_callback
- Add explanation of how the callback registers the environment before training
- Add Hydra override examples for hyperparameter tuning
- Update tensorboard command to use /isaac-sim/python.sh -m tensorboard.main
- Rewrite evaluation section: drop removed play.py method, update commands to remove --agent_cfg_path (checkpoint now auto-loads params/agent.yaml)
- Update step 1 validation command to use IsaacLab train.py

Signed-off-by: Clemens Volk <cvolk@nvidia.com>
Signed-off-by: Clemens Volk <cvolk@nvidia.com>
AppLauncher's enable_pinocchio path wraps _start_app() with a patch that calls from pxr import Gf immediately after startup. If Isaac Sim's extension loading is incomplete (e.g. due to a version constraint in the experience file), pxr is never added to sys.path and the patch crashes with ModuleNotFoundError. Setting disable_pinocchio_patch=True tells AppLauncher to skip the patch. Pinocchio is already imported before AppLauncher is constructed, which is sufficient for it to work correctly. Signed-off-by: Clemens Volk <cvolk@nvidia.com>
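The failure mode can be reproduced in miniature with a stand-in module name (nothing here is AppLauncher's real implementation):

```python
import functools

def with_post_start_import(start_fn, module_name, *, disabled=False):
    """Wrap a start function with a patch that imports `module_name`
    right after startup, mimicking the pinocchio patch described above.

    If startup did not put the module on sys.path, the wrapper crashes
    with ModuleNotFoundError; `disabled=True` skips the patch entirely.
    """
    if disabled:
        return start_fn

    @functools.wraps(start_fn)
    def wrapper(*args, **kwargs):
        result = start_fn(*args, **kwargs)
        __import__(module_name)  # crashes if extension loading was incomplete
        return result

    return wrapper
```

With `disabled=True` the start function runs untouched, which mirrors what `disable_pinocchio_patch=True` achieves here.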
cvolk
commented
Mar 4, 2026
Replace the static lift_object_model.pt fixture with a dynamic train-then-eval round trip in a dedicated test_rsl_rl.py. The previous test_rl_policy_lift_object loaded a pre-committed checkpoint that had no params/agent.yaml alongside it, causing a FileNotFoundError after the PR switched RslRlActionPolicy to load agent config from that file instead of a separate JSON. The new test_rl_train_and_eval_lift_object trains for one iteration via IsaacLab's rsl_rl/train.py (which saves params/agent.yaml), locates the produced checkpoint by mtime, then runs policy_runner.py against it. This exercises the full train-to-eval pipeline and keeps test_policy_runner.py focused on policy-runner-only tests. Signed-off-by: Clemens Volk <cvolk@nvidia.com>
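Locating the freshly trained checkpoint by modification time might look like this (illustrative helper, not the test's exact code):

```python
from pathlib import Path

def newest_checkpoint(log_root: str, pattern: str = "model_*.pt") -> Path:
    """Return the most recently written checkpoint under log_root.

    After a one-iteration training run, the newest model_*.pt by mtime
    is the checkpoint that train.py just produced.
    """
    candidates = list(Path(log_root).rglob(pattern))
    if not candidates:
        raise FileNotFoundError(f"No {pattern} found under {log_root}")
    return max(candidates, key=lambda p: p.stat().st_mtime)
```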
Draft
alexmillane
approved these changes
Mar 6, 2026
Collaborator
alexmillane
left a comment
Looks great! I have a couple of nits. Feel free to do those in follow ups.
docs/pages/example_workflows/reinforcement_learning/step_2_policy_training.rst
docs/pages/example_workflows/reinforcement_learning/step_2_policy_training.rst
cvolk
added a commit
that referenced
this pull request
Mar 9, 2026
Address two nits from PR #465 review:

- Replace /isaac-sim/python.sh with the python alias in all RL workflow step pages (steps 1-3), consistent with other doc pages in the repo (e.g. static_manipulation workflow).
- Remove --headless from the default training commands in step 2; add a tip directing users to pass it for headless server runs. New users benefit from seeing visual feedback by default.

Signed-off-by: Clemens Volk <cvolk@nvidia.com>
cvolk
added a commit
that referenced
this pull request
Mar 10, 2026
## Summary

Addresses two nits from the PR #465 review that were approved but not resolved before merge:

- Use the `python` alias instead of `/isaac-sim/python.sh` in all RL workflow doc pages
- Remove `--headless` as a default flag in the training command examples

Signed-off-by: Clemens Volk <cvolk@nvidia.com>
Summary

Enables Isaac Lab's `train.py` and `play.py` scripts to run Arena environments directly via an external callback, without requiring Arena-specific training scripts.

Original PR: Isaac Lab Interop. #413

Changes

- Adds `environment_registration_callback()`, an Isaac Lab external callback that parses CLI args, builds, and registers an Arena environment before Isaac Lab's train/play script runs.
- Adds `RLPolicyCfg`, a `@configclass` subclassing `RslRlOnPolicyRunnerCfg` with Arena defaults (PPO, 4000 iterations, obs groups). Registered as `rsl_rl_cfg_entry_point` in the gym registry so Isaac Lab's `@hydra_task_config` can load it.
- Drops the need for `agent_cfg_path` / JSON agent configs. Agent config is now auto-loaded from `params/agent.yaml`, saved alongside the checkpoint by Isaac Lab's `train.py`; no separate config file required.
- Coerces viewer `lookat`/`eye` tuples to plain Python `float`s (not `np.float64`) to avoid Hydra/OmegaConf serialization errors. A follow-up is planned on that.

Deleted

- `isaaclab_arena_examples/policy/rl_policy/base_rsl_rl_policy.py`: replaced by the new location
- `isaaclab_arena_examples/policy/rl_policy/generic_policy.json`: no longer needed
- `isaaclab_arena/scripts/reinforcement_learning/cli_args.py`, `play.py`, `train.py`: replaced by Isaac Lab's own scripts plus the callback

Docs updated

- `docs/README.md` and the RSL-RL tutorial `.rst` files updated for the new interop training/eval workflow.

Usage
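The viewer-tuple coercion listed in the changes above can be sketched like this (hypothetical helper; the PR's actual code may differ):

```python
import numpy as np

def to_plain_floats(values):
    """Convert a sequence (possibly numpy scalars) into a tuple of
    built-in floats, since Hydra/OmegaConf reject np.float64 values
    when serializing structured configs."""
    return tuple(float(v) for v in values)

# e.g. a viewer eye position that arrived as a numpy array:
eye = to_plain_floats(np.array([2.5, 2.5, 2.5]))
```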
Train:
Evaluate (checkpoint path is the only required argument; agent config is loaded automatically from `params/agent.yaml` in the same directory):