Step 6 – Troubleshooting the Standing Task
This page collects common issues you might encounter while setting up and training the G1 standing task, along with suggested fixes.
If you haven’t completed the steps in order, start from the Overview or Step 1 – Define the standing environment.
Where things live: All paths below are relative to <G1_STAND_ROOT> (the root of your extension project). For example: source/g1_stand/..., scripts/rsl_rl/..., logs/rsl_rl/g1_stand_flat/....
Environment not found (UnregisteredEnv)
Symptom:
gym.error.UnregisteredEnv: No registered env with id: G1-Stand-Flat-v0
Possible causes and fixes:

- Extension not installed (or not installed in the current Python environment). Reinstall it from the extension project root (`<G1_STAND_ROOT>`):

  python -m pip install -e source/g1_stand

- Typo in the Gym ID. Ensure the ID registered in `__init__.py` matches the one you pass to `--task`: `G1-Stand-Flat-v0` or `G1-Stand-Flat-Play-v0`.

- Registration not triggered. Make sure `g1_stand.tasks` is imported before the environment is created. The provided `train.py` script already imports it.
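To see why a typo or a missing import produces this error, here is a minimal sketch of ID-based registration using a plain dictionary. This mimics the mechanism only; the real registry lives inside gymnasium, and the entry-point string below is a hypothetical example.

```python
# Minimal sketch of ID-based environment registration (plain dict stand-in
# for the real gymnasium registry).
registry = {}

def register(env_id, entry_point):
    """Record an environment under its string ID."""
    registry[env_id] = entry_point

def make(env_id):
    """Look up an environment by ID; fail if it was never registered."""
    if env_id not in registry:
        raise KeyError(f"No registered env with id: {env_id}")
    return registry[env_id]

# Registration only runs when the registering module is imported:
register("G1-Stand-Flat-v0", "g1_stand.tasks:G1StandFlatEnvCfg")

print(make("G1-Stand-Flat-v0"))   # lookup succeeds
# make("G1-Stand-Flat-v00")       # would raise KeyError (typo in the ID)
```

If the module that calls `register` is never imported, the ID is simply absent from the registry, which is exactly the "registration not triggered" case above.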
Only Template-G1-Stand-v0 shows in list_envs.py
Symptom:
python scripts/list_envs.py only prints:
Template-G1-Stand-v0
and not your new G1-Stand-Flat-* environments.
Explanation:

- The helper script filters on `"Template-"` in the environment ID.
- Your new environments are named `G1-Stand-Flat-*`, so they are intentionally filtered out.
Fix:
Run training to verify registration
python scripts/rsl_rl/train.py --task G1-Stand-Flat-v0 --headless --num_envs 1024
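The filtering behavior can be sketched in a couple of lines. This is a hypothetical reconstruction of the filter in `list_envs.py`, not its actual source:

```python
# Sketch of the list_envs.py filter (hypothetical reconstruction):
# only IDs containing "Template-" are printed.
registered_ids = [
    "Template-G1-Stand-v0",
    "G1-Stand-Flat-v0",
    "G1-Stand-Flat-Play-v0",
]

shown = [eid for eid in registered_ids if "Template-" in eid]
print(shown)  # ['Template-G1-Stand-v0'] — the G1-Stand-Flat-* envs are hidden
```

So an empty-looking listing does not mean your environments failed to register; the training run above is the reliable check.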
AttributeError on reward terms
Symptom:
AttributeError: 'G1Rewards' object has no attribute 'base_height_l2'
Explanation:

- Different Isaac Lab versions may or may not define certain reward terms, such as `base_height_l2` and `stand_still_joint_deviation_l1`.
- Accessing a missing attribute raises an `AttributeError`.
Fix:
Guard reward weights with hasattr
In g1_stand_env_cfg.py:
if hasattr(self.rewards, "base_height_l2"):
self.rewards.base_height_l2.weight = -1.0
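The guard can be exercised outside Isaac Lab with a stand-in rewards object. The sketch below uses `types.SimpleNamespace` instead of the real reward config class, and the attribute names are just the terms discussed above:

```python
# Self-contained sketch of the hasattr guard, using SimpleNamespace as a
# stand-in for the Isaac Lab rewards config.
from types import SimpleNamespace

rewards = SimpleNamespace(flat_orientation_l2=SimpleNamespace(weight=0.0))

# Only set the weight if this version actually defines the term:
if hasattr(rewards, "base_height_l2"):
    rewards.base_height_l2.weight = -1.0       # skipped: attribute absent

if hasattr(rewards, "flat_orientation_l2"):
    rewards.flat_orientation_l2.weight = -5.0  # runs: attribute exists

print(rewards.flat_orientation_l2.weight)  # -5.0
```

The same pattern applies to any reward term whose presence varies between Isaac Lab versions.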
GPU out-of-memory (OOM) during training
Symptom:
RuntimeError: CUDA out of memory ...
Fix:
Reduce the number of parallel environments (e.g. to 512):
python scripts/rsl_rl/train.py --task G1-Stand-Flat-v0 --headless --num_envs 512
- Optionally:
  - Decrease `num_steps_per_env` in the parent PPO config.
  - Use a smaller policy network (fewer or smaller hidden layers) if necessary.
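Both knobs work because rollout memory grows linearly with them. The numbers below (observation dimension, steps per env) are illustrative assumptions, not measured Isaac Lab values; the point is only the scaling:

```python
# Back-of-the-envelope rollout-buffer estimate. The obs_dim and num_steps
# values are illustrative assumptions, not taken from Isaac Lab.
def rollout_bytes(num_envs, num_steps, obs_dim, bytes_per_float=4):
    """Approximate size of the observation rollout buffer in bytes."""
    return num_envs * num_steps * obs_dim * bytes_per_float

full = rollout_bytes(num_envs=1024, num_steps=24, obs_dim=123)
half = rollout_bytes(num_envs=512, num_steps=24, obs_dim=123)
print(full / half)  # 2.0 — halving num_envs roughly halves buffer memory
```

Actual GPU usage also includes the simulator state and network activations, so treat this as a lower bound on the savings, not an exact prediction.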
TensorBoard shows “No dashboards are active”
Symptom:
TensorBoard starts but shows:
No dashboards are active for the current data set.
Fix:
Point TensorBoard at the log directory (not at a `.pt` checkpoint file):

tensorboard --logdir logs/rsl_rl/g1_stand_flat --port 6006

Confirm that event files actually exist under that directory:

ls logs/rsl_rl/g1_stand_flat/*/events.out.tfevents*
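The same check can be scripted in Python, which is handy on platforms without a convenient shell glob. This is a small sketch using `pathlib`; the directory layout matches the paths used in this guide:

```python
# Check for TensorBoard event files before launching TensorBoard.
# An empty result is exactly the "No dashboards are active" situation.
from pathlib import Path

log_root = Path("logs/rsl_rl/g1_stand_flat")
event_files = sorted(log_root.glob("*/events.out.tfevents*"))

if not event_files:
    print(f"No event files under {log_root} - point --logdir at the run "
          "directory, not at a model .pt checkpoint.")
else:
    for f in event_files:
        print(f)
```

If the list is empty even after training, double-check that training actually ran long enough to write its first summary.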
Robot not standing still
Symptom:
- In the play environment, the G1 humanoid walks, drifts, or oscillates instead of standing.
Fix:
Check env config (commands and rewards)
In g1_stand_env_cfg.py (G1StandFlatEnvCfg), ensure:
- Commands are zero: the ranges for `lin_vel_x`, `lin_vel_y`, and `ang_vel_z` are all set to `(0.0, 0.0)`.
- Velocity tracking rewards are disabled: track_lin_vel_xy_exp.weight = 0.0, track_ang_vel_z_exp.weight = 0.0.
- Standing rewards have non-zero weights: flat_orientation_l2, lin_vel_z_l2, ang_vel_xy_l2, etc.
Check PPO config (max_iterations)
In rsl_rl_ppo_cfg.py (G1StandFlatPPORunnerCfg): set max_iterations to at least 1500 (or higher if needed).
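The three config checks above can be bundled into a quick sanity-check script. The sketch below uses a `SimpleNamespace` stand-in for the real `G1StandFlatEnvCfg`; the attribute names follow the terms listed above, and the `-5.0` weight is an arbitrary nonzero example:

```python
# Sanity-check the standing-task invariants on a stand-in config object
# (SimpleNamespace instead of the real G1StandFlatEnvCfg).
from types import SimpleNamespace

cfg = SimpleNamespace(
    commands=SimpleNamespace(
        lin_vel_x=(0.0, 0.0), lin_vel_y=(0.0, 0.0), ang_vel_z=(0.0, 0.0)),
    rewards=SimpleNamespace(
        track_lin_vel_xy_exp=SimpleNamespace(weight=0.0),
        track_ang_vel_z_exp=SimpleNamespace(weight=0.0),
        flat_orientation_l2=SimpleNamespace(weight=-5.0)),
)

# Commands must be zero and velocity tracking disabled:
assert cfg.commands.lin_vel_x == (0.0, 0.0)
assert cfg.commands.lin_vel_y == (0.0, 0.0)
assert cfg.commands.ang_vel_z == (0.0, 0.0)
assert cfg.rewards.track_lin_vel_xy_exp.weight == 0.0
assert cfg.rewards.track_ang_vel_z_exp.weight == 0.0
# Standing rewards must actually contribute:
assert cfg.rewards.flat_orientation_l2.weight != 0.0
print("standing-task config looks consistent")
```

Running the equivalent assertions against your real config object catches a walking-gait leftover (nonzero command ranges or tracking weights) before you spend another 1500 iterations training.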