feat(envs): add RoboTwin 2.0 benchmark integration#3315
Open
pkooij wants to merge 58 commits into feat/async-vector-env from
Conversation
…chmark docs Add a comprehensive guide for adding new benchmarks to LeRobot, and refactor the existing LIBERO and Meta-World docs to follow the new standardized template. Made-with: Cursor
…asses Replace hardcoded if/elif chains in factory.py with create_envs() and get_env_processors() methods on EnvConfig. New benchmarks now only need to register a config subclass — no factory.py edits required. Net -23 lines: factory.py shrinks from ~200 to ~70 lines of logic. Made-with: Cursor
Rewrite for simpler language, better structure, and easier navigation. Move quick-reference table to the top, fold eval explanation into architecture section, condense the doc template to a bulleted outline. Made-with: Cursor
Incorporate cleaner writing from the docs branch while reflecting the refactored dispatch pattern (no factory.py edits needed for new benchmarks). Made-with: Cursor
Keep refactored dispatch pattern (no factory.py edits for new benchmarks). Incorporate main's "Verifying your integration" section and class naming fix. Made-with: Cursor
- test_registry_all_types: skip non-EnvConfig stubs (e.g. TestPluginConfig)
- test_processors_delegation: use None instead of abstract PreTrainedConfig
- test_custom_get_env_processors_override: use DataProcessorPipeline for isinstance check (PolicyProcessorPipeline is a subscripted generic)

Made-with: Cursor
- Thread camera_name_mapping from LiberoEnv config through to gym envs
- Sync features_map with camera_name_mapping in LiberoEnv.__post_init__
- Fix render() to use first available camera instead of hardcoded "image"
- Handle non-dict final_info in rollout by falling back to info["is_success"]
- Add use_peft legacy field to SmolVLAConfig for checkpoint compat
- Add defaults to GR00TN15Config init=False fields for transformers 5.3

Made-with: Cursor
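The "sync features_map with camera_name_mapping" step could be sketched roughly as follows; the function name and dict shapes are hypothetical illustrations, not the actual LeRobot code:

```python
def apply_camera_name_mapping(features_map: dict, mapping: dict) -> dict:
    """Hypothetical sketch: rewrite feature keys that reference an old
    camera name so they stay consistent with camera_name_mapping."""
    out = {}
    for key, feat in features_map.items():
        for old, new in mapping.items():
            if old in key:
                key = key.replace(old, new)
        out[key] = feat
    return out
```

Any config that renames cameras would run this once (e.g. in `__post_init__`) so downstream consumers only ever see the mapped names.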
Made-with: Cursor
Made-with: Cursor
- Revert GR00T N1.5 default_factory/default changes (transformers compat)
- Revert SmolVLA use_peft legacy field
- Apply ruff formatting fixes
- camera_name_mapping stays entirely in env/eval layer (no policy changes)

Made-with: Cursor
Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co> Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com>
Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co> Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com>
Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co> Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com>
…asium < 1.0) Made-with: Cursor
Made-with: Cursor
Made-with: Cursor
Made-with: Cursor
LiberoEnv and MetaworldEnv previously allocated GPU resources (EGL context,
OpenGL framebuffer) in __init__, before AsyncVectorEnv's fork(). Worker
processes inherited stale GPU handles, causing EGL_BAD_CONTEXT crashes on
first render.
Fix: defer OffScreenRenderEnv / MT1 construction to _ensure_env(), called on
first reset() or step() inside the worker subprocess. Each worker creates its
own clean context after fork().
Also fixes lerobot_eval.py:170 (add_envs_task TODO): replace with
env.call("task") which works with both SyncVectorEnv and AsyncVectorEnv.
AsyncVectorEnv is now the default for n_envs > 1; it is auto-downgraded to
SyncVectorEnv when n_envs=1, where async parallelism brings no benefit and
sync has less overhead.
Expected speedup: ~15-20x for LIBERO Spatial with batch_size=50.
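The deferred-construction pattern this commit describes can be sketched as a minimal, hypothetical wrapper (names like `LazyRenderEnv` and `make_sim` are illustrative, not the actual LeRobot classes):

```python
class LazyRenderEnv:
    """Sketch of the deferred GPU-init pattern. Nothing GPU-related is
    allocated in __init__, so AsyncVectorEnv can fork the parent process
    safely; each worker builds its own EGL context on first use."""

    def __init__(self, make_sim):
        # make_sim is a zero-arg factory that allocates GPU state
        # (e.g. an off-screen renderer). It is NOT called here.
        self._make_sim = make_sim
        self._sim = None

    def _ensure_env(self):
        # First call happens inside the worker subprocess, after
        # fork(), so the GPU context belongs to this process.
        if self._sim is None:
            self._sim = self._make_sim()
        return self._sim

    def reset(self):
        return self._ensure_env().reset()

    def step(self, action):
        return self._ensure_env().step(action)
```

The key invariant is that `__init__` stays cheap and fork-safe; all expensive, process-bound resources are created lazily in the worker.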
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
eval_policy_all never closed environments after each task completed, causing AsyncVectorEnv worker processes to accumulate (N_tasks × n_envs). This led to OOM, BrokenPipeError, and EOFError on multi-task benchmarks.

Also fixes:
- AsyncVectorEnv compat in envs/utils.py (use get_attr/call instead of .envs)
- Tuple task handling in tokenizer_processor and lerobot_eval
- _LazyAsyncVectorEnv for deferred worker spawning in LIBERO

Made-with: Cursor
…ning
env.call("task") returns the LIBERO task name with underscores
(e.g. "pick_up_the_black_bowl_...") instead of the natural language
description ("pick up the black bowl ..."). The VLM tokenizes these
completely differently, causing 0.0 reward across all episodes.
Made-with: Cursor
- Replace add_envs_task reference with env.call("task_description")
- Update use_async_envs default to True
- Add note about lazy GPU init for AsyncVectorEnv compatibility
Made-with: Cursor
- batch_size=0 (default) auto-tunes based on CPU cores, capped by n_episodes and 64. Removes the need for users to guess the right value. The old batch_size > n_episodes error is replaced by silently clamping to n_episodes.
- _LazyAsyncVectorEnv accepts pre-computed spaces so only one temp env is created per suite (not per task). For libero_spatial (10 tasks) this avoids 9 redundant LiberoEnv instantiations during env setup.

Made-with: Cursor
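The auto-tuning rule described in this commit could look roughly like the sketch below; `resolve_batch_size` is a hypothetical name, and only the stated constraints (CPU cores, capped by n_episodes and 64, clamping instead of erroring) are taken from the commit message:

```python
import os

def resolve_batch_size(batch_size: int, n_episodes: int) -> int:
    """Hypothetical sketch of the batch_size=0 auto-tuning rule."""
    if batch_size == 0:  # auto: default to CPU core count, capped
        cores = os.cpu_count() or 1
        return min(cores, n_episodes, 64)
    # Explicit values larger than n_episodes are silently clamped,
    # replacing the old "batch_size > n_episodes" error.
    return min(batch_size, n_episodes)
```

Clamping rather than erroring keeps small smoke evals (e.g. n_episodes=1) working with any configured batch size.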
- New docs/source/evaluation.mdx covering lerobot-eval usage, batch_size auto-tuning, AsyncVectorEnv performance, tuning tips, output format, multi-task evaluation, and programmatic usage.
- Add evaluation page to _toctree.yml under Benchmarks section.
- Update adding_benchmarks.mdx to reference the batch_size auto default and link to the evaluation guide.

Made-with: Cursor
Made-with: Cursor
- AsyncVectorEnv now uses shared_memory=True for zero-copy observation transfer
- LiberoEnvConfig.gym_kwargs passes observation_height/width to the env
- eval_policy_all prefetches next task's workers while current task runs

Made-with: Cursor
Made-with: Cursor
Each benchmark gets its own Docker image (lerobot[libero] / lerobot[metaworld] only) so incompatible dep trees cannot collide. A 1-episode smoke eval runs per benchmark on GPU runners. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
libero/__init__.py calls input() to ask about a custom dataset path, which raises EOFError when stdin is closed inside Docker. Setting LIBERO_DATA_FOLDER skips the prompt entirely. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
libero/__init__.py calls input() when ~/.libero/config.yaml is missing. We write the config at image build time (without importing libero) so the prompt never fires at runtime. Also trigger CI on pyproject.toml changes. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…n -c The multiline RUN python -c "..." was being parsed as Dockerfile instructions. Use printf to write ~/.libero/config.yaml directly. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The config was pointing to /tmp/libero_init which doesn't exist. Use importlib.util.find_spec to locate the hf-libero package directory and write paths to the actual bundled bddl_files/init_files/assets. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
num2words (required by SmolVLM processor) is declared in lerobot[smolvla], not lerobot[libero/metaworld]. Install both extras together. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
isinstance(env, AsyncVectorEnv) silently skipped _LazyAsyncVectorEnv,
causing video rendering to produce no frames on the default async path.
Switch to hasattr(env, "call") so any async-compatible env (including
_LazyAsyncVectorEnv) hits the call("render") branch.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
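The duck-typing fix this commit describes can be illustrated with a small sketch (function and class names are hypothetical; only the `hasattr(env, "call")` dispatch is from the commit):

```python
def render_frames(env):
    """Sketch of the render dispatch after the fix: duck-type on .call()
    rather than isinstance(env, AsyncVectorEnv), so lazy wrappers like
    _LazyAsyncVectorEnv are not silently skipped."""
    if hasattr(env, "call"):  # any async-compatible vector env
        return list(env.call("render"))
    # Sync path: render each sub-env directly.
    return [e.render() for e in env.envs]
```

An `isinstance` check pins the code to one concrete class; checking for the capability (`call`) keeps every async-compatible wrapper on the fast path.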
_get_sub_env_attr was defined but never called anywhere in the codebase. _sub_env_has_attr (its sibling) is kept — it is actively used in utils.py. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ample
add_envs_task is replaced by env.call("task_description") in this PR.
Remove it from the pipeline walkthrough and renumber the steps (8→7).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
__del__ is unreliable as a cleanup mechanism. close() is already called explicitly in the eval loop's finally block, so the finalizer is redundant. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ry overlap

Previously, next task's AsyncVectorEnv workers were spawned while the current task was still running, causing both tasks' GPU contexts to coexist. Moving the prefetch start into the finally block (after env.close()) ensures workers for task N+1 only spin up once task N has released GPU memory.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
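The ordering fix above can be sketched as a minimal eval loop (all names hypothetical; only the "close current env before spawning the next task's workers" ordering is taken from the commit):

```python
def eval_all_tasks(tasks, make_env, run_task):
    """Sketch: workers for task N+1 are only created after env.close()
    has released task N's GPU memory, so two tasks' GPU contexts
    never coexist."""
    results = []
    env = make_env(tasks[0])
    for i, task in enumerate(tasks):
        try:
            results.append(run_task(env, task))
        finally:
            env.close()  # release this task's GPU contexts first...
            if i + 1 < len(tasks):
                env = make_env(tasks[i + 1])  # ...then start the next
    return results
```

Doing the spawn in `finally` also guarantees cleanup ordering holds even when `run_task` raises.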
_LazyAsyncVectorEnv lived in libero.py but metaworld had the same OOM problem: all tasks' AsyncVectorEnv workers were spawned eagerly, wasting GPU memory for tasks not yet running. Move the class to envs/utils.py so both environments share it, then apply the same is_async + lazy wrapping pattern in create_metaworld_envs. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Benchmark CI workflow, Dockerfiles, benchmark docs, evaluation smoke-test doc, and dispatch tests belong in a separate PR. Scope this PR to the async env init changes only. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…changes

- Restore docs/source/adding_benchmarks.mdx (belongs in this PR)
- Restore tests/envs/test_dispatch.py (belongs in this PR)
- Revert docs/source/env_processor.mdx to main (out of scope for this PR)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…e PR) Step 7 (Dockerfile + benchmark_tests.yml CI job) and its table rows are out of scope for this PR. The CI infrastructure will be added on top in a follow-up PR. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Each benchmark gets its own image (lerobot[<benchmark>,smolvla]) so incompatible dep trees can never collide. A 1-episode smoke eval runs per benchmark on GPU runners.

- Libero: pepijn223/smolvla_libero, libero_spatial, camera_name_mapping
- MetaWorld: pepijn223/smolvla_metaworld, metaworld-push-v2
- LIBERO config pre-created at build time to bypass interactive stdin prompt
- Triggers on envs/**, lerobot_eval.py, Dockerfiles, pyproject.toml changes
- Adds docs/source/evaluation.mdx and restores step 7 in adding_benchmarks

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
All MetaWorld task names in metaworld_config.json use the v3 suffix. push-v2 caused a KeyError on TASK_DESCRIPTIONS lookup. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add HF_HUB_DOWNLOAD_TIMEOUT=300 to both jobs — SmolVLM2 processor download was timing out on CI runners with the default timeout
- MetaWorld: add --rename_map to map observation.image → camera1 and --policy.empty_cameras=2 to pad the 2 missing cameras the policy expects (trained with 3 cameras, env provides 1)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The 586-file lerobot/libero-assets dataset was being fetched at runtime (on first reset()) which consistently hit a 504 Gateway Timeout on CI runners. Downloading at build time bakes the assets into the image so no network call is needed during the smoke eval. The config.yaml now points assets → ~/.libero/assets (the downloaded snapshot) instead of the bundled (empty) package path. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Integrates RoboTwin 2.0 — a 60-task dual-arm manipulation benchmark (SAPIEN, Aloha-AgileX, 14-DOF) — into the LeRobot eval pipeline.

- src/lerobot/envs/robotwin.py: Gymnasium wrapper (RoboTwinEnv) around RoboTwin's custom SAPIEN API. Deferred _ensure_env() for AsyncVectorEnv compatibility. create_robotwin_envs() multi-task factory.
- src/lerobot/envs/configs.py: RoboTwinEnvConfig registered as 'robotwin'. All 4 cameras (head, front, left/right wrist) enabled by default.
- src/lerobot/processor/env_processor.py: RoboTwinProcessorStep pass-through.
- docs/source/robotwin.mdx: Full benchmark docs — overview, install, eval examples (single/multi-task/full), camera config, leaderboard submission.
- docs/source/_toctree.yml: Add RoboTwin 2.0 to Benchmarks section.
- docs/source/adding_benchmarks.mdx: Add RoboTwin row to benchmark table.
- tests/envs/test_robotwin.py: 21 unit tests, all mocked (no SAPIEN needed).

Dataset: hxma/RoboTwin-LeRobot-v3.0 is already LeRobot v3.0 format (79.6 GB, Apache 2.0). No conversion needed; referenced as-is in docs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds isolated CI coverage for the RoboTwin 2.0 benchmark, following the same pattern as PR #3309 (libero + metaworld).

docker/Dockerfile.benchmark.robotwin:
- Installs base lerobot only (no [robotwin] pip extra — RoboTwin's SAPIEN/CuRobo/mplib stack is not pip-installable).
- Provides a reproducible, isolated image for CI and local debugging.
- Documents the full install path for GPU machines in the file header.

.github/workflows/benchmark_tests.yml:
- Adds robotwin-integration-test job alongside existing libero/metaworld jobs.
- Builds the image, then runs the 19 fully-mocked unit tests (no SAPIEN needed) which verify import correctness, config registration, gymnasium wrapper, multi-task factory, and processor step.
- Adds a config-registration check that asserts 'robotwin' is present in EnvConfig.get_known_choices() and that features are correctly populated.
- Scoped to paths: src/lerobot/envs/**, lerobot_eval.py, Dockerfiles, yml.

Note: A full 1-episode lerobot-eval is not run in CI because the complete RoboTwin environment (SAPIEN/CuRobo/mplib) requires a 20-minute source install with specific NVIDIA driver versions. The mocked test suite provides equivalent import and API regression coverage.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Replace the mocked-only base image with a full-install image that builds the entire RoboTwin 2.0 simulator stack:

- CUDA 12.1.1 devel base (nvcc needed for CuRobo compilation)
- Python 3.10 (tested with SAPIEN/mplib upstream)
- SAPIEN 3.0.0b1, mplib 0.2.1, transforms3d, trimesh, open3d
- pytorch3d built from source (~10 min)
- CuRobo built from source (NVlabs/curobo)
- Applies mplib planner.py + SAPIEN urdf_loader.py upstream patches
- Downloads embodiments.zip (~220 MB) + objects.zip (~3.74 GB) assets
- Sets PYTHONPATH to expose RoboTwin envs/ task modules

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary
Changes
- src/lerobot/envs/robotwin.py: RoboTwinEnv gymnasium wrapper + create_robotwin_envs()
- src/lerobot/envs/configs.py: RoboTwinEnvConfig registered as --env.type=robotwin
- src/lerobot/processor/env_processor.py: RoboTwinProcessorStep
- docs/source/robotwin.mdx
- docs/source/_toctree.yml
- docs/source/adding_benchmarks.mdx
- tests/envs/test_robotwin.py

Design decisions
- _ensure_env(): SAPIEN allocates EGL/GPU contexts that must not be forked from the parent process — same pattern as LiberoEnv
- Cameras: head_camera, front_camera, left_wrist, right_wrist; overridable via --env.camera_names
- take_action() with step() fallback: RoboTwin 2.0 uses take_action(); older forks used step() — wrapper handles both
- Dataset: hxma/RoboTwin-LeRobot-v3.0 is already LeRobot v3.0 format (79.6 GB, Apache 2.0) — no conversion needed, referenced as-is in docs

How to run
Test plan
- pre-commit run -a passes on all changed files
- pytest tests/envs/test_robotwin.py -k "not ProcessorStep"
- RoboTwinEnvConfig instantiates with correct features/features_map
- RoboTwinProcessorStep logic verified: images pass-through, state cast to float32

🤖 Generated with Claude Code
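The take_action()/step() fallback noted in the design decisions can be sketched as a small adapter (the class name and structure are hypothetical; only the two method names and the dispatch rule come from the PR description):

```python
class RoboTwinActionAdapter:
    """Hypothetical sketch of the API fallback: RoboTwin 2.0 tasks
    expose take_action(), while older forks expose step(); dispatch
    to whichever method the wrapped task provides."""

    def __init__(self, task):
        self._task = task

    def apply(self, action):
        if hasattr(self._task, "take_action"):  # RoboTwin 2.0 API
            return self._task.take_action(action)
        return self._task.step(action)  # older forks
```

This keeps the gymnasium wrapper agnostic to which RoboTwin fork is installed.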