
feat(envs): add RoboCerebra long-horizon manipulation benchmark #3314

Open

pkooij wants to merge 57 commits into feat/benchmark-ci from feat/robocerebra-benchmark

Conversation

@pkooij
Member

@pkooij pkooij commented Apr 8, 2026

Summary

  • Adds new robocerebra env type wrapping LIBERO's libero_10 suite with RoboCerebra-specific defaults (256×256, 20 FPS, camera keys image/wrist_image matching the HF dataset)
  • Fixes a gap in the LIBERO factory: camera_name_mapping was accepted by LiberoEnv.__init__ but never forwarded through create_libero_envs → _make_env_fns → _make_env; it is now propagated correctly end-to-end (this also benefits existing LiberoEnv users who pass --env.camera_name_mapping via CLI)
  • Adds robocerebra optional dependency group in pyproject.toml (aliases lerobot[libero])
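
The propagation fix can be sketched as follows. This is a simplified illustration, not the real lerobot code — the function names mirror the chain named above, but the signatures and return values are stand-ins:

```python
# Illustrative sketch of the fix: camera_name_mapping must survive every
# hop of the factory chain, not just be accepted by the env config.
# Signatures are simplified stand-ins for the real lerobot functions.

def _make_env(task, camera_name_mapping=None):
    # Innermost constructor: the mapping finally reaches the env here.
    return {"task": task, "camera_name_mapping": camera_name_mapping}

def _make_env_fns(tasks, camera_name_mapping=None):
    # Before the fix, the kwarg was dropped at a hop like this one.
    return [lambda t=t: _make_env(t, camera_name_mapping) for t in tasks]

def create_libero_envs(tasks, camera_name_mapping=None):
    return [fn() for fn in _make_env_fns(tasks, camera_name_mapping)]

envs = create_libero_envs(
    ["libero_10"], camera_name_mapping={"agentview_image": "image"}
)
assert envs[0]["camera_name_mapping"] == {"agentview_image": "image"}
```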

Dataset

The dataset CollisionCode/RoboCerebra_lerobot_v3.0 is already in LeRobot v3.0 format — no conversion needed.

Camera keys in the dataset match our defaults:

  • observation.images.image (agent-view)
  • observation.images.wrist_image (wrist)

Files changed

| File | Change |
| --- | --- |
| `src/lerobot/envs/configs.py` | Add `RoboCerebraEnv`; fix `LiberoEnv.create_envs` to forward `camera_name_mapping` |
| `src/lerobot/envs/libero.py` | Thread `camera_name_mapping` through `create_libero_envs` → `_make_env_fns` → `_make_env` |
| `pyproject.toml` | Add `robocerebra` extra (= `lerobot[libero]`, Linux only) |
| `tests/envs/test_robocerebra_env.py` | 10 unit tests (no LIBERO install needed — factory fully mocked) |
| `docs/source/robocerebra.md` | Install, dataset, task table, eval commands, config reference, citation |

Test plan

  • 10 unit tests pass (pytest tests/envs/test_robocerebra_env.py) — no LIBERO needed, all env creation mocked
  • pre-commit run -a passes on all changed files
  • Eval smoke test (requires Linux + LIBERO + GPU):
    # Test existing libero path (user-provided command)
    lerobot-eval \
        --policy.path=pepijn223/smolvla_libero \
        --env.type=libero \
        --env.task=libero_spatial \
        --eval.batch_size=1 \
        --eval.n_episodes=1 \
        --eval.use_async_envs=false \
        --policy.device=cuda \
        '--env.camera_name_mapping={"agentview_image": "camera1", "robot0_eye_in_hand_image": "camera2"}' \
        --policy.empty_cameras=1
    
    # Test new robocerebra env type
    lerobot-eval \
        --policy.path=pepijn223/smolvla_libero \
        --env.type=robocerebra \
        --env.task=libero_10 \
        --eval.batch_size=1 \
        --eval.n_episodes=1 \
        --eval.use_async_envs=false \
        --policy.device=cuda

Publishing proposal

The dataset is already on HuggingFace at CollisionCode/RoboCerebra_lerobot_v3.0. To mirror it under the lerobot/ org:

# huggingface-cli upload expects a local path, not a source repo id,
# so download the dataset first, then upload the local snapshot
huggingface-cli download CollisionCode/RoboCerebra_lerobot_v3.0 --repo-type dataset --local-dir ./RoboCerebra_lerobot_v3.0
huggingface-cli repo create robocerebra --type dataset --organization lerobot
huggingface-cli upload lerobot/robocerebra ./RoboCerebra_lerobot_v3.0 --repo-type dataset

Requires lerobot/ org write access — can be done separately after merge.

🤖 Generated with Claude Code

pkooij and others added 30 commits April 2, 2026 20:43
…chmark docs

Add a comprehensive guide for adding new benchmarks to LeRobot, and
refactor the existing LIBERO and Meta-World docs to follow the new
standardized template.

Made-with: Cursor
…asses

Replace hardcoded if/elif chains in factory.py with create_envs() and
get_env_processors() methods on EnvConfig. New benchmarks now only need
to register a config subclass — no factory.py edits required.

Net -23 lines: factory.py shrinks from ~200 to ~70 lines of logic.

Made-with: Cursor
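
The dispatch pattern this commit describes can be sketched as follows. Class and method names are simplified stand-ins, not the real lerobot configs:

```python
# Simplified sketch of the refactor: each EnvConfig subclass implements
# create_envs(), so the factory no longer needs if/elif chains over env
# types — registering a new benchmark requires no factory edits.
from dataclasses import dataclass


@dataclass
class EnvConfig:
    n_envs: int = 1

    def create_envs(self):
        raise NotImplementedError


@dataclass
class LiberoConfig(EnvConfig):
    task: str = "libero_10"

    def create_envs(self):
        # A real implementation would build gym envs; strings suffice here.
        return [f"libero:{self.task}"] * self.n_envs


def make_envs(cfg: EnvConfig):
    # The factory knows nothing about concrete benchmark types anymore.
    return cfg.create_envs()
```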
Rewrite for simpler language, better structure, and easier navigation.
Move quick-reference table to the top, fold eval explanation into
architecture section, condense the doc template to a bulleted outline.

Made-with: Cursor
Incorporate cleaner writing from the docs branch while reflecting the
refactored dispatch pattern (no factory.py edits needed for new benchmarks).

Made-with: Cursor
Keep refactored dispatch pattern (no factory.py edits for new benchmarks).
Incorporate main's "Verifying your integration" section and class naming fix.

Made-with: Cursor
- test_registry_all_types: skip non-EnvConfig stubs (e.g. TestPluginConfig)
- test_processors_delegation: use None instead of abstract PreTrainedConfig
- test_custom_get_env_processors_override: use DataProcessorPipeline for isinstance check (PolicyProcessorPipeline is a subscripted generic)

Made-with: Cursor
- Thread camera_name_mapping from LiberoEnv config through to gym envs
- Sync features_map with camera_name_mapping in LiberoEnv.__post_init__
- Fix render() to use first available camera instead of hardcoded "image"
- Handle non-dict final_info in rollout by falling back to info["is_success"]
- Add use_peft legacy field to SmolVLAConfig for checkpoint compat
- Add defaults to GR00TN15Config init=False fields for transformers 5.3

Made-with: Cursor
- Revert GR00T N1.5 default_factory/default changes (transformers compat)
- Revert SmolVLA use_peft legacy field
- Apply ruff formatting fixes
- camera_name_mapping stays entirely in env/eval layer (no policy changes)

Made-with: Cursor
Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co>
Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com>
Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co>
Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com>
Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co>
Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com>
LiberoEnv and MetaworldEnv previously allocated GPU resources (EGL context,
OpenGL framebuffer) in __init__, before AsyncVectorEnv's fork(). Worker
processes inherited stale GPU handles, causing EGL_BAD_CONTEXT crashes on
first render.

Fix: defer OffScreenRenderEnv / MT1 construction to _ensure_env(), called on
first reset() or step() inside the worker subprocess. Each worker creates its
own clean context after fork().

Also fixes lerobot_eval.py:170 (add_envs_task TODO): replace with
env.call("task") which works with both SyncVectorEnv and AsyncVectorEnv.

AsyncVectorEnv is now the default for n_envs > 1; auto-downgraded to
SyncVectorEnv when n_envs=1 (no benefit, less overhead).

Expected speedup: ~15-20x for LIBERO Spatial with batch_size=50.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
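
The deferred-construction pattern described above can be sketched roughly like this. The dict stands in for the real OffScreenRenderEnv construction; the class is an illustrative simplification:

```python
# Hedged sketch: GPU-backed resources are created in _ensure_env() on the
# first reset()/step() inside the worker process, not in __init__ before
# AsyncVectorEnv forks, so each worker builds its own clean context.
import os


class LazyRenderEnv:
    def __init__(self, task):
        self.task = task
        self._env = None  # no EGL/OpenGL state allocated yet; cheap to fork

    def _ensure_env(self):
        if self._env is None:
            # In the real code, OffScreenRenderEnv is constructed here,
            # after fork(), so no stale GPU handles are inherited.
            self._env = {"task": self.task, "pid": os.getpid()}
        return self._env

    def reset(self):
        return self._ensure_env()

    def step(self, action):
        env = self._ensure_env()
        return env, 0.0, False
```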
eval_policy_all never closed environments after each task completed,
causing AsyncVectorEnv worker processes to accumulate (N_tasks × n_envs).
This led to OOM, BrokenPipeError and EOFError on multi-task benchmarks.

Also fixes:
- AsyncVectorEnv compat in envs/utils.py (use get_attr/call instead of .envs)
- Tuple task handling in tokenizer_processor and lerobot_eval
- _LazyAsyncVectorEnv for deferred worker spawning in LIBERO

Made-with: Cursor
…ning

env.call("task") returns the LIBERO task name with underscores
(e.g. "pick_up_the_black_bowl_...") instead of the natural language
description ("pick up the black bowl ..."). The VLM tokenizes these
completely differently, causing 0.0 reward across all episodes.

Made-with: Cursor
- Replace add_envs_task reference with env.call("task_description")
- Update use_async_envs default to True
- Add note about lazy GPU init for AsyncVectorEnv compatibility

Made-with: Cursor
- batch_size=0 (default) auto-tunes based on CPU cores, capped by
  n_episodes and 64. Removes the need for users to guess the right
  value. The old batch_size > n_episodes error is replaced by silently
  clamping to n_episodes.
- _LazyAsyncVectorEnv accepts pre-computed spaces so only one temp env
  is created per suite (not per task). For libero_spatial (10 tasks)
  this avoids 9 redundant LiberoEnv instantiations during env setup.

Made-with: Cursor
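
The auto-tuning rule described above might look roughly like this. The exact heuristic in the PR may differ; this sketch only assumes the stated behavior (batch_size=0 derives a value from CPU cores, capped by n_episodes and 64; oversized explicit values are clamped instead of raising):

```python
# Illustrative sketch of batch_size auto-tuning, per the commit message.
import os


def resolve_batch_size(batch_size, n_episodes):
    if batch_size == 0:  # auto: derive from CPU cores, cap at 64
        batch_size = min(os.cpu_count() or 1, n_episodes, 64)
    # Old behavior raised when batch_size > n_episodes; new behavior
    # silently clamps to n_episodes instead.
    return min(batch_size, n_episodes)
```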
- New docs/source/evaluation.mdx covering lerobot-eval usage, batch_size
  auto-tuning, AsyncVectorEnv performance, tuning tips, output format,
  multi-task evaluation, and programmatic usage.
- Add evaluation page to _toctree.yml under Benchmarks section.
- Update adding_benchmarks.mdx to reference batch_size auto default and
  link to the evaluation guide.

Made-with: Cursor
- AsyncVectorEnv now uses shared_memory=True for zero-copy observation transfer
- LiberoEnvConfig.gym_kwargs passes observation_height/width to the env
- eval_policy_all prefetches next task's workers while current task runs

Made-with: Cursor
pkooij and others added 27 commits April 7, 2026 20:11
Made-with: Cursor
Each benchmark gets its own Docker image (lerobot[libero] / lerobot[metaworld]
only) so incompatible dep trees cannot collide. A 1-episode smoke eval runs
per benchmark on GPU runners.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
libero/__init__.py calls input() to ask about a custom dataset path,
which raises EOFError when stdin is closed inside Docker. Setting
LIBERO_DATA_FOLDER skips the prompt entirely.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
libero/__init__.py calls input() when ~/.libero/config.yaml is missing.
We write the config at image build time (without importing libero) so
the prompt never fires at runtime. Also trigger CI on pyproject.toml changes.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…n -c

The multiline RUN python -c "..." was being parsed as Dockerfile
instructions. Use printf to write ~/.libero/config.yaml directly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The config was pointing to /tmp/libero_init which doesn't exist.
Use importlib.util.find_spec to locate the hf-libero package directory
and write paths to the actual bundled bddl_files/init_files/assets.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
num2words (required by SmolVLM processor) is declared in lerobot[smolvla],
not lerobot[libero/metaworld]. Install both extras together.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
isinstance(env, AsyncVectorEnv) silently skipped _LazyAsyncVectorEnv,
causing video rendering to produce no frames on the default async path.
Switch to hasattr(env, "call") so any async-compatible env (including
_LazyAsyncVectorEnv) hits the call("render") branch.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
_get_sub_env_attr was defined but never called anywhere in the codebase.
_sub_env_has_attr (its sibling) is kept — it is actively used in utils.py.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ample

add_envs_task is replaced by env.call("task_description") in this PR.
Remove it from the pipeline walkthrough and renumber the steps (8→7).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
__del__ is unreliable as a cleanup mechanism. close() is already called
explicitly in the eval loop's finally block, so the finalizer is redundant.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ry overlap

Previously, next task's AsyncVectorEnv workers were spawned while the
current task was still running, causing both tasks' GPU contexts to coexist.
Moving the prefetch start into the finally block (after env.close()) ensures
workers for task N+1 only spin up once task N has released GPU memory.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
_LazyAsyncVectorEnv lived in libero.py but metaworld had the same OOM
problem: all tasks' AsyncVectorEnv workers were spawned eagerly, wasting
GPU memory for tasks not yet running.

Move the class to envs/utils.py so both environments share it, then apply
the same is_async + lazy wrapping pattern in create_metaworld_envs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Benchmark CI workflow, Dockerfiles, benchmark docs, evaluation smoke-test
doc, and dispatch tests belong in a separate PR. Scope this PR to the
async env init changes only.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…changes

- Restore docs/source/adding_benchmarks.mdx (belongs in this PR)
- Restore tests/envs/test_dispatch.py (belongs in this PR)
- Revert docs/source/env_processor.mdx to main (out of scope for this PR)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…e PR)

Step 7 (Dockerfile + benchmark_tests.yml CI job) and its table rows are
out of scope for this PR. The CI infrastructure will be added on top in a
follow-up PR.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Each benchmark gets its own image (lerobot[<benchmark>,smolvla]) so
incompatible dep trees can never collide. A 1-episode smoke eval runs
per benchmark on GPU runners.

- Libero: pepijn223/smolvla_libero, libero_spatial, camera_name_mapping
- MetaWorld: pepijn223/smolvla_metaworld, metaworld-push-v2
- LIBERO config pre-created at build time to bypass interactive stdin prompt
- Triggers on envs/**, lerobot_eval.py, Dockerfiles, pyproject.toml changes
- Adds docs/source/evaluation.mdx and restores step 7 in adding_benchmarks

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
All MetaWorld task names in metaworld_config.json use the v3 suffix.
push-v2 caused a KeyError on TASK_DESCRIPTIONS lookup.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add HF_HUB_DOWNLOAD_TIMEOUT=300 to both jobs — SmolVLM2 processor
  download was timing out on CI runners with the default timeout
- MetaWorld: add --rename_map to map observation.image → camera1 and
  --policy.empty_cameras=2 to pad the 2 missing cameras the policy
  expects (trained with 3 cameras, env provides 1)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The 586-file lerobot/libero-assets dataset was being fetched at runtime
(on first reset()) which consistently hit a 504 Gateway Timeout on CI
runners. Downloading at build time bakes the assets into the image so
no network call is needed during the smoke eval.

The config.yaml now points assets → ~/.libero/assets (the downloaded
snapshot) instead of the bundled (empty) package path.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Introduces a new `robocerebra` environment type that wraps the LIBERO
libero_10 suite with RoboCerebra-specific defaults (256×256 resolution,
20 FPS, camera keys `image`/`wrist_image` matching the HF dataset).

Also fixes a gap in the LIBERO factory chain: `camera_name_mapping` was
accepted by `LiberoEnv.__init__` but never forwarded through
`create_libero_envs` → `_make_env_fns` → `_make_env`. Both `LiberoEnv`
and `RoboCerebraEnv` now propagate it correctly end-to-end.

- `envs/configs.py`: `RoboCerebraEnv` config + `LiberoEnv.create_envs` fix
- `envs/libero.py`: thread `camera_name_mapping` through factory chain
- `pyproject.toml`: `robocerebra` optional dep group (= `lerobot[libero]`)
- `tests/envs/test_robocerebra_env.py`: 10 unit tests (no LIBERO needed)
- `docs/source/robocerebra.md`: install, dataset, eval commands, citation

Dataset: CollisionCode/RoboCerebra_lerobot_v3.0 (already LeRobot v3 — no conversion needed)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds the RoboCerebra benchmark to the benchmark integration test suite
introduced in #3309. Follows the same pattern: one isolated Docker image
per benchmark so dependency trees cannot collide.

- docker/Dockerfile.benchmark.robocerebra: installs lerobot[robocerebra]
  only (= lerobot[libero] alias: hf-libero + dm-control + mujoco)
- .github/workflows/benchmark_tests.yml: full workflow with libero,
  metaworld, and robocerebra parallel jobs; robocerebra job builds its
  own image and runs a 1-episode smoke eval on libero_10

Note: benchmark_tests.yml is also created in #3309. Whichever PR merges
second will need a trivial conflict resolution (add the robocerebra job
block to the existing file).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@pkooij pkooij force-pushed the feat/robocerebra-benchmark branch from 865c2a1 to cc8c571 Compare April 8, 2026 14:29
@pkooij pkooij changed the base branch from feat/async-vector-env to feat/benchmark-ci April 8, 2026 14:29
@pkooij pkooij force-pushed the feat/benchmark-ci branch from e89e6d9 to 927118e Compare April 8, 2026 17:22