
Inference with ACT with relative action not working well #3312

@LaFeuilleMorte


Ticket Type

🐛 Bug Report (Something isn't working)

Environment & System Info

Description

Title: Relative-action postprocessing uses the wrong anchor state when action queue is enabled (n_action_steps > 1)

Hi team,

I found a mismatch between relative-action postprocessing and action-queue inference for ACT-like chunked policies.

Summary

With relative actions enabled, inference should follow:

model output (relative) -> unnormalize -> absolute conversion (relative + state_at_chunk_start) -> robot

However, when n_action_steps > 1 (action queue enabled), the absolute conversion is applied once per control step, and each call uses the latest cached observation state, not the state from when that chunk was predicted.

This produces incorrect absolute actions for queued steps 2, 3, … of the same chunk.
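
A toy 1-D example (hypothetical numbers, not LeRobot code) shows how the drift accumulates when the anchor moves with the robot instead of staying fixed at the chunk's prediction time:

```python
# Toy 1-D illustration: a chunk of relative actions predicted from anchor
# state s0, converted to absolute two ways.
s0 = 10.0                            # state when the chunk was predicted
relative_chunk = [1.0, 2.0, 3.0]     # model targets: s0+1, s0+2, s0+3

# Correct: every action in the chunk shares the same anchor.
correct = [s0 + r for r in relative_chunk]      # [11.0, 12.0, 13.0]

# Buggy: the anchor is re-read from the latest observation each step.
# Assume the robot reaches each commanded target, so the state moves
# between dequeued actions.
state, buggy = s0, []
for r in relative_chunk:
    buggy.append(state + r)          # uses the current state, not s0
    state = buggy[-1]                # robot moves before the next dequeue

drift = [b - c for b, c in zip(buggy, correct)]
# buggy == [11.0, 13.0, 16.0], so the drift grows step by step
```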

Why this matters

The documentation says relative outputs are converted back to absolute at inference, which is correct.
But with queued chunk execution, the anchor state must stay fixed for all actions from the same chunk.
Otherwise, the relative-to-absolute conversion is no longer aligned with the state from which the model produced that chunk.

Expected behavior

For one predicted chunk, all queued relative actions should be converted to absolute using the same anchor state (the state when the chunk was generated).

Actual behavior

The absolute conversion uses the newest cached state on every step, so if the robot state changes between dequeued actions, the converted actions drift from what the model intended.

Likely root cause

  • AbsoluteActionsProcessorStep.__call__ reads relative_step._last_state on each postprocess call.
  • RelativeActionsProcessorStep caches the state on every inference loop (even when no action is present in the preprocessor input).
  • select_action in chunked policies dequeues previously predicted actions over multiple loops.

As a result, queued actions are converted against a moving anchor state.
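
A minimal mock of this interaction (simplified names standing in for RelativeActionsProcessorStep and AbsoluteActionsProcessorStep; this is an illustration, not the actual LeRobot implementation):

```python
from collections import deque

class RelativeStepMock:
    """Caches the observation state on every preprocess call,
    even on loops that only dequeue a previously predicted action."""
    def __init__(self):
        self._last_state = None

    def preprocess(self, state):
        self._last_state = state     # overwritten on every control loop

class AbsoluteStepMock:
    """Converts relative -> absolute using the *latest* cached state."""
    def __init__(self, relative_step):
        self.relative_step = relative_step

    def postprocess(self, rel_action):
        return self.relative_step._last_state + rel_action

rel_step = RelativeStepMock()
abs_step = AbsoluteStepMock(rel_step)

# Chunk of 3 relative actions predicted when the robot was at state 10.0.
queue = deque([1.0, 2.0, 3.0])

# Robot state observed on each of the 3 control loops draining the queue
# (it moves because the earlier actions were executed).
observed_states = [10.0, 11.0, 13.0]

executed = []
for obs in observed_states:
    rel_step.preprocess(obs)         # cache refreshed every loop ...
    executed.append(abs_step.postprocess(queue.popleft()))
# executed == [11.0, 13.0, 16.0], but the model meant [11.0, 12.0, 13.0]
```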

Affected places

  • src/lerobot/processor/relative_action_processor.py
    • RelativeActionsProcessorStep state caching behavior
    • AbsoluteActionsProcessorStep.__call__ uses latest cached state every call
  • chunked policy select_action implementations using action queues (e.g., ACT/PI0-style)

Proposal

Option A (generic, recommended):

  • Add a state-hold mechanism in AbsoluteActionsProcessorStep (e.g., state_hold_steps).
  • For queued action execution, reuse the same cached state for n_action_steps postprocess calls, then refresh.
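
A sketch of what Option A could look like (class name, `state_hold_steps`, and the refresh logic are all illustrative assumptions, not the actual LeRobot API): the anchor is captured once and reused for a fixed number of postprocess calls before being refreshed from the relative step's cache.

```python
class AbsoluteStepWithHold:
    """Holds the anchor state for `state_hold_steps` postprocess calls
    (matching n_action_steps), then refreshes it at the chunk boundary.
    Illustrative sketch only."""
    def __init__(self, relative_step, state_hold_steps):
        self.relative_step = relative_step
        self.state_hold_steps = state_hold_steps
        self._held_state = None
        self._calls_since_refresh = 0

    def postprocess(self, rel_action):
        # Refresh the anchor only at chunk boundaries.
        if (self._held_state is None
                or self._calls_since_refresh >= self.state_hold_steps):
            self._held_state = self.relative_step._last_state
            self._calls_since_refresh = 0
        self._calls_since_refresh += 1
        return self._held_state + rel_action

# Usage with a trivial stand-in for the relative step's state cache:
class _Rel:
    _last_state = 10.0

rel = _Rel()
abs_step = AbsoluteStepWithHold(rel, state_hold_steps=3)
outs = []
for rel_action, new_obs in [(1.0, 11.0), (2.0, 12.0), (3.0, 13.0)]:
    outs.append(abs_step.postprocess(rel_action))
    rel._last_state = new_obs        # cache keeps moving; anchor does not
# outs == [11.0, 12.0, 13.0]: all three actions use the 10.0 anchor
next_out = abs_step.postprocess(1.0) # 4th call refreshes: 13.0 + 1.0
```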

Option B (policy-local):

  • Implement an ACT-specific absolute step with state hold, without changing generic processor behavior.

Temporary workaround

Set n_action_steps = 1 so each action is predicted and converted in the same loop.
This avoids anchor mismatch but increases inference frequency.
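
As a hedged sketch of the workaround (the field name follows this report; the exact config class depends on your LeRobot version):

```python
# With n_action_steps = 1, only the first action of each predicted chunk is
# executed, so prediction and absolute conversion happen in the same loop
# with the same cached state. Illustrative override dict, not the real API.
policy_overrides = {
    "n_action_steps": 1,  # execute one action per inference call
    # chunk_size / horizon can stay as trained; only execution changes
}
```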

Thanks!

Context & Reproduction

No response

Relevant logs or stack trace

Checklist

  • I have searched existing tickets to ensure this isn't a duplicate.
  • I am using the latest version of the main branch.
  • I have verified this is not an environment-specific problem.

Additional Info / Workarounds

No response

Metadata


    Labels

  • bug: Something isn't working correctly
  • documentation: Improvements or fixes to the project's docs
  • enhancement: Suggestions for new features or improvements
  • policies: Items related to robot policies
  • processor: Issue related to processor
