fix(pi0-fast): don't apply embed scaling #3304
Open
zucchini-nlp wants to merge 2 commits into huggingface:main from
Conversation
Contributor
Pull request overview
Updates PI0/PI0Fast embedding logic to align with the upstream Transformers change that relocates Gemma/PaliGemma embedding scaling into the model’s embedding layer (so this code should no longer manually apply or remove scaling).
Changes:
- Removed manual `sqrt(hidden_size)` scaling for image embeddings and language token embeddings (sketched after this list).
- Switched language-token embedding retrieval from direct `embed_tokens(...)` access to an embedding-layer accessor call.
- Removed additional scaling when embedding incremental next-tokens during fast decoding paths.
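The change amounts to the following before/after pattern at the language-token call site (a sketch only; the function names and the direct-access layer on the old side are illustrative, not the exact lerobot code):

```python
import math

def embed_language_tokens_old(embed_tokens, tokens):
    # Old pattern: call the raw token-embedding layer, then scale by
    # sqrt(hidden_size) manually, as callers had to before the upstream change.
    emb = embed_tokens(tokens)
    return emb * math.sqrt(emb.shape[-1])

def embed_language_tokens_new(paligemma_with_expert, tokens):
    # New pattern: go through the embedding-layer accessor and return its output
    # as-is; the scaling now happens inside the embedding layer itself.
    return paligemma_with_expert.embed_language_tokens(tokens)
```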
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/lerobot/policies/pi0/modeling_pi0.py | Drops manual embedding scaling; updates language token embedding call site. |
| src/lerobot/policies/pi0_fast/modeling_pi0_fast.py | Drops manual embedding scaling across prefix/FAST action/decoding paths; updates language token embedding call site. |
Comment on lines 414 to +417
```diff
  # Process language instruction tokens
  def lang_embed_func(tokens):
      lang_emb = self.paligemma_with_expert.embed_language_tokens(tokens)
-     lang_emb_dim = lang_emb.shape[-1]
-     return lang_emb * math.sqrt(lang_emb_dim)
+     return lang_emb
```
After removing the `math.sqrt(...)` scaling, this block no longer uses `math`; since the module is now unused in this file, the `import math` at the top should be removed to satisfy linters (ruff/flake8) and avoid dead imports.
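For completeness, a sketch of how this helper might read once both the suggestion and the `import math` cleanup are applied (assuming `math` has no other uses in the file):

```python
# Process language instruction tokens
def lang_embed_func(tokens):
    # The embedding layer applies the sqrt(hidden_size) scaling itself now,
    # so we just return what the accessor gives back.
    return self.paligemma_with_expert.embed_language_tokens(tokens)
```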
From internal conversation: huggingface/transformers#44432 moved all of the scaling into the LM's embedding layer, so we should no longer manually apply or remove scaling with Gemma models. This fix is needed to pin `transformers >= 5.4.0`.
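To make the upstream change concrete, here is a toy stand-in (not the actual Transformers implementation) for an embedding layer that scales internally; applying the old call-site scaling on top of it would inflate the embeddings a second time:

```python
import math
import torch
import torch.nn as nn

# Toy stand-in for a Gemma-style embedding layer that applies the
# sqrt(hidden_size) scaling internally, as upstream now does.
class ScaledEmbedding(nn.Module):
    def __init__(self, vocab_size: int, hidden_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.scale = math.sqrt(hidden_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.embed(tokens) * self.scale

layer = ScaledEmbedding(vocab_size=32, hidden_size=16)
tokens = torch.tensor([[1, 2, 3]])

emb = layer(tokens)                       # already scaled by sqrt(16) = 4
doubled = emb * math.sqrt(emb.shape[-1])  # old call-site pattern on top: scaled twice

print(emb.norm().item(), doubled.norm().item())  # second norm is 4x larger than intended
```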