feat: Add Eagle3 speculative decoding support for Llama4 #100
base: main
Conversation
Implements the Eagle3 speculative decoding architecture for Llama4 models, enabling faster inference through single-layer draft model speculation.

Key additions:
- Eagle3Llama4ForCausalLM: main implementation with a single-layer draft architecture
- SupportsEagle3 interface integration for Llama4ForCausalLM
- Model registry mappings for Eagle3 Llama4 models
- Auxiliary hidden state combination and vocabulary mapping
- Draft-to-target token conversion for multi-vocabulary support

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
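For readers unfamiliar with Eagle3, a minimal sketch of the auxiliary hidden state combination and draft-to-target vocabulary mapping mentioned above, modeled on the existing llama_eagle3.py pattern. The names fc, draft_id_to_target_id, and target_vocab_size are illustrative assumptions, not verbatim PR code:

```python
import torch
import torch.nn as nn

# Illustrative sketch only: mirrors the Eagle3 pattern in llama_eagle3.py.
# fc, draft_id_to_target_id, and target_vocab_size are assumed names.
class Eagle3Sketch(nn.Module):
    def __init__(self, hidden_size: int, draft_vocab_size: int,
                 target_vocab_size: int):
        super().__init__()
        # Eagle3 taps hidden states from three target layers and projects
        # their concatenation back down to hidden_size for the draft layer.
        self.fc = nn.Linear(3 * hidden_size, hidden_size)
        # Mapping from the reduced draft vocabulary to target token ids.
        self.register_buffer(
            "draft_id_to_target_id",
            torch.zeros(draft_vocab_size, dtype=torch.long))
        self.target_vocab_size = target_vocab_size

    def combine_hidden_states(
            self, aux_hidden_states: torch.Tensor) -> torch.Tensor:
        # [num_tokens, 3 * hidden_size] -> [num_tokens, hidden_size]
        return self.fc(aux_hidden_states)

    def map_draft_logits(self, logits: torch.Tensor) -> torch.Tensor:
        # Scatter draft-vocab logits into the full target vocabulary so
        # the target model can verify the proposed tokens.
        full = logits.new_full(
            (logits.shape[0], self.target_vocab_size), float("-inf"))
        full[:, self.draft_id_to_target_id] = logits
        return full
```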
Enables multimodal input processing for Eagle3 speculative decoding with Llama4 models, supporting vision and other modalities.

Key changes:
- Updated get_input_embeddings to support multimodal embeddings
- Added merge_multimodal_embeddings integration
- Proper handling of the image_token_index configuration
- Maintains compatibility with existing text-only workflows

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
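As a rough illustration of the get_input_embeddings change, here is a hedged sketch built around vLLM's merge_multimodal_embeddings helper; the exact method body in the PR may differ:

```python
from typing import Optional

import torch
from vllm.model_executor.models.utils import merge_multimodal_embeddings

# Sketch of the multimodal-aware embedding path (not the verbatim PR code).
def get_input_embeddings(
    self,
    input_ids: torch.Tensor,
    multimodal_embeddings: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    inputs_embeds = self.model.get_input_embeddings(input_ids)
    if multimodal_embeddings is not None:
        # Splice vision (or other modality) embeddings in at the
        # positions of the image placeholder tokens.
        inputs_embeds = merge_multimodal_embeddings(
            input_ids,
            inputs_embeds,
            multimodal_embeddings,
            self.config.image_token_index,  # placeholder token id
        )
    return inputs_embeds
```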
Fixes an IndexError during draft model initialization by properly padding the no_rope_layers configuration to match the exact pattern from llama4_eagle.py. Configuration validation now correctly handles the layer offset for speculative decoding compatibility.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
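A hedged sketch of the padding described above; start_layer_id and the pad value 1 (RoPE enabled) are assumptions inferred from the commit text and the pattern in llama4_eagle.py:

```python
# Sketch only: pad no_rope_layers so the draft layer's offset index is
# valid when Llama4DecoderLayer reads it. Pad value 1 is an assumption.
def pad_no_rope_layers(config, start_layer_id: int) -> None:
    needed = start_layer_id + 1
    if len(config.no_rope_layers) < needed:
        config.no_rope_layers = (
            list(config.no_rope_layers)
            + [1] * (needed - len(config.no_rope_layers)))
```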
Handles cases where draft model configs (such as LlamaConfig) lack a no_rope_layers attribute by initializing it with sensible defaults before Llama4DecoderLayer creation. This prevents an AttributeError during draft model initialization while maintaining compatibility with existing configurations.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
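Similarly, a sketch of the defensive default for configs that lack the attribute entirely; defaulting every layer to 1 (RoPE enabled) is an assumption, not confirmed PR code:

```python
# Sketch only: ensure no_rope_layers exists before Llama4DecoderLayer
# indexes it per layer. The all-RoPE default is an assumption.
if not hasattr(config, "no_rope_layers"):
    config.no_rope_layers = [1] * config.num_hidden_layers
```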
Force-pushed from 32f9392 to 184da35
LGTM.
Would be good to add a speculators test model.
```diff
@@ -266,6 +266,8 @@
     "EagleLlama4ForCausalLM": ("llama4_eagle", "EagleLlama4ForCausalLM"),
     "EagleMiniCPMForCausalLM": ("minicpm_eagle", "EagleMiniCPMForCausalLM"),
     "Eagle3LlamaForCausalLM": ("llama_eagle3", "Eagle3LlamaForCausalLM"),
+    "Eagle3Llama4ForCausalLM": ("llama4_eagle3", "Eagle3Llama4ForCausalLM"),
+    "LlamaForCausalLMEagle3": ("llama4_eagle3", "Eagle3Llama4ForCausalLM"),
```
Why do we need both of these to be added?
Summary
This PR implements Eagle3 speculative decoding support for Llama4 models, enabling faster inference through single-layer draft model speculation.
Key Features
Architecture
The implementation follows the established Eagle3 pattern from llama_eagle3.py, with Llama4-specific enhancements.
Usage
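The Usage details did not survive extraction; as a stand-in, here is a hedged example of enabling Eagle3 speculation through vLLM's offline LLM API. The model names and the draft checkpoint path are placeholders, not the checkpoints validated in this PR:

```python
from vllm import LLM, SamplingParams

# Placeholder model names; substitute real target and draft checkpoints.
llm = LLM(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    speculative_config={
        "method": "eagle3",
        "model": "path/to/eagle3-llama4-draft",  # hypothetical draft path
        "num_speculative_tokens": 3,
    },
)
outputs = llm.generate(
    ["Speculative decoding speeds up inference by"],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```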
Testing
The implementation has been tested with: