
Conversation

rahul-tuli (Member) commented on Aug 21, 2025

Summary

This PR implements Eagle3 speculative decoding support for Llama4 models, enabling faster inference through single-layer draft model speculation.

Key Features

  • Eagle3Llama4ForCausalLM: Complete implementation with a single-layer draft architecture
  • SupportsEagle3 Interface: Integration with the existing Llama4ForCausalLM class
  • Model Registry: Proper mappings for Eagle3 Llama4 model resolution
  • Auxiliary Hidden States: Combines hidden states from multiple target-model layers to improve draft accuracy
  • Vocabulary Mapping: Draft-to-target token conversion for drafters that use a separate vocabulary (see the sketch after this list)
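
Eagle3 drafters can use a smaller vocabulary than the target model, so token ids sampled from the drafter must be translated back into target ids before verification. A minimal sketch of that mapping, assuming a lookup buffer named draft_id_to_target_id (the name and the vocabulary sizes below are illustrative, not the exact tensors in this PR):

import torch

draft_vocab_size, target_vocab_size = 32_000, 202_048  # illustrative sizes

# One target token id per draft token id, loaded from the drafter checkpoint.
draft_id_to_target_id = torch.randint(0, target_vocab_size, (draft_vocab_size,))

def map_draft_tokens(draft_token_ids: torch.Tensor) -> torch.Tensor:
    """Convert ids sampled from the draft vocabulary into target-vocabulary ids."""
    return draft_id_to_target_id[draft_token_ids]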

Architecture

The implementation follows the established Eagle3 pattern from llama_eagle3.py, with Llama4-specific enhancements (a sketch of the hidden-state combination follows the list):

  1. Single Decoder Layer: Uses one Llama4 decoder layer for draft token generation
  2. Hidden State Combination: Combines auxiliary states from target model layers (early, middle, late)
  3. Vocabulary Independence: Supports separate draft and target vocabularies
  4. Distributed Inference: Compatible with vLLM's tensor parallelism for the target (verifier) model
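
The hidden-state combination can be sketched as follows. The concatenate-then-project pattern mirrors llama_eagle3.py, but the layer names and the hidden size here are illustrative assumptions:

import torch
import torch.nn as nn

hidden_size = 5120  # illustrative; the real value comes from the model config

# The target model records hidden states from three of its layers (early,
# middle, late); the drafter concatenates them and projects back down to
# hidden_size before running its single decoder layer.
fc = nn.Linear(hidden_size * 3, hidden_size, bias=False)

def combine_aux_hidden_states(aux_hidden_states: list[torch.Tensor]) -> torch.Tensor:
    # Each entry: [num_tokens, hidden_size] -> combined drafter input.
    return fc(torch.cat(aux_hidden_states, dim=-1))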

Usage

# Example serving command for Eagle3 Llama4 speculation
CUDA_VISIBLE_DEVICES=4,5,6,7 vllm serve meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 \
  --tensor-parallel-size 4 \
  --gpu-memory-utilization 0.8 \
  --speculative-config '{"method": "eagle3", "model": "seanmamasde/llama4-maverick-17B-128E-eagle3-sglang", "num_speculative_tokens": 4, "draft_tensor_parallel_size": 1}' \
  --trust-remote-code
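
The same configuration can also be exercised offline. This is a sketch assuming vLLM's LLM entry point accepts the same speculative_config keys as the CLI JSON above:

from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    tensor_parallel_size=4,
    gpu_memory_utilization=0.8,
    speculative_config={
        "method": "eagle3",
        "model": "seanmamasde/llama4-maverick-17B-128E-eagle3-sglang",
        "num_speculative_tokens": 4,
        "draft_tensor_parallel_size": 1,
    },
    trust_remote_code=True,
)
outputs = llm.generate(["Speculative decoding works by"],
                       SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)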

Testing

The implementation has been tested with:

  • Model loading and initialization
  • Speculative decoding configuration
  • GPU memory optimization
  • Vocabulary mapping functionality


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

rahul-tuli and others added 4 commits August 26, 2025 08:06
Implements Eagle3 speculative decoding architecture for Llama4 models,
enabling faster inference through single-layer draft model speculation.

Key additions:
- Eagle3Llama4ForCausalLM: Main implementation with single-layer draft architecture
- SupportsEagle3 interface integration for Llama4ForCausalLM
- Model registry mappings for Eagle3 Llama4 models
- Auxiliary hidden state combination and vocabulary mapping
- Draft-to-target token conversion for multi-vocabulary support

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>

Enables multimodal input processing for Eagle3 speculative decoding
with Llama4 models, supporting vision and other modalities.

Key changes:
- Updated get_input_embeddings to support multimodal embeddings
- Added merge_multimodal_embeddings integration
- Proper handling of image_token_index configuration
- Maintains compatibility with existing text-only workflows

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
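
The pattern this commit describes can be sketched roughly as follows. The method body is an assumption based on the commit message, but merge_multimodal_embeddings is vLLM's existing helper for scattering multimodal embeddings into placeholder positions:

import torch
from vllm.model_executor.models.utils import merge_multimodal_embeddings

def get_input_embeddings(self, input_ids: torch.Tensor,
                         multimodal_embeddings=None) -> torch.Tensor:
    # Text path: plain token embeddings.
    inputs_embeds = self.model.embed_tokens(input_ids)
    if multimodal_embeddings is not None:
        # Scatter image (or other modality) embeddings into the positions
        # marked by the image placeholder token from the config.
        inputs_embeds = merge_multimodal_embeddings(
            input_ids, inputs_embeds, multimodal_embeddings,
            self.config.image_token_index)
    return inputs_embeds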
Fixes IndexError during draft model initialization by properly
padding no_rope_layers configuration to match the exact pattern
from llama4_eagle.py.

The configuration validation now correctly handles layer offset
for speculative decoding compatibility.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
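
A rough sketch of the padding fix described above: the drafter's decoder layer is constructed with a layer index offset past the target model's layers, so per-layer lists such as no_rope_layers must cover that index. The helper name and the padding value here are assumptions; the placeholder entries are never read, since those layer indices are not instantiated in the drafter:

def pad_no_rope_layers(config, start_layer_id: int) -> None:
    # Prepend placeholder entries so the offset layer index is valid and
    # no IndexError is raised when Llama4DecoderLayer reads the list.
    no_rope = list(getattr(config, "no_rope_layers", []))
    config.no_rope_layers = [0] * start_layer_id + no_rope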
Handles cases where draft model configs (like LlamaConfig) don't have
no_rope_layers attribute by initializing it with proper defaults before
Llama4DecoderLayer creation.

This prevents AttributeError during draft model initialization while
maintaining compatibility with existing configurations.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
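
And a sketch of the defaulting fix for configs that lack the attribute entirely; the chosen default (RoPE enabled on every layer) is an assumption here, not necessarily what the PR uses:

if not hasattr(config, "no_rope_layers"):
    # Plain LlamaConfig has no no_rope_layers; a truthy entry means the
    # layer uses RoPE, so default to RoPE everywhere before building
    # Llama4DecoderLayer.
    config.no_rope_layers = [1] * config.num_hidden_layers
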
@rahul-tuli force-pushed the llama4-eagle3-drafter branch from 32f9392 to 184da35 on August 26, 2025 12:07

@dsikka left a comment


LGTM.
Would be good to add a speculators test model

@@ -266,6 +266,8 @@
     "EagleLlama4ForCausalLM": ("llama4_eagle", "EagleLlama4ForCausalLM"),
     "EagleMiniCPMForCausalLM": ("minicpm_eagle", "EagleMiniCPMForCausalLM"),
     "Eagle3LlamaForCausalLM": ("llama_eagle3", "Eagle3LlamaForCausalLM"),
+    "Eagle3Llama4ForCausalLM": ("llama4_eagle3", "Eagle3Llama4ForCausalLM"),
+    "LlamaForCausalLMEagle3": ("llama4_eagle3", "Eagle3Llama4ForCausalLM"),


why do we need both of these to be added?
