
Conversation

SonicSaurav

Add Qwen3 MoE experiment with model args, architecture, and train spec registration.

meta-cla bot commented Sep 2, 2025

Hi @SonicSaurav!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient, and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

- Fix config validation by removing invalid TOML fields
- Add proper model registration in experiments/__init__.py
- Fix missing model attributes: qk_norm, initializer_range, hidden_dim, pad_token_id, stage_idx, num_stages, head_dim
- Correct the TransformerBlock forward signature to accept attention_mask and position_ids (see the sketch after this description)
- Add RoPE cache initialization and forward the cache to the decoder layers
- Implement required init_weights methods for model initialization
- Fix parallelize function signature and implementation for TorchTitan compatibility
- Correct attribute naming (hidden_size → dim, num_hidden_layers → n_layers)
- Update model configuration with accurate HuggingFace Qwen3-30B-A3B parameters
- Enable activation checkpointing and optimize memory usage settings
- Successfully tested: model builds, initializes, and trains without errors

The Qwen3 MoE model now fully integrates with the TorchTitan framework and trains successfully.
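For illustration, here is a minimal sketch of the forward-signature, RoPE-cache, and init_weights fixes listed above. All names (the ModelArgs fields, precompute_rope_cache) and hyperparameter values are placeholders, not the PR's actual code.

```python
# Hypothetical sketch of the fixes described above; names and values are
# illustrative placeholders, not the PR's actual code.
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class ModelArgs:
    dim: int = 2048          # renamed from hidden_size
    n_layers: int = 2        # renamed from num_hidden_layers
    head_dim: int = 128
    max_seq_len: int = 4096
    rope_theta: float = 1_000_000.0


def precompute_rope_cache(head_dim: int, max_seq_len: int, theta: float) -> torch.Tensor:
    # Complex RoPE frequencies, computed once and cached on the model.
    freqs = 1.0 / (theta ** (torch.arange(0, head_dim, 2).float() / head_dim))
    t = torch.arange(max_seq_len, dtype=torch.float32)
    angles = torch.outer(t, freqs)
    return torch.polar(torch.ones_like(angles), angles)  # (max_seq_len, head_dim // 2)


class TransformerBlock(nn.Module):
    def __init__(self, args: ModelArgs):
        super().__init__()
        self.body = nn.Identity()  # stand-in for attention + feed-forward sublayers

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: torch.Tensor | None = None,
        position_ids: torch.Tensor | None = None,
        rope_cache: torch.Tensor | None = None,
    ) -> torch.Tensor:
        # A real block would apply RoPE (indexed by position_ids) and
        # attention (respecting attention_mask) before the MLP/MoE sublayer.
        return self.body(hidden_states)


class Qwen3Model(nn.Module):
    def __init__(self, args: ModelArgs):
        super().__init__()
        self.layers = nn.ModuleList(TransformerBlock(args) for _ in range(args.n_layers))
        # RoPE cache initialized once and forwarded to every decoder layer.
        self.register_buffer(
            "rope_cache",
            precompute_rope_cache(args.head_dim, args.max_seq_len, args.rope_theta),
            persistent=False,
        )

    def init_weights(self) -> None:
        # TorchTitan calls init_weights to (re)initialize model parameters.
        for module in self.modules():
            if isinstance(module, nn.Linear):
                nn.init.trunc_normal_(module.weight, std=0.02)

    def forward(self, hidden_states, attention_mask=None, position_ids=None):
        for layer in self.layers:
            hidden_states = layer(
                hidden_states, attention_mask, position_ids, self.rope_cache
            )
        return hidden_states
```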
@wwwjn (Contributor) left a comment:

Hi @SonicSaurav, thanks for contributing! From my reading, the model in this PR is a dense model; the MoE part is not added yet. I would also suggest reusing the experiments/qwen model, since the two share common parts, rather than starting a new folder under experiments for the same model.

```python
        )
        return hidden_states


class QwenForCausalLM(torch.nn.Module):
```
Contributor: Is this name adopted from transformers?

r"""
Example:

```python
Contributor: Remove these transformers-related comments.

Author: I will do all of this. Also, could you please implement the MoE part yourself in case I made some mistakes? I am still in the learning phase but need this urgently; any help would be really appreciated.

Contributor: Hey @SonicSaurav, I made it in #1685, please take a look, thank you!

SonicSaurav (Author) commented Sep 2, 2025 via email

@tianyu-l (Contributor) left a comment:

Maybe we should put this under the same qwen3 folder and reuse as much as possible.
cc @wwwjn

@tianyu-l linked an issue Sep 2, 2025 that may be closed by this pull request: Qwen3 MoE Support

soulitzer (Contributor) commented Sep 4, 2025

We may want to add a2a (all-to-all) to the selective AC policy in this PR, similar to #1672, since the save lists are now model-specific.
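For context, here is a hedged sketch of what adding a2a to an op-based selective AC save list might look like, assuming the CheckpointPolicy mechanism from torch.utils.checkpoint; the specific op names are assumptions, not code taken from this PR or #1672.

```python
# Hypothetical sketch: extending a selective-AC save list with all-to-all.
# Op names and policy structure are assumptions, not this PR's code.
from functools import partial

import torch
from torch.utils.checkpoint import (
    CheckpointPolicy,
    create_selective_checkpoint_contexts,
)

_save_list = {
    torch.ops.aten.mm.default,  # matmul outputs are typically worth saving
    # Saving all-to-all outputs keeps MoE token dispatch/combine out of the
    # recompute pass, which is why save lists become model-specific.
    torch.ops._c10d_functional.all_to_all_single.default,
}


def _custom_policy(ctx, func, *args, **kwargs):
    # Save ops in the list; recompute everything else during backward.
    return (
        CheckpointPolicy.MUST_SAVE
        if func in _save_list
        else CheckpointPolicy.PREFER_RECOMPUTE
    )


# Passed as context_fn to torch.utils.checkpoint.checkpoint(..., use_reentrant=False).
selective_ac_context_fn = partial(create_selective_checkpoint_contexts, _custom_policy)
```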

wwwjn (Contributor) commented Sep 5, 2025

Closing this PR in favor of #1685 to avoid confusion; please feel free to re-open it if needed :)

@wwwjn closed this Sep 5, 2025