Release v1.1.0
Release Notes - v1.1.0
What's Changed
New recipes
- Added support for Llama 3.1 70B and Mixtral 22B 128-node pre-training.
- Added support for Llama 3.3 fine-tuning with SFT and LoRA.
- Added support for Llama 405B QLoRA fine-tuning with a 32k sequence length.
All new recipes are listed under the "Model Support" section of the README.
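
For readers new to LoRA fine-tuning, the sketch below shows what a LoRA setup looks like in general using Hugging Face `transformers` and `peft`. It is illustrative only and is not the recipe format used in this repository; the model ID and hyperparameters are assumptions.

```python
# Minimal LoRA fine-tuning sketch (illustrative; not the repository's recipe format).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.3-70B-Instruct"  # assumed model ID for illustration
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# LoRA trains a small set of low-rank adapter matrices instead of all model weights.
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```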