Release v1.1.0

@rohithn1 rohithn1 released this 31 Dec 21:16
· 71 commits to main since this release
66e49e0

Release Notes - v1.1.0

What's Changed

New recipes

  • Added support for Llama 3.1 70B and Mixtral 22B 128-node pre-training.
  • Added support for Llama 3.3 fine-tuning with SFT and LoRA.
  • Added support for Llama 405B 32K-sequence-length QLoRA fine-tuning.

All new recipes are listed under the "Model Support" section of the README.
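
For background on the LoRA and QLoRA recipes above: LoRA freezes the pretrained weight matrix and trains only a low-rank update, which is what makes fine-tuning large models (e.g. 405B) tractable. The following is a minimal NumPy sketch of the low-rank update itself, not the recipe implementation; the dimensions, rank, and scaling factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight, standing in for a pretrained linear layer.
d_out, d_in, r, alpha = 64, 64, 8, 16
W = rng.standard_normal((d_out, d_in))

# LoRA adds a trainable low-rank update: delta_W = (alpha / r) * B @ A.
# A starts random, B starts at zero, so training begins from W unchanged.
A = rng.standard_normal((r, d_in))
B = np.zeros((d_out, r))

def lora_forward(x, W, A, B, alpha, r):
    # Base projection plus the scaled low-rank correction.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((4, d_in))
# With B = 0, the adapted layer matches the frozen base layer exactly.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W.T)

# Trainable parameters: only A and B -- r * (d_in + d_out) values
# instead of the full d_in * d_out weight matrix.
print(r * (d_in + d_out), "trainable vs", d_in * d_out, "full")
```

QLoRA applies the same low-rank update on top of a base model whose frozen weights are additionally quantized (typically to 4-bit), which is what makes the long-sequence 405B recipe feasible in memory.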