SongBloom: Coherent Song Generation via Interleaved Autoregressive Sketching and Diffusion Refinement

Paper · Hugging Face · Demo Page

We propose SongBloom, a novel framework for full-length song generation that leverages an interleaved paradigm of autoregressive sketching and diffusion-based refinement. SongBloom employs an autoregressive diffusion model that combines the high fidelity of diffusion models with the scalability of language models. Specifically, it gradually extends a musical sketch from short to long and refines the details from coarse to fine-grained. The interleaved generation paradigm effectively integrates prior semantic and acoustic context to guide the generation process. Experimental results demonstrate that SongBloom outperforms existing methods across both subjective and objective metrics and achieves performance comparable to the state-of-the-art commercial music generation platforms.


Models

| Name | Size | Max Length | Prompt type | 🤗 |
|------|------|------------|-------------|----|
| songbloom_full_150s | 2B | 2m30s | 10s wav | link |
| songbloom_full_150s_dpo | 2B | 2m30s | 10s wav | link |
| songbloom_mulan_150s | 2B | 2m30s | 10s wav / text description | coming soon |
| ... | | | | |

Updates

  • Jun 2025: Released the songbloom_full_150s model and the inference script
  • Sep 2025: Released songbloom_full_150s_dpo, a variant of songbloom_full_150s with DPO post-training

Getting Started

Prepare Environments

uv venv
uv pip install --torch-backend=auto -e .
source .venv/bin/activate

Data Preparation

Prepare a .jsonl file in which each line is a JSON object with the following fields:

{"idx": "The index of each sample", "lyrics": "([intro]:8) 8 seconds of intro, then lyrics to be generated, followed by instrumental for 8 seconds by writing ([inst]:8), more lyrics", "prompt_wav": "The path of the style prompt audio"}
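As a sketch, the input file can be assembled with Python's standard json module; the field values below are illustrative placeholders, not real paths or lyrics:

```python
import json

# Each line of the .jsonl input is one independent JSON object with the
# "idx", "lyrics", and "prompt_wav" fields described above.
samples = [
    {
        "idx": "sample_000",
        "lyrics": "([intro]:8) first verse lyrics ([inst]:8) second verse lyrics",
        "prompt_wav": "prompts/sample_000.wav",
    },
]

with open("test.jsonl", "w", encoding="utf-8") as f:
    for sample in samples:
        # One JSON object per line, no trailing commas or wrapping array.
        f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```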

See example/test.jsonl for a complete example.

The prompt wav should be a 10-second, 48kHz audio clip.
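Since the model expects a 10-second, 48kHz prompt clip, a quick sanity check with Python's standard-library wave module can catch mismatched audio before inference. This helper is an assumption of this README, not part of the SongBloom codebase:

```python
import wave

def check_prompt_wav(path, expected_rate=48000, expected_seconds=10.0, tolerance=0.1):
    """Return True if a WAV clip matches the expected sample rate and duration.

    Hypothetical helper: SongBloom itself does not ship this check.
    """
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        duration = wf.getnframes() / rate
    return rate == expected_rate and abs(duration - expected_seconds) <= tolerance
```

Note that `wave` only reads PCM WAV files; compressed formats would need a library such as soundfile or librosa instead.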

For details on lyric formatting, see docs/lyric_format.md.

Inference

source set_env.sh

python3 infer.py --input-jsonl example/test.jsonl

# For GPUs with limited VRAM (e.g. an RTX 4090), set the dtype to bfloat16
python3 infer.py --input-jsonl example/test.jsonl --dtype bfloat16

SongBloom also supports flash-attn (optional). To enable it, install the flash-attn package.

Command-line options:

  • model-name: the model version to use (see the model table above, e.g. songbloom_full_150s or songbloom_full_150s_dpo);
  • local-dir: directory to which the weights and config files are downloaded;
  • input-jsonl: the input data file;
  • output-dir: directory where the output audio is saved;
  • n-samples: number of audio clips generated per input entry.
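For batch jobs it can be convenient to assemble the full command programmatically. A minimal sketch using only the flags listed above; the model name, paths, and sample count are placeholders:

```python
import shlex

# Compose a full inference invocation from the documented flags.
# All values below are illustrative placeholders.
cmd = [
    "python3", "infer.py",
    "--model-name", "songbloom_full_150s_dpo",
    "--local-dir", "./ckpts",
    "--input-jsonl", "example/test.jsonl",
    "--output-dir", "./outputs",
    "--n-samples", "2",
    "--dtype", "bfloat16",  # optional: for low-VRAM GPUs
]

# shlex.join quotes each argument safely for copy-pasting into a shell;
# to run it directly, pass the list to subprocess.run(cmd) instead.
print(shlex.join(cmd))
```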

Citation

@article{yang2025songbloom,
  title={SongBloom: Coherent Song Generation via Interleaved Autoregressive Sketching and Diffusion Refinement},
  author={Yang, Chenyu and Wang, Shuai and Chen, Hangting and Tan, Wei and Yu, Jianwei and Li, Haizhou},
  journal={arXiv preprint arXiv:2506.07634},
  year={2025}
}

License

SongBloom (code and weights) is released under the terms described in the LICENSE file.
