
Conversation

nithinraok
Collaborator

Important

The Update branch button must only be pressed in very rare occasions.
An outdated branch never blocks the merge of a PR.
Please reach out to the automation team before pressing that button.

What does this PR do?

Adds support for inference on longer audio samples by chunking them into 10-minute segments.

Collection: SpeechLM2

Changelog

  • Add chunk-wise inference

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 
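A minimal sketch of the chunking idea described in the changelog (this is not the actual NeMo/SpeechLM2 API; `chunk_audio` and `CHUNK_SECONDS` are illustrative names): split a long waveform into consecutive 10-minute segments so each segment can be run through inference separately.

```python
import numpy as np

# Hedged sketch, not the PR's actual API: split a long 1-D waveform into
# 10-minute chunks, matching the segment length described in the PR.
CHUNK_SECONDS = 10 * 60  # 10 minutes per chunk

def chunk_audio(waveform: np.ndarray, sample_rate: int) -> list[np.ndarray]:
    """Split a 1-D waveform into consecutive 10-minute chunks."""
    chunk_len = CHUNK_SECONDS * sample_rate
    return [waveform[i:i + chunk_len] for i in range(0, len(waveform), chunk_len)]

if __name__ == "__main__":
    sr = 16_000
    audio = np.zeros(25 * 60 * sr, dtype=np.float32)  # 25 minutes of audio
    chunks = chunk_audio(audio, sr)
    print(len(chunks))            # 3 chunks: 10 + 10 + 5 minutes
    print(len(chunks[-1]) / sr)   # 300.0 (seconds in the final chunk)
```

Each chunk would then be passed to the model, and the per-chunk hypotheses combined downstream.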

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

@nithinraok nithinraok requested a review from pzelasko October 2, 2025 18:01
model_kwargs['cfg']['pretrained_weights'] = False
# When loading from HF checkpoint, set init_from_path to the HF checkpoint directory
# so it can load the model weights from the correct location.
if 'init_from_path' in model_kwargs['cfg']:

init_from_path is used to load a checkpoint from a previous stage.
In Stage 1, this argument is not set.
In Stage 2, it points to the final HF checkpoint from Stage 1.
When loading the final checkpoint after Stage 2, perhaps we do not need init_from_path, assuming the HF checkpoint saves everything.

Comment on lines +532 to +535
asr_tokens_chunk = [
torch.as_tensor(self.tokenizer.text_to_ids(f">> {hyp.text} <<" if hyp.text else ">> <<"))
for hyp in asr_hyps_chunk
]

">>" and "<<" are applied at the very beginning and end of the ASR transcript.
For short audio, the final "audio" embedding looks like:

{audio_features}>> {asr_hyp} <<

I guess we can do the same for long audio:

{audio_features1}{audio_features2}>> {asr_hyp1} {asr_hyp2} <<

But here it seems ">>" and "<<" are added for each chunk instead.
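The two marker placements discussed above can be contrasted with a small sketch (the helper names are illustrative, not from the PR): one wraps the concatenated transcript once, matching the short-audio format; the other wraps each chunk's transcript separately, which is what the quoted code appears to do.

```python
# Hedged sketch of the two alternatives for the ">> ... <<" markers.
def wrap_once(chunk_hyps: list[str]) -> str:
    # One marker pair around the full concatenated transcript, matching
    # the short-audio format "{audio_features}>> {asr_hyp} <<".
    return f">> {' '.join(chunk_hyps)} <<"

def wrap_per_chunk(chunk_hyps: list[str]) -> str:
    # What the quoted code appears to produce: markers on every chunk.
    return "".join(f">> {h} <<" for h in chunk_hyps)

print(wrap_once(["hello", "world"]))       # >> hello world <<
print(wrap_per_chunk(["hello", "world"]))  # >> hello <<>> world <<
```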
