
Conversation

Contributor

@pjin-nvidia commented Sep 29, 2025

What does this PR do?

At the start of training, log the initial master_config to the experiment directory as both JSON and YAML, so different runs can be easily distinguished and compared.

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this
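The config is written automatically at the start of training, so no user action is needed. As a hypothetical illustration (the run directories below are made up, not taken from this PR), the persisted files can be consumed to compare two runs:

```python
# Hypothetical example: diff the logged configs of two runs.
import json

with open("results/run_a/train_config.json") as f:
    cfg_a = json.load(f)
with open("results/run_b/train_config.json") as f:
    cfg_b = json.load(f)

# Print top-level keys whose values differ between the two runs.
for key in sorted(set(cfg_a) | set(cfg_b)):
    if cfg_a.get(key) != cfg_b.get(key):
        print(f"{key}: {cfg_a.get(key)!r} -> {cfg_b.get(key)!r}")
```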

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

  • New Features

    • Training runs now automatically save the full configuration at start in both JSON and YAML formats for improved reproducibility and auditability.
    • Configuration files are persisted alongside run logs, making it easier to track, compare, and share experiment settings across GRPO and SFT workflows.
  • Chores

    • Enhanced logging infrastructure to support exporting configuration data in multiple formats.

@pjin-nvidia requested review from a team as code owners September 29, 2025 23:04
Contributor

coderabbitai bot commented Sep 29, 2025

📝 Walkthrough


Adds a new WandbLogger.log_config_dict method to persist config dicts as JSON or YAML. Integrates calls in GRPO (sync and async) and SFT training to log master_config at the start of training. Imports yaml for serialization. No public API signatures changed.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Logger utility extension<br>nemo_rl/utils/logger.py | Added WandbLogger.log_config_dict(config, filename) to write configs to base_log_dir, supporting .json and .yaml/.yml; added yaml import; prints confirmation. |
| GRPO training config logging<br>nemo_rl/algorithms/grpo.py | At the start of grpo_train and after timer init in async_grpo_train, added logger.log_config_dict(master_config, "train_config.json") and "train_config.yaml". |
| SFT training config logging<br>nemo_rl/algorithms/sft.py | At the start of sft_train, added dual logging of master_config to train_config.json and train_config.yaml. |
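
For orientation, here is a minimal standalone sketch of the behavior summarized above (write the dict under the log directory, dispatch on .json vs .yaml/.yml, print a confirmation). It is an illustration only, not the actual WandbLogger.log_config_dict implementation, and the standalone signature is an assumption:

```python
import json
import os

import yaml


def log_config_dict(base_log_dir: str, config: dict, filename: str) -> None:
    """Sketch: persist a config dict under base_log_dir as JSON or YAML."""
    path = os.path.join(base_log_dir, filename)
    ext = os.path.splitext(filename)[1].lower()
    if ext == ".json":
        with open(path, "w") as f:
            json.dump(config, f, indent=2, default=str)
    elif ext in (".yaml", ".yml"):
        with open(path, "w") as f:
            yaml.safe_dump(config, f, default_flow_style=False)
    else:
        raise NotImplementedError(f"Unsupported config extension: {ext}")
    print(f"Logged config to {path}")
```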

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant Trainer as grpo_train/sft_train
  participant Logger as WandbLogger
  participant FS as Filesystem

  User->>Trainer: start training(master_config)
  Note over Trainer: Initialization
  Trainer->>Logger: log_config_dict(master_config, "train_config.json")
  Logger->>FS: write JSON file
  FS-->>Logger: OK
  Trainer->>Logger: log_config_dict(master_config, "train_config.yaml")
  Logger->>FS: write YAML file
  FS-->>Logger: OK
  Note over Trainer: Proceed with training steps

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10–15 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 66.67%, which is below the required threshold of 80.00%. | Run @coderabbitai generate docstrings to improve docstring coverage. |

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title succinctly summarizes the primary change (logging the initial training master configuration) in clear, concise language with a conventional "chore" prefix, making it immediately understandable to reviewers. |
| Test Results For Major Changes | ✅ Passed | The PR adds logging that persists the initial master configuration to disk in JSON and YAML at the start of training. The changes are purely additive: no training algorithms, core functionality, or API signatures are modified, and numerics, convergence, and performance are unaffected. This qualifies as a minor enhancement rather than a major change requiring test results documentation. |
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch pjin/log-config

🧪 Early access (Sonnet 4.5): enabled

We are currently testing the Sonnet 4.5 model, which is expected to improve code review quality. However, this model may lead to increased noise levels in the review comments. Please disable the early access features if the noise level causes any inconvenience.

Note:

  • Public repositories are always opted into early access features.
  • You can enable or disable early access features from the CodeRabbit UI or by updating the CodeRabbit configuration file.

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 17ea9ab and 80ed13f.

📒 Files selected for processing (3)
  • nemo_rl/algorithms/grpo.py (2 hunks)
  • nemo_rl/algorithms/sft.py (1 hunks)
  • nemo_rl/utils/logger.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Follow the Google Python Style Guide for all Python code
Target Python 3.12+ for all Python code in NeMo-RL
Indent Python code with 4 spaces; do not use tabs
Python filenames should be snake_case (e.g., some_file.py)
Class names should be PascalCase
Function and method names should be snake_case
Local variable names should be snake_case; if starting with a number, prefix with k (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE and prefixed with G_ (e.g., G_MY_GLOBAL)
Constants should be UPPER_SNAKE_CASE
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
For public interfaces used outside a file, prefer docstrings over comments
Use comments mainly for code within a function or interfaces local to a file
Commented-out code must include a nearby comment explaining usage and why it is commented out; otherwise remove before merging
Use Google-style docstrings for classes and functions (Sphinx-parseable)
Avoid using reflection when functionality can be easily achieved without it
Limit except clauses to the smallest specific set of exceptions possible
For duck-typing via try/except, keep the try body minimal and use else for main logic
Add the NVIDIA copyright header (with current year) at the top of all Python files, excluding tests/ and test-only scripts

Files:

  • nemo_rl/algorithms/grpo.py
  • nemo_rl/algorithms/sft.py
  • nemo_rl/utils/logger.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

nemo_rl/**/*.py: Do not set non-None configuration defaults in code; YAML is the single source of truth for defaults
Access required config attributes directly (e.g., policy_cfg["precision"]) and assume presence; do not introduce hidden defaults
Express configuration optionality via TypedDict using typing.NotRequired (a generic sketch of this pattern follows after the file list below)
When adding a new config key to a TypedDict subclass, document the key’s purpose, valid values/types, and recommended default in code
For any class or function decorated with @ray.remote, add '# pragma: no cover' on the class/def line (and on remote functions)

Files:

  • nemo_rl/algorithms/grpo.py
  • nemo_rl/algorithms/sft.py
  • nemo_rl/utils/logger.py
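As referenced in the guideline above, configuration optionality is expressed with typing.NotRequired rather than hidden defaults in code. The following is a generic sketch of that pattern; the class and key names are hypothetical, not taken from this repository:

```python
from typing import NotRequired, TypedDict


class ExampleConfig(TypedDict):
    # Required keys: assumed present in the YAML config and accessed directly.
    precision: str
    base_log_dir: str
    # Optional key: may be absent from the YAML; no default is set in code.
    extra_notes: NotRequired[str]
```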
🧬 Code graph analysis (2)
nemo_rl/algorithms/grpo.py (1)
nemo_rl/utils/logger.py (1)
  • log_config_dict (1020-1034)
nemo_rl/algorithms/sft.py (1)
nemo_rl/utils/logger.py (1)
  • log_config_dict (1020-1034)
🪛 Ruff (0.13.1)
nemo_rl/utils/logger.py

1034-1034: Use explicit conversion flag

Replace with conversion flag

(RUF010)
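For context, Ruff rule RUF010 flags explicit str(), repr(), or ascii() calls inside f-strings and suggests the equivalent conversion flag. The flagged line in logger.py is not reproduced here; a generic illustration of the pattern and fix:

```python
path = "/tmp/train_config.json"

# Flagged by RUF010: explicit str() call inside an f-string.
print(f"Logged config to {str(path)}")

# Preferred: the explicit conversion flag is equivalent and avoids the call.
print(f"Logged config to {path!s}")
```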

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Lint check
  • GitHub Check: Post submodule check comment / Comment on PR
  • GitHub Check: Post automodel integration comment / Comment on PR
🔇 Additional comments (4)
nemo_rl/algorithms/sft.py (1)

365-366: LGTM! Config logging placement is appropriate.

The dual logging (JSON + YAML) at training start is well-placed and provides useful artifacts for run tracking. The method will fail fast on errors (NotImplementedError, OSError), which is acceptable at startup.
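As a rough illustration of the placement being discussed (the function signature and surrounding code are assumptions, not copied from the diff):

```python
# Hypothetical skeleton; not the actual sft_train signature or body.
def sft_train(master_config: dict, logger) -> None:
    # Persist the full config in both formats at the very start of training,
    # so each run's experiment directory is self-describing.
    logger.log_config_dict(master_config, "train_config.json")
    logger.log_config_dict(master_config, "train_config.yaml")
    # ... training loop follows ...
```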

nemo_rl/algorithms/grpo.py (2)

584-585: LGTM! Consistent config logging at training start.

The dual logging placement is appropriate and consistent with the SFT implementation.


1192-1193: LGTM! Config logging is consistent across training modes.

The logging placement in async GRPO matches the synchronous version and SFT, providing uniform config tracking across all training modes.

nemo_rl/utils/logger.py (1)

34-34: LGTM! Import added for YAML serialization support.

The yaml import is necessary for the new log_config_dict method and is correctly placed in alphabetical order.
