
Conversation

JyotinderSingh
Collaborator

Description of the change

Adds support for Gemma 270M base and instruct models

Reference

https://developers.googleblog.com/en/introducing-gemma-3-270m/

github-actions bot added the Gemma label (Gemma model specific issues) on Sep 1, 2025

gemini-code-assist bot left a comment


Summary of Changes

Hello @JyotinderSingh, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the checkpoint conversion tool by integrating support for the Gemma 270M base and instruct models. This expansion allows for the conversion of these smaller, efficient models, broadening the utility's compatibility and enabling their use in various applications where resource constraints are a factor. The changes include necessary configuration updates and a new validation step to ensure conversion integrity.

Highlights

  • Gemma 270M Model Support: This pull request adds comprehensive support for the Gemma 270M base and instruct models to the checkpoint conversion utility, allowing for their proper handling and conversion.
  • Configuration and Preset Updates: The PRESET_MAP has been extended to include new entries for gemma3_instruct_270m and gemma3_270m, ensuring these models are correctly recognized and processed during conversion.
  • Output Validation Enhancement: A new validation step has been introduced in the validate_output function to compare the outputs of Flax and Keras models, providing immediate feedback on conversion accuracy.
  • Script Logic Refinement: Adjustments were made to the text_only flag determination logic to correctly accommodate the newly added 270M models, ensuring proper model loading behavior.
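The Flax-vs-Keras comparison described in the validation highlight can be sketched as a simple element-wise check. This is a hypothetical sketch only; the function name, tolerance, and output format used by the actual validate_output function are assumptions, and the real script compares full model outputs rather than flat lists.

```python
def compare_outputs(flax_logits, keras_logits, atol=1e-4):
    """Element-wise comparison of two equal-length logit sequences.

    Hypothetical stand-in for the comparison added to validate_output:
    reports the largest absolute difference and checks it against a
    tolerance, giving immediate feedback on conversion accuracy.
    """
    max_diff = max(abs(a - b) for a, b in zip(flax_logits, keras_logits))
    print(f"Max absolute difference: {max_diff:.6f}")
    return max_diff <= atol

# Stand-in arrays: a tiny numerical drift is within tolerance.
ok = compare_outputs([0.1, 0.2, 0.3], [0.1, 0.2, 0.30001])  # ok is True
```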
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature               | Command               | Description
----------------------|-----------------------|------------------------------------------------------------------
Code Review           | /gemini review        | Performs a code review for the current pull request in its current state.
Pull Request Summary  | /gemini summary       | Provides a summary of the current pull request in its current state.
Comment               | @gemini-code-assist   | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                  | /gemini help          | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


gemini-code-assist bot left a comment


Code Review

This pull request adds support for the gemma_270m base and instruct models to the Gemma 3 checkpoint converter. The changes include updating the preset map, adjusting the logic to correctly identify text-only models, and improving the validation output. The implementation is mostly correct, but I've provided one suggestion to improve the maintainability of the script by making model properties more explicit in the configuration map, rather than relying on string matching of preset names. This will make the script more robust for future model additions.

Comment on lines +524 to 529
- text_only = "text" in preset or "1b" in preset
+ text_only = "text" in preset or "1b" in preset or "270m" in preset

  print("🏃 Loading Flax model and tokeniser")
  flax_kwargs = {}
- if text_only and "1b" not in preset:
+ if text_only and "1b" not in preset and "270m" not in preset:
      flax_kwargs["text_only"] = True


Severity: medium

The logic for determining whether a model is text_only, and whether it needs the text_only kwarg for Flax, is based on string matching against the preset name. This can be fragile and hard to maintain as new model sizes are added. To improve reusability and ensure all presets are handled robustly, consider making this information explicit in the PRESET_MAP.[1]

For example, you could add is_text_only and needs_flax_text_only_kwarg flags to each preset dictionary:

PRESET_MAP = {
    # ...
    "gemma3_instruct_270m": {
        "model": gm.nn.Gemma3_270M,
        "params": gm.ckpts.CheckpointPath.GEMMA3_270M_IT,
        "is_text_only": True,
        "needs_flax_text_only_kwarg": False,
    },
    "gemma3_4b_text": {
        "model": gm.nn.Gemma3_4B,
        "params": gm.ckpts.CheckpointPath.GEMMA3_4B_PT,
        "is_text_only": True,
        "needs_flax_text_only_kwarg": True,
    },
    # ...
}

Then, the logic in main() would be much cleaner and less error-prone when adding new presets.

Suggested change

- text_only = "text" in preset or "1b" in preset or "270m" in preset
- print("🏃 Loading Flax model and tokeniser")
- flax_kwargs = {}
- if text_only and "1b" not in preset and "270m" not in preset:
-     flax_kwargs["text_only"] = True
+ preset_info = PRESET_MAP[preset]
+ text_only = preset_info.get("is_text_only", False)
+ print("🏃 Loading Flax model and tokeniser")
+ flax_kwargs = {}
+ if preset_info.get("needs_flax_text_only_kwarg", False):
+     flax_kwargs["text_only"] = True
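With explicit flags in the map, the kwarg selection reduces to a plain dictionary lookup. Below is a minimal, self-contained sketch: the model and checkpoint handles are omitted, the preset names are taken from this PR, and the flag names follow the reviewer's suggestion rather than the current script.

```python
# Stand-in for the proposed PRESET_MAP: only the new boolean flags
# are shown; "model" and "params" entries are omitted for brevity.
PRESET_MAP = {
    "gemma3_instruct_270m": {"is_text_only": True, "needs_flax_text_only_kwarg": False},
    "gemma3_4b_text": {"is_text_only": True, "needs_flax_text_only_kwarg": True},
    "gemma3_4b": {"is_text_only": False, "needs_flax_text_only_kwarg": False},
}

def flax_kwargs_for(preset: str) -> dict:
    """Build the Flax loader kwargs from explicit preset flags,
    with no string matching on the preset name."""
    info = PRESET_MAP[preset]
    kwargs = {}
    if info.get("needs_flax_text_only_kwarg", False):
        kwargs["text_only"] = True
    return kwargs
```

Adding a new preset then only requires a new map entry, with no edits to the loading logic in main().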

Style Guide References

Footnotes

  1. Checkpoint conversion scripts should be reusable and able to handle all presets for a model. Relying on string matching in preset names can make the script less robust and harder to maintain when new presets are added. (link)

@@ -0,0 +1,29 @@
{
Collaborator

Remove? Or add to .gitignore?
