
Conversation


@pavangudiwada pavangudiwada commented Aug 22, 2025

Make it easy to verify which model HolmesGPT is using in cases where a user has multiple LLMs configured locally.

This also lays the groundwork for extending /model to support model switching and similar features in the future.

  • Added MODEL slash command to show current LLM model being used
  • Added handle_model_command() function to display model information
  • Also updated /context command to include model name in output
  • Provides quick access to see which model is currently configured
[Screenshots: CleanShot 2025-08-22 at 10 17 44; CleanShot 2025-08-22 at 10 17 03]

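For reference, here is a minimal self-contained sketch of the pattern described above. The enum member and handler body follow the review discussion below; the STATUS_COLOR value and the enum plumbing are assumptions for illustration:

```python
from enum import Enum

from rich.console import Console

STATUS_COLOR = "cyan"  # assumed; the repo defines its own status color constant


class SlashCommands(Enum):
    # (command, description) tuples, as described in the review walkthrough
    MODEL = ("/model", "Show current LLM model being used")

    @property
    def command(self) -> str:
        return self.value[0]


def handle_model_command(ai, console: Console) -> None:
    """Handle the /model command to show the current LLM model."""
    # `ai` is a ToolCallingLLM in the real code
    console.print(
        f"[bold {STATUS_COLOR}]Current Model:[/bold {STATUS_COLOR}] {ai.llm.model}"
    )
```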
@pavangudiwada pavangudiwada requested a review from aantn August 22, 2025 04:51

coderabbitai bot commented Aug 22, 2025

Walkthrough

Adds a new /model slash command to the interactive CLI, wires it into command handling, and extends /context to display the current LLM model. Introduces a handle_model_command function that prints ai.llm.model. An early pass flagged the function as defined twice; the follow-up verification below found only a single definition.

Changes

Cohort / File(s) Summary
Interactive CLI commands
holmes/interactive.py
- Added SlashCommands.MODEL = ("/model", "Show current LLM model being used")
- Added handle_model_command(ai: ToolCallingLLM, console: Console) to print ai.llm.model
- Invokes handle_model_command when user inputs /model
- Extended /context output to include "Model: {ai.llm.model}"
- Initially flagged handle_model_command as defined twice (later verified below to be a single definition)

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant U as User
  participant CLI as Interactive CLI
  participant AI as ToolCallingLLM
  participant LLM as AI.llm
  participant C as Console

  U->>CLI: /model
  CLI->>AI: access ai.llm.model
  AI->>LLM: read model
  LLM-->>AI: model name
  AI-->>CLI: model name
  CLI->>C: print "Model: <name>"
  Note over CLI,C: handle_model_command invoked

  U->>CLI: /context
  CLI->>AI: gather context incl. ai.llm.model
  CLI->>C: print context + "Model: <name>"
```

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Suggested reviewers

  • mainred



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
holmes/interactive.py (1)

461-466: Verified: Single handle_model_command implementation and SlashCommands.MODEL exist; optional refactor still recommended

All checks confirm there’s only one definition of handle_model_command (line 461) and one reference (line 983), and the SlashCommands.MODEL member is defined at line 64. The suggested enhancement remains a safe, non-breaking improvement to harden edge cases.

Locations verified:

  • def handle_model_command at holmes/interactive.py:461
  • Invocation at holmes/interactive.py:983
  • SlashCommands.MODEL defined at holmes/interactive.py:64

Apply this diff to guard against missing attributes and display backend class:

```diff
-def handle_model_command(ai: ToolCallingLLM, console: Console) -> None:
-    """Handle the /model command to show current LLM model information."""
-    console.print(
-        f"[bold {STATUS_COLOR}]Current Model:[/bold {STATUS_COLOR}] {ai.llm.model}"
-    )
+def handle_model_command(ai: ToolCallingLLM, console: Console) -> None:
+    """Handle the /model command to show current LLM model information."""
+    # Safely fetch model name and backend class
+    model = getattr(getattr(ai, "llm", None), "model", None) or "<unknown>"
+    backend = ai.llm.__class__.__name__ if getattr(ai, "llm", None) else "<unknown>"
+    console.print(
+        f"[bold {STATUS_COLOR}]Current Model:[/bold {STATUS_COLOR}] {model} [dim]({backend})[/dim]"
+    )
```
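With that guard in place, the command degrades gracefully instead of raising AttributeError when no LLM has been wired up. A quick illustration with a hypothetical stub (the _Stub class is not from the codebase):

```python
from rich.console import Console


class _Stub:
    """Hypothetical stand-in for a ToolCallingLLM with no LLM configured."""

    llm = None


# Prints "Current Model: <unknown> (<unknown>)" instead of raising
handle_model_command(_Stub(), Console())
```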
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files changed between 9746fa6 and d4ba1bc.

📒 Files selected for processing (1)
  • holmes/interactive.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)

Files:

  • holmes/interactive.py
🧬 Code graph analysis (1)
holmes/interactive.py (1)
holmes/core/tool_calling_llm.py (1)
  • ToolCallingLLM (204-728)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: llm_evals
  • GitHub Check: Pre-commit checks
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.12)
🔇 Additional comments (3)
holmes/interactive.py (3)

64-64: Nice addition: /model command enumerated and discoverable.

Adding MODEL to SlashCommands plugs it into help and autocompletion via SLASH_COMMANDS_REFERENCE and ALL_SLASH_COMMANDS. Looks consistent with existing command docs.
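One plausible shape for that wiring, assuming the (command, description) tuple values shown above; the actual definitions of SLASH_COMMANDS_REFERENCE and ALL_SLASH_COMMANDS in the repo may differ:

```python
# Build the help text and autocompletion sources from the enum, so a new
# member like MODEL is picked up automatically with no extra wiring.
SLASH_COMMANDS_REFERENCE = {cmd.value[0]: cmd.value[1] for cmd in SlashCommands}
ALL_SLASH_COMMANDS = list(SLASH_COMMANDS_REFERENCE)
```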


500-500: Good context enhancement: model surfaced in /context.

Showing the model alongside token stats is useful for quick diagnostics when multiple LLMs are configured.
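A sketch of the one-line extension implied here (the surrounding /context handler and its token statistics are assumed context):

```python
# Inside the existing /context handler, alongside the token statistics:
console.print(f"[bold {STATUS_COLOR}]Model:[/bold {STATUS_COLOR}] {ai.llm.model}")
```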


982-985: Command routing wired correctly.

/model dispatch integrates cleanly with the existing command flow and uses continue to avoid falling through to LLM calls.
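Schematically, the routing described here looks something like the following; only the /model branch and the use of continue are taken from the review, while the loop around it is an assumption:

```python
while True:
    user_input = input("> ").strip()
    if user_input == SlashCommands.MODEL.command:
        handle_model_command(ai, console)
        continue  # handled locally; do not fall through to an LLM call
    # ... other slash commands, then the normal LLM round-trip ...
```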


Results of HolmesGPT evals

  • ask_holmes: 33/39 test cases were successful, 1 regression, 2 skipped, 2 setup failures
| Test suite | Test case | Status |
| --- | --- | --- |
| ask | 01_how_many_pods | |
| ask | 02_what_is_wrong_with_pod | |
| ask | 03_what_is_the_command_to_port_forward | |
| ask | 04_related_k8s_events | ↪️ |
| ask | 05_image_version | |
| ask | 09_crashpod | |
| ask | 10_image_pull_backoff | |
| ask | 11_init_containers | |
| ask | 14_pending_resources | |
| ask | 15_failed_readiness_probe | |
| ask | 17_oom_kill | |
| ask | 18_crash_looping_v2 | |
| ask | 19_detect_missing_app_details | |
| ask | 20_long_log_file_search | |
| ask | 24_misconfigured_pvc | |
| ask | 28_permissions_error | 🚧 |
| ask | 29_events_from_alert_manager | ↪️ |
| ask | 39_failed_toolset | |
| ask | 41_setup_argo | |
| ask | 42_dns_issues_steps_new_tools | ⚠️ |
| ask | 43_current_datetime_from_prompt | |
| ask | 45_fetch_deployment_logs_simple | |
| ask | 51_logs_summarize_errors | |
| ask | 53_logs_find_term | |
| ask | 54_not_truncated_when_getting_pods | |
| ask | 59_label_based_counting | |
| ask | 60_count_less_than | 🚧 |
| ask | 61_exact_match_counting | |
| ask | 63_fetch_error_logs_no_errors | |
| ask | 79_configmap_mount_issue | |
| ask | 83_secret_not_found | |
| ask | 86_configmap_like_but_secret | |
| ask | 93_calling_datadog | |
| ask | 93_calling_datadog | |
| ask | 93_calling_datadog | |
| ask | 97_logs_clarification_needed | |
| ask | 110_k8s_events_image_pull | |
| ask | 24a_misconfigured_pvc_basic | |
| ask | 13a_pending_node_selector_basic | |

Legend

  • ✅ the test was successful
  • ↪️ the test was skipped
  • ⚠️ the test failed but is known to be flaky or known to fail
  • 🚧 the test had a setup failure (not a code regression)
  • 🔧 the test failed due to mock data issues (not a code regression)
  • ❌ the test failed and should be fixed before merging the PR
