Add /model command to interactive mode #888
base: master
Conversation
- Added MODEL slash command to show the current LLM model being used
- Added handle_model_command() function to display model information (sketched below)
- Also updated /context command to include the model name in its output
- Provides quick access to see which model is currently configured
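A minimal sketch of the new handler, reconstructed from the review diff further down; the imports and the STATUS_COLOR value are assumptions (the real constant is defined in holmes/interactive.py):

```python
from rich.console import Console  # interactive.py prints through a rich Console
from holmes.core.tool_calling_llm import ToolCallingLLM

STATUS_COLOR = "cyan"  # assumption: stand-in for the real constant in holmes/interactive.py


def handle_model_command(ai: ToolCallingLLM, console: Console) -> None:
    """Handle the /model command to show current LLM model information."""
    console.print(
        f"[bold {STATUS_COLOR}]Current Model:[/bold {STATUS_COLOR}] {ai.llm.model}"
    )
```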
Walkthrough

Adds a new /model slash command to the interactive CLI, wires it into command handling, and extends /context to display the current LLM model. Introduces a handle_model_command function that prints ai.llm.model. (The walkthrough initially flagged the function as defined twice; the review below verifies there is only a single definition.)

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant U as User
participant CLI as Interactive CLI
participant AI as ToolCallingLLM
participant LLM as AI.llm
participant C as Console
U->>CLI: /model
CLI->>AI: access ai.llm.model
AI->>LLM: read model
LLM-->>AI: model name
AI-->>CLI: model name
CLI->>C: print "Model: <name>"
Note over CLI,C: handle_model_command invoked
U->>CLI: /context
CLI->>AI: gather context incl. ai.llm.model
CLI->>C: print context + "Model: <name>"
```
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs
Suggested reviewers
Actionable comments posted: 0
🧹 Nitpick comments (1)
holmes/interactive.py (1)
461-466: Verified: Single handle_model_command implementation and SlashCommands.MODEL exist; optional refactor still recommended

All checks confirm there's only one definition of handle_model_command (line 461) and one reference (line 983), and the SlashCommands.MODEL member is defined at line 64. The suggested enhancement remains a safe, non-breaking improvement to harden edge cases.

Locations verified:
- def handle_model_command at holmes/interactive.py:461
- Invocation at holmes/interactive.py:983
- SlashCommands.MODEL defined at holmes/interactive.py:64

Apply this diff to guard against missing attributes and display the backend class:
```diff
-def handle_model_command(ai: ToolCallingLLM, console: Console) -> None:
-    """Handle the /model command to show current LLM model information."""
-    console.print(
-        f"[bold {STATUS_COLOR}]Current Model:[/bold {STATUS_COLOR}] {ai.llm.model}"
-    )
+def handle_model_command(ai: ToolCallingLLM, console: Console) -> None:
+    """Handle the /model command to show current LLM model information."""
+    # Safely fetch the model name and backend class
+    llm = getattr(ai, "llm", None)
+    model = getattr(llm, "model", None) or "<unknown>"
+    backend = llm.__class__.__name__ if llm else "<unknown>"
+    console.print(
+        f"[bold {STATUS_COLOR}]Current Model:[/bold {STATUS_COLOR}] {model} [dim]({backend})[/dim]"
+    )
```
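Worth noting about the suggested refactor: the getattr guards mean a missing or not-yet-initialized ai.llm degrades to printing <unknown> rather than raising AttributeError, and appending the backend class name helps distinguish identically named models served through different LLM wrappers.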
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
holmes/interactive.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py:
- ALWAYS place Python imports at the top of the file, not inside functions or methods
- Type hints are required (project is type-checked with mypy)
- Use Ruff for formatting and linting (configured in pyproject.toml)
Files:
holmes/interactive.py
🧬 Code graph analysis (1)
holmes/interactive.py (1)
holmes/core/tool_calling_llm.py (1)
ToolCallingLLM (204-728)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: llm_evals
- GitHub Check: Pre-commit checks
- GitHub Check: build (3.10)
- GitHub Check: build (3.11)
- GitHub Check: build (3.12)
🔇 Additional comments (3)
holmes/interactive.py (3)
64-64: Nice addition: /model command enumerated and discoverable.

Adding MODEL to SlashCommands plugs it into help and autocompletion via SLASH_COMMANDS_REFERENCE and ALL_SLASH_COMMANDS. Looks consistent with existing command docs.
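A rough illustration of what that enum addition looks like; the member's value format is an assumption, since the review only confirms that SlashCommands.MODEL exists at line 64:

```python
from enum import Enum


class SlashCommands(Enum):
    # ...existing commands elided...
    CONTEXT = "/context"  # assumed: members map to their literal command strings
    MODEL = "/model"  # new in this PR; surfaced via SLASH_COMMANDS_REFERENCE and ALL_SLASH_COMMANDS
```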
500-500: Good context enhancement: model surfaced in /context.

Showing the model alongside token stats is useful for quick diagnostics when multiple LLMs are configured.
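The change at line 500 is presumably a single print alongside the token statistics; a sketch, assuming the /context handler follows the same console.print pattern as handle_model_command:

```python
# Inside the /context handler, next to the existing token-usage output:
console.print(
    f"[bold {STATUS_COLOR}]Model:[/bold {STATUS_COLOR}] {ai.llm.model}"
)
```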
982-985: Command routing wired correctly.

/model dispatch integrates cleanly with the existing command flow and uses continue to avoid falling through to LLM calls.
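A sketch of the routing pattern being described; only the handle_model_command call and the continue are taken from the review, while the loop shape, prompt, and string comparison are assumptions:

```python
# Hypothetical shape of the REPL dispatch around holmes/interactive.py:982-985
while True:
    user_input = input("> ").strip()  # the real CLI uses a richer prompt; input() is a stand-in
    if user_input == "/model":
        handle_model_command(ai, console)
        continue  # return to the prompt instead of falling through to an LLM call
    # ...other slash commands, then the default path that sends user_input to the LLM...
```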
Make it easy to verify which model HolmesGPT is using in cases where a user has multiple LLMs configured locally.
Also helps with extending /model to support model switching, etc., in the future.