
Conversation

aantn (Contributor) commented Aug 22, 2025

I'm not crazy about this implementation. I would much rather have the --fast flag enable/disable the ToDo toolset and have everything else happen automatically. I.e., if each toolset had an API like 'add_to_system_prompt', we could just disable the toolset and nothing would be added to the prompt.
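
For illustration, a minimal sketch of the toolset-level hook this comment is proposing; the `Toolset` base class, `add_to_system_prompt` method, and `build_system_prompt` helper are hypothetical names, not the existing HolmesGPT API:

```python
# Illustrative sketch only: Toolset, add_to_system_prompt, and build_system_prompt
# are hypothetical names, not the current HolmesGPT API.
from typing import List


class Toolset:
    """Base class: each toolset can contribute its own system-prompt section."""

    name: str = "base"

    def add_to_system_prompt(self) -> str:
        # Most toolsets contribute nothing to the system prompt.
        return ""


class TodoToolset(Toolset):
    name = "core_investigation"

    def add_to_system_prompt(self) -> str:
        return "## Task Management\nUse TodoWrite to track multi-step work.\n"


def build_system_prompt(base_prompt: str, toolsets: List[Toolset]) -> str:
    # Disabling a toolset (e.g. via --fast) automatically removes its prompt
    # section, because each section is derived from the enabled toolsets.
    sections = [ts.add_to_system_prompt() for ts in toolsets]
    return base_prompt + "\n".join(s for s in sections if s)
```

With that shape, --fast would only need to drop the Todo toolset from the list; the prompt section would disappear as a side effect.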

aantn requested a review from arikalon1 on August 22, 2025 at 15:41
coderabbitai bot (Contributor) commented Aug 22, 2025

Walkthrough

Adds a disable_todos/fast path: a new CLI flag (--fast) propagates to LLM/tool initialization, causing create_console_tool_executor to omit the core_investigation toolset. Prompt rendering now receives has_todos derived from toolsets and conditionally includes Todo guidance in the system prompt.
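
A condensed sketch of that flow; the toolset type is stubbed out here and the function names are simplified stand-ins for the real code in holmes/config.py and holmes/core/prompt.py:

```python
# Simplified stand-ins for the wiring described above; the real functions take a
# config/executor and return a ToolExecutor and template context.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Toolset:
    name: str


def filter_cli_toolsets(toolsets: List[Toolset], disable_todos: bool) -> List[Toolset]:
    # --fast sets disable_todos=True, which drops the core_investigation toolset
    # (and with it TodoWriteTool) before the tool executor is built.
    if disable_todos:
        return [ts for ts in toolsets if ts.name != "core_investigation"]
    return toolsets


def prompt_context(toolsets: List[Toolset]) -> Dict[str, bool]:
    # The system prompt template only renders the Todo guidance when has_todos is True.
    return {"has_todos": any(ts.name == "core_investigation" for ts in toolsets)}
```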

Changes

| Cohort / File(s) | Summary |
|---|---|
| Config: tool executor and LLM construction (`holmes/config.py`) | Added `disable_todos: bool` flag to `create_console_tool_executor` and `create_console_toolcalling_llm`; when true, filters out the `core_investigation` toolset (removing `TodoWriteTool`) for console runs. |
| Prompt context wiring (`holmes/core/prompt.py`) | Detects presence of `core_investigation` in `tool_executor.toolsets`, sets `has_todos`, and passes it into the system prompt template context. No API changes. |
| CLI flag and propagation (`holmes/main.py`) | Adds `--fast` flag to the `ask` command; passes `fast` as `disable_todos` to AI/tool initialization. No other flow changes. |
| Prompt template conditional (`holmes/plugins/prompts/_general_instructions.jinja2`) | Adds `has_todos`-gated “MANDATORY Task Management” sections instructing use of TodoWrite for multi-step work; omitted when `has_todos` is false. |

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant User
  participant CLI as holmes/main.py (ask)
  participant Config as Config
  participant Exec as ToolExecutor
  participant Prompt as Prompt Renderer

  User->>CLI: holmes ask ... [--fast]
  CLI->>Config: create_console_toolcalling_llm(disable_todos=fast)
  Config->>Config: create_console_tool_executor(disable_todos)
  alt disable_todos = true
    Config->>Exec: Build toolsets (exclude core_investigation)
  else disable_todos = false
    Config->>Exec: Build toolsets (include core_investigation)
  end
  CLI->>Prompt: render_system_prompt(context from Exec)
  Prompt->>Prompt: has_todos = "core_investigation" in toolsets
  Prompt-->>CLI: System prompt (Todo guidance conditional on has_todos)
  Note over CLI,Exec: Subsequent tool calls limited by available toolsets

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

  • Subtasks #851 — Prior work around todo/task tooling (core_investigation, TodoWrite, prompt context) that this PR toggles via disable_todos and template conditionals.

Suggested reviewers

  • moshemorad


coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
holmes/core/prompt.py (1)

78-79: Gate Todo reminder on has_todos in fast mode

The call to get_tasks_management_system_reminder() is currently unconditional, so even when has_todos is false (fast mode) the reminder is still appended. This contradicts the intended behavior.

• In holmes/core/prompt.py (around line 78), wrap the append in an if has_todos block:

-    user_prompt_with_files += get_tasks_management_system_reminder()
+    if has_todos:
+        user_prompt_with_files += get_tasks_management_system_reminder()

• Update the existing tests in tests/core/test_prompt.py (lines 43, 68, and 118) so that:
– For scenarios where has_todos is true, they continue to assert that the reminder is appended.
– Add at least one new test where has_todos is false and assert that user_prompt_with_files does not include the reminder.

🧹 Nitpick comments (7)
holmes/config.py (4)

306-311: Add disable_todos param: good, but document the behavior and align naming.

The new flag is sensible. Please also update the docstring to explicitly describe the effect of disable_todos, and consider naming consistency: here it's refresh_status while callers pass refresh_toolsets (Line 358). Not a blocker, but mildly confusing.

Apply this docstring addition to clarify the parameter:

 def create_console_tool_executor(
     self,
     dal: Optional["SupabaseDal"],
     refresh_status: bool = False,
     disable_todos: bool = False,
 ) -> ToolExecutor:
     """
     Creates a ToolExecutor instance configured for CLI usage. This executor manages the available tools
     and their execution in the command-line interface.

     The method loads toolsets in this order, with later sources overriding earlier ones:
     1. Built-in toolsets (tagged as CORE or CLI)
     2. toolsets from config file will override and be merged into built-in toolsets with the same name.
     3. Custom toolsets from config files which can not override built-in toolsets
+    
+    Args:
+        refresh_status: When True, refresh each toolset’s status before returning.
+        disable_todos: When True, omit the Todo-related toolset(s) from the CLI executor
+            (currently implemented by filtering out the "core_investigation" toolset).
     """

325-330: Avoid magic string; add debug log to aid support.

Filtering by the literal name "core_investigation" couples behavior to a display name. Prefer a capability flag (e.g., ts.capabilities.get("todos")) or a constant/enum. At minimum, emit a debug log so users can confirm what got disabled.

Minimal improvement with logging:

-        if disable_todos:
-            cli_toolsets = [
-                ts for ts in cli_toolsets if ts.name != "core_investigation"
-            ]
+        if disable_todos:
+            before = len(cli_toolsets)
+            cli_toolsets = [ts for ts in cli_toolsets if ts.name != "core_investigation"]
+            after = len(cli_toolsets)
+            if before != after:
+                logging.debug("Fast mode: disabled 'core_investigation' toolset (Todo tools)")

Longer-term, consider tagging toolsets with a features dict (e.g., {"todos": True}) and filter by feature instead of name.
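A minimal sketch of that capability-based filter with debug logging; the `capabilities` dict is the attribute this comment proposes, not something the current toolset class provides:

```python
# Sketch of capability-based filtering; `capabilities` is a proposed attribute,
# not part of the existing Toolset class.
import logging
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Toolset:
    name: str
    capabilities: Dict[str, bool] = field(default_factory=dict)


def filter_toolsets(cli_toolsets: List[Toolset], disable_todos: bool) -> List[Toolset]:
    if not disable_todos:
        return cli_toolsets
    kept = [ts for ts in cli_toolsets if not ts.capabilities.get("todos", False)]
    if len(kept) != len(cli_toolsets):
        logging.debug("Fast mode: disabled Todo-capable toolsets")
    return kept
```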


351-357: Propagating disable_todos is correct; add a type to tracer for mypy.

Tiny type-hint tweak keeps us aligned with “Type hints are required.”

 def create_console_toolcalling_llm(
     self,
     dal: Optional["SupabaseDal"] = None,
     refresh_toolsets: bool = False,
-    tracer=None,
+    tracer: Optional[Any] = None,
     disable_todos: bool = False,
 ) -> "ToolCallingLLM":

358-360: Pass by keyword to future-proof the call.

Using positional args makes future param reordering risky. Use keywords for clarity.

-        tool_executor = self.create_console_tool_executor(
-            dal, refresh_toolsets, disable_todos
-        )
+        tool_executor = self.create_console_tool_executor(
+            dal=dal, refresh_status=refresh_toolsets, disable_todos=disable_todos
+        )
holmes/core/prompt.py (1)

57-59: Prefer capability-based detection over toolset name.

Checking ts.name == "core_investigation" is brittle. If the toolset is renamed or split, this will silently break. Consider either:

  • Detect the actual tool presence (e.g., 'TodoWrite' in ts.tools_by_name), or
  • Use a capability flag (e.g., ts.capabilities.get("todos")).
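A sketch of the first option, assuming each toolset exposes a `tools_by_name` mapping as the example above suggests (treat that attribute name as an assumption):

```python
# Sketch: derive has_todos from the actual presence of the TodoWrite tool rather
# than the toolset's display name; `tools_by_name` is assumed per the comment above.
from typing import Any, Dict, Iterable


def derive_has_todos(toolsets: Iterable[Any]) -> bool:
    for ts in toolsets:
        tools: Dict[str, Any] = getattr(ts, "tools_by_name", {}) or {}
        if "TodoWrite" in tools:
            return True
    return False
```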
holmes/plugins/prompts/_general_instructions.jinja2 (1)

54-66: Conditional Todo guidance looks good; align naming with actual tool identifier.

The gated block achieves the intended behavior. Minor wording nit: elsewhere we reference TodoWriteTool. Consider consistently using the canonical tool name to avoid confusion.

holmes/main.py (1)

218-222: Consider parity across investigate subcommands.

If “fast mode” is useful beyond ask, consider adding --fast to investigate commands (jira, github, pagerduty, opsgenie) and plumbing it to their respective create_console_issue_investigator paths (or add a shared helper). Not required for this PR, but improves UX consistency.
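
A Typer-based sketch of what that parity could look like (holmes/main.py is Typer-based); the command bodies and help text below are illustrative stand-ins, not the existing implementation:

```python
# Illustrative only: the same --fast option declared on multiple subcommands,
# ready to be plumbed into the investigator/tool construction.
import typer

app = typer.Typer()


@app.command()
def ask(
    question: str,
    fast: bool = typer.Option(False, "--fast", help="Skip the Todo toolset for lighter runs."),
) -> None:
    typer.echo(f"ask(fast={fast}): {question}")


@app.command()
def jira(
    issue_id: str,
    fast: bool = typer.Option(False, "--fast", help="Skip the Todo toolset for lighter runs."),
) -> None:
    # Same flag, same semantics as `ask`.
    typer.echo(f"jira(fast={fast}): {issue_id}")


if __name__ == "__main__":
    app()
```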

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between d95846a and 8d2ece1.

📒 Files selected for processing (4)
  • holmes/config.py (3 hunks)
  • holmes/core/prompt.py (1 hunks)
  • holmes/main.py (2 hunks)
  • holmes/plugins/prompts/_general_instructions.jinja2 (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)

Files:

  • holmes/core/prompt.py
  • holmes/main.py
  • holmes/config.py
holmes/plugins/prompts/*.jinja2

📄 CodeRabbit inference engine (CLAUDE.md)

Prompts must be Jinja2 templates named {name}.jinja2 under holmes/plugins/prompts/

Files:

  • holmes/plugins/prompts/_general_instructions.jinja2
🧠 Learnings (1)
📚 Learning: 2025-08-17T08:42:48.789Z
Learnt from: CR
PR: robusta-dev/holmesgpt#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-17T08:42:48.789Z
Learning: Applies to holmes/plugins/prompts/*.jinja2 : Prompts must be Jinja2 templates named {name}.jinja2 under holmes/plugins/prompts/

Applied to files:

  • holmes/plugins/prompts/_general_instructions.jinja2
🧬 Code graph analysis (2)
holmes/core/prompt.py (1)
tests/plugins/toolsets/test_prometheus_integration.py (1)
  • tool_executor (21-28)
holmes/config.py (1)
holmes/main.py (1)
  • refresh_toolsets (956-966)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.11)
  • GitHub Check: Pre-commit checks
  • GitHub Check: llm_evals
🔇 Additional comments (4)
holmes/core/prompt.py (2)

57-59: Good: derive has_todos from active toolsets.

This is the right direction to decouple prompt content from CLI flags.


67-67: Plumbing has_todos into the template context is correct.

This enables prompt-side gating without additional conditionals in code.

holmes/main.py (2)

218-222: Nice addition: --fast flag for ask.

Flag naming and help text are clear.


260-260: Correct propagation of --fast to tool construction.

This ties the CLI flag to tool availability as intended.


Results of HolmesGPT evals

  • ask_holmes: 35/39 test cases were successful, 0 regressions, 2 skipped, 2 setup failures
| Test suite | Test case | Status |
|---|---|---|
| ask | 01_how_many_pods | ✅ |
| ask | 02_what_is_wrong_with_pod | ✅ |
| ask | 03_what_is_the_command_to_port_forward | ✅ |
| ask | 04_related_k8s_events | ↪️ |
| ask | 05_image_version | ✅ |
| ask | 09_crashpod | ✅ |
| ask | 10_image_pull_backoff | ✅ |
| ask | 11_init_containers | ✅ |
| ask | 14_pending_resources | ✅ |
| ask | 15_failed_readiness_probe | ✅ |
| ask | 17_oom_kill | ✅ |
| ask | 18_crash_looping_v2 | ✅ |
| ask | 19_detect_missing_app_details | ✅ |
| ask | 20_long_log_file_search | ✅ |
| ask | 24_misconfigured_pvc | ✅ |
| ask | 28_permissions_error | 🚧 |
| ask | 29_events_from_alert_manager | ↪️ |
| ask | 39_failed_toolset | ✅ |
| ask | 41_setup_argo | ✅ |
| ask | 42_dns_issues_steps_new_tools | ✅ |
| ask | 43_current_datetime_from_prompt | ✅ |
| ask | 45_fetch_deployment_logs_simple | ✅ |
| ask | 51_logs_summarize_errors | ✅ |
| ask | 53_logs_find_term | ✅ |
| ask | 54_not_truncated_when_getting_pods | ✅ |
| ask | 59_label_based_counting | ✅ |
| ask | 60_count_less_than | 🚧 |
| ask | 61_exact_match_counting | ✅ |
| ask | 63_fetch_error_logs_no_errors | ✅ |
| ask | 79_configmap_mount_issue | ✅ |
| ask | 83_secret_not_found | ✅ |
| ask | 86_configmap_like_but_secret | ✅ |
| ask | 93_calling_datadog | ✅ |
| ask | 93_calling_datadog | ✅ |
| ask | 93_calling_datadog | ✅ |
| ask | 97_logs_clarification_needed | ✅ |
| ask | 110_k8s_events_image_pull | ✅ |
| ask | 24a_misconfigured_pvc_basic | ✅ |
| ask | 13a_pending_node_selector_basic | ✅ |

Legend

  • ✅ the test was successful
  • ↪️ the test was skipped
  • ⚠️ the test failed but is known to be flaky or known to fail
  • 🚧 the test had a setup failure (not a code regression)
  • 🔧 the test failed due to mock data issues (not a code regression)
  • ❌ the test failed and should be fixed before merging the PR
