@georgestagg georgestagg commented Oct 16, 2025

This PR has three main aims:

  • Rework how our internal prompts are built, introducing templating and allowing us to move logic from the extension's .ts source into .md files.

  • Create a way for developers to enable and disable parts of Assistant's prompts at runtime, to assist with testing and QA.

  • Replace our llms.txt implementation and instead integrate Assistant with Code OSS's notion of prompt files, custom instructions, modes, & commands. (See here for details).

Also included are some general fixes to prompting and tool calls, including the changes in #9991 and #9974. I will rebase those specific changes away from this branch as they hit main.

Internal prompt files

Assistant's prompt files may now have a YAML header with a `mode` property indicating in which modes the prompt should be included. Prompts are concatenated in the order defined by the `order` YAML property.

Templating is done using Squirrelly, which allows us to move some of the branching logic (e.g. different wording in the "inline editor", or when streaming edits is enabled) into the prompt files. This is the same template engine Databot currently uses, aligning the two.
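As a purely hypothetical illustration (the file name, property values, and template variable names here are assumptions, not taken from this PR), an internal prompt file combining the YAML header with a Squirrelly conditional might look something like:

```markdown
---
mode: [ask, agent]
order: 10
---
You are a helpful coding assistant.
{{@if(it.inlineEditor)}}
Respond with a concise inline edit.
{{#else}}
Respond in full in the chat panel.
{{/if}}
```

Here `mode` restricts the prompt to the listed modes, `order` controls its position when prompts are concatenated, and the conditional selects wording based on context passed to the template engine at render time.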

The default prompt file has been split into different sections, reflecting the purpose of each section.

Prompts for commands have a `command` property, and also a `mode` property, defining the slash command and the modes in which the command is active. A command's prompt document(s) are appended to the system prompt for the currently active mode.
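The header-driven assembly described above can be sketched as follows. This is a minimal illustration, not the PR's implementation: the function names are invented, and the header parser below handles only the two properties needed here (a real implementation would use a proper YAML parser).

```typescript
// Sketch: parse a "---"-delimited header, filter prompt files by the
// active mode, and concatenate bodies in `order`.

interface PromptFile {
  modes: string[]; // modes the prompt applies to
  order: number;   // position in the concatenated system prompt
  body: string;    // prompt text after the header
}

function parsePromptFile(source: string): PromptFile {
  const match = source.match(/^---\n([\s\S]*?)\n---\n?/);
  let modes: string[] = [];
  let order = 0;
  let body = source;
  if (match) {
    body = source.slice(match[0].length);
    for (const line of match[1].split("\n")) {
      const [key, value] = line.split(":").map((s) => s.trim());
      if (key === "mode") {
        // Accept "mode: [agent, ask]" style lists.
        modes = value.replace(/[\[\]]/g, "").split(",").map((s) => s.trim());
      } else if (key === "order") {
        order = Number(value);
      }
    }
  }
  return { modes, order, body: body.trim() };
}

// Build the system prompt for `mode` from a set of prompt file sources.
function buildSystemPrompt(files: string[], mode: string): string {
  return files
    .map(parsePromptFile)
    .filter((p) => p.modes.includes(mode))
    .sort((a, b) => a.order - b.order)
    .map((p) => p.body)
    .join("\n\n");
}
```

Prompts not matching the active mode are dropped before concatenation, so adding a new mode only requires tagging the relevant `.md` files.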

Developer prompt management

A new command has been added, gated by `isDevelopment` (Q: What is this value in daily builds?). When enabled, the command allows a user to turn on and off parts of the Assistant system prompts.

This allows for a quick way to test the effect of individual prompt sections, and even return to the stock model if required:

Screenshot 2025-10-16 at 11 49 59

It might be nice if this were on for daily builds, but it should definitely be off in release builds.

Note: Positron context info is still added even if all prompts are off.

LLM instructions

Our special handling for llms.txt has been removed. With this PR it is added to requests using a different mechanism (next section).

Custom prompt files

Upstream Code OSS has support for custom instructions, commands, and chat modes. This PR integrates that support more fully with Positron. It also adds our own directories for the custom files, under .positron, to keep them separate from the GitHub Copilot Chat naming.

Here are some examples:

Custom Command

Use the cog icon in Assistant, select Prompt Files, then create a new prompt file in the .positron/prompts folder.

Screenshot 2025-10-16 at 11 56 13

Here is an example defining a custom /spelling command, saved as `.positron/prompts/spelling.prompt.md`:

---
mode: agent
---
Check the document for spelling errors.

This should then be available as a command in the given chat mode:

Screenshot 2025-10-16 at 11 59 55

Custom Chat Mode

Screenshot 2025-10-16 at 12 00 38

Example: .positron/chatmodes/pirate.chatmode.md:

---
description: 'Speak like a pirate'
tools: []
---
You must speak like a pirate in this mode.
Screenshot 2025-10-16 at 12 02 17

Custom instructions

This already works as expected, in that AGENTS.md is loaded automatically into the context if it exists in the root of the project.

This PR tweaks the upstream mechanism to also include any of:

  • 'agents.md'
  • 'agent.md'
  • 'positron.md'
  • 'claude.md'
  • 'gemini.md'
  • 'llms.txt'
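The lookup could be implemented along these lines. This is a sketch, not the PR's code: the function name is invented, and the case-insensitive comparison is an assumption based on the list above ('agents.md') matching the AGENTS.md mentioned earlier.

```typescript
// Hypothetical check: should a workspace root file be loaded as
// custom instructions? Comparison is case-insensitive (assumption).
const INSTRUCTION_FILES = new Set([
  "agents.md",
  "agent.md",
  "positron.md",
  "claude.md",
  "gemini.md",
  "llms.txt",
]);

function isInstructionFile(fileName: string): boolean {
  return INSTRUCTION_FILES.has(fileName.toLowerCase());
}
```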

Project tree tool

I have simplified the project tree tool a lot. I found in testing that the model often struggled to infer paths from the previous array-based output format. The tool now outputs paths as a newline-separated list:

foo/bar/baz.txt
foo/bar.txt
abc/def.txt
efg.txt

If the list of entries is too long, it will be limited by removing the longest paths first.
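The truncation rule can be sketched as below. The function name and the default entry limit are assumptions for illustration; only the behaviour (newline-separated output, longest paths dropped first, original order otherwise preserved) reflects the description above.

```typescript
// Sketch: render paths as a newline-separated list, dropping the
// longest paths first when the entry count exceeds a limit.
function renderProjectTree(paths: string[], maxEntries = 1000): string {
  let entries = [...paths];
  if (entries.length > maxEntries) {
    // Identify the longest paths (stable sort keeps ties in original
    // order), then remove them while preserving the survivors' order.
    const longest = [...entries]
      .sort((a, b) => b.length - a.length)
      .slice(0, entries.length - maxEntries);
    const drop = new Set(longest);
    entries = entries.filter((p) => !drop.has(p));
  }
  return entries.join("\n");
}
```

Dropping the longest paths first tends to keep top-level files and shallow directories visible, which are usually the most useful anchors for the model.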

Still TODO

  • With the introduction of a prompt rendering mechanism it would probably be good to have some tests in place specifically for it.

  • Chat export can currently be used to get a dump of a full LLM conversation and tool calling loop, but it would be good if we had a testing entry point that allows us to test tools without starting a full blown Positron session. Probably that would be complex enough for a followup PR.


github-actions bot commented Oct 16, 2025

E2E Tests 🚀
This PR will run tests tagged with: @:critical


@georgestagg georgestagg force-pushed the assistant/prompt-refactoring branch from d8c2a37 to bdb2404 Compare October 21, 2025 09:16
@georgestagg georgestagg force-pushed the assistant/prompt-refactoring branch from bdb2404 to 0dee526 Compare October 21, 2025 09:23
@georgestagg georgestagg marked this pull request as ready for review October 21, 2025 09:41