Releases: Kilo-Org/kilocode

Release v4.111.1

27 Oct 18:09

Release v4.111.0

26 Oct 19:28
8e0b6fa

Patch Changes

Release v4.110.0

23 Oct 19:12

  • #3104 3008656 Thanks @markijbema! - Support multiline quoted strings within Auto-approved commands (especially useful for git commit)
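
    For example, a git commit whose quoted message spans several lines, like the sketch below, can now be matched against the auto-approved command list. The command and message text are illustrative only:

    // Illustrative only: a terminal command whose quoted argument spans
    // multiple lines. Auto-approval matching (e.g. an allowlist entry for
    // "git commit") can now handle commands of this shape.
    const command = [
        'git commit -m "Fix flaky retry test',
        "",
        'Back off exponentially between attempts."',
    ].join("\n");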

Patch Changes

Release v4.109.2

22 Oct 12:46

Release v4.109.1

21 Oct 19:27

Release v4.109.0

21 Oct 17:21

Patch Changes

Release v4.108.0

20 Oct 22:07

  • #2674 2836aed Thanks @mcowger! - Add a send-message-on-Enter setting with configurable behavior

  • #3090 261889f Thanks @mcowger! - Allow the use of native function calling for OpenAI-compatible, LM Studio, Chutes, DeepInfra, xAI and Z.ai providers.

Patch Changes

  • #3155 6242b03 Thanks @NikoDi2000! - Improved the Chinese translation of "run" from '命令' ("command") to '运行' ("run")

  • #3120 ced4857 Thanks @mcowger! - The apply_diff tool was implemented for experimental JSON-style tool calling

Release v4.107.0

17 Oct 22:53

Patch Changes

Release v4.106.0

16 Oct 15:32

  • #2833 0b8ef46 Thanks @mcowger! - (also thanks to @NaccOll for paving the way) - Preliminary support for native tool calling (a.k.a. native function calling) was added.

    This feature is currently experimental and mostly intended for users interested in contributing to its development.
    So far it is only supported with the OpenRouter and Kilo Code providers. Possible issues include, but are not limited to:

    • Missing tools (e.g. the apply_diff tool)
    • Tool calls not updating the UI until they are complete
    • Tools being used even though they are disabled (e.g. the browser tool)
    • MCP servers not working
    • Errors specific to certain inference providers

    Native tool calling can be enabled by setting Providers Settings > Advanced Settings > Tool Call Style to JSON.
    It is enabled by default for Claude Haiku 4.5, because that model does not work at all otherwise. (An illustrative sketch of the native tool-call format appears below, after this list.)

  • #3050 357d438 Thanks @markijbema! - CMD-I now invokes the agent so you can give it more complex prompts
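
To make the native tool calling experiment above more concrete, the sketch below shows, in TypeScript, the general shape of the widely used OpenAI-style function-calling exchange: the request carries structured tool definitions, and the model replies with structured tool_calls instead of embedding tool invocations in its text output. The read_file tool, its parameters, and the file path are hypothetical examples, not necessarily Kilo Code's actual tool schema.

    // Sketch of OpenAI-style native function calling (tool definitions sent
    // with the request, structured tool_calls returned by the model).
    // The tool name, parameters, and arguments below are hypothetical.
    const tools = [
        {
            type: "function",
            function: {
                name: "read_file",
                description: "Read a file from the current workspace",
                parameters: {
                    type: "object",
                    properties: { path: { type: "string" } },
                    required: ["path"],
                },
            },
        },
    ];

    // With the JSON tool-call style enabled, the assistant's reply carries
    // structured tool calls rather than tool tags embedded in plain text.
    const assistantMessage = {
        role: "assistant",
        content: null,
        tool_calls: [
            {
                id: "call_1",
                type: "function",
                function: { name: "read_file", arguments: '{"path":"src/extension.ts"}' },
            },
        ],
    };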

Release v4.105.0

15 Oct 21:54

  • #3005 b87ae9c Thanks @kevinvandijk! - Improve the edit chat area to allow context and file drag-and-drop when editing messages, aligning more closely with upstream edit functionality

Patch Changes

  • #2983 93e8243 Thanks @jrf0110! - Adds project usage tracking for Teams and Enterprise customers. Organization members can view and filter usage by project. The project identifier is automatically inferred from .git/config. It can be overridden by writing a .kilocode/config.json file with the following contents:

    {
    	"project": {
    		"id": "my-project-id"
    	}
    }

  • #3057 69f5a18 Thanks @chrarnoldus! - Thanks to Roo, support for Claude Haiku 4.5 was added to the Anthropic, Bedrock and Vertex providers

  • #3046 1bd934f Thanks @chrarnoldus! - A warning is now shown when the webview memory usage crosses 90% of the limit (gray screen territory)

  • #2885 a34dab0 Thanks @shameez-struggles-to-commit! - Update VS Code Language Model API provider metadata to reflect current model limits:

    • Align context windows, prompt/input limits, and max output tokens with the latest provider data for matching models: gpt-3.5-turbo, gpt-4o-mini, gpt-4, gpt-4-0125-preview, gpt-4o, o3-mini, claude-3.5-sonnet, claude-sonnet-4, gemini-2.0-flash-001, gemini-2.5-pro, o4-mini-2025-04-16, gpt-4.1, gpt-5-mini, gpt-5.
    • Fixes an issue where a default 128k context was assumed for all models.
    • Notably, the GPT-5 family now uses a 264k context; the o3-mini/o4-mini, Gemini, Claude, and 4o families have updated output and image-support flags. GPT-5-mini's max output is explicitly set to 127,805.

    This ensures Kilo Code correctly enforces model token budgets with the VS Code LM integration.
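
As a rough illustration of what enforcing these token budgets involves, the minimal sketch below derives an input budget from the figures quoted above (a 264k context window and a 127,805-token max output for gpt-5-mini). The ModelLimits shape and the subtraction are assumptions made for illustration, not Kilo Code's actual implementation.

    // Minimal sketch, assuming "input budget = context window - reserved output".
    // Names and types below are illustrative, not Kilo Code's real code.
    interface ModelLimits {
        contextWindow: number;   // total tokens the model can attend to
        maxOutputTokens: number; // tokens reserved for the model's reply
    }

    // Figures quoted in this release note for gpt-5-mini.
    const gpt5Mini: ModelLimits = { contextWindow: 264_000, maxOutputTokens: 127_805 };

    // Tokens left for the system prompt, conversation history, and attached context.
    function inputBudget({ contextWindow, maxOutputTokens }: ModelLimits): number {
        return contextWindow - maxOutputTokens;
    }

    console.log(inputBudget(gpt5Mini)); // 136195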