
[Bug]: Error: EMPTY_MESSAGE #516

@gbroques

Description

Opencommit Version

3.2.10

Node Version

22.17.1

NPM Version

10.9.2

What OS are you seeing the problem on?

Mac

What happened?

Whenever I run oco, I see the following error:

$ oco
┌  open-commit
│
◇  No files are staged
│
◇  Do you want to stage all files and generate commit message?
│  Yes
│
◇  Staged 1 files
┌  open-commit
│
◇  1 staged files:
  README.md
│
◇  ✖ Failed to generate the commit message
Error: EMPTY_MESSAGE
    at generateCommitMessageByDiff (/usr/local/lib/node_modules/opencommit/out/cli.cjs:67375:13)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async generateCommitMessageFromGitDiff (/usr/local/lib/node_modules/opencommit/out/cli.cjs:67595:25)
    at async trytm (/usr/local/lib/node_modules/opencommit/out/cli.cjs:67563:18)
    at async commit (/usr/local/lib/node_modules/opencommit/out/cli.cjs:67768:35)
    at async commit (/usr/local/lib/node_modules/opencommit/out/cli.cjs:67747:7)
│
└  ✖ EMPTY_MESSAGE

I'm attempting to use LM Studio and llama-3.2-3b-instruct.

The commit message is being generated and opencommit is talking to LM Studio properly (see the LM Studio logs below).

There seems to be a problem with opencommit's ability to extract the generated message from the response.

I also use fish as my shell and delta for git diffs.
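
I don't know opencommit's internals, so this is only a hypothesis: a minimal sketch of a response-shape mismatch that could explain an empty message. LM Studio's OpenAI-compatible /v1/chat/completions nests the text under choices[0].message.content (as the logs below show), while Ollama's native /api/chat puts it at the top level under message.content; if the ollama provider reads the latter, it would come up empty. Everything in the snippet is illustrative, not opencommit's actual code.

```typescript
// Hypothetical illustration only -- not opencommit's actual code.
// Parsing an OpenAI-style payload with Ollama-style field access yields
// an empty message, which would match the EMPTY_MESSAGE error.

// Shape returned by LM Studio's OpenAI-compatible /v1/chat/completions
interface OpenAIChatResponse {
  choices: { message: { role: string; content: string } }[];
}

// Shape returned by Ollama's native /api/chat endpoint
interface OllamaChatResponse {
  message: { role: string; content: string };
}

// The payload LM Studio actually returned (taken from the logs below)
const lmStudioPayload: OpenAIChatResponse = {
  choices: [
    {
      message: {
        role: "assistant",
        content:
          "docs(README.md): add test change to demonstrate file's usage and purpose",
      },
    },
  ],
};

// Reading it as if it were an Ollama payload finds nothing...
const asOllama =
  (lmStudioPayload as unknown as Partial<OllamaChatResponse>).message?.content ?? "";
console.log(JSON.stringify(asOllama)); // "" -- an empty message

// ...while the OpenAI-style path recovers the commit message.
const asOpenAI = lmStudioPayload.choices[0]?.message?.content ?? "";
console.log(asOpenAI); // docs(README.md): add test change ...
```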

My full ~/.opencommit config file is:

OCO_MODEL=llama-3.2-3b-instruct
OCO_API_URL=http://127.0.0.1:1234/v1/chat/completions
OCO_API_KEY=undefined
OCO_API_CUSTOM_HEADERS=undefined
OCO_AI_PROVIDER=ollama
OCO_TOKENS_MAX_INPUT=4096
OCO_TOKENS_MAX_OUTPUT=500
OCO_DESCRIPTION=false
OCO_EMOJI=false
OCO_LANGUAGE=en
OCO_MESSAGE_TEMPLATE_PLACEHOLDER=$msg
OCO_PROMPT_MODULE=conventional-commit
OCO_ONE_LINE_COMMIT=false
OCO_TEST_MOCK_TYPE=commit-message
OCO_OMIT_SCOPE=false
OCO_GITPUSH=true
OCO_WHY=false
OCO_HOOK_AUTO_UNCOMMENT=false
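
Note that this points OCO_AI_PROVIDER=ollama at LM Studio's OpenAI-compatible /v1/chat/completions URL. To rule out the endpoint itself, here is a small standalone check (hypothetical script, not part of opencommit; it assumes LM Studio is serving llama-3.2-3b-instruct on 127.0.0.1:1234 and uses Node 22's global fetch) that prints where the generated text actually lives in the response:

```typescript
// check-lmstudio.ts -- hypothetical standalone check, not part of opencommit.
// Sends a minimal chat completion to the same URL as OCO_API_URL and prints
// where the generated text is located in the response body.

const API_URL = "http://127.0.0.1:1234/v1/chat/completions";

async function main(): Promise<void> {
  const res = await fetch(API_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama-3.2-3b-instruct",
      messages: [
        { role: "system", content: "Write a conventional commit message for the given diff." },
        { role: "user", content: "diff --git a/README.md b/README.md\n+Add a line for testing." },
      ],
      stream: false,
    }),
  });

  const data = await res.json();

  // OpenAI-compatible servers such as LM Studio nest the text here:
  console.log("choices[0].message.content:", data?.choices?.[0]?.message?.content);

  // Ollama's native /api/chat would put it here instead (expected: undefined):
  console.log("message.content:", data?.message?.content);
}

main().catch(console.error);
```

If the first line prints a commit message and the second prints undefined, the server side looks healthy and the empty message presumably comes from how the response is read.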

Expected Behavior

I expect it to generate a commit message.

Current Behavior

It errors.

Possible Solution

No response

Steps to Reproduce

No response

Relevant log output

2025-08-18 10:54:31 [DEBUG]
 [Client=plugin:installed:lmstudio/js-code-sandbox] Client created.
2025-08-18 10:54:31 [DEBUG]
 [Client=plugin:installed:lmstudio/rag-v1] Client created.
2025-08-18 10:54:32  [INFO]
 [Plugin(lmstudio/js-code-sandbox)] stdout: [Tools Prvdr.] Register with LM Studio
2025-08-18 10:54:32  [INFO]
 [Plugin(lmstudio/rag-v1)] stdout: [PromptPreprocessor] Register with LM Studio
2025-08-18 10:54:32 [DEBUG]
 [Client=plugin:installed:lmstudio/js-code-sandbox][Endpoint=setToolsProvider] Registering tools provider.
2025-08-18 10:54:32 [DEBUG]
 [Client=plugin:installed:lmstudio/rag-v1][Endpoint=setPromptPreprocessor] Registering promptPreprocessor.
2025-08-18 11:03:03 [DEBUG]
 [ModelKit][INFO] Loading model from /Users/P3299121/.lmstudio/models/mlx-community/Llama-3.2-3B-Instruct-4bit...
2025-08-18 11:03:04 [DEBUG]
 [ModelKit][INFO] Model loaded successfully
2025-08-18 11:03:56  [INFO]
 [LM STUDIO SERVER] Success! HTTP server listening on port 1234
2025-08-18 11:03:56  [INFO]
2025-08-18 11:03:56  [INFO]
 [LM STUDIO SERVER] Supported endpoints:
2025-08-18 11:03:56  [INFO]
 [LM STUDIO SERVER] ->	GET  http://localhost:1234/v1/models
2025-08-18 11:03:56  [INFO]
 [LM STUDIO SERVER] ->	POST http://localhost:1234/v1/chat/completions
2025-08-18 11:03:56  [INFO]
 [LM STUDIO SERVER] ->	POST http://localhost:1234/v1/completions
2025-08-18 11:03:56  [INFO]
 [LM STUDIO SERVER] ->	POST http://localhost:1234/v1/embeddings
2025-08-18 11:03:56  [INFO]
2025-08-18 11:03:56  [INFO]
 [LM STUDIO SERVER] Logs are saved into /Users/P3299121/.lmstudio/server-logs
2025-08-18 11:03:56  [INFO]
 Server started.
2025-08-18 11:03:56  [INFO]
 Just-in-time model loading active.
2025-08-18 11:04:39 [DEBUG]
 Received request: POST to /v1/chat/completions with body  {
  "model": "llama-3.2-3b-instruct",
  "messages": [
    {
      "role": "system",
      "content": "You are to act as an author of a commit message in... <Truncated in logs> ...4 characters. Use english for the commit message.\n"
    },
    {
      "role": "user",
      "content": "diff --git a/src/server.ts b/src/server.ts\n    ind... <Truncated in logs> ...r listening on port ${PORT}`);\n                });"
    },
    {
      "role": "assistant",
      "content": "fix(server.ts): change port variable case from low... <Truncated in logs> ...iable to be able to run app on a configurable port"
    },
    {
      "role": "user",
      "content": "diff --git a/README.md b/README.md\nindex adb85eb..... <Truncated in logs> ...n for educational purposes.\n \n ## Why study GPT-2?"
    }
  ],
  "options": {
    "temperature": 0,
    "top_p": 0.1
  },
  "stream": false
}
2025-08-18 11:04:39  [INFO]
 [LM STUDIO SERVER] Running chat completion on conversation with 4 messages.
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 1 tokens in response  docs
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 2 tokens in response  docs(
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 3 tokens in response  docs(README
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 4 tokens in response  docs(README.md
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 5 tokens in response  docs(README.md):
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 6 tokens in response  docs(README.md): add
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 7 tokens in response  docs(README.md): add test
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 8 tokens in response  docs(README.md): add test change
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 9 tokens in response  docs(README.md): add test change to
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 10 tokens in response  docs(README.md): add test change to demonstrate
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 11 tokens in response  docs(README.md): add test change to demonstrate file
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 12 tokens in response  docs(README.md): add test change to demonstrate file's
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 13 tokens in response  docs(README.md): add test change to demonstrate file's usage
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 14 tokens in response  docs(README.md): add test change to demonstrate file's usage and
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 15 tokens in response  docs(README.md): add test change to demonstrate file's usage and purpose
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Accumulated 15 tokens in response  docs(README.md): add test change to demonstrate file's usage and purpose
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Model generated tool calls:  []
2025-08-18 11:04:41  [INFO]
 [llama-3.2-3b-instruct] Generated prediction:  {
  "id": "chatcmpl-57pd1aaeppl674bk0o3ufw",
  "object": "chat.completion",
  "created": 1755533079,
  "model": "llama-3.2-3b-instruct",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "docs(README.md): add test change to demonstrate file's usage and purpose",
        "reasoning_content": "",
        "tool_calls": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 557,
    "completion_tokens": 16,
    "total_tokens": 573
  },
  "stats": {},
  "system_fingerprint": "llama-3.2-3b-instruct"
}
2025-08-18 11:05:18 [DEBUG]
 Received request: POST to /v1/chat/completions with body  {
  "model": "llama-3.2-3b-instruct",
  "messages": [
    {
      "role": "system",
      "content": "You are to act as an author of a commit message in... <Truncated in logs> ...4 characters. Use english for the commit message.\n"
    },
    {
      "role": "user",
      "content": "diff --git a/src/server.ts b/src/server.ts\n    ind... <Truncated in logs> ...r listening on port ${PORT}`);\n                });"
    },
    {
      "role": "assistant",
      "content": "fix(server.ts): change port variable case from low... <Truncated in logs> ...iable to be able to run app on a configurable port"
    },
    {
      "role": "user",
      "content": "diff --git a/README.md b/README.md\nindex adb85eb..... <Truncated in logs> ...n for educational purposes.\n \n ## Why study GPT-2?"
    }
  ],
  "options": {
    "temperature": 0,
    "top_p": 0.1
  },
  "stream": false
}
2025-08-18 11:05:18  [INFO]
 [LM STUDIO SERVER] Running chat completion on conversation with 4 messages.
2025-08-18 11:05:18 [DEBUG]
 [CacheWrapper][INFO] Trimmed 17 tokens from the prompt cache
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 1 tokens in response  docs
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 2 tokens in response  docs(
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 3 tokens in response  docs(README
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 4 tokens in response  docs(README.md
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 5 tokens in response  docs(README.md):
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 6 tokens in response  docs(README.md): add
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 7 tokens in response  docs(README.md): add test
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 8 tokens in response  docs(README.md): add test change
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 9 tokens in response  docs(README.md): add test change to
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 10 tokens in response  docs(README.md): add test change to update
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 11 tokens in response  docs(README.md): add test change to update README
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 12 tokens in response  docs(README.md): add test change to update README with
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 13 tokens in response  docs(README.md): add test change to update README with new
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 14 tokens in response  docs(README.md): add test change to update README with new implementation
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 15 tokens in response  docs(README.md): add test change to update README with new implementation information
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 16 tokens in response  docs(README.md): add test change to update README with new implementation information \n
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 17 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 18 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 19 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 20 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 21 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md):
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 22 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 23 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 24 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 25 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content to
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 26 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content to reflect
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 27 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content to reflect changes
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 28 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content to reflect changes in
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 29 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content to reflect changes in G
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 30 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content to reflect changes in GPT
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 31 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content to reflect changes in GPT-
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 32 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content to reflect changes in GPT-2
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 33 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content to reflect changes in GPT-2 small
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 34 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content to reflect changes in GPT-2 small implementation
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Accumulated 34 tokens in response  docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content to reflect changes in GPT-2 small implementation
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Model generated tool calls:  []
2025-08-18 11:05:19  [INFO]
 [llama-3.2-3b-instruct] Generated prediction:  {
  "id": "chatcmpl-uqektu5mc6ch9w6y7b886p",
  "object": "chat.completion",
  "created": 1755533118,
  "model": "llama-3.2-3b-instruct",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "docs(README.md): add test change to update README with new implementation information \nfeat(README.md): update README content to reflect changes in GPT-2 small implementation",
        "reasoning_content": "",
        "tool_calls": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 557,
    "completion_tokens": 35,
    "total_tokens": 592
  },
  "stats": {},
  "system_fingerprint": "llama-3.2-3b-instruct"
}
