Conversation


@ZaneH ZaneH commented Aug 7, 2025

What changed?

Resolves: #362

coderabbitai bot commented Aug 7, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


ZaneH added 4 commits August 7, 2025 17:00
This commit is not a complete solution, but it is most of the way there.
The final step is likely to stop using the openai Python package when
using Ollama: Ollama provides its own Python package, which adds the
`think` argument.
# For now, we'll skip it to avoid the OpenAI client error
# The think parameter might need to be passed differently to Ollama's API

response = openai_client.chat.completions.create(**payload)
ZaneH (Author) commented:

Calling this with think in the payload doesn't work. We may need to use Ollama's Python package, or create an HTTP request for a more manual approach.
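For the manual approach, here is a minimal sketch of what that HTTP request could look like. The endpoint URL and the top-level `think` field are assumptions about Ollama's /api/chat API, not something this PR already implements:

```python
import json

# Assumed default Ollama endpoint; not part of this PR.
OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

def build_ollama_chat_body(model, messages, think=None):
    """Build the JSON body for Ollama's /api/chat, carrying `think` at the top level."""
    body = {"model": model, "messages": messages, "stream": False}
    if think is not None:
        body["think"] = think
    return json.dumps(body).encode("utf-8")

# Sending it would then bypass the openai package entirely, e.g.:
# import urllib.request
# req = urllib.request.Request(
#     OLLAMA_CHAT_URL,
#     data=build_ollama_chat_body("qwen3", [{"role": "user", "content": "hi"}], think=True),
#     headers={"Content-Type": "application/json"},
# )
# response = urllib.request.urlopen(req)
```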

@ZaneH ZaneH (Author) commented Aug 15, 2025

Comment out the following line to test what happens when `think` is included in the payload:

think = payload.pop('think', None)

For me it shows:

[mcp_agent.executor.executor] Error executing task: Completions.create() got an unexpected keyword argument 'think'
[mcp_agent.workflows.llm.augmented_llm_openai.daily_briefer] Error: Completions.create() got an unexpected keyword argument 'think'

I believe this is coming from the OpenAI Python package as a sort of validation.
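If staying on the openai package is preferable, one possible workaround (an assumption worth verifying, not something this PR does) is the client's `extra_body` argument, which forwards nonstandard fields into the request JSON instead of rejecting them as keyword arguments. A sketch:

```python
def to_openai_kwargs(payload):
    """Move `think` from a plain kwarg into `extra_body`, which the openai
    client merges into the request body rather than validating as a kwarg."""
    kwargs = dict(payload)
    think = kwargs.pop("think", None)
    if think is not None:
        kwargs["extra_body"] = {**kwargs.get("extra_body", {}), "think": think}
    return kwargs

# Usage would then be:
# response = openai_client.chat.completions.create(**to_openai_kwargs(payload))
```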

Collaborator commented:

We can exclude `think` from the request for non-Ollama cases. I would even be inclined to duplicate some more parts of the openai class into the ollama one to get it working cleanly.
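That exclusion could be as simple as a provider-aware filter. A hypothetical sketch, where the provider names and the parameter set are assumptions for illustration:

```python
# Parameters only Ollama understands; assumed set, extend as needed.
OLLAMA_ONLY_PARAMS = {"think"}

def prepare_payload(payload, provider):
    """Drop Ollama-only parameters when the request targets a non-Ollama backend."""
    cleaned = dict(payload)
    if provider != "ollama":
        for key in OLLAMA_ONLY_PARAMS:
            cleaned.pop(key, None)
    return cleaned
```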

@saqadri saqadri (Collaborator) left a comment

@ZaneH this is already looking pretty great! Very sorry I missed this in my queue. Do you want me to pick it up and finish it, or would you like to continue this all the way through? Let me know how I can help


saqadri commented Sep 3, 2025

> @ZaneH this is already looking pretty great! Very sorry I missed this in my queue. Do you want me to pick it up and finish it, or would you like to continue this all the way through? Let me know how I can help

@ZaneH wanted to check if you wanted to complete this diff or if you'd prefer someone from the team to pick it up where you left off. Excited for your contribution to mcp-agent!


ZaneH commented Sep 3, 2025

> @ZaneH wanted to check if you wanted to complete this diff or if you'd prefer someone from the team to pick it up where you left off. Excited for your contribution to mcp-agent!

Hey @saqadri I got pretty busy and would prefer if someone else took over this PR! Thanks for all your help.


saqadri commented Sep 4, 2025

Sounds good, thanks @ZaneH !

Development

Successfully merging this pull request may close these issues.

Ollama: Add support for think option
2 participants