Add support for Ollama's `think` option #363
base: main
Conversation
This commit is not a complete solution, but it is most of the way there. The final step is likely to stop using the openai Python package when talking to Ollama; Ollama provides a separate Python package that adds the `think` argument.
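For reference, a minimal sketch of what that could look like with Ollama's own client. The model name is illustrative, and this assumes a recent version of the `ollama` package that exposes `think`:

```python
import ollama

# Ollama's own client accepts think directly, unlike the openai package
response = ollama.chat(
    model="qwen3",  # illustrative; any thinking-capable model
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    think=True,  # ask the model to return its reasoning separately
)

# The reasoning and the final answer come back as separate fields
print("Thinking:", response.message.thinking)
print("Answer:", response.message.content)
```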
```python
# For now, we'll skip it to avoid the OpenAI client error
# The think parameter might need to be passed differently to Ollama's API
response = openai_client.chat.completions.create(**payload)
```
Calling this with `think` in the payload doesn't work. We may need to use Ollama's Python package, or make an HTTP request directly for a more manual approach.
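For the manual approach, a hedged sketch of posting straight to Ollama's `/api/chat` endpoint, which accepts a top-level `think` field. The host, port, and model name are assumptions here:

```python
import requests

# POST directly to the local Ollama server, bypassing the openai package
resp = requests.post(
    "http://localhost:11434/api/chat",  # default Ollama host/port
    json={
        "model": "qwen3",  # illustrative model name
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
        "think": True,
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()

# The response message carries content and thinking as separate fields
message = resp.json()["message"]
print("Thinking:", message.get("thinking"))
print("Answer:", message["content"])
```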
Comment out:

```python
think = payload.pop('think', None)
```

to test what happens when `think` is included. For me it shows:

```
[mcp_agent.executor.executor] Error executing task: Completions.create() got an unexpected keyword argument 'think'
[mcp_agent.workflows.llm.augmented_llm_openai.daily_briefer] Error: Completions.create() got an unexpected keyword argument 'think'
```

I believe this comes from the OpenAI Python package as a form of argument validation.
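One possible workaround, untested against this branch: the v1 openai SDK rejects unknown keyword arguments but forwards anything passed via `extra_body` into the request JSON, so `think` could be tunneled through that way. Whether Ollama's OpenAI-compatible endpoint actually honors the field is a separate question that would still need verifying:

```python
# Pop think out of the payload so it never hits create()'s signature check,
# then smuggle it back in via extra_body, which the SDK merges into the JSON body
think = payload.pop("think", None)
if think is not None:
    payload["extra_body"] = {"think": think}
response = openai_client.chat.completions.create(**payload)
```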
We can exclude `think` from the request in the non-Ollama cases. I would even be inclined to duplicate more of the openai class into the ollama one to get this working cleanly.
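A minimal sketch of that guard; the `provider` check is hypothetical and would depend on how mcp-agent distinguishes backends:

```python
# Hypothetical provider check: strip think before the call unless we're on Ollama,
# since OpenAI's API has no such parameter and the SDK rejects unknown kwargs
if provider != "ollama":
    payload.pop("think", None)
response = openai_client.chat.completions.create(**payload)
```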
@ZaneH this is already looking pretty great! Very sorry I missed this in my queue. Do you want me to pick it up and finish it, or would you like to continue this all the way through? Let me know how I can help
@ZaneH wanted to check whether you'd like to complete this diff or if you'd prefer someone from the team to pick it up where you left off. Excited for your contribution to mcp-agent!
Sounds good, thanks @ZaneH!
What changed?
Adds the `think` option for Ollama.
Resolves: #362