
Conversation

@mengmeng-bot

Added an example of how to set up a local vLLM server and connect it to mini-swe-agent.


Then we need to add the vLLM-hosted model's information to [this file](https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json). LiteLLM should already be installed from setting up mini-swe-agent; if not, run `pip install litellm`. Then go into the environment's `site-packages` directory, find the LiteLLM package, locate this file, and add:
```json
"hosted_vllm/Qwen/Qwen3-Coder-30B-A3B-Instruct": {
```
Member

Thanks for opening the PR! 👍

Why do we need this twice? The `custom_llm_provider` file above should have the exact same function (it exists exactly so that you don't have to edit `model_prices_and_context_window.json`). Can you check if that works?

Basically this extra file is for additional models
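
For context on the alternative: LiteLLM also exposes `litellm.register_model()`, which merges model metadata into its cost map at runtime instead of requiring edits to the bundled JSON in `site-packages`. Whether the `custom_llm_provider` file works this way is an assumption here; the sketch only illustrates the general mechanism:

```python
import litellm

# Register model metadata at runtime rather than editing
# model_prices_and_context_window.json inside site-packages.
# Field values are illustrative placeholders.
litellm.register_model({
    "hosted_vllm/Qwen/Qwen3-Coder-30B-A3B-Instruct": {
        "max_tokens": 32768,
        "input_cost_per_token": 0.0,
        "output_cost_per_token": 0.0,
        "litellm_provider": "hosted_vllm",
        "mode": "chat",
    }
})
```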

