Description
Issue
Version 0.86.1
Error message:
litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=local/qwen3-coder:30b
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more:
https://docs.litellm.ai/docs/providers
The error message makes no sense, and the "providers" page at the given URL doesn't clarify it either.
The answers at https://stackoverflow.com/a/79119200/3849157 are not helpful.
https://aider.chat/docs/llms/warnings.html isn't helpful here either.
Example usage:
aider --model local/qwen3-coder:30b
It starts fine, but generates the error above on pretty much any request.
Model metadata config:
cat $HOME/.aider.model.metadata.json
{
  "local/qwen3-coder:30b": {
    "max_tokens": 16384,
    "max_output_tokens": 16384,
    "max_input_tokens": 65535,
    "input_cost_per_token": 0,
    "output_cost_per_token": 0,
    "supports_vision": "false",
    "litellm_provider": "local",
    "mode": "chat"
  }
}
It turns out I was setting OPENAI_API_BASE instead of OLLAMA_API_BASE. I was also passing the base URL with a trailing /v1, and then with a trailing /, both of which made it croak. On top of that I had to replace local with ollama_chat and export OLLAMA_API_BASE as directed, but it isn't clear why. That last part in particular feels magical.
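For reference, this is roughly the combination that ended up working for me (the host and port are my local Ollama instance; adjust for yours):
export OLLAMA_API_BASE=http://192.168.8.101:11434   # no trailing /v1 and no trailing slash
aider --model ollama_chat/qwen3-coder:30b            # provider prefix ollama_chat instead of local
I also renamed the key in $HOME/.aider.model.metadata.json to match the ollama_chat/ prefix.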
Related question: wouldn't it make sense for the model metadata to allow a default API base? e.g.
...
"default_api_base": "http://192.168.8.101:11434",
...
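Spelled out, the (corrected) entry might then look something like this. Note that default_api_base is purely hypothetical; as far as I can tell neither aider nor litellm reads such a field today:
{
  "ollama_chat/qwen3-coder:30b": {
    "max_tokens": 16384,
    "max_output_tokens": 16384,
    "max_input_tokens": 65535,
    "input_cost_per_token": 0,
    "output_cost_per_token": 0,
    "litellm_provider": "ollama_chat",
    "mode": "chat",
    "default_api_base": "http://192.168.8.101:11434"
  }
}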
Version and model info
No response