
(feature request) optional setting to specify model per action #67

Open

omfgroflmfaol opened this issue Feb 27, 2025 · 4 comments

@omfgroflmfaol

It would be useful to be able to choose different models for different actions. For some tasks I need reasoning models; for others, they would be much too slow. I imagine simply being able to pass a string as the value of the model argument for the API call. If specified, this setting would override the default model specified in the provider settings (ideally, only if that model is also available from the provider). Of course, this would be an optional setting.

@Sotis-Oph

I too miss the model selection feature for actions.

@pfrankov
Owner

Guys, I actually don't think this is a good improvement. It will break as soon as you change a provider in AI Providers. You will end up with many actions that may or may not work, or may be outdated, so you cannot rely on actions anymore without checking them.

@omfgroflmfaol
Author

omfgroflmfaol commented Feb 28, 2025

That's a good point.
But then again, this is kind of what aliases are for. API providers usually assign aliases such as "large", "fast", or "reasoning" to their available models. The provider updates these aliases whenever a new model becomes the largest, the fastest, or the best reasoner, so requests can always target whichever model currently fills that role, even as the specific available models change. Ollama also supports creating aliases for models via the command `ollama cp name_of_model name_of_alias`. (Although the command is named "copy", it does not actually copy the model; it only creates a new pointer to it.)

I think it is kind of the plugin user's responsibility to ensure that their prompt doesn't break, but maybe adding a separate section to the AI Providers plugin for defining your own aliases would be a good way to make sure this does not happen. I think it could look like this:

AI providers

model 1
model 2
model 3
model 4
model 5

aliases:

default fast model: model 3
default large model: model 1
default reasoning model: model 1

And then in Local GPT you would get the option to specify the model for each action from the items of the aliases section, not from the AI Providers section:

Local GPT

action 1: large model
action 2: fast model

I think this would be really valuable, because for some actions I need speed, and for others I need precision. With only one model for all actions, there is no good solution for that. With aliases, you can use different models for different kinds of tasks, which require different types of models, while keeping it easy to change which model is used for each kind of task without breaking all your actions.
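The indirection described above could be sketched roughly like this. This is a hypothetical illustration, not the plugin's actual API: all alias names, model ids, and action names here are made up, and the real settings structure in AI Providers / Local GPT would look different.

```typescript
// Hypothetical sketch: actions reference aliases, never concrete models.
// Changing which model an alias points to updates all actions at once.

// Aliases defined once in the (hypothetical) AI Providers aliases section.
const aliases: Record<string, string> = {
  fast: "model-3",
  large: "model-1",
  reasoning: "model-1",
};

// The provider-wide default, used when an action names no alias.
const defaultModel = "model-2";

// Each action may optionally name an alias.
interface Action {
  name: string;
  modelAlias?: string;
}

// Resolve an action to a concrete model: alias lookup, else default.
function resolveModel(action: Action): string {
  if (action.modelAlias !== undefined && action.modelAlias in aliases) {
    return aliases[action.modelAlias];
  }
  return defaultModel;
}

const summarize: Action = { name: "Summarize", modelAlias: "fast" };
const prove: Action = { name: "Analyze argument", modelAlias: "reasoning" };
const plain: Action = { name: "Fix grammar" };

console.log(resolveModel(summarize)); // "model-3"
console.log(resolveModel(prove));     // "model-1"
console.log(resolveModel(plain));     // "model-2" (default fallback)
```

The point of the extra lookup is that swapping `aliases.reasoning` to a newly released model re-targets every action that uses the `reasoning` alias in one place, instead of editing each action.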
