(feature request) optional setting to specify model per action #67
Comments
I too miss the model selection feature for actions.
Guys, I actually don't think that this is a good improvement. It will break as soon as you change a provider in AI Providers. You will end up with many actions that might work or might be outdated, so you cannot rely on actions anymore without checking them.
That's a good point. I think it is kind of the plugin user's responsibility to ensure that their prompt doesn't break, but maybe adding a separate section in the AI Providers plugin to set an alias yourself is a good way to make sure this does not happen? I think it could look like this:

AI Providers
- aliases:
  - default:
  - fast model:

And then in Local GPT you would get the option to specify a model to use for each prompt from the items of the alias section, not from the AI Providers section:

Local GPT
- action 1:

I think this would be really valuable, because for some actions I need speed, and for others I need precision. By only allowing one model for all actions, there is no good solution for that. With aliases, you can use different models for different kinds of tasks that require different types of models, while making sure it remains easy to change which model is used for each kind of task, without breaking all your actions.
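To make the alias idea above concrete, here is a minimal sketch of how the two settings shapes and the lookup could fit together. All names here (`ProviderAliases`, `ActionSettings`, `resolveModel`) are hypothetical and not the actual APIs of either plugin:

```typescript
// Hypothetical shapes only; illustrative names, not real plugin APIs.

// AI Providers side: the user maps stable alias names to concrete models.
interface ProviderAliases {
  [alias: string]: string; // e.g. { "default": "gpt-4o", "fast": "llama3.2:3b" }
}

// Local GPT side: each action references an alias, never a concrete model.
interface ActionSettings {
  name: string;
  prompt: string;
  modelAlias?: string; // optional; falls back to the provider default
}

// Resolving an action's model: change the model behind an alias once and
// every action using that alias picks it up, so nothing silently breaks.
function resolveModel(
  action: ActionSettings,
  aliases: ProviderAliases,
  defaultModel: string
): string {
  if (action.modelAlias && aliases[action.modelAlias]) {
    return aliases[action.modelAlias];
  }
  return defaultModel;
}
```

The point of the indirection is that swapping the model behind an alias is a one-line settings change, while actions keep referring to the same stable name.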
It would be useful to be able to choose different models for different actions. For some tasks I need reasoning models; for other tasks they would be much too slow. I imagine simply being able to pass a string as the value of the model argument for the API call. If specified, this setting would override the default model specified in the provider settings (ideally, only if that model is also available from that provider). Of course, this would be an optional setting.
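A minimal sketch of that override logic, assuming a hypothetical `requestCompletion` call and `Provider` shape (neither is the plugins' real API):

```typescript
// Sketch of the optional per-action model override described above.
// Provider, runAction, and requestCompletion are hypothetical names.
declare function requestCompletion(args: {
  model: string;
  prompt: string;
}): Promise<string>;

interface Provider {
  defaultModel: string;
  availableModels: string[];
}

async function runAction(
  provider: Provider,
  prompt: string,
  modelOverride?: string // the optional per-action setting
): Promise<string> {
  // Use the override only if the provider actually offers that model;
  // otherwise fall back to the provider's default, as suggested above.
  const model =
    modelOverride && provider.availableModels.includes(modelOverride)
      ? modelOverride
      : provider.defaultModel;
  return requestCompletion({ model, prompt });
}
```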