Right now, unless I'm mistaken, there is no validation that the LLM can actually be reached when its configuration is selected on the settings page. If an Experimenter makes any mistake (e.g. a typo in the Gemini key), they will not find out until the experiment has started, and even then the error is only logged to the console. There should probably be a small UI element that shows the status of the current LLM instance. Alternatively, a check could be performed on the settings page, either automatically or on request by the Experimenter (e.g. a "Test me!" button below the LLM section).
The check is straightforward to implement in the backend; for Ollama instances, we can reuse the code in `ollama.api.test.ts`. I'm not familiar with Gemini's API, but I assume the procedure would be similar for it and for the OpenAI API (perhaps with a warning that the test uses a minuscule amount of credits). The only complication is that we would need to define an LLM interface to enforce this check across all models, turning the current function-based approaches in `functions/src/api` into classes.
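As a rough illustration of that interface approach, here is a minimal sketch (the class names, the `/api/tags` health check, the hard-coded Gemini model, and the result shape are all assumptions for this example, not existing code in the repo):

```typescript
import {GoogleGenerativeAI} from '@google/generative-ai';

// Shared contract each model backend would implement. The check resolves
// instead of throwing so the settings page can display the error string.
export interface LLMProvider {
  testConnection(): Promise<{ok: boolean; error?: string}>;
}

export class OllamaProvider implements LLMProvider {
  constructor(private readonly serverUrl: string) {}

  async testConnection(): Promise<{ok: boolean; error?: string}> {
    try {
      // /api/tags is a cheap Ollama endpoint that just lists local models.
      const res = await fetch(`${this.serverUrl}/api/tags`);
      return res.ok
        ? {ok: true}
        : {ok: false, error: `Ollama responded with HTTP ${res.status}`};
    } catch (e) {
      return {ok: false, error: String(e)};
    }
  }
}

export class GeminiProvider implements LLMProvider {
  constructor(private readonly apiKey: string) {}

  async testConnection(): Promise<{ok: boolean; error?: string}> {
    try {
      // Smallest possible request; this is where the "minuscule amount of
      // credits" warning would apply.
      const model = new GoogleGenerativeAI(this.apiKey).getGenerativeModel({
        model: 'gemini-1.5-flash',
      });
      await model.generateContent('ping');
      return {ok: true};
    } catch (e) {
      return {ok: false, error: String(e)};
    }
  }
}
```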
I am recommending this feature because it's probably something the Athens team will need for the final/grand experiment, and because I would personally find it very useful while debugging other issues. Let me know what you think!
Yes, I think this is a good idea! I would recommend a "test me" button triggered by a manual click (easier/cleaner than automatically checking every time something new is typed).
If you have bandwidth to take this on, feel free! (Otherwise someone else can pick it up - I'll move this request to the "agents" milestone now.)
We have a "test" button for agents in the agent-refactor branch that achieves this purpose. (Maybe we can add the same button/endpoint call to the experimenter settings panel too?)
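For reference, a hypothetical sketch of what the settings-panel side of that call could look like, assuming the frontend reaches the backend through Firebase callable functions; the `testLLMConnection` endpoint name and the payload shape are invented for this example:

```typescript
import {getFunctions, httpsCallable} from 'firebase/functions';

// Invented shape of the settings the panel would send to the endpoint.
interface LLMTestRequest {
  provider: 'ollama' | 'gemini';
  apiKey?: string;
  serverUrl?: string;
}

// Click handler for the proposed "Test me!" button.
async function onTestClick(config: LLMTestRequest): Promise<string> {
  const testLLM = httpsCallable<LLMTestRequest, {ok: boolean; error?: string}>(
    getFunctions(),
    'testLLMConnection',
  );
  const {data} = await testLLM(config);
  return data.ok ? 'Connection OK' : `Connection failed: ${data.error}`;
}
```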