
Experimenter should know if an LLM endpoint can be reached #433

Open

dimits-ts opened this issue Feb 2, 2025 · 3 comments
dimits-ts (Collaborator) commented Feb 2, 2025

Right now, unless I'm mistaken, there is no validation of whether the LLM can be reached when its configuration is selected on the settings page. If an Experimenter makes any mistake (e.g. a typo in the Gemini key), they will not know until the experiment has started, and even then the error is only logged to the console. There should probably be a small UI element that shows the status of the current LLM instance. Alternatively, a check could be performed on the settings page, either automatically or on request by the Experimenter (e.g. a "Test me!" button below the LLM section).
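
A rough sketch of what that button's handler could look like (the `/api/llm/ping` endpoint name and the status element are hypothetical, just to illustrate the flow):

```typescript
// Hypothetical "Test me!" click handler for the settings page.
// `/api/llm/ping` is an illustrative endpoint name, not an existing route.
async function onTestLlmClick(statusEl: HTMLElement): Promise<void> {
  statusEl.textContent = 'Checking…';
  try {
    const res = await fetch('/api/llm/ping');
    const { ok, error } = await res.json();
    statusEl.textContent = ok ? 'LLM reachable ✅' : `Unreachable: ${error} ❌`;
  } catch (e) {
    // Failure reaching our own backend, not the LLM itself.
    statusEl.textContent = `Request failed: ${e}`;
  }
}
```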

The check is straightforward to implement in the backend; for Ollama instances, we can reuse the code in ollama.api.test.ts. I'm not familiar with Gemini's API, but I suppose it would be a similar procedure for it and for the OpenAI API (maybe with a warning that the test uses a minuscule amount of credits). The only complication is that we would need to define an LLM type-interface to enforce this check across all models, transforming the current function-based approaches in functions/src/api into classes.
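
For example, the interface could look something like this (all names here are hypothetical rather than taken from functions/src/api, and the Ollama check assumes the server's `/api/tags` model-listing route):

```typescript
// Hypothetical sketch of the proposed LLM type-interface.
interface LlmClient {
  // Cheap reachability/credentials check, suitable for a "Test me!" button.
  testConnection(): Promise<{ ok: boolean; error?: string }>;
  generate(prompt: string): Promise<string>;
}

class OllamaClient implements LlmClient {
  constructor(private readonly serverUrl: string) {}

  async testConnection(): Promise<{ ok: boolean; error?: string }> {
    try {
      // Listing models is a cheap request that fails fast if the
      // server URL is wrong or the host is down.
      const res = await fetch(`${this.serverUrl}/api/tags`, {
        signal: AbortSignal.timeout(5000),
      });
      return res.ok ? { ok: true } : { ok: false, error: `HTTP ${res.status}` };
    } catch (e) {
      return { ok: false, error: String(e) };
    }
  }

  async generate(prompt: string): Promise<string> {
    // The existing function-based Ollama call would move here.
    throw new Error('not part of this sketch');
  }
}
```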

I am recommending this feature because it's probably something the Athens team would need in the final/grand experiment, and because I would personally find it very useful while debugging other issues. Let me know what you think!

vivtsai (Collaborator) commented Feb 3, 2025

Yes, I think this is a good idea! I would recommend a "test me" button with a manual click (easier/cleaner than automatically checking every time something new is typed).

If you have bandwidth to take this on, feel free! (Otherwise someone else can pick it up - I'll move this request to the "agents" milestone now).

@vivtsai vivtsai added this to the ✨ Add full-experiment agents milestone Feb 3, 2025
dimits-ts (Collaborator, Author) commented

Of course, I can take it on as soon as I finish the survey automation feature!

vivtsai (Collaborator) commented Feb 27, 2025

We have a "test" button for agents in the agent-refactor branch that achieves this purpose. (Maybe we can add the same button/endpoint call to the experimenter settings panel too?)
