System Info
main (d45137a)
Information
- The official example scripts
- My own modified scripts
🐛 Describe the bug
$ OPENAI_API_KEY=BOGUS uv run llama stack run --providers inference=remote::openai
...
$ curl http://localhost:8321/v1/models | jq
{
"data": []
}
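For comparison, querying the OpenAI API directly with the same bogus key fails with an authentication error (a minimal check, assuming the standard https://api.openai.com/v1/models endpoint), so the failure appears to be swallowed inside the stack rather than surfaced:
$ curl -s -o /dev/null -w "%{http_code}\n" \
    -H "Authorization: Bearer BOGUS" \
    https://api.openai.com/v1/models
401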
Error logs
There are no errors; the server starts cleanly and the request returns 200:
INFO 2025-11-02 07:09:05,167 llama_stack.providers.utils.inference.inference_store:74 inference: Write queue disabled for SQLite to avoid concurrency issues
INFO 2025-11-02 07:09:05,301 uvicorn.error:84 uncategorized: Started server process [590303]
INFO 2025-11-02 07:09:05,303 uvicorn.error:48 uncategorized: Waiting for application startup.
INFO 2025-11-02 07:09:05,309 llama_stack.core.server.server:172 core::server: Starting up Llama Stack server (version: 0.3.0)
INFO 2025-11-02 07:09:05,311 llama_stack.core.stack:495 core: starting registry refresh task
INFO 2025-11-02 07:09:05,313 uvicorn.error:62 uncategorized: Application startup complete.
INFO 2025-11-02 07:09:05,315 uvicorn.error:216 uncategorized: Uvicorn running on http://0.0.0.0:8321 (Press CTRL+C to quit)
INFO 2025-11-02 07:09:15,050 uvicorn.access:473 uncategorized: 127.0.0.1:41874 - "GET /v1/models HTTP/1.1" 200
Expected behavior
Errors should appear in the stack log when the remote::openai provider cannot list models (e.g., because OPENAI_API_KEY is invalid), rather than /v1/models silently returning an empty list.