
[Bug]: Mismatch in logic between v1/model/info endpoint and cost calculations #16884

@jlan-nl

Description

What happened?

Example:

When calling LiteLLM with the gpt-5.1-chat model, cost tracking works fine. At litellm/llms/azure/cost_calculation.py:27, get_model_info() is called with model='azure/gpt-5.1-chat', which returns the correct costs.
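
For illustration, a minimal sketch of the working path, assuming the public litellm.get_model_info() lookup and the standard cost-map keys (the actual call site may pass more context):

```python
import litellm

# Cost-calculation path: the provider prefix is kept, so the lookup
# resolves the "azure/gpt-5.1-chat" entry in the cost map.
info = litellm.get_model_info(model="azure/gpt-5.1-chat")
print(info["input_cost_per_token"], info["output_cost_per_token"])
```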

But when you fetch the costs of the gpt-5.1-chat model through the v1/model/info endpoint, the same function is called (in get_litellm_model_info(), around litellm/proxy/proxy_server.py:3560) with the azure/ prefix stripped. Since there is no entry in the cost-map JSON for 'gpt-5.1-chat' on its own, the v1/model/info endpoint doesn't find any costs for the GPT 5.1 Chat model.
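
The failing path can be reproduced with the bare model name (a sketch; whether the endpoint swallows the exception or checks for a missing value is not confirmed here):

```python
import litellm

# Proxy path: get_litellm_model_info() strips the "azure/" prefix
# before calling get_model_info(), so the key no longer matches.
try:
    litellm.get_model_info(model="gpt-5.1-chat")
except Exception as err:
    # get_model_info() raises for unmapped models; the endpoint
    # then returns the model entry without any cost fields.
    print(f"lookup failed: {err}")
```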

We haven't seen this issue with other models yet, but it looks like a broader problem: the two code paths don't build the lookup key the same way, so any model whose cost-map entry is keyed only by its provider-prefixed name could be affected. A possible fix direction is sketched below.
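
One possible direction for keeping the two paths in sync, purely as a sketch: lookup_model_cost below is a hypothetical helper, not existing LiteLLM code, that tries the provider-prefixed key first and falls back to the bare name so both call sites resolve the same entry.

```python
from typing import Optional

import litellm


def lookup_model_cost(model: str, provider: Optional[str] = None) -> Optional[dict]:
    """Hypothetical helper: resolve model info the same way everywhere.

    Tries the provider-prefixed key first (matching the cost-calculation
    path), then falls back to the bare model name.
    """
    candidates = [model]
    if provider and not model.startswith(f"{provider}/"):
        candidates.insert(0, f"{provider}/{model}")
    for candidate in candidates:
        try:
            return litellm.get_model_info(model=candidate)
        except Exception:
            continue
    return None
```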

Relevant log output

Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.77.5

Twitter / LinkedIn details

No response
