
Standardizing the LLM parameters? #225

@AbdealiLoKo

Hi, I am trying out aisuite and was wondering if there is any interest in standardizing LLM params like temperature, top_k, etc.
I notice that currently every provider has its own independent set of params.

Bedrock -

    INFERENCE_PARAMETERS = ["maxTokens", "temperature", "topP", "stopSequences"]

Ollama -

    def chat_completions_create(self, model, messages, **kwargs):

etc.
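
To make the ask concrete, here is a rough sketch of what a shared parameter-translation layer could look like. This is not aisuite's actual API: the `COMMON_TO_PROVIDER` table and `translate_params` helper are hypothetical, and the Ollama-side names (e.g. `num_predict`) are my assumption about its options.

```python
# Hypothetical sketch (not aisuite code): map a common set of inference
# params onto each provider's own parameter names before dispatching.

COMMON_TO_PROVIDER = {
    "bedrock": {"max_tokens": "maxTokens", "temperature": "temperature",
                "top_p": "topP", "stop": "stopSequences"},
    # Assumed Ollama option names; adjust to whatever the client actually accepts.
    "ollama": {"max_tokens": "num_predict", "temperature": "temperature",
               "top_p": "top_p", "stop": "stop"},
}

def translate_params(provider: str, **common_kwargs) -> dict:
    """Translate standardized kwargs into provider-specific parameter names."""
    mapping = COMMON_TO_PROVIDER[provider]
    translated = {}
    for name, value in common_kwargs.items():
        if name not in mapping:
            raise ValueError(f"{name!r} is not supported for provider {provider!r}")
        translated[mapping[name]] = value
    return translated

# The same call site would then work for both providers:
print(translate_params("bedrock", temperature=0.2, max_tokens=256))
# {'temperature': 0.2, 'maxTokens': 256}
print(translate_params("ollama", temperature=0.2, max_tokens=256))
# {'temperature': 0.2, 'num_predict': 256}
```

With something like this, each provider class would only need to declare its mapping (and raise on params it cannot support), rather than every provider defining an unrelated set of accepted kwargs.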
