Merge branch 'main' into rcp/ToolCalling-P1

rohitprasad15 authored Jan 20, 2025
2 parents f215a95 + 043c0c7 commit 7d4717d
Showing 38 changed files with 1,510 additions and 265 deletions.
13 changes: 12 additions & 1 deletion .env.sample
@@ -18,10 +18,21 @@ GOOGLE_REGION=
GOOGLE_PROJECT_ID=

# Hugging Face token
HUGGINGFACE_TOKEN=
HF_TOKEN=

# Fireworks
FIREWORKS_API_KEY=

# Together AI
TOGETHER_API_KEY=

# WatsonX
WATSONX_SERVICE_URL=
WATSONX_API_KEY=
WATSONX_PROJECT_ID=

# xAI
XAI_API_KEY=

# Sambanova
SAMBANOVA_API_KEY=
4 changes: 2 additions & 2 deletions .github/workflows/run_pytest.yml
@@ -18,7 +18,7 @@ jobs:
run: |
python -m pip install --upgrade pip
pip install poetry
poetry install
poetry install --all-extras --with test
- name: Test with pytest
run: poetry run pytest
run: poetry run pytest -m "not integration"

9 changes: 9 additions & 0 deletions .gitignore
@@ -4,3 +4,12 @@ __pycache__/
env/
.env
.google-adc

# Testing
.coverage

# pyenv
.python-version

.DS_Store
**/.DS_Store
17 changes: 14 additions & 3 deletions README.md
@@ -1,13 +1,14 @@
# aisuite

[![PyPI](https://img.shields.io/pypi/v/aisuite)](https://pypi.org/project/aisuite/)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

Simple, unified interface to multiple Generative AI providers.

`aisuite` makes it easy for developers to use multiple LLMs through a standardized interface. Using an interface similar to OpenAI's, `aisuite` makes it easy to interact with the most popular LLMs and compare their results. It is a thin wrapper around Python client libraries, and allows creators to seamlessly swap out and test responses from different LLM providers without changing their code. Today, the library is primarily focused on chat completions. We will expand it to cover more use cases in the near future.

Currently supported providers are -
OpenAI, Anthropic, Azure, Google, AWS, Groq, Mistral, HuggingFace and Ollama.
OpenAI, Anthropic, Azure, Google, AWS, Groq, Mistral, HuggingFace, Ollama, Sambanova and Watsonx.
To maximize stability, `aisuite` uses either the HTTP endpoint or the SDK for making calls to the provider.

## Installation
@@ -21,11 +22,13 @@ pip install aisuite
```

This installs aisuite along with Anthropic's library.

```shell
pip install 'aisuite[anthropic]'
```

This installs all the provider-specific libraries.

```shell
pip install 'aisuite[all]'
```
@@ -41,12 +44,14 @@ You can use tools like [`python-dotenv`](https://pypi.org/project/python-dotenv/)
Here is a short example of using `aisuite` to generate chat completion responses from gpt-4o and claude-3-5-sonnet.

Set the API keys.

```shell
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
```

Use the Python client.

```python
import aisuite as ai
client = ai.Client()

models = ["openai:gpt-4o", "anthropic:claude-3-5-sonnet-20240620"]

messages = [
    {"role": "system", "content": "Respond in Pirate English."},
    {"role": "user", "content": "Tell me a joke."},
]

for model in models:
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0.75
    )
    print(response.choices[0].message.content)
```

Note that the model name in the create() call uses the format - `<provider>:<model-name>`.
`aisuite` will call the appropriate provider with the right parameters based on the provider value.
For a list of provider values, look at the directory `aisuite/providers/`; the supported providers appear there as files of the form `<provider>_provider.py`. We welcome providers adding support to this library by adding an implementation file in this directory. Please see the section below for how to contribute.
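As an illustration of the naming convention, here is a minimal sketch of how a `<provider>:<model-name>` string splits into its two parts (the variable names are hypothetical, not aisuite's internal code):

```python
# Splitting a "<provider>:<model-name>" identifier on the first colon.
# "anthropic" selects providers/anthropic_provider.py; the remainder is
# passed through to that provider unchanged.
model_identifier = "anthropic:claude-3-5-sonnet-20240620"
provider_key, model_name = model_identifier.split(":", 1)
print(provider_key)  # anthropic
print(model_name)    # claude-3-5-sonnet-20240620
```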
@@ -79,9 +85,10 @@ aisuite is released under the MIT License. You are free to use, modify, and dist

## Contributing

If you would like to contribute, please read our [Contributing Guide](CONTRIBUTING.md) and join our [Discord](https://discord.gg/T6Nvn8ExSb) server!
If you would like to contribute, please read our [Contributing Guide](https://github.com/andrewyng/aisuite/blob/main/CONTRIBUTING.md) and join our [Discord](https://discord.gg/T6Nvn8ExSb) server!

## Adding support for a provider

We have made it easy for a provider or volunteer to add support for a new platform.

### Naming Convention for Provider Modules
@@ -91,20 +98,24 @@ We follow a convention-based approach for loading providers, which relies on str
- The provider's module file must be named in the format `<provider>_provider.py`.
- The class inside this module must follow the format: the provider name with the first letter capitalized, followed by the suffix `Provider`.

#### Examples:
#### Examples

- **Hugging Face**:
The provider class should be defined as:

```python
class HuggingfaceProvider(BaseProvider)
```

in providers/huggingface_provider.py.

- **OpenAI**:
The provider class should be defined as:

```python
class OpenaiProvider(BaseProvider)
```

in providers/openai_provider.py

This convention simplifies the addition of new providers and ensures consistency across provider implementations.
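A minimal sketch of what such convention-based loading can look like (illustrative only, not aisuite's actual loader):

```python
import importlib

def load_provider_class(provider_key: str):
    # "huggingface" -> module aisuite.providers.huggingface_provider
    module = importlib.import_module(f"aisuite.providers.{provider_key}_provider")
    # "huggingface" -> class HuggingfaceProvider, per the convention above
    return getattr(module, f"{provider_key.capitalize()}Provider")
```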
2 changes: 1 addition & 1 deletion aisuite/framework/message.py
@@ -1,4 +1,4 @@
"""Interface to hold contents of api responses when they do not conform to the OpenAI style response"""
"""Interface to hold contents of api responses when they do not confirm to the OpenAI style response"""

from pydantic import BaseModel
from typing import Literal, Optional
2 changes: 1 addition & 1 deletion aisuite/providers/aws_provider.py
@@ -14,7 +14,7 @@ class BedrockConfig:

def __init__(self, **config):
self.region_name = config.get(
"region_name", os.getenv("AWS_REGION_NAME", "us-west-2")
"region_name", os.getenv("AWS_REGION", "us-west-2")
)

def create_client(self):
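The change above gives `BedrockConfig` a three-step lookup: an explicit `region_name` in the config wins, then the standard `AWS_REGION` environment variable, then the `us-west-2` default. A standalone sketch of that precedence:

```python
import os

def resolve_region(config: dict) -> str:
    # Explicit config value, then AWS_REGION from the environment,
    # then the hard-coded default, in the same order as BedrockConfig above.
    return config.get("region_name", os.getenv("AWS_REGION", "us-west-2"))

print(resolve_region({"region_name": "eu-west-1"}))  # eu-west-1
print(resolve_region({}))  # $AWS_REGION if set, otherwise us-west-2
```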
37 changes: 37 additions & 0 deletions aisuite/providers/cohere_provider.py
@@ -0,0 +1,37 @@
import os
import cohere

from aisuite.framework import ChatCompletionResponse
from aisuite.provider import Provider


class CohereProvider(Provider):
def __init__(self, **config):
"""
Initialize the Cohere provider with the given configuration.
Pass the entire configuration dictionary to the Cohere client constructor.
"""
# Ensure API key is provided either in config or via environment variable
config.setdefault("api_key", os.getenv("CO_API_KEY"))
if not config["api_key"]:
raise ValueError(
" API key is missing. Please provide it in the config or set the CO_API_KEY environment variable."
)
self.client = cohere.ClientV2(**config)

def chat_completions_create(self, model, messages, **kwargs):
response = self.client.chat(
model=model,
messages=messages,
**kwargs # Pass any additional arguments to the Cohere API
)

return self.normalize_response(response)

def normalize_response(self, response):
"""Normalize the reponse from Cohere API to match OpenAI's response format."""
normalized_response = ChatCompletionResponse()
normalized_response.choices[0].message.content = response.message.content[
0
].text
return normalized_response
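Because `normalize_response` reshapes Cohere's reply into the OpenAI-style structure, calling this provider through aisuite reads the same as any other. A hedged usage sketch (the model name is an example, not a pinned choice):

```python
import aisuite as ai

client = ai.Client()  # picks up CO_API_KEY from the environment, per the check above

response = client.chat.completions.create(
    model="cohere:command-r-plus",  # example Cohere chat model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)  # same access path as every provider
```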
34 changes: 34 additions & 0 deletions aisuite/providers/deepseek_provider.py
@@ -0,0 +1,34 @@
import openai
import os
from aisuite.provider import Provider, LLMError


class DeepseekProvider(Provider):
def __init__(self, **config):
"""
Initialize the DeepSeek provider with the given configuration.
Pass the entire configuration dictionary to the OpenAI client constructor.
"""
# Ensure API key is provided either in config or via environment variable
config.setdefault("api_key", os.getenv("DEEPSEEK_API_KEY"))
if not config["api_key"]:
raise ValueError(
"DeepSeek API key is missing. Please provide it in the config or set the OPENAI_API_KEY environment variable."
)
config["base_url"] = "https://api.deepseek.com"

# NOTE: We could choose to remove the above lines for api_key, since OpenAI will
# automatically infer certain values from environment variables
# (e.g. OPENAI_API_KEY, OPENAI_ORG_ID, OPENAI_PROJECT_ID). The exception is
# OPENAI_BASE_URL, which has to be the DeepSeek URL.

# Pass the entire config to the OpenAI client constructor
self.client = openai.OpenAI(**config)

def chat_completions_create(self, model, messages, **kwargs):
# Any exception raised by OpenAI will be returned to the caller.
# Maybe we should catch them and raise a custom LLMError.
return self.client.chat.completions.create(
model=model,
messages=messages,
**kwargs # Pass any additional arguments to the OpenAI API
)
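If the environment variable is inconvenient, the key can instead be passed through the client's per-provider configuration. A sketch, assuming aisuite's `provider_configs` argument (the key string is a placeholder):

```python
import aisuite as ai

# Supplies the key directly instead of via DEEPSEEK_API_KEY.
client = ai.Client(provider_configs={"deepseek": {"api_key": "your-deepseek-api-key"}})

response = client.chat.completions.create(
    model="deepseek:deepseek-chat",  # example model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```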
4 changes: 2 additions & 2 deletions aisuite/providers/huggingface_provider.py
@@ -21,10 +21,10 @@ def __init__(self, **config):
The token is fetched from the config or environment variables.
"""
# Ensure API key is provided either in config or via environment variable
self.token = config.get("token") or os.getenv("HUGGINGFACE_TOKEN")
self.token = config.get("token") or os.getenv("HF_TOKEN")
if not self.token:
raise ValueError(
"Hugging Face token is missing. Please provide it in the config or set the HUGGINGFACE_TOKEN environment variable."
"Hugging Face token is missing. Please provide it in the config or set the HF_TOKEN environment variable."
)

# Initialize the InferenceClient with the specified model and timeout if provided
31 changes: 31 additions & 0 deletions aisuite/providers/nebius_provider.py
@@ -0,0 +1,31 @@
import os
from aisuite.provider import Provider
from openai import Client


BASE_URL = "https://api.studio.nebius.ai/v1"


class NebiusProvider(Provider):
def __init__(self, **config):
"""
Initialize the Nebius AI Studio provider with the given configuration.
Pass the entire configuration dictionary to the OpenAI client constructor.
"""
# Ensure API key is provided either in config or via environment variable
config.setdefault("api_key", os.getenv("NEBIUS_API_KEY"))
if not config["api_key"]:
raise ValueError(
"Nebius AI Studio API key is missing. Please provide it in the config or set the NEBIUS_API_KEY environment variable. You can get your API key at https://studio.nebius.ai/settings/api-keys"
)

config["base_url"] = BASE_URL
# Pass the entire config to the OpenAI client constructor
self.client = Client(**config)

def chat_completions_create(self, model, messages, **kwargs):
return self.client.chat.completions.create(
model=model,
messages=messages,
**kwargs # Pass any additional arguments to the Nebius API
)
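Like the DeepSeek and SambaNova providers, Nebius exposes an OpenAI-compatible endpoint, so the provider is essentially a stock OpenAI client with the base URL swapped. The equivalence, sketched directly against the `openai` package:

```python
import os
from openai import Client

# What NebiusProvider constructs under the hood.
client = Client(
    api_key=os.getenv("NEBIUS_API_KEY"),
    base_url="https://api.studio.nebius.ai/v1",
)
```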
30 changes: 30 additions & 0 deletions aisuite/providers/sambanova_provider.py
@@ -0,0 +1,30 @@
import os
from aisuite.provider import Provider
from openai import OpenAI


class SambanovaProvider(Provider):
def __init__(self, **config):
"""
Initialize the SambaNova provider with the given configuration.
Pass the entire configuration dictionary to the OpenAI client constructor.
"""
# Ensure API key is provided either in config or via environment variable
config.setdefault("api_key", os.getenv("SAMBANOVA_API_KEY"))
if not config["api_key"]:
raise ValueError(
"Sambanova API key is missing. Please provide it in the config or set the SAMBANOVA_API_KEY environment variable."
)

config["base_url"] = "https://api.sambanova.ai/v1/"
# Pass the entire config to the OpenAI client constructor
self.client = OpenAI(**config)

def chat_completions_create(self, model, messages, **kwargs):
# Any exception raised by Sambanova will be returned to the caller.
# Maybe we should catch them and raise a custom LLMError.
return self.client.chat.completions.create(
model=model,
messages=messages,
**kwargs # Pass any additional arguments to the Sambanova API
)
39 changes: 39 additions & 0 deletions aisuite/providers/watsonx_provider.py
@@ -0,0 +1,39 @@
from aisuite.provider import Provider
import os
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference
from aisuite.framework import ChatCompletionResponse


class WatsonxProvider(Provider):
def __init__(self, **config):
self.service_url = config.get("service_url") or os.getenv("WATSONX_SERVICE_URL")
self.api_key = config.get("api_key") or os.getenv("WATSONX_API_KEY")
self.project_id = config.get("project_id") or os.getenv("WATSONX_PROJECT_ID")

if not self.service_url or not self.api_key or not self.project_id:
raise EnvironmentError(
"Missing one or more required WatsonX environment variables: "
"WATSONX_SERVICE_URL, WATSONX_API_KEY, WATSONX_PROJECT_ID. "
"Please refer to the setup guide: /guides/watsonx.md."
)

def chat_completions_create(self, model, messages, **kwargs):
model = ModelInference(
model_id=model,
credentials=Credentials(
api_key=self.api_key,
url=self.service_url,
),
project_id=self.project_id,
)

res = model.chat(messages=messages, params=kwargs)
return self.normalize_response(res)

def normalize_response(self, response):
openai_response = ChatCompletionResponse()
openai_response.choices[0].message.content = response["choices"][0]["message"][
"content"
]
return openai_response
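WatsonX is the only provider in this commit that requires three values rather than one. They can come from the environment (as in `.env.sample` above) or from the provider config; a sketch, assuming the same `provider_configs` pattern (the service URL is an example region endpoint, and the other values are placeholders):

```python
import aisuite as ai

# Mirrors the three values checked in WatsonxProvider.__init__ above.
client = ai.Client(
    provider_configs={
        "watsonx": {
            "service_url": "https://us-south.ml.cloud.ibm.com",  # example endpoint
            "api_key": "your-ibm-cloud-api-key",
            "project_id": "your-watsonx-project-id",
        }
    }
)
```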
