
Commit 16d39ba
Merge branch 'main' into main
2 parents 7d4e093 + 1d1b83e

19 files changed: +531 −191 lines

.env.sample — 7 additions, 1 deletion

@@ -18,10 +18,16 @@ GOOGLE_REGION=
 GOOGLE_PROJECT_ID=

 # Hugging Face token
-HUGGINGFACE_TOKEN=
+HF_TOKEN=

 # Fireworks
 FIREWORKS_API_KEY=

 # Together AI
 TOGETHER_API_KEY=
+
+# xAI
+XAI_API_KEY=
+
+# Sambanova
+SAMBANOVA_API_KEY=
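The keys above live in a dotenv-style file. As a hedged aside (not part of this commit), a minimal loader for such a file can be sketched in Python; real projects typically use `python-dotenv`, and this illustrative parser ignores quoting rules and always overwrites existing variables:

```python
import os

def load_env(lines):
    """Illustrative .env parser: skip comments/blanks, split on the first '='.
    Unlike python-dotenv's default, this overwrites existing variables."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip()

# Feed it a fragment shaped like the .env.sample above (demo values only).
load_env(["# Hugging Face token", "HF_TOKEN=hf_demo_token", "", "XAI_API_KEY=xai_demo_key"])
hf_token = os.getenv("HF_TOKEN")
xai_key = os.getenv("XAI_API_KEY")
```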

.github/workflows/run_pytest.yml — 1 addition, 1 deletion

@@ -18,7 +18,7 @@ jobs:
         run: |
           python -m pip install --upgrade pip
           pip install poetry
-          poetry install
+          poetry install --with test
       - name: Test with pytest
         run: poetry run pytest
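`poetry install --with test` only works if `pyproject.toml` declares a `test` dependency group. The fragment below is a hedged sketch of what such a group might look like; the actual packages and versions in the repo are not shown in this diff:

```toml
# Hypothetical pyproject.toml fragment: an optional "test" group,
# installed only when --with test is passed.
[tool.poetry.group.test]
optional = true

[tool.poetry.group.test.dependencies]
pytest = "^8.0"
```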

.gitignore — 6 additions, 0 deletions

@@ -4,3 +4,9 @@ __pycache__/
 env/
 .env
 .google-adc
+
+# Testing
+.coverage
+
+# pyenv
+.python-version

README.md — 3 additions, 2 deletions

@@ -1,13 +1,14 @@
 # aisuite

+[![PyPI](https://img.shields.io/pypi/v/aisuite)](https://pypi.org/project/aisuite/)
 [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

 Simple, unified interface to multiple Generative AI providers.

 `aisuite` makes it easy for developers to use multiple LLM through a standardized interface. Using an interface similar to OpenAI's, `aisuite` makes it easy to interact with the most popular LLMs and compare the results. It is a thin wrapper around python client libraries, and allows creators to seamlessly swap out and test responses from different LLM providers without changing their code. Today, the library is primarily focussed on chat completions. We will expand it cover more use cases in near future.

 Currently supported providers are -
-OpenAI, Anthropic, Azure, Google, AWS, Groq, Mistral, HuggingFace and Ollama.
+OpenAI, Anthropic, Azure, Google, AWS, Groq, Mistral, HuggingFace Ollama and Sambanova.
 To maximize stability, `aisuite` uses either the HTTP endpoint or the SDK for making calls to the provider.

 ## Installation

@@ -79,7 +80,7 @@ aisuite is released under the MIT License. You are free to use, modify, and dist

 ## Contributing

-If you would like to contribute, please read our [Contributing Guide](CONTRIBUTING.md) and join our [Discord](https://discord.gg/T6Nvn8ExSb) server!
+If you would like to contribute, please read our [Contributing Guide](https://github.com/andrewyng/aisuite/blob/main/CONTRIBUTING.md) and join our [Discord](https://discord.gg/T6Nvn8ExSb) server!

 ## Adding support for a provider
 We have made easy for a provider or volunteer to add support for a new platform.

aisuite/framework/message.py — 1 addition, 1 deletion

@@ -1,4 +1,4 @@
-"""Interface to hold contents of api responses when they do not conform to the OpenAI style response"""
+"""Interface to hold contents of api responses when they do not confirm to the OpenAI style response"""


 class Message:

aisuite/providers/huggingface_provider.py — 2 additions, 2 deletions

@@ -19,10 +19,10 @@ def __init__(self, **config):
         The token is fetched from the config or environment variables.
         """
         # Ensure API key is provided either in config or via environment variable
-        self.token = config.get("token") or os.getenv("HUGGINGFACE_TOKEN")
+        self.token = config.get("token") or os.getenv("HF_TOKEN")
         if not self.token:
             raise ValueError(
-                "Hugging Face token is missing. Please provide it in the config or set the HUGGINGFACE_TOKEN environment variable."
+                "Hugging Face token is missing. Please provide it in the config or set the HF_TOKEN environment variable."
             )

         # Optionally set a custom timeout (default to 30s)
aisuite/providers/sambanova_provider.py — 30 additions, 0 deletions

@@ -0,0 +1,30 @@
+import os
+from aisuite.provider import Provider
+from openai import OpenAI
+
+
+class SambanovaProvider(Provider):
+    def __init__(self, **config):
+        """
+        Initialize the SambaNova provider with the given configuration.
+        Pass the entire configuration dictionary to the OpenAI client constructor.
+        """
+        # Ensure API key is provided either in config or via environment variable
+        config.setdefault("api_key", os.getenv("SAMBANOVA_API_KEY"))
+        if not config["api_key"]:
+            raise ValueError(
+                "Sambanova API key is missing. Please provide it in the config or set the SAMBANOVA_API_KEY environment variable."
+            )
+
+        config["base_url"] = "https://api.sambanova.ai/v1/"
+        # Pass the entire config to the OpenAI client constructor
+        self.client = OpenAI(**config)
+
+    def chat_completions_create(self, model, messages, **kwargs):
+        # Any exception raised by Sambanova will be returned to the caller.
+        # Maybe we should catch them and raise a custom LLMError.
+        return self.client.chat.completions.create(
+            model=model,
+            messages=messages,
+            **kwargs  # Pass any additional arguments to the Sambanova API
+        )
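The `setdefault` lookup above resolves the key from the config first and the environment second. A standalone sketch of that pattern follows; the `resolve_api_key` helper is hypothetical and mirrors the provider's logic without importing the OpenAI client:

```python
import os

def resolve_api_key(config, env_var="SAMBANOVA_API_KEY"):
    """Hypothetical helper: an explicit config value wins, the env var is the fallback."""
    config.setdefault("api_key", os.getenv(env_var))
    if not config["api_key"]:
        raise ValueError(f"Missing API key: pass api_key or set {env_var}.")
    return config["api_key"]

os.environ["SAMBANOVA_API_KEY"] = "env-demo-key"  # stand-in value for illustration
key_from_config = resolve_api_key({"api_key": "config-key"})  # config wins
key_from_env = resolve_api_key({})                            # falls back to env
```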

aisuite/providers/xai_provider.py — 65 additions, 0 deletions

@@ -0,0 +1,65 @@
+import os
+import httpx
+from aisuite.provider import Provider, LLMError
+from aisuite.framework import ChatCompletionResponse
+
+
+class XaiProvider(Provider):
+    """
+    xAI Provider using httpx for direct API calls.
+    """
+
+    BASE_URL = "https://api.x.ai/v1/chat/completions"
+
+    def __init__(self, **config):
+        """
+        Initialize the xAI provider with the given configuration.
+        The API key is fetched from the config or environment variables.
+        """
+        self.api_key = config.get("api_key", os.getenv("XAI_API_KEY"))
+        if not self.api_key:
+            raise ValueError(
+                "xAI API key is missing. Please provide it in the config or set the XAI_API_KEY environment variable."
+            )
+
+        # Optionally set a custom timeout (default to 30s)
+        self.timeout = config.get("timeout", 30)
+
+    def chat_completions_create(self, model, messages, **kwargs):
+        """
+        Makes a request to the xAI chat completions endpoint using httpx.
+        """
+        headers = {
+            "Authorization": f"Bearer {self.api_key}",
+            "Content-Type": "application/json",
+        }
+
+        data = {
+            "model": model,
+            "messages": messages,
+            **kwargs,  # Pass any additional arguments to the API
+        }
+
+        try:
+            # Make the request to xAI endpoint.
+            response = httpx.post(
+                self.BASE_URL, json=data, headers=headers, timeout=self.timeout
+            )
+            response.raise_for_status()
+        except httpx.HTTPStatusError as http_err:
+            raise LLMError(f"xAI request failed: {http_err}")
+        except Exception as e:
+            raise LLMError(f"An error occurred: {e}")
+
+        # Return the normalized response
+        return self._normalize_response(response.json())
+
+    def _normalize_response(self, response_data):
+        """
+        Normalize the response to a common format (ChatCompletionResponse).
+        """
+        normalized_response = ChatCompletionResponse()
+        normalized_response.choices[0].message.content = response_data["choices"][0][
+            "message"
+        ]["content"]
+        return normalized_response
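`_normalize_response` above keeps only `choices[0].message.content` from the raw JSON. The sketch below replays that extraction on a handcrafted, illustrative payload (not a real xAI response) without constructing a `ChatCompletionResponse`:

```python
def extract_content(response_data):
    """Mirror of the extraction in _normalize_response: first choice's message text."""
    return response_data["choices"][0]["message"]["content"]

# Handcrafted chat-completions-style payload, for illustration only.
sample_payload = {
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "Ahoy, matey!"}}
    ]
}

content = extract_content(sample_payload)
```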

examples/client.ipynb — 3 additions, 3 deletions

@@ -122,7 +122,7 @@
 "source": [
 "# IMP NOTE: Azure expects model endpoint to be passed in the format of \"azure:<model_name>\".\n",
 "# The model name is the deployment name in Project/Deployments.\n",
-"# In the exmaple below, the model is \"mistral-large-2407\", but the name given to the\n",
+"# In the example below, the model is \"mistral-large-2407\", but the name given to the\n",
 "# deployment is \"aisuite-mistral-large-2407\" under the deployments section in Azure.\n",
 "client.configure({\"azure\" : {\n",
 "    \"api_key\": os.environ[\"AZURE_API_KEY\"],\n",

@@ -142,7 +142,7 @@
 "source": [
 "# HuggingFace expects the model to be passed in the format of \"huggingface:<model_name>\".\n",
 "# The model name is the full name of the model in HuggingFace.\n",
-"# In the exmaple below, the model is \"mistralai/Mistral-7B-Instruct-v0.3\".\n",
+"# In the example below, the model is \"mistralai/Mistral-7B-Instruct-v0.3\".\n",
 "# The model is deployed as serverless inference endpoint in HuggingFace.\n",
 "hf_model = \"huggingface:mistralai/Mistral-7B-Instruct-v0.3\"\n",
 "response = client.chat.completions.create(model=hf_model, messages=messages)\n",

@@ -159,7 +159,7 @@
 "\n",
 "# Groq expects the model to be passed in the format of \"groq:<model_name>\".\n",
 "# The model name is the full name of the model in Groq.\n",
-"# In the exmaple below, the model is \"llama3-8b-8192\".\n",
+"# In the example below, the model is \"llama3-8b-8192\".\n",
 "groq_llama3_8b = \"groq:llama3-8b-8192\"\n",
 "# groq_llama3_70b = \"groq:llama3-70b-8192\"\n",
 "response = client.chat.completions.create(model=groq_llama3_8b, messages=messages)\n",

guides/README.md — 2 additions, 0 deletions

@@ -9,6 +9,8 @@ Here're the instructions for:
 - [Google](google.md)
 - [Hugging Face](huggingface.md)
 - [OpenAI](openai.md)
+- [SambaNova](sambanova.md)
+- [xAI](xai.md)

 Unless otherwise stated, these guides have not been endorsed by the providers.

guides/google.md — 2 additions, 2 deletions

@@ -22,7 +22,7 @@ Set the `GOOGLE_PROJECT_ID` environment variable to the ID of your project. You

 ### Set your preferred region in an environment variable.

-Set the `GOOGLE_REGION` environment variable to the ID of your project. You can find the Project ID by visiting the project dashboard in the "Project Info" section toward the top of the page.
+Set the `GOOGLE_REGION` environment variable. You can find the region by going to Project Dashboard under VertexAI side navigation menu, and then scrolling to the bottom of the page.

 ## Create a Service Account For API Access

@@ -89,4 +89,4 @@ response = client.chat.completions.create(
 print(response.choices[0].message.content)
 ```

-Happy coding! If you would like to contribute, please read our [Contributing Guide](../CONTRIBUTING.md).
+Happy coding! If you would like to contribute, please read our [Contributing Guide](../CONTRIBUTING.md).

guides/groq.md — 39 additions, 0 deletions

@@ -0,0 +1,39 @@
+# Groq
+
+To use Groq with `aisuite`, you’ll need a free [Groq account](https://console.groq.com/). After logging in, go to the [API Keys](https://console.groq.com/keys) section in your account settings and generate a new Groq API key. Once you have your key, add it to your environment as follows:
+
+```shell
+export GROQ_API_KEY="your-groq-api-key"
+```
+
+## Create a Python Chat Completion
+
+1. First, install the `groq` Python client library:
+
+```shell
+pip install groq
+```
+
+2. Now you can simply create your first chat completion with the following example code or customize by swapoping out the `model_id` with any of the other available [models powered by Groq](https://console.groq.com/docs/models) and `messages` array with whatever you'd like:
+```python
+import aisuite as ai
+client = ai.Client()
+
+provider = "groq"
+model_id = "llama-3.2-3b-preview"
+
+messages = [
+    {"role": "system", "content": "You are a helpful assistant."},
+    {"role": "user", "content": "What’s the weather like in San Francisco?"},
+]
+
+response = client.chat.completions.create(
+    model=f"{provider}:{model_id}",
+    messages=messages,
+)
+
+print(response.choices[0].message.content)
+```
+
+
+Happy coding! If you’d like to contribute, please read our [Contributing Guide](CONTRIBUTING.md).

guides/huggingface.md — 1 addition, 1 deletion

@@ -18,7 +18,7 @@ After setting up your model, you'll need to gather the following information:
 Set the following environment variables to make authentication and requests easy:

 ```shell
-export HUGGINGFACE_TOKEN="your-api-token"
+export HF_TOKEN="your-api-token"
 ```

 ## Create a Chat Completion

guides/sambanova.md — 44 additions, 0 deletions

@@ -0,0 +1,44 @@
+# Sambanova
+
+To use Sambanova with `aisuite`, you’ll need a [Sambanova Cloud](https://cloud.sambanova.ai/) account. After logging in, go to the [API](https://cloud.sambanova.ai/apis) section and generate a new key. Once you have your key, add it to your environment as follows:
+
+```shell
+export SAMBANOVA_API_KEY="your-sambanova-api-key"
+```
+
+## Create a Chat Completion
+
+Install the `openai` Python client:
+
+Example with pip:
+```shell
+pip install openai
+```
+
+Example with poetry:
+```shell
+poetry add openai
+```
+
+In your code:
+```python
+import aisuite as ai
+client = ai.Client()
+
+provider = "sambanova"
+model_id = "Meta-Llama-3.1-405B-Instruct"
+
+messages = [
+    {"role": "system", "content": "You are a helpful assistant."},
+    {"role": "user", "content": "What’s the weather like in San Francisco?"},
+]
+
+response = client.chat.completions.create(
+    model=f"{provider}:{model_id}",
+    messages=messages,
+)
+
+print(response.choices[0].message.content)
+```
+
+Happy coding! If you’d like to contribute, please read our [Contributing Guide](CONTRIBUTING.md).

guides/xai.md — 33 additions, 0 deletions

@@ -0,0 +1,33 @@
+# xAI
+
+To use xAI with `aisuite`, you’ll need an [API key](https://console.x.ai/). Generate a new key and once you have your key, add it to your environment as follows:
+
+```shell
+export XAI_API_KEY="your-xai-api-key"
+```
+
+## Create a Chat Completion
+
+Sample code:
+```python
+import aisuite as ai
+client = ai.Client()
+
+models = ["xai:grok-beta"]
+
+messages = [
+    {"role": "system", "content": "Respond in Pirate English."},
+    {"role": "user", "content": "Tell me a joke."},
+]
+
+for model in models:
+    response = client.chat.completions.create(
+        model=model,
+        messages=messages,
+        temperature=0.75
+    )
+    print(response.choices[0].message.content)
+
+```
+
+Happy coding! If you’d like to contribute, please read our [Contributing Guide](CONTRIBUTING.md).
