
autogen-openaiext-client

This Autogen client helps you quickly interface with non-OpenAI LLMs through the OpenAI API.

See here for more information on using custom LLMs.

This repository simply includes clients you can use to initialize your LLMs easily, since Autogen >v0.4 supports non-OpenAI LLMs within the autogen_ext package itself, thanks to a really nice and clean change from jackgerrits here.


Install

pip install autogen-openaiext-client

Usage

from autogen_openaiext_client import GeminiChatCompletionClient
from autogen_core.models import UserMessage
import asyncio
import os

# Initialize the client
client = GeminiChatCompletionClient(model="gemini-1.5-flash", api_key=os.environ["GEMINI_API_KEY"])

# Use the client like any other Autogen client. For example:
result = asyncio.run(
    client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
)
print(result.content)
# Paris

Currently, Gemini, TogetherAI, and Groq clients are supported through GeminiChatCompletionClient, TogetherAIChatCompletionClient, and GroqChatCompletionClient, respectively.
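
The other clients follow the same constructor pattern as the Gemini example above. Here is a minimal sketch; the environment variable names (GROQ_API_KEY, TOGETHER_API_KEY) and the model names are assumptions for illustration, so substitute whatever your provider account uses:

from autogen_openaiext_client import GroqChatCompletionClient, TogetherAIChatCompletionClient
import os

# Same shape as GeminiChatCompletionClient: a provider model name plus the
# provider's API key.
groq_client = GroqChatCompletionClient(
    model="llama-3.1-8b-instant",  # assumed Groq model name
    api_key=os.environ["GROQ_API_KEY"],
)
together_client = TogetherAIChatCompletionClient(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",  # assumed TogetherAI model name
    api_key=os.environ["TOGETHER_API_KEY"],
)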

Demo

YouTube

Magentic-One example using the Gemini client.

Install Magentic-One and run python examples/magentic_one_example.py --hil_mode --logs_dir ./logs for a complete run.

Contributing

  1. Adding a new model to an existing external provider
    1. For example, adding a new model to GeminiChatCompletionClient involves modifying the GeminiInfo class in info.py and adding the new model to the _MODEL_CAPABILITIES and _MODEL_TOKEN_LIMITS dictionaries (see the sketch after this list).
  2. Adding a new external provider
    1. Add a new client class in client.py and the relevant ProviderInfo class in info.py, then add both to __init__.py for easy import.
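
A minimal sketch of step 1, assuming _MODEL_CAPABILITIES maps each model name to its capability flags and _MODEL_TOKEN_LIMITS maps it to a context-window size. The capability flag names, the model name gemini-2.0-pro-exp, and the token limit below are all illustrative assumptions; check info.py for the actual structure:

# info.py (illustrative excerpt; not the actual file contents)

_MODEL_CAPABILITIES = {
    # ... existing Gemini entries ...
    "gemini-2.0-pro-exp": {  # hypothetical new model name
        "vision": True,
        "function_calling": True,
        "json_output": True,
    },
}

_MODEL_TOKEN_LIMITS = {
    # ... existing Gemini entries ...
    "gemini-2.0-pro-exp": 2_000_000,  # assumed token limit; verify against provider docs
}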

Disclaimer

This is a community project for Autogen. Feel free to contribute via issues and PRs, and I will try my best to get to them every 3 days.