This AutoGen client helps you quickly interface with non-OpenAI LLMs through the OpenAI API.

See here for more information on using AutoGen with custom LLMs.

This repository simply includes clients you can use to initialize your LLMs easily, since AutoGen >v0.4 supports non-OpenAI LLMs within the `autogen_ext` package itself, with really nice and clean changes from jackgerrits here.
```bash
pip install autogen-openaiext-client
```
```python
import asyncio
import os

from autogen_core.models import UserMessage  # AutoGen >= 0.4 import path
from autogen_openaiext_client import GeminiChatCompletionClient

# Initialize the client
client = GeminiChatCompletionClient(
    model="gemini-1.5-flash", api_key=os.environ["GEMINI_API_KEY"]
)

# Use the client like any other AutoGen client. For example:
result = asyncio.run(
    client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
)
print(result.content)
# Paris
```
Currently, Gemini, TogetherAI, and Groq clients are supported through the `GeminiChatCompletionClient`, `TogetherAIChatCompletionClient`, and `GroqChatCompletionClient` respectively.
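The other clients follow the same construction pattern as the Gemini example above. A minimal sketch; the model names and environment-variable names below are illustrative assumptions, not values prescribed by this repository:

```python
import os

from autogen_openaiext_client import (
    GroqChatCompletionClient,
    TogetherAIChatCompletionClient,
)

# Model names and env-var names are illustrative assumptions;
# check each provider's documentation for current values.
together_client = TogetherAIChatCompletionClient(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
    api_key=os.environ["TOGETHER_API_KEY"],
)
groq_client = GroqChatCompletionClient(
    model="llama-3.1-8b-instant",
    api_key=os.environ["GROQ_API_KEY"],
)
```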
Magentic-One example using the Gemini client: install Magentic-One and run `python examples/magentic_one_example.py --hil_mode --logs_dir ./logs` for a complete run.
- Adding a new model to an existing external provider
  - For example, adding a new model to `GeminiChatCompletionClient` involves modifying the `GeminiInfo` class in `info.py` and adding the new model to the `_MODEL_CAPABILITIES` and `_MODEL_TOKEN_LIMITS` dictionaries (see the first sketch after this list).
- Adding a new external provider
  - Add a new client class in `client.py`, a relevant `ProviderInfo` class in `info.py`, and add it to `__init__.py` for easy import (see the second sketch after this list).
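As a rough sketch of the first task: only `GeminiInfo`, `_MODEL_CAPABILITIES`, and `_MODEL_TOKEN_LIMITS` are names taken from the list above; the entry format, capability keys, and token-limit value are assumptions for illustration, not the repository's actual schema.

```python
# info.py -- hedged sketch of adding a new model to GeminiInfo.
# The entry format below is an assumption for illustration.
class GeminiInfo:
    _MODEL_CAPABILITIES = {
        # ... existing models ...
        "gemini-1.5-pro": {  # hypothetical new model entry
            "vision": True,
            "function_calling": True,
            "json_output": True,
        },
    }
    _MODEL_TOKEN_LIMITS = {
        # ... existing models ...
        "gemini-1.5-pro": 2097152,  # assumed context window
    }
```

And for the second task, a new provider might be wired up roughly as follows. Every `Example*` name is invented for illustration, and the real clients in this repo may subclass a shared base, so treat this as the general shape rather than the actual internals:

```python
# A hedged sketch only -- all "Example*" names are invented.

# info.py: provider-info class alongside GeminiInfo and friends.
class ExampleInfo:
    _MODEL_CAPABILITIES = {}  # fill in per-model capabilities
    _MODEL_TOKEN_LIMITS = {}  # fill in per-model token limits

# client.py: the client class users instantiate.
class ExampleChatCompletionClient:
    def __init__(self, model: str, api_key: str):
        self.model = model
        self.api_key = api_key

# __init__.py: re-export for `from autogen_openaiext_client import ...`:
# from .client import ExampleChatCompletionClient
```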
This is a community project for AutoGen. Feel free to contribute via issues and PRs, and I will try my best to get to them every 3 days.