
AutoGen Proxy Client

🤖 An integration module that connects AutoGen 0.4.0 with an LLM proxy, enabling seamless communication between automated agents and LLMs.

❓ What is an LLM Proxy?

An LLM proxy is a service that sits between your application and one or more Large Language Models (LLMs), managing and optimizing how those models are used across different tasks. It facilitates API integration while also providing the benefits of a traditional proxy server, such as logging, monitoring, and load balancing. This setup lets you pick the "right model" for each task with minimal to no code changes at the application level, avoiding a one-size-fits-all approach.

🌟 Key Features

  • Seamless integration with AutoGen 0.4.0 framework
  • Supports proxy configurations that require a client_id and client_secret, with direct access to LLMs via a URL (no model declaration needed)
  • Built-in tool management and function calling
  • Robust error handling and response processing
  • Token usage tracking and optimization
  • Currently supports synchronous responses; streaming is not yet implemented
  • Currently supports an OpenAI-compatible payload structure (see the example below)

⚙️ Examples

Proxy information

Payload expected by the proxy (you can test it using Postman):

  "messages": [
    {
      "content": "You are a helpful assistant.",
      "role": "system"
    },
    {
      "content": "Hello, can you help me to know more about packman?",
      "role": "user",
      "name": "test_coordinator"
    }
  ]
}
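
If you prefer to test from code instead of Postman, the sketch below sends the same payload with the Python requests library. The header names used for the client_id and client_secret are only an assumption for illustration; adjust them to whatever authentication scheme your proxy actually expects.

import os

import requests

# Hypothetical header names for the proxy credentials -- replace with your proxy's real auth scheme.
headers = {
    "Content-Type": "application/json",
    "client_id": os.getenv("PROXY_CLIENT_ID", ""),
    "client_secret": os.getenv("PROXY_CLIENT_SECRET", ""),
}

payload = {
    "messages": [
        {"content": "You are a helpful assistant.", "role": "system"},
        {
            "content": "Hello, can you help me to know more about packman?",
            "role": "user",
            "name": "test_coordinator",
        },
    ]
}

response = requests.post(os.getenv("PROXY_LLM_URL", ""), json=payload, headers=headers, timeout=60)
response.raise_for_status()
print(response.json())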

🛠️ Usage

Prerequisites

  • Create a Python environment (version 3.10 or above, and below 3.13)

  • python3.11 -m venv env

  • source env/bin/activate

  • Install the required packages

  • pip install --upgrade autogen-proxy-client (this needs to be installed from the repository artifact, or by downloading the module and placing it inside the env/lib/python3.11/site-packages directory).

  • pip install --upgrade "autogen-agentchat>=0.4" --pre

  • Request access to a proxy URL and set up the environment variables PROXY_CLIENT_ID, PROXY_CLIENT_SECRET, and PROXY_LLM_URL. Remember, any sensitive information should be stored in a vault or in a local .env file (see the example below).
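
For reference, a minimal local .env file might look like the following (placeholder values shown; keep this file out of version control):

PROXY_CLIENT_ID=your-client-id
PROXY_CLIENT_SECRET=your-client-secret
PROXY_LLM_URL=https://your-proxy.example.com/path/to/llm-endpoint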

Refer here for more detailed examples.

Code section

Importing dependencies:

import os

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_proxy_client.config import ProxyClientConfiguration
from autogen_proxy_client.client import ProxyChatCompletionClient
from dotenv import load_dotenv

load_dotenv() # load environment variables from .env file

Create a Proxy client

proxy_config = ProxyClientConfiguration(
    client_id=os.getenv("PROXY_CLIENT_ID"),
    client_secret=os.getenv("PROXY_CLIENT_SECRET"),
    url=os.getenv("PROXY_LLM_URL"),
)

proxy_client = ProxyChatCompletionClient(**proxy_config)
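
Optionally, you can sanity-check the client before wiring it into an agent. The sketch below assumes ProxyChatCompletionClient implements AutoGen's standard ChatCompletionClient interface (which the AssistantAgent usage below relies on) and reuses the proxy_client created above:

import asyncio

from autogen_core.models import SystemMessage, UserMessage

async def smoke_test() -> None:
    # Assumption: the proxy client exposes ChatCompletionClient.create() like other AutoGen model clients.
    result = await proxy_client.create(
        messages=[
            SystemMessage(content="You are a helpful assistant."),
            UserMessage(content="Say hello in one short sentence.", source="test_coordinator"),
        ]
    )
    print(result.content)  # the model's reply
    print(result.usage)    # prompt/completion token counts

asyncio.run(smoke_test())  # or `await smoke_test()` inside a notebook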

Define an agent using the proxy client and register a dummy tool for querying weather

async def get_weather(city: str) -> str:
    return f"The weather in {city} is 73 degrees and Sunny."

async def main() -> None:
    weather_agent = AssistantAgent(
        name="weather_agent",
        model_client=proxy_client,
        tools=[get_weather],
    )
    
    termination = TextMentionTermination("TERMINATE")
    agent_team = RoundRobinGroupChat([weather_agent], termination_condition=termination)
    
    print("\n🌟 Starting chat about weather in New York...\n")
    
    # Create and process stream
    stream = agent_team.run_stream(task="What is the weather in New York?")
    
    async for message in stream:
        if hasattr(message, 'source') and hasattr(message, 'content'):
            # Format the source name nicely
            source = message.source.replace('_', ' ').title()
            
            # Handle different types of content
            if isinstance(message.content, list):
                # Handle function calls
                if hasattr(message.content[0], 'name'):
                    print(f"\n📡 {source} is calling: {message.content[0].name}")
                    print(f"   with arguments: {message.content[0].arguments}")
                else:
                    print(f"\n{source}: {message.content[0].content}")
            else:
                # Handle regular text messages
                print(f"\n💬 {source}: {message.content}")
            
            # Show token usage when available
            if hasattr(message, 'models_usage') and message.models_usage:
                print(f"   🔢 (Tokens - Prompt: {message.models_usage.prompt_tokens}, "
                      f"Completion: {message.models_usage.completion_tokens})")
        
        print("─" * 50)  # Separator line

# In a notebook or other async context:
await main()

# In a standalone script, run instead:
# import asyncio
# asyncio.run(main())

⚠️ Disclaimer

  • This project is still at a very early stage of development.
  • Based on the code from autogen-watsonx-client by tsinggggg.

🤝 Contributions open

Contributions are welcome, whether it's:

  • 🐛 Bug fixes - Help us improve reliability
  • New features - Add exciting capabilities
  • 📚 Documentation improvements - Make things clearer
  • 🧪 Test enhancements - Ensure quality

✍️ Developer

Eduardo Arana
