
Releases: microsoft/autogen

autogenstudio-v0.4.1

08 Feb 00:22
9494ac9
Pre-release

What's New

AutoGen Studio Declarative Configuration

  • In #5172, you can now build your agents in Python and export them to a JSON format that works in AutoGen Studio.

AutoGen Studio now uses the same declarative configuration interface as the rest of the AutoGen library. This means you can create your agent teams in Python and then dump_component() them into a JSON spec that can be used directly in AutoGen Studio! This eliminates compatibility (or feature inconsistency) errors between AGS and AgentChat Python, since the exact same specs can be used across both.

See a video tutorial on AutoGen Studio v0.4 (02/25), "A Friendly Introduction to AutoGen Studio v0.4": https://youtu.be/oum6EI7wohM

Here's an example of an agent team and how it is converted to a JSON file:

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient

agent = AssistantAgent(
    name="weather_agent",
    model_client=OpenAIChatCompletionClient(
        model="gpt-4o-mini",
    ),
)

agent_team = RoundRobinGroupChat([agent], termination_condition=TextMentionTermination("TERMINATE"))
config = agent_team.dump_component()
print(config.model_dump_json())
{
  "provider": "autogen_agentchat.teams.RoundRobinGroupChat",
  "component_type": "team",
  "version": 1,
  "component_version": 1,
  "description": "A team that runs a group chat with participants taking turns in a round-robin fashion\n    to publish a message to all.",
  "label": "RoundRobinGroupChat",
  "config": {
    "participants": [
      {
        "provider": "autogen_agentchat.agents.AssistantAgent",
        "component_type": "agent",
        "version": 1,
        "component_version": 1,
        "description": "An agent that provides assistance with tool use.",
        "label": "AssistantAgent",
        "config": {
          "name": "weather_agent",
          "model_client": {
            "provider": "autogen_ext.models.openai.OpenAIChatCompletionClient",
            "component_type": "model",
            "version": 1,
            "component_version": 1,
            "description": "Chat completion client for OpenAI hosted models.",
            "label": "OpenAIChatCompletionClient",
            "config": { "model": "gpt-4o-mini" }
          },
          "tools": [],
          "handoffs": [],
          "model_context": {
            "provider": "autogen_core.model_context.UnboundedChatCompletionContext",
            "component_type": "chat_completion_context",
            "version": 1,
            "component_version": 1,
            "description": "An unbounded chat completion context that keeps a view of the all the messages.",
            "label": "UnboundedChatCompletionContext",
            "config": {}
          },
          "description": "An agent that provides assistance with ability to use tools.",
          "system_message": "You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.",
          "model_client_stream": false,
          "reflect_on_tool_use": false,
          "tool_call_summary_format": "{result}"
        }
      }
    ],
    "termination_condition": {
      "provider": "autogen_agentchat.conditions.TextMentionTermination",
      "component_type": "termination",
      "version": 1,
      "component_version": 1,
      "description": "Terminate the conversation if a specific text is mentioned.",
      "label": "TextMentionTermination",
      "config": { "text": "TERMINATE" }
    }
  }
}

Note: If you are building custom agents and want to use them in AGS, you will need to inherit from the AgentChat BaseChatAgent class and the Component class.

Note: This is a breaking change in AutoGen Studio. You will need to update your AGS specs for any teams created with autogenstudio versions earlier than 0.4.1.

Ability to Test Teams in Team Builder

  • In #5392, you can now test your teams as you build them. No need to switch between the team builder and playground sessions to test.

You can now test teams directly as you build them in the team builder UI, as you edit your team (either via drag-and-drop or by editing the JSON spec).


New Default Agents in Gallery (Web Agent Team, Deep Research Team)

  • In #5416, implementations of a Web Agent Team and a Deep Research Team are added to the default gallery.

The default gallery now has two additional default teams that you can build on and test:

  • Web Agent Team - A team with 3 agents - a Web Surfer agent that can browse the web, a Verification Assistant that verifies and summarizes information, and a User Proxy that provides human feedback when needed.
  • Deep Research Team - A team with 3 agents - a Research Assistant that performs web searches and analyzes information, a Verifier that ensures research quality and completeness, and a Summary Agent that provides a detailed markdown summary of the research as a report to the user.

Other Improvements

Existing features that remain available in v0.4.1:

  • Real-time agent updates streaming to the frontend
  • Run control: You can now stop agents mid-execution if they're heading in the wrong direction, adjust the team, and continue
  • Interactive feedback: Add a UserProxyAgent to get human input through the UI during team runs
  • Message flow visualization: See how agents communicate with each other
  • Ability to import specifications from external galleries
  • Ability to wrap agent teams into an API using the AutoGen Studio CLI

To update to the latest version:

pip install -U autogenstudio

The overall roadmap for AutoGen Studio is here: #4006.
Contributions welcome!

python-v0.4.5

01 Feb 04:11
756e2a4

What's New

Streaming for AgentChat agents and teams

  • Introduce ModelClientStreamingChunkEvent for streaming model output and update handling in agents and console by @ekzhu in #5208

To enable streaming from an AssistantAgent, set model_client_stream=True when creating it. The token stream will be available when you run the agent directly, or as part of a team when you call run_stream.

If you want to see tokens streaming in your console application, you can use Console directly.

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    agent = AssistantAgent("assistant", OpenAIChatCompletionClient(model="gpt-4o"), model_client_stream=True)
    await Console(agent.run_stream(task="Write a short story with a surprising ending."))

asyncio.run(main())

If you are handling the messages yourself and streaming to the frontend, you can handle
autogen_agentchat.messages.ModelClientStreamingChunkEvent message.

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    agent = AssistantAgent("assistant", OpenAIChatCompletionClient(model="gpt-4o"), model_client_stream=True)
    async for message in agent.run_stream(task="Write 3 line poem."):
        print(message)

asyncio.run(main())
source='user' models_usage=None content='Write 3 line poem.' type='TextMessage'
source='assistant' models_usage=None content='Silent' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' whispers' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' glide' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=',' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='  \n' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='Moon' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='lit' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' dreams' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' dance' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' through' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' the' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' night' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=',' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='  \n' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='Stars' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' watch' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' from' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' above' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='.' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0) content='Silent whispers glide,  \nMoonlit dreams dance through the night,  \nStars watch from above.' type='TextMessage'
TaskResult(messages=[TextMessage(source='user', models_usage=None, content='Write 3 line poem.', type='TextMessage'), TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0), content='Silent whispers glide,  \nMoonlit dreams dance through the night,  \nStars watch from above.', type='TextMessage')], stop_reason=None)

Read more here: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/agents.html#streaming-tokens

Also, see the sample showing how to stream a team's messages to ChainLit frontend: https://github.com/microsoft/autogen/tree/python-v0.4.5/python/samples/agentchat_chainlit

R1-style reasoning output

  • Support R1 reasoning text in model create result; enhance API docs by @ekzhu in #5262

import asyncio
from autogen_core.models import UserMessage, ModelFamily
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="deepseek-r1:1.5b",
        api_key="placeholder",
        base_url="http://localhost:11434/v1",
        model_info={
            "function_calling": False,
            "json_output": False,
            "vision": False,
            "family": ModelFamily.R1,
        }
    )

    # Test basic completion with the Ollama deepseek-r1:1.5b model.
    create_result = await model_client.create(
        messages=[
            UserMessage(
                content="Taking two balls from a bag of 10 green balls and 20 red balls, "
                "what is the probability of getting a green and a red ball?",
                source="user",
            ),
        ]
    )

    # CreateResult.thought field contains the thinking content.
    print(create_result.thought)
    print(create_result.content)

asyncio.run(main())

Streaming is also supported with R1-style reasoning output.

See the sample showing R1 playing chess: https://github.com/microsoft/autogen/tree/python-v0.4.5/python/samples/agentchat_chess_game
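R1-style models interleave their reasoning with the final answer using <think> tags. Conceptually, separating the thought from the content looks like the following (a simplified plain-Python sketch, not the client's actual parser):

```python
import re


def split_r1_output(text: str) -> tuple[str, str]:
    """Split '<think>...</think>answer' into (thought, content).

    Simplified sketch of R1-style output handling, NOT the client's parser.
    """
    match = re.match(r"<think>(.*?)</think>(.*)", text, re.DOTALL)
    if match is None:
        # No reasoning block found; treat everything as content.
        return "", text
    return match.group(1).strip(), match.group(2).strip()


thought, content = split_r1_output("<think>Count the pairings first.</think>The probability is 40/87.")
print(thought)   # Count the pairings first.
print(content)   # The probability is 40/87.
```

The real client exposes the two parts as CreateResult.thought and CreateResult.content, as in the example above.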

FunctionTool for partial functions

Now you can define function tools from partial functions, where some parameters have been set beforehand.

import json
from functools import partial
from autogen_core.tools import FunctionTool


def get_weather(country: str, city: str) -> str:
    return f"The temperature in {city}, {country} is 75°"


partial_function = partial(get_weather, "Germany")
tool = FunctionTool(partial_function, description="Partial function tool.")

print(json.dumps(tool.schema, indent=2))
{
  "name": "get_weather",
  "description": "Partial function tool.",
  "parameters": {
    "type": "object",
    "properties": {
      "city": {
        "description": "city",
        "title": "City",
        "type": "string"
      }
    },
    "required": [
      "city"
    ]
  }
}
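The generated schema lists only city because the bound parameter no longer appears in the partial's signature; plain inspect shows the same effect:

```python
import inspect
from functools import partial


def get_weather(country: str, city: str) -> str:
    return f"The temperature in {city}, {country} is 75°"


partial_function = partial(get_weather, "Germany")

# Only the unbound parameter remains visible to signature inspection,
# which is why the schema above lists just "city".
print(list(inspect.signature(partial_function).parameters))  # ['city']
print(partial_function("Berlin"))
```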

CodeExecutorAgent update

  • Added an optional sources parameter to CodeExecutorAgent by @afourney in #5259
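The sources parameter restricts which agents' messages the executor considers when looking for code to run. The filtering idea can be sketched in plain Python (a conceptual sketch with a hypothetical helper, not the AutoGen implementation):

```python
def select_code_messages(messages, sources=None):
    """Keep only the messages whose sender is in `sources` (None means all).

    Conceptual sketch of the `sources` filter, NOT the AutoGen implementation.
    Each message is a (source, text) pair.
    """
    if sources is None:
        return [text for _, text in messages]
    allowed = set(sources)
    return [text for source, text in messages if source in allowed]


messages = [
    ("coder", "print('hi')"),
    ("critic", "Looks good to me."),
]

# Only the coder's message survives the filter.
print(select_code_messages(messages, sources=["coder"]))
```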

New Samples

  • Streamlit + AgentChat sample by @husseinkorly in #5306
  • ChainLit + AgentChat sample with streaming by @ekzhu in #5304
  • Chess sample showing R1-Style reasoning for planning and strategizing by @ekzhu in #5285

Documentation Updates

  • Add Semantic Kernel Adapter documentation and usage examples in user guides by @ekzhu in #5256
  • Update human-in-the-loop tutorial with better system message to signal termination condition by @ekzhu in #5253

Moves

Bug Fixes

  • fix: handle non-string function arguments in tool calls and add corresponding warnings by @ekzhu in #5260
  • Add default_header support by @nour-bouzid in #5249
  • feat: update OpenAIAssistantAgent to support AsyncAzureOpenAI client by @ekzhu in #5312

All Other Python Related Changes

  • Update website for v0.4.4 by @ekzhu in #5246
  • update dependencies to work with protobuf 5 by @MohMaz in #5195
  • Adjusted M1 agent system prompt to remove TERMINATE by @afourney in #5263
  • #5270
  • chore: update package versions to 0.4.5 and remove deprecated requirements by @ekzhu in #5280
  • Update Distributed Agent Runtime Cross-platform Sample by @linznin in #5164
  • fix: windows check ci failure by @bassmang in #5287
  • fix: type issues in streamlit sample and add streamlit to dev dependencies by @ekzhu in #5309
  • chore: add asyncio_atexit dependency to docker requirements by @ekzhu in #5307
  • feat: add o3 to model info; update chess example by @ekzhu in #5311

New Contributors

Full Changelog: v0.4.4...python-v0.4.5

python-v0.4.4

29 Jan 03:32
bd5a24b

What's New

Serializable Configuration for AgentChat

This new feature allows you to serialize an agent or a team to a JSON string, and deserialize them back into objects. Make sure to also read about save_state and load_state: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/state.html.

You now can serialize and deserialize both the configurations and the state of agents and teams.

For example, create a RoundRobinGroupChat, and serialize its configuration and state.

import asyncio
import json
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.base import Team
from autogen_agentchat.ui import Console
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def dump_team_config() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    assistant = AssistantAgent(
        "assistant",
        model_client=model_client,
        system_message="You are a helpful assistant.",
    )
    critic = AssistantAgent(
        "critic",
        model_client=model_client,
        system_message="Provide feedback. Reply with 'APPROVE' if the feedback has been addressed.",
    )
    termination = TextMentionTermination("APPROVE", sources=["critic"])
    group_chat = RoundRobinGroupChat(
        [assistant, critic], termination_condition=termination
    )
    # Run the group chat.
    await Console(group_chat.run_stream(task="Write a short poem about winter."))
    # Dump the team configuration to a JSON file.
    config = group_chat.dump_component()
    with open("team_config.json", "w") as f:
        f.write(config.model_dump_json(indent=4))
    # Dump the team state to a JSON file.
    state = await group_chat.save_state()
    with open("team_state.json", "w") as f:
        f.write(json.dumps(state, indent=4))

asyncio.run(dump_team_config())

This produces the serialized team configuration and state, truncated here for illustration purposes.

{
    "provider": "autogen_agentchat.teams.RoundRobinGroupChat",
    "component_type": "team",
    "version": 1,
    "component_version": 1,
    "description": "A team that runs a group chat with participants taking turns in a round-robin fashion\n    to publish a message to all.",
    "label": "RoundRobinGroupChat",
    "config": {
        "participants": [
            {
                "provider": "autogen_agentchat.agents.AssistantAgent",
                "component_type": "agent",
                "version": 1,
                "component_version": 1,
                "description": "An agent that provides assistance with tool use.",
                "label": "AssistantAgent",
                "config": {
                    "name": "assistant",
                    "model_client": {
                        "provider": "autogen_ext.models.openai.OpenAIChatCompletionClient",
                        "component_type": "model",
                        "version": 1,
                        "component_version": 1,
                        "description": "Chat completion client for OpenAI hosted models.",
                        "label": "OpenAIChatCompletionClient",
                        "config": {
                            "model": "gpt-4o"
                        }
{
    "type": "TeamState",
    "version": "1.0.0",
    "agent_states": {
        "group_chat_manager/25763eb1-78b2-4509-8607-7224ae383575": {
            "type": "RoundRobinManagerState",
            "version": "1.0.0",
            "message_thread": [
                {
                    "source": "user",
                    "models_usage": null,
                    "content": "Write a short poem about winter.",
                    "type": "TextMessage"
                },
                {
                    "source": "assistant",
                    "models_usage": {
                        "prompt_tokens": 25,
                        "completion_tokens": 150
                    },
                    "content": "Amidst the still and silent air,  \nWhere frost adorns the branches bare,  \nThe world transforms in shades of white,  \nA wondrous, shimmering, quiet sight.\n\nThe whisper of the wind is low,  \nAs snowflakes drift and dance and glow.  \nEach crystal, delicate and bright,  \nFalls gently through the silver night.\n\nThe earth is hushed in pure embrace,  \nA tranquil, glistening, untouched space.  \nYet warmth resides in hearts that roam,  \nFinding solace in the hearth of home.\n\nIn winter\u2019s breath, a promise lies,  \nBeneath the veil of cold, clear skies:  \nThat spring will wake the sleeping land,  \nAnd life will bloom where now we stand.",
                    "type": "TextMessage"

Load the configuration and state back into objects.

import asyncio
import json
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.base import Team

async def load_team_config() -> None:
    # Load the team configuration from a JSON file.
    with open("team_config.json", "r") as f:
        config = json.load(f)
    group_chat = Team.load_component(config)
    # Load the team state from a JSON file.
    with open("team_state.json", "r") as f:
        state = json.load(f)
    await group_chat.load_state(state)
    assert isinstance(group_chat, RoundRobinGroupChat)

asyncio.run(load_team_config())

This new feature allows you to manage persistent sessions in server-client based user interactions.

Azure AI Client for Azure-Hosted Models

This allows you to use Azure and GitHub-hosted models, including Phi-4, Mistral models, and Cohere models.

import asyncio
import os

from autogen_core.models import UserMessage
from autogen_ext.models.azure import AzureAIChatCompletionClient
from azure.core.credentials import AzureKeyCredential


async def main() -> None:
    client = AzureAIChatCompletionClient(
        model="Phi-4",
        endpoint="https://models.inference.ai.azure.com",
        # To authenticate with the model you will need to generate a personal access token (PAT) in your GitHub settings.
        # Create your PAT token by following instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
        credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
        model_info={
            "json_output": False,
            "function_calling": False,
            "vision": False,
            "family": "unknown",
        },
    )
    result = await client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result)


asyncio.run(main())

Rich Console UI for Magentic One CLI

You can now enable pretty-printed output for the m1 command line tool by adding the --rich argument.

m1 --rich "Find information about AutoGen"

Default In-Memory Cache for ChatCompletionCache

This allows you to cache model client calls without specifying an external cache service.

import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.models.cache import ChatCompletionCache


async def main() -> None:
    # Create a model client.
    client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create a cached wrapper around the model client.
    cached_client = ChatCompletionCache(client)

    # Call the cached client.
    result = await cached_client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result.content, result.cached)

    # Call the cached client again.
    result = await cached_client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result.content, result.cached)


asyncio.run(main())
The capital of France is Paris. False
The capital of France is Paris. True

Docs Update

  • Update model client documentation add Ollama, Gemini, Azure AI models by @ekzhu in #5196
  • Add Model Client Cache section to migration guide by @ekzhu in #5197
  • docs: Enhance documentation for SingleThreadedAgentRuntime with usage examples and clarifications; undeprecate process_next by @ekzhu in #5230
  • docs: Update user guide notebooks to enhance clarity and add structured output by @ekzhu in #5224
  • docs: Core API doc update: split out model context from model clients; separate framework and components by @ekzhu in #5171
  • docs: Add a helpful comment to swarm.ipynb by @withsmilo in https://github.com/microsoft/autogen...
Read more

python-v0.4.3

22 Jan 16:14
da1c2bf

What's new

This is the first release since 0.4.0 with significant new features! We look forward to hearing feedback and suggestions from the community.

Chat completion model cache

One of the big missing features from 0.2 was the ability to seamlessly cache model client completions. This release adds ChatCompletionCache which can wrap any other ChatCompletionClient and cache completions.

There is a CacheStore interface to allow for easy implementation of new caching backends; for example, the DiskCacheStore used below:

import asyncio
import tempfile

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.models.cache import ChatCompletionCache, CHAT_CACHE_VALUE_TYPE
from autogen_ext.cache_store.diskcache import DiskCacheStore
from diskcache import Cache


async def main():
    with tempfile.TemporaryDirectory() as tmpdirname:
        openai_model_client = OpenAIChatCompletionClient(model="gpt-4o")

        cache_store = DiskCacheStore[CHAT_CACHE_VALUE_TYPE](Cache(tmpdirname))
        cache_client = ChatCompletionCache(openai_model_client, cache_store)

        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print response from OpenAI
        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print cached response


asyncio.run(main())

ChatCompletionCache is not yet supported by the declarative component config, see the issue to track progress.

#4924 by @srjoglekar246

GraphRAG

This release adds support for GraphRAG as a tool that agents can call. You can find a sample for how to use this integration here, along with docs for LocalSearchTool and GlobalSearchTool.

import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.ui import Console
from autogen_ext.tools.graphrag import GlobalSearchTool
from autogen_agentchat.agents import AssistantAgent


async def main():
    # Initialize the OpenAI client
    openai_client = OpenAIChatCompletionClient(
        model="gpt-4o-mini",
    )

    # Set up global search tool
    global_tool = GlobalSearchTool.from_settings(settings_path="./settings.yaml")

    # Create assistant agent with the global search tool
    assistant_agent = AssistantAgent(
        name="search_assistant",
        tools=[global_tool],
        model_client=openai_client,
        system_message=(
            "You are a tool selector AI assistant using the GraphRAG framework. "
            "Your primary task is to determine the appropriate search tool to call based on the user's query. "
            "For broader, abstract questions requiring a comprehensive understanding of the dataset, call the 'global_search' function."
        ),
    )

    # Run a sample query
    query = "What is the overall sentiment of the community reports?"
    await Console(assistant_agent.run_stream(task=query))


if __name__ == "__main__":
    asyncio.run(main())

#4612 by @lspinheiro

Semantic Kernel model adapter

Semantic Kernel has an extensive collection of AI connectors. In this release we added support to adapt a Semantic Kernel AI Connector to an AutoGen ChatCompletionClient using the SKChatCompletionAdapter.

Currently this requires passing the kernel during create, and so cannot be used with AssistantAgent directly yet. This will be fixed in a future release (#5144).

#4851 by @lspinheiro

AutoGen to Semantic Kernel tool adapter

We also added a tool adapter, but this time to allow AutoGen tools to be added to a Kernel, called KernelFunctionFromTool.

#4851 by @lspinheiro

Jupyter Code Executor

This release also brings forward Jupyter code executor functionality that we had in 0.2, as the JupyterCodeExecutor.

Please note that this currently only supports local execution and should be used with caution.

#4885 by @Leon0402

Memory

It's still early, but we merged the interface for agent memory in this release. This allows agents to enrich their context from a memory store and save information to it. The interface is defined in autogen_core, and AssistantAgent in agentchat now accepts memory as a parameter. There is an initial example memory implementation which simply injects all memories as system messages for the agent. The intention is for the memory interface to be usable for both RAG and agent memory systems going forward.
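The inject-as-system-messages pattern used by the example implementation can be sketched in plain Python (a conceptual stand-in, not the actual memory interface):

```python
class ListMemorySketch:
    """Conceptual stand-in for a memory store, NOT the autogen_core interface."""

    def __init__(self):
        self._items = []

    def add(self, content: str) -> None:
        self._items.append(content)

    def update_context(self, messages: list) -> list:
        # Inject every stored memory as a system message ahead of the thread.
        injected = [("system", item) for item in self._items]
        return injected + messages


memory = ListMemorySketch()
memory.add("The user prefers metric units.")
print(memory.update_context([("user", "What's the weather?")]))
```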

#4438 by @victordibia, #5053 by @ekzhu

Declarative config

We're continuing to expand support for declarative configs throughout the framework. In this release, we've added support for termination conditions and base chat agents. Once we're done with this, you'll be able to configure an entire team of agents with a single config file and have it work seamlessly with AutoGen Studio. Stay tuned!
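A declarative spec identifies each component by a provider string such as autogen_agentchat.teams.RoundRobinGroupChat, and resolving such a string amounts to a dotted-path import. A minimal sketch (not AutoGen's actual loader), demonstrated on a stdlib class:

```python
import importlib


def resolve_provider(provider: str):
    """Resolve a 'pkg.module.ClassName' provider string to the class object.

    Simplified sketch, NOT AutoGen's actual component loader.
    """
    module_path, _, class_name = provider.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)


# Demonstrated on a stdlib class rather than an AutoGen component.
cls = resolve_provider("collections.OrderedDict")
print(cls.__name__)  # OrderedDict
```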

#4984, #5055 by @victordibia

Other

  • Add sources field to TextMentionTermination by @Leon0402 in #5106
  • Update gpt-4o model version to 2024-08-06 by @ekzhu in #5117

Bug fixes

  • Retry multiple times when M1 selects an invalid agent. Make agent sel… by @afourney in #5079
  • fix: normalize finish reason in CreateResult response by @ekzhu in #5085
  • Pass context between AssistantAgent for handoffs by @ekzhu in #5084
  • fix: ensure proper handling of structured output in OpenAI client and improve test coverage for structured output by @ekzhu in #5116
  • fix: use tool_calls field to detect tool calls in OpenAI client; add integration tests for OpenAI and Gemini by @ekzhu in #5122

Other changes

Read more

python-v0.4.2

16 Jan 00:07
  • Change async input strategy in order to remove unintentional and accidentally added GPL dependency (#5060)

Full Changelog: v0.4.1...v0.4.2

python-v0.4.1

13 Jan 23:50
cf8446b

What's Important

All Changes since v0.4.0

New Contributors

Full Changelog: v0.4.0...v0.4.1

python-v0.4.0

10 Jan 00:01
78ac9f8

What's Important

🎉 🎈 Our first stable release of v0.4! 🎈 🎉

To upgrade from v0.2, read the migration guide. For a basic setup:

pip install -U "autogen-agentchat" "autogen-ext[openai]"

You can refer to our updated README for more information about the new API.

Major Changes from v0.4.0.dev13

Change Log from v0.4.0.dev13: v0.4.0.dev13...v0.4.0

New Contributors to v0.4.0

❤️ Big thanks to all the contributors since the first preview version was open sourced. ❤️

Changes from v0.2.36

Read more

v0.4.0.dev13

30 Dec 22:32
fb1094d
Pre-release

What's new

  • An initial version of the migration guide is ready. Find it here! (#4765)
  • Model family is now available in the model client (#4856)

Breaking changes

  • Previously deprecated module paths have been removed (#4853)
  • SingleThreadedAgentRuntime.process_next is now blocking and has moved to be an internal API (#4855)

Fixes

Doc changes

  • Migration guide for 0.4 by @ekzhu in #4765
  • Clarify tool use in agent tutorial by @ekzhu in #4860
  • AgentChat tutorial update to include model context usage and langchain tool by @ekzhu in #4843
  • Add missing model context attribute by @Leon0402 in #4848

Other

New Contributors

Full Changelog: v0.4.0.dev12...v0.4.0.dev13

v0.4.0.dev12

27 Dec 20:09
d933b9a
Pre-release

Important Changes

  • run and run_stream now support a list of messages as task input.
  • Introduces the AgentEvent union type in AgentChat for all messages that are not meant to be consumed by other agents. Replace AgentMessage with the AgentEvent | ChatMessage union type in your code, e.g., in your custom selector function for SelectorGroupChat and in processing code for TaskResult.messages.
  • Introduces ToolCallSummaryMessage, added to the ChatMessage union, for tool call results from agents. Read the AssistantAgent doc.
  • Introduces a model context parameter for AssistantAgent, allowing use of BufferedChatCompletionContext to limit the context window size sent to the model.
  • Introduces ComponentConfig and adds a configuration loader for ChatCompletionClient. See Component Config.
  • Moves autogen_core.tools.PythonCodeExecutorTool to autogen_ext.tools.code_execution.PythonCodeExecutionTool.
  • Documentation updates.
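For the AgentEvent change above, the migration mostly means filtering by message type; for example, a custom selector can skip events (hypothetical stand-in types for illustration, not the real AgentChat classes):

```python
from dataclasses import dataclass


# Hypothetical stand-ins for the AgentChat message types, for illustration only.
@dataclass
class ChatMessage:
    source: str
    content: str


@dataclass
class AgentEvent:
    source: str
    content: str


def last_chat_source(messages):
    """Return the source of the most recent ChatMessage, skipping events."""
    for message in reversed(messages):
        if isinstance(message, ChatMessage):
            return message.source
    return None


thread = [ChatMessage("user", "hi"), AgentEvent("assistant", "token chunk")]
print(last_chat_source(thread))  # user
```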

Upcoming Changes

  • Deprecating @message_handler. Use @event or @rpc to annotate handlers instead. @message_handler will be kept with a deprecation warning until further notice. #4828
  • Token counting mechanism bug fixes #4719

New Contributors

Full Changelog: v0.4.0.dev11...v0.4.0.dev12

v0.2.40

15 Dec 06:11
3b4c017

What's Changed

New Contributors

Full Changelog: v0.2.39...v0.2.40