24 changes: 24 additions & 0 deletions agent_starter_pack/agents/ag2/.template/templateconfig.yaml
@@ -0,0 +1,24 @@
# Copyright 2026 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

description: "Multi-agent system powered by AG2 framework"
settings:
  requires_data_ingestion: false
  deployment_targets: ["cloud_run", "none"]
  extra_dependencies:
    - "ag2[openai,gemini]>=0.11.4,<1.0"
    - "python-dotenv>=1.0.0,<2.0.0"
  tags: ["ag2"]
  frontend_type: "None"
  example_question: "What's the weather in San Francisco?"
50 changes: 50 additions & 0 deletions agent_starter_pack/agents/ag2/README.md
@@ -0,0 +1,50 @@
# AG2 Multi-Agent Template

A multi-agent system template powered by [AG2](https://ag2.ai) (formerly AutoGen), an open-source framework with 500K+ monthly PyPI downloads.

## Overview

This template demonstrates a ReAct-style agent built with AG2, featuring:
- **Tool Use**: Decorator-based function registration for LLM tool calling
- **Multi-Agent Ready**: Easy to extend with GroupChat for multi-agent orchestration
- **Dual LLM Support**: Vertex AI (Gemini) by default, with Google API key fallback

## Architecture

```
User Message
|
v
+--------------+ +---------------+
| UserProxy |---->| Assistant |
| (executor) |<----| (LLM agent) |
+--------------+ +---------------+
| |
v v
+----------+ +----------+
| Tools | | Gemini |
| (Python | | / GPT |
| funcs) | | |
+----------+ +----------+
```

## Quick Start

```bash
agent-starter-pack create my-agent --agent ag2
cd my-agent
make install
make playground
```

## Customization

- Add tools: Define functions with `@user_proxy.register_for_execution()` + `@assistant.register_for_llm()`
- Change model: Update `llm_config` in `app/agent.py`
- Multi-agent: Add GroupChat (see `notebooks/getting_started.ipynb`)
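The tool-registration pattern from the first bullet can be sketched as follows. The `to_fahrenheit` function is a hypothetical stand-in, and the decorators are shown as comments so the snippet runs without an AG2 installation:

```python
from typing import Annotated


# Hypothetical tool, same shape as get_weather in app/agent.py.
# In the template you would stack the two decorators on top of it:
#   @user_proxy.register_for_execution()
#   @assistant.register_for_llm(description="Convert Celsius to Fahrenheit")
def to_fahrenheit(celsius: Annotated[float, "Temperature in Celsius"]) -> str:
    # Annotated metadata becomes the parameter description in the tool schema.
    return f"{celsius * 9 / 5 + 32:.1f} F"


# The plain function stays directly callable, which keeps it easy to unit test.
print(to_fahrenheit(100.0))  # -> 212.0 F
```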

## Resources

- [AG2 Documentation](https://docs.ag2.ai)
- [AG2 GitHub](https://github.com/ag2ai/ag2)
- [Agent Starter Pack Docs](https://googlecloudplatform.github.io/agent-starter-pack/)
15 changes: 15 additions & 0 deletions agent_starter_pack/agents/ag2/app/__init__.py
@@ -0,0 +1,15 @@
# Copyright 2026 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""AG2 multi-agent template for Agent Starter Pack."""
120 changes: 120 additions & 0 deletions agent_starter_pack/agents/ag2/app/agent.py
@@ -0,0 +1,120 @@
# ruff: noqa
Reviewer comment (medium): Using # ruff: noqa to disable all linting for the entire file is discouraged. It is better to address specific linting issues or use targeted suppressions if Jinja syntax causes false positives during template linting. This ensures the template follows the project's code quality standards.

References
  1. The project uses ruff for Python linting to maintain code quality. (link)
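A targeted alternative to the file-wide suppression, along the lines the comment suggests. The file path and rule codes below are illustrative, not taken from this repository's actual ruff configuration (ruff also accepts line-level `# noqa: <RULE>` comments):

```toml
# pyproject.toml -- scope suppressions to the template file instead of
# disabling all linting inside it (rule codes here are examples; F821
# covers undefined names that Jinja placeholders can trigger):
[tool.ruff.lint.per-file-ignores]
"agent_starter_pack/agents/ag2/app/agent.py" = ["E501", "F821"]
```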

# Copyright 2026 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""AG2 multi-agent implementation with tool use."""

import os
from typing import Annotated

from dotenv import load_dotenv

from autogen import AssistantAgent, UserProxyAgent, LLMConfig

load_dotenv()
{%- if not cookiecutter.use_google_api_key %}

import google.auth

_, project_id = google.auth.default()
os.environ.setdefault("GOOGLE_CLOUD_PROJECT", project_id)
os.environ.setdefault("GOOGLE_CLOUD_LOCATION", "us-central1")

llm_config = LLMConfig(
    {
        "model": "gemini-2.5-flash",
Reviewer comment (medium): The model name gemini-2.5-flash appears to be a typo. As of now, the available stable versions are gemini-1.5-flash or gemini-2.0-flash.

        "model": "gemini-1.5-flash",

Reviewer comment (critical): The model name gemini-2.5-flash appears to be a typo, as Gemini 2.5 has not been released. It should likely be gemini-1.5-flash or gemini-2.0-flash.

        "model": "gemini-1.5-flash",

        "api_type": "google",
        "project_id": os.environ["GOOGLE_CLOUD_PROJECT"],
        "location": os.environ["GOOGLE_CLOUD_LOCATION"],
    }
)
Comment on lines +34 to +41

Reviewer comment (high): The LLMConfig instantiation is incorrect and uses an invalid model name. In ag2, LLMConfig expects a config_list (a list of dictionaries) rather than a single dictionary. Additionally, gemini-2.5-flash is not a valid model name; it should be gemini-2.0-flash or gemini-1.5-flash.

llm_config = LLMConfig(
    config_list=[
        {
            "model": "gemini-2.0-flash",
            "api_type": "google",
            "project_id": os.environ["GOOGLE_CLOUD_PROJECT"],
            "location": os.environ["GOOGLE_CLOUD_LOCATION"],
        }
    ]
)

{%- else %}

llm_config = LLMConfig(
    {
        "model": "gemini-2.5-flash",
Reviewer comment (medium): The model name gemini-2.5-flash appears to be a typo. As of now, the available stable versions are gemini-1.5-flash or gemini-2.0-flash.

        "model": "gemini-1.5-flash",

Reviewer comment (critical): The model name gemini-2.5-flash appears to be a typo, as Gemini 2.5 has not been released. It should likely be gemini-1.5-flash or gemini-2.0-flash.

        "model": "gemini-1.5-flash",

        "api_key": os.environ.get("GOOGLE_API_KEY", ""),
        "api_type": "google",
    }
)
Comment on lines +44 to +50

Reviewer comment (high): The LLMConfig instantiation for the Google API key backend should also use the config_list parameter and a valid model name.

llm_config = LLMConfig(
    config_list=[
        {
            "model": "gemini-2.0-flash",
            "api_key": os.environ.get("GOOGLE_API_KEY", ""),
            "api_type": "google",
        }
    ]
)

{%- endif %}


# --- Agent Setup ---

assistant = AssistantAgent(
    name="Assistant",
    system_message=(
        "You are a helpful AI assistant. Use the available tools to answer "
        "questions accurately. Reply TERMINATE when the task is complete."
    ),
    llm_config=llm_config,
)

user_proxy = UserProxyAgent(
    name="User",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=5,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config=False,
)


# --- Tool Definitions ---


@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Get the current weather for a given location")
def get_weather(
    location: Annotated[str, "The city name to get weather for"],
) -> str:
    """Simulates a web search. Use it to get information on the weather."""
    if "sf" in location.lower() or "san francisco" in location.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."


# --- Entry Point ---


def run_agent(message: str) -> str:
    """Run the AG2 agent with the given user message.

    Args:
        message: The user's input message.

    Returns:
        The assistant's final response text.
    """
    assistant.reset()
    user_proxy.reset()

    response = user_proxy.run(assistant, message=message)
    response.process()
Reviewer comment (high): The ChatResult object returned by agent.run() (or initiate_chat) does not have a process() method. This line will cause an AttributeError at runtime. The result is already populated and ready for use.

    response = user_proxy.run(assistant, message=message)


    # Extract the summary or last assistant message
    if response.summary:
        return response.summary.replace("TERMINATE", "").strip()

    for msg in reversed(response.messages):
Reviewer comment (high): The ChatResult object uses the attribute chat_history to store the conversation log, not messages. This will cause an AttributeError.

    for msg in reversed(response.chat_history):

        if msg.get("role") == "assistant":
            content = msg.get("content", "")
            return content.replace("TERMINATE", "").strip()

    return "No response generated."


if __name__ == "__main__":
    response = run_agent("What's the weather in San Francisco?")
    print(response)
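The review comments on this file flag two issues in the result handling: ChatResult has no process() method, and the conversation log lives in chat_history rather than messages. The corrected extraction logic can be exercised without ag2 installed by substituting a minimal stand-in; the attribute names `summary` and `chat_history` match current ag2 releases, but treat this as a sketch rather than the library's API:

```python
from dataclasses import dataclass, field


@dataclass
class FakeChatResult:
    """Stand-in for autogen.ChatResult (assumed fields: summary, chat_history)."""

    summary: str = ""
    chat_history: list = field(default_factory=list)


def extract_reply(result) -> str:
    # Prefer the chat summary; fall back to the last assistant message.
    if result.summary:
        return result.summary.replace("TERMINATE", "").strip()
    for msg in reversed(result.chat_history):
        if msg.get("role") == "assistant":
            return msg.get("content", "").replace("TERMINATE", "").strip()
    return "No response generated."


demo = FakeChatResult(chat_history=[
    {"role": "user", "content": "What's the weather in San Francisco?"},
    {"role": "assistant", "content": "It's 60 degrees and foggy. TERMINATE"},
])
print(extract_reply(demo))  # -> It's 60 degrees and foggy.
```

Note there is no `.process()` call anywhere: the result object is already populated when `run()` returns.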
179 changes: 179 additions & 0 deletions agent_starter_pack/agents/ag2/notebooks/getting_started.ipynb
@@ -0,0 +1,179 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Getting Started with the AG2 Agent Template\n",
"\n",
"This notebook demonstrates the AG2 (formerly AutoGen) agent template from Agent Starter Pack.\n",
"\n",
"[AG2](https://ag2.ai) is an open-source multi-agent framework with 500K+ monthly PyPI downloads and 4,300+ GitHub stars. It enables building multi-agent systems where agents collaborate, use tools, and solve complex tasks.\n",
"\n",
"## What you'll learn\n",
"- How the AG2 template works\n",
"- How to customize tools and system prompts\n",
"- How to add multiple agents with GroupChat"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install \"ag2[openai]>=0.11.4,<1.0\" python-dotenv -q"
Reviewer comment (medium): Since the template defaults to using Gemini (Vertex AI), the gemini extra should be included in the installation command to ensure all necessary dependencies are present.

Suggested change
"%pip install \"ag2[openai]>=0.11.4,<1.0\" python-dotenv -q"
"%pip install \"ag2[openai,gemini]>=0.11.4,<1.0\" python-dotenv -q"

]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage\n",
"\n",
"The template's `run_agent()` function handles agent setup and execution:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"os.environ[\"OPENAI_API_KEY\"] = \"your-key-here\" # Or use Vertex AI\n",
"\n",
"from app.agent import run_agent\n",
"\n",
"response = run_agent(\"What's the weather in San Francisco?\")\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Customizing: Adding Your Own Tools\n",
"\n",
"You can extend the agent by registering additional tools. AG2 uses a decorator pattern for tool registration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import Annotated\n",
"from autogen import AssistantAgent, UserProxyAgent, LLMConfig\n",
"\n",
"llm_config = LLMConfig(config_list=[{\n",
" \"model\": \"gpt-4o-mini\",\n",
" \"api_key\": os.environ.get(\"OPENAI_API_KEY\"),\n",
" \"api_type\": \"openai\",\n",
"}])\n",
"\n",
"assistant = AssistantAgent(\n",
" name=\"Assistant\",\n",
" system_message=\"You are a helpful assistant. Use tools to answer questions. Reply TERMINATE when done.\",\n",
" llm_config=llm_config,\n",
")\n",
"\n",
"user_proxy = UserProxyAgent(\n",
" name=\"User\",\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=5,\n",
" is_termination_msg=lambda x: x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" code_execution_config=False,\n",
")\n",
"\n",
"# Register a custom tool\n",
"@user_proxy.register_for_execution()\n",
"@assistant.register_for_llm(description=\"Calculate the square of a number\")\n",
"def square(n: Annotated[int, \"The number to square\"]) -> int:\n",
" return n * n\n",
"\n",
"result = user_proxy.run(assistant, message=\"What is 42 squared?\").process()"
Reviewer comment (medium): As noted in the main agent implementation, ChatResult does not have a process() method. This will cause the notebook cell to fail.

Suggested change
"result = user_proxy.run(assistant, message=\"What is 42 squared?\").process()"
"result = user_proxy.run(assistant, message=\"What is 42 squared?\")"

]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Advanced: Multi-Agent GroupChat\n",
"\n",
"AG2's key differentiator is native multi-agent orchestration. Here's how to set up a GroupChat with specialized agents:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen import GroupChat, GroupChatManager\n",
"\n",
"researcher = AssistantAgent(\n",
" name=\"Researcher\",\n",
" system_message=\"You research topics thoroughly and present findings.\",\n",
" llm_config=llm_config,\n",
")\n",
"\n",
"writer = AssistantAgent(\n",
" name=\"Writer\",\n",
" system_message=\"You write clear, concise content based on research. Reply TERMINATE when done.\",\n",
" llm_config=llm_config,\n",
")\n",
"\n",
"coordinator = UserProxyAgent(\n",
" name=\"Coordinator\",\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=0,\n",
" code_execution_config=False,\n",
")\n",
"\n",
"group_chat = GroupChat(\n",
" agents=[coordinator, researcher, writer],\n",
" messages=[],\n",
" max_round=6,\n",
" speaker_selection_method=\"auto\",\n",
")\n",
"\n",
"manager = GroupChatManager(\n",
" groupchat=group_chat,\n",
" llm_config=llm_config,\n",
")\n",
"\n",
"result = coordinator.run(\n",
" manager,\n",
" message=\"Write a brief overview of how AI is used in weather forecasting.\",\n",
").process()"
Reviewer comment (medium): The .process() method is not available on the ChatResult object returned by the agent execution.

    )"

]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next Steps\n",
"\n",
"- Add more tools in `app/agent.py` for your specific use case\n",
"- Deploy to Cloud Run using `make deploy`\n",
"- Explore [AG2 Documentation](https://docs.ag2.ai) for advanced patterns\n",
"- See [AG2 GroupChat Tutorial](https://docs.ag2.ai/docs/tutorial/conversation-patterns) for complex workflows"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.11.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}