Commit 4d2fa9d

Merge branch 'main' of github.com:openai/openai-agents-python into alex/inline-snapshot

2 parents 5aba0b5 + cdbf6b0 commit 4d2fa9d

32 files changed: +461 -60 lines changed
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
+---
+name: Custom model providers
+about: Questions or bugs about using non-OpenAI models
+title: ''
+labels: bug
+assignees: ''
+
+---
+
+### Please read this first
+
+- **Have you read the custom model provider docs, including the 'Common issues' section?** [Model provider docs](https://openai.github.io/openai-agents-python/models/#using-other-llm-providers)
+- **Have you searched for related issues?** Others may have faced similar issues.
+
+### Describe the question
+A clear and concise description of what the question or bug is.
+
+### Debug information
+- Agents SDK version: (e.g. `v0.0.3`)
+- Python version (e.g. Python 3.10)
+
+### Repro steps
+Ideally provide a minimal python script that can be run to reproduce the issue.
+
+### Expected behavior
+A clear and concise description of what you expected to happen.
Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+### Summary
+
+<!-- Please give a short summary of the change and the problem this solves. -->
+
+### Test plan
+
+<!-- Please explain how this was tested -->
+
+### Issue number
+
+<!-- For example: "Closes #1234" -->
+
+### Checks
+
+- [ ] I've added new tests (if relevant)
+- [ ] I've added/updated the relevant documentation
+- [ ] I've run `make lint` and `make format`
+- [ ] I've made sure tests pass

README.md

Lines changed: 4 additions & 2 deletions
@@ -47,9 +47,11 @@ print(result.final_output)
 
 (_If running this, ensure you set the `OPENAI_API_KEY` environment variable_)
 
+(_For Jupyter notebook users, see [hello_world_jupyter.py](examples/basic/hello_world_jupyter.py)_)
+
 ## Handoffs example
 
-```py
+```python
 from agents import Agent, Runner
 import asyncio
 
@@ -140,7 +142,7 @@ The Agents SDK is designed to be highly flexible, allowing you to model a wide r
 
 ## Tracing
 
-The Agents SDK automatically traces your agent runs, making it easy to track and debug the behavior of your agents. Tracing is extensible by design, supporting custom spans and a wide variety of external destinations, including [Logfire](https://logfire.pydantic.dev/docs/integrations/llms/openai/#openai-agents), [AgentOps](https://docs.agentops.ai/v1/integrations/agentssdk), and [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk). For more details about how to customize or disable tracing, see [Tracing](http://openai.github.io/openai-agents-python/tracing).
+The Agents SDK automatically traces your agent runs, making it easy to track and debug the behavior of your agents. Tracing is extensible by design, supporting custom spans and a wide variety of external destinations, including [Logfire](https://logfire.pydantic.dev/docs/integrations/llms/openai/#openai-agents), [AgentOps](https://docs.agentops.ai/v1/integrations/agentssdk), [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk), [Scorecard](https://docs.scorecard.io/docs/documentation/features/tracing#openai-agents-sdk-integration), and [Keywords AI](https://docs.keywordsai.co/integration/development-frameworks/openai-agent). For more details about how to customize or disable tracing, see [Tracing](http://openai.github.io/openai-agents-python/tracing).
 
 ## Development (only needed if you need to edit the SDK/examples)
 
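For reference, the "Handoffs example" whose opening lines appear in the first hunk continues with a triage-style setup. A minimal sketch of that pattern (agent names and instructions are illustrative, not a verbatim copy of the README):

```python
import asyncio

from agents import Agent, Runner

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English.",
)

# The triage agent can hand the conversation off to either specialist
triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
)


async def main():
    result = await Runner.run(triage_agent, "Hola, ¿cómo estás?")
    print(result.final_output)


asyncio.run(main())
```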

docs/agents.md

Lines changed: 2 additions & 1 deletion
@@ -13,14 +13,15 @@ The most common properties of an agent you'll configure are:
 ```python
 from agents import Agent, ModelSettings, function_tool
 
+@function_tool
 def get_weather(city: str) -> str:
     return f"The weather in {city} is sunny"
 
 agent = Agent(
     name="Haiku agent",
     instructions="Always respond in haiku form",
     model="o3-mini",
-    tools=[function_tool(get_weather)],
+    tools=[get_weather],
 )
 ```
 
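Assembled into a runnable form, the updated snippet is used like this (a sketch; the prompt is illustrative and `OPENAI_API_KEY` must be set):

```python
import asyncio

from agents import Agent, Runner, function_tool


@function_tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny"


agent = Agent(
    name="Haiku agent",
    instructions="Always respond in haiku form",
    model="o3-mini",
    tools=[get_weather],
)


async def main():
    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)


asyncio.run(main())
```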

docs/context.md

Lines changed: 2 additions & 1 deletion
@@ -36,6 +36,7 @@ class UserInfo: # (1)!
     name: str
     uid: int
 
+@function_tool
 async def fetch_user_age(wrapper: RunContextWrapper[UserInfo]) -> str: # (2)!
     return f"User {wrapper.context.name} is 47 years old"
 
@@ -44,7 +45,7 @@ async def main():
 
     agent = Agent[UserInfo]( # (4)!
         name="Assistant",
-        tools=[function_tool(fetch_user_age)],
+        tools=[fetch_user_age],
     )
 
     result = await Runner.run(
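The final `Runner.run(` call is cut off by the diff context. A sketch of how such a run is typically invoked with a local context object, continuing `main()` from the hunk above (the prompt and values are illustrative, and the `context` keyword is an assumption based on the surrounding docs, not part of this diff):

```python
    # Sketch, continuing main() from the hunk above: the dataclass instance
    # is passed as the local context for this run (illustrative values).
    user_info = UserInfo(name="John", uid=123)

    result = await Runner.run(agent, "What is the age of the user?", context=user_info)
    print(result.final_output)
```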

docs/models.md

Lines changed: 35 additions & 15 deletions
@@ -53,21 +53,41 @@ async def main():
 
 ## Using other LLM providers
 
-Many providers also support the OpenAI API format, which means you can pass a `base_url` to the existing OpenAI model implementations and use them easily. `ModelSettings` is used to configure tuning parameters (e.g., temperature, top_p) for the model you select.
+You can use other LLM providers in 3 ways (examples [here](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/)):
 
-```python
-external_client = AsyncOpenAI(
-    api_key="EXTERNAL_API_KEY",
-    base_url="https://api.external.com/v1/",
-)
+1. [`set_default_openai_client`][agents.set_default_openai_client] is useful in cases where you want to globally use an instance of `AsyncOpenAI` as the LLM client. This is for cases where the LLM provider has an OpenAI compatible API endpoint, and you can set the `base_url` and `api_key`. See a configurable example in [examples/model_providers/custom_example_global.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_global.py).
+2. [`ModelProvider`][agents.models.interface.ModelProvider] is at the `Runner.run` level. This lets you say "use a custom model provider for all agents in this run". See a configurable example in [examples/model_providers/custom_example_provider.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_provider.py).
+3. [`Agent.model`][agents.agent.Agent.model] lets you specify the model on a specific Agent instance. This enables you to mix and match different providers for different agents. See a configurable example in [examples/model_providers/custom_example_agent.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_agent.py).
+
+In cases where you do not have an API key from `platform.openai.com`, we recommend disabling tracing via `set_tracing_disabled()`, or setting up a [different tracing processor](tracing.md).
+
+!!! note
+
+    In these examples, we use the Chat Completions API/model, because most LLM providers don't yet support the Responses API. If your LLM provider does support it, we recommend using Responses.
+
+## Common issues with using other LLM providers
+
+### Tracing client error 401
+
+If you get errors related to tracing, this is because traces are uploaded to OpenAI servers, and you don't have an OpenAI API key. You have three options to resolve this:
+
+1. Disable tracing entirely: [`set_tracing_disabled(True)`][agents.set_tracing_disabled].
+2. Set an OpenAI key for tracing: [`set_tracing_export_api_key(...)`][agents.set_tracing_export_api_key]. This API key will only be used for uploading traces, and must be from [platform.openai.com](https://platform.openai.com/).
+3. Use a non-OpenAI trace processor. See the [tracing docs](tracing.md#custom-tracing-processors).
+
+### Responses API support
+
+The SDK uses the Responses API by default, but most other LLM providers don't yet support it. You may see 404s or similar issues as a result. To resolve, you have two options:
+
+1. Call [`set_default_openai_api("chat_completions")`][agents.set_default_openai_api]. This works if you are setting `OPENAI_API_KEY` and `OPENAI_BASE_URL` via environment vars.
+2. Use [`OpenAIChatCompletionsModel`][agents.models.openai_chatcompletions.OpenAIChatCompletionsModel]. There are examples [here](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/).
+
+### Structured outputs support
+
+Some model providers don't have support for [structured outputs](https://platform.openai.com/docs/guides/structured-outputs). This sometimes results in an error that looks something like this:
 
-spanish_agent = Agent(
-    name="Spanish agent",
-    instructions="You only speak Spanish.",
-    model=OpenAIChatCompletionsModel(
-        model="EXTERNAL_MODEL_NAME",
-        openai_client=external_client,
-    ),
-    model_settings=ModelSettings(temperature=0.5),
-)
 ```
+BadRequestError: Error code: 400 - {'error': {'message': "'response_format.type' : value is not one of the allowed values ['text','json_object']", 'type': 'invalid_request_error'}}
+```
+
+This is a shortcoming of some model providers - they support JSON outputs, but don't allow you to specify the `json_schema` to use for the output. We are working on a fix for this, but we suggest relying on providers that do have support for JSON schema output, because otherwise your app will often break because of malformed JSON.
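Putting the pieces above together, here is a minimal sketch of approach 3 (setting `Agent.model` on a specific agent), assembled from the example removed in this hunk plus the tracing guidance; the endpoint, API key, and model name are placeholders, and the top-level imports are assumptions based on the SDK's public exports:

```python
import asyncio

from openai import AsyncOpenAI

from agents import Agent, OpenAIChatCompletionsModel, Runner, set_tracing_disabled

# Placeholder endpoint and key for an OpenAI-compatible provider
external_client = AsyncOpenAI(
    api_key="EXTERNAL_API_KEY",
    base_url="https://api.external.com/v1/",
)

# No platform.openai.com key available, so skip trace uploads
# (see "Tracing client error 401" above). Alternatively,
# set_default_openai_api("chat_completions") switches the default API globally.
set_tracing_disabled(True)

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model=OpenAIChatCompletionsModel(
        model="EXTERNAL_MODEL_NAME",
        openai_client=external_client,
    ),
)


async def main():
    result = await Runner.run(agent, "Say hello.")
    print(result.final_output)


asyncio.run(main())
```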

docs/running_agents.md

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ async def main():
     # San Francisco
 
     # Second turn
-    new_input = output.to_input_list() + [{"role": "user", "content": "What state is it in?"}]
+    new_input = result.to_input_list() + [{"role": "user", "content": "What state is it in?"}]
     result = await Runner.run(agent, new_input)
     print(result.final_output)
     # California
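For context, the surrounding example chains turns by feeding the previous result back in as input. A self-contained sketch with the corrected `result.to_input_list()` call (the agent instructions here are illustrative):

```python
import asyncio

from agents import Agent, Runner


async def main():
    agent = Agent(name="Assistant", instructions="Reply very concisely.")

    # First turn
    result = await Runner.run(agent, "What city is the Golden Gate Bridge in?")
    print(result.final_output)  # e.g. San Francisco

    # Second turn: reuse the prior turn's items as the new input
    new_input = result.to_input_list() + [{"role": "user", "content": "What state is it in?"}]
    result = await Runner.run(agent, new_input)
    print(result.final_output)  # e.g. California


asyncio.run(main())
```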

docs/tracing.md

Lines changed: 3 additions & 1 deletion
@@ -50,7 +50,7 @@ async def main():
 
     with trace("Joke workflow"): # (1)!
         first_result = await Runner.run(agent, "Tell me a joke")
-        second_result = await Runner.run(agent, f"Rate this joke: {first_output.final_output}")
+        second_result = await Runner.run(agent, f"Rate this joke: {first_result.final_output}")
         print(f"Joke: {first_result.final_output}")
         print(f"Rating: {second_result.final_output}")
 ```
@@ -93,3 +93,5 @@ External trace processors include:
 - [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk)
 - [Pydantic Logfire](https://logfire.pydantic.dev/docs/integrations/llms/openai/#openai-agents)
 - [AgentOps](https://docs.agentops.ai/v1/integrations/agentssdk)
+- [Scorecard](https://docs.scorecard.io/docs/documentation/features/tracing#openai-agents-sdk-integration)
+- [Keywords AI](https://docs.keywordsai.co/integration/development-frameworks/openai-agent)
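Assembled into a runnable form, the corrected snippet groups both runs under a single trace. A minimal sketch (the agent definition here is illustrative):

```python
import asyncio

from agents import Agent, Runner, trace

agent = Agent(name="Joke generator", instructions="Tell funny jokes.")


async def main():
    # Both runs below are recorded as part of one "Joke workflow" trace
    with trace("Joke workflow"):
        first_result = await Runner.run(agent, "Tell me a joke")
        second_result = await Runner.run(agent, f"Rate this joke: {first_result.final_output}")
        print(f"Joke: {first_result.final_output}")
        print(f"Rating: {second_result.final_output}")


asyncio.run(main())
```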

examples/agent_patterns/input_guardrails.py

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ async def math_guardrail(
 
     return GuardrailFunctionOutput(
         output_info=final_output,
-        tripwire_triggered=not final_output.is_math_homework,
+        tripwire_triggered=final_output.is_math_homework,
     )
 

examples/basic/hello_world_jupyter.py

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
+from agents import Agent, Runner
+
+agent = Agent(name="Assistant", instructions="You are a helpful assistant")
+
+# Intended for Jupyter notebooks where there's an existing event loop
+result = await Runner.run(agent, "Write a haiku about recursion in programming.") # type: ignore[top-level-await] # noqa: F704
+print(result.final_output)
+
+# Code within code loops,
+# Infinite mirrors reflect—
+# Logic folds on self.
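Outside a notebook there is no running event loop, so the same example would normally be wrapped in `asyncio.run`. A sketch of the non-notebook equivalent:

```python
import asyncio

from agents import Agent, Runner


async def main():
    agent = Agent(name="Assistant", instructions="You are a helpful assistant")
    result = await Runner.run(agent, "Write a haiku about recursion in programming.")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```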

0 commit comments
