
Commit 4b48042

Merge branch 'main' into hironow/add-voice-from-openai-fm
2 parents: 2f6d8b5 + 382500d


81 files changed (+3088, -594 lines)

.github/workflows/issues.yml

Lines changed: 6 additions & 3 deletions
@@ -17,7 +17,10 @@ jobs:
           stale-issue-label: "stale"
           stale-issue-message: "This issue is stale because it has been open for 7 days with no activity."
           close-issue-message: "This issue was closed because it has been inactive for 3 days since being marked as stale."
-          days-before-pr-stale: -1
-          days-before-pr-close: -1
-          any-of-labels: 'question,needs-more-info'
+          any-of-issue-labels: 'question,needs-more-info'
+          days-before-pr-stale: 10
+          days-before-pr-close: 7
+          stale-pr-label: "stale"
+          stale-pr-message: "This PR is stale because it has been open for 10 days with no activity."
+          close-pr-message: "This PR was closed because it has been inactive for 7 days since being marked as stale."
           repo-token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/tests.yml

Lines changed: 3 additions & 0 deletions
@@ -8,6 +8,9 @@ on:
     branches:
       - main

+env:
+  UV_FROZEN: "1"
+
 jobs:
   lint:
     runs-on: ubuntu-latest

.gitignore

Lines changed: 2 additions & 2 deletions
@@ -135,10 +135,10 @@ dmypy.json
 cython_debug/

 # PyCharm
-#.idea/
+.idea/

 # Ruff stuff:
 .ruff_cache/

 # PyPI configuration file
-.pypirc
+.pypirc

Makefile

Lines changed: 1 addition & 1 deletion
@@ -5,6 +5,7 @@ sync:
 .PHONY: format
 format:
 	uv run ruff format
+	uv run ruff check --fix

 .PHONY: lint
 lint:
@@ -36,7 +37,6 @@ snapshots-create:
 .PHONY: old_version_tests
 old_version_tests:
 	UV_PROJECT_ENVIRONMENT=.venv_39 uv run --python 3.9 -m pytest
-	UV_PROJECT_ENVIRONMENT=.venv_39 uv run --python 3.9 -m mypy .

 .PHONY: build-docs
 build-docs:

README.md

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ source env/bin/activate
 pip install openai-agents
 ```

-For voice support, install with the optional `voice` group: `pip install openai-agents[voice]`.
+For voice support, install with the optional `voice` group: `pip install 'openai-agents[voice]'`.

 ## Hello world example


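The quoting added in this hunk matters in shells such as zsh, where unquoted square brackets are treated as glob patterns. As a minimal, hypothetical sanity check (not part of this commit) that the optional voice dependencies actually installed, assuming the `agents.voice` module path from the package layout:

    # Hypothetical check that the optional `voice` extra is installed.
    # The agents.voice module path is an assumption; adjust if the layout differs.
    try:
        import agents.voice  # noqa: F401
        print("voice extra is available")
    except ImportError as exc:
        print(f"voice extra missing ({exc}); run: pip install 'openai-agents[voice]'")
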
docs/agents.md

Lines changed: 3 additions & 1 deletion
@@ -142,4 +142,6 @@ Supplying a list of tools doesn't always mean the LLM will use a tool. You can f

 !!! note

-    If requiring tool use, you should consider setting [`Agent.tool_use_behavior`] to stop the Agent from running when a tool output is produced. Otherwise, the Agent might run in an infinite loop, where the LLM produces a tool call , and the tool result is sent to the LLM, and this infinite loops because the LLM is always forced to use a tool.
+    To prevent infinite loops, the framework automatically resets `tool_choice` to "auto" after a tool call. This behavior is configurable via [`agent.reset_tool_choice`][agents.agent.Agent.reset_tool_choice]. The infinite loop is because tool results are sent to the LLM, which then generates another tool call because of `tool_choice`, ad infinitum.
+
+    If you want the Agent to completely stop after a tool call (rather than continuing with auto mode), you can set [`Agent.tool_use_behavior="stop_on_first_tool"`] which will directly use the tool output as the final response without further LLM processing.

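As a rough sketch (not part of this commit) of how the two settings described in the updated note interact, assuming the top-level `agents` exports shown below and a hypothetical `get_weather` tool:

    import asyncio

    # Assumed top-level exports; adjust the imports if the SDK lays them out differently.
    from agents import Agent, ModelSettings, Runner, function_tool

    @function_tool
    def get_weather(city: str) -> str:
        """Hypothetical tool that returns a canned forecast."""
        return f"The weather in {city} is sunny."

    agent = Agent(
        name="Weather agent",
        instructions="Answer weather questions.",
        tools=[get_weather],
        # Force a tool call on the first model turn.
        model_settings=ModelSettings(tool_choice="required"),
        # Stop after the first tool call and use its output as the final answer,
        # instead of sending the result back to the LLM.
        tool_use_behavior="stop_on_first_tool",
        # Default True: tool_choice resets to "auto" after a tool call, which is
        # the loop-prevention behavior the note describes.
        reset_tool_choice=True,
    )

    async def main() -> None:
        result = await Runner.run(agent, input="What's the weather in Tokyo?")
        print(result.final_output)

    if __name__ == "__main__":
        asyncio.run(main())

Running this end to end assumes an OpenAI API key is configured in the environment.
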
docs/assets/images/graph.png

92.8 KB

docs/assets/images/mcp-tracing.jpg

398 KB

docs/context.md

Lines changed: 3 additions & 3 deletions
@@ -41,14 +41,14 @@ async def fetch_user_age(wrapper: RunContextWrapper[UserInfo]) -> str:  # (2)!
     return f"User {wrapper.context.name} is 47 years old"

 async def main():
-    user_info = UserInfo(name="John", uid=123)  # (3)!
+    user_info = UserInfo(name="John", uid=123)

-    agent = Agent[UserInfo](  # (4)!
+    agent = Agent[UserInfo](  # (3)!
         name="Assistant",
         tools=[fetch_user_age],
     )

-    result = await Runner.run(
+    result = await Runner.run(  # (4)!
         starting_agent=agent,
         input="What is the age of the user?",
         context=user_info,

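For orientation, the fragments in this hunk come from the local-context example on that docs page; a self-contained version might look roughly like the following. The `UserInfo` dataclass, the `@function_tool` decorator, and the `# (n)!` markers (mkdocs annotation anchors, plain comments in Python) are assumed from the surrounding page rather than shown in the diff:

    import asyncio
    from dataclasses import dataclass

    # Assumed top-level exports from the SDK.
    from agents import Agent, RunContextWrapper, Runner, function_tool

    @dataclass
    class UserInfo:
        name: str
        uid: int

    @function_tool
    async def fetch_user_age(wrapper: RunContextWrapper[UserInfo]) -> str:  # (2)!
        # The context object rides on the wrapper; it is not sent to the LLM.
        return f"User {wrapper.context.name} is 47 years old"

    async def main():
        user_info = UserInfo(name="John", uid=123)

        agent = Agent[UserInfo](  # (3)!
            name="Assistant",
            tools=[fetch_user_age],
        )

        result = await Runner.run(  # (4)!
            starting_agent=agent,
            input="What is the age of the user?",
            context=user_info,
        )

        print(result.final_output)

    if __name__ == "__main__":
        asyncio.run(main())
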
docs/guardrails.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ Output guardrails run in 3 steps:

 !!! Note

-    Output guardrails are intended to run on the final agent input, so an agent's guardrails only run if the agent is the *last* agent. Similar to the input guardrails, we do this because guardrails tend to be related to the actual Agent - you'd run different guardrails for different agents, so colocating the code is useful for readability.
+    Output guardrails are intended to run on the final agent output, so an agent's guardrails only run if the agent is the *last* agent. Similar to the input guardrails, we do this because guardrails tend to be related to the actual Agent - you'd run different guardrails for different agents, so colocating the code is useful for readability.

 ## Tripwires

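Since the corrected sentence concerns guardrails on the final agent output, a rough sketch of attaching one may help. The decorator and types are assumed to be importable from the top-level `agents` package, and the length check is an arbitrary illustration:

    import asyncio

    # Assumed top-level exports; consult the guardrails docs for the exact API.
    from agents import (
        Agent,
        GuardrailFunctionOutput,
        OutputGuardrailTripwireTriggered,
        RunContextWrapper,
        Runner,
        output_guardrail,
    )

    @output_guardrail
    async def too_long(ctx: RunContextWrapper, agent: Agent, output: str) -> GuardrailFunctionOutput:
        # Runs only on the final output of the *last* agent, per the note above.
        return GuardrailFunctionOutput(
            output_info={"length": len(output)},
            tripwire_triggered=len(output) > 500,  # arbitrary limit for illustration
        )

    agent = Agent(
        name="Assistant",
        instructions="Answer briefly.",
        output_guardrails=[too_long],
    )

    async def main() -> None:
        try:
            result = await Runner.run(agent, input="Say hi.")
            print(result.final_output)
        except OutputGuardrailTripwireTriggered:
            print("output guardrail tripwire triggered")

    if __name__ == "__main__":
        asyncio.run(main())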