
Commit 1b195ac

Merge branch 'main' into hironow/chore-doc-type
# Conflicts:
#   tests/docs/config.md
#   tests/docs/guardrails.md
#   tests/docs/tracing.md

2 parents: 14e1e43 + e8bccd2

164 files changed: +48 −13342 lines. Large commits have some content hidden by default, so only a subset of the changed files appears below.


.gitignore (+1)

@@ -3,6 +3,7 @@
 
 # Byte-compiled / optimized / DLL files
 __pycache__/
+**/__pycache__/
 *.py[cod]
 *$py.class

README.md (+1 −1)

@@ -118,7 +118,7 @@ When you call `Runner.run()`, we run a loop until we get a final output.
 2. The LLM returns a response, which may include tool calls.
 3. If the response has a final output (see below for the more on this), we return it and end the loop.
 4. If the response has a handoff, we set the agent to the new agent and go back to step 1.
-5. We process the tool calls (if any) and append the tool responses messsages. Then we go to step 1.
+5. We process the tool calls (if any) and append the tool responses messages. Then we go to step 1.
 
 There is a `max_turns` parameter that you can use to limit the number of times the loop executes.
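
The loop this change documents is easy to see end to end. A minimal sketch of capping it with `max_turns`, using the `Agent` and `Runner` entry points from this SDK (the agent's instructions and the prompt are illustrative):

```python
import asyncio

from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

async def main():
    # max_turns bounds how many times the loop (LLM call, then tools/handoffs)
    # may run before the SDK gives up, instead of iterating indefinitely.
    result = await Runner.run(agent, "Write a haiku about recursion.", max_turns=5)
    print(result.final_output)

asyncio.run(main())
```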

docs/multi_agent.md (+2 −2)

@@ -27,11 +27,11 @@ This pattern is great when the task is open-ended and you want to rely on the in
 
 ## Orchestrating via code
 
-While orchestrating via LLM is powerful, orchestrating via LLM makes tasks more deterministic and predictable, in terms of speed, cost and performance. Common patterns here are:
+While orchestrating via LLM is powerful, orchestrating via code makes tasks more deterministic and predictable, in terms of speed, cost and performance. Common patterns here are:
 
 - Using [structured outputs](https://platform.openai.com/docs/guides/structured-outputs) to generate well formed data that you can inspect with your code. For example, you might ask an agent to classify the task into a few categories, and then pick the next agent based on the category.
 - Chaining multiple agents by transforming the output of one into the input of the next. You can decompose a task like writing a blog post into a series of steps - do research, write an outline, write the blog post, critique it, and then improve it.
 - Running the agent that performs the task in a `while` loop with an agent that evaluates and provides feedback, until the evaluator says the output passes certain criteria.
 - Running multiple agents in parallel, e.g. via Python primitives like `asyncio.gather`. This is useful for speed when you have multiple tasks that don't depend on each other.
 
-We have a number of examples in [`examples/agent_patterns`](https://github.com/openai/openai-agents-python/examples/agent_patterns).
+We have a number of examples in [`examples/agent_patterns`](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns).
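
The parallelization pattern from that list is nearly a one-liner with `asyncio.gather`. A rough sketch, reusing the SDK's `Agent` and `Runner` (the two translator agents are illustrative):

```python
import asyncio

from agents import Agent, Runner

spanish = Agent(name="Spanish", instructions="Translate the user's message to Spanish.")
french = Agent(name="French", instructions="Translate the user's message to French.")

async def main():
    # The two runs are independent, so they proceed concurrently;
    # gather returns results in the order the coroutines were passed.
    es, fr = await asyncio.gather(
        Runner.run(spanish, "Hello, world!"),
        Runner.run(french, "Hello, world!"),
    )
    print(es.final_output)
    print(fr.final_output)

asyncio.run(main())
```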

docs/results.md (+1 −1)

@@ -32,7 +32,7 @@ The [`new_items`][agents.result.RunResultBase.new_items] property contains the n
 
 - [`MessageOutputItem`][agents.items.MessageOutputItem] indicates a message from the LLM. The raw item is the message generated.
 - [`HandoffCallItem`][agents.items.HandoffCallItem] indicates that the LLM called the handoff tool. The raw item is the tool call item from the LLM.
-- [`HandoffOutputItem`][agents.items.HandoffOutputItem] indicates that a handoff occured. The raw item is the tool response to the handoff tool call. You can also access the source/target agents from the item.
+- [`HandoffOutputItem`][agents.items.HandoffOutputItem] indicates that a handoff occurred. The raw item is the tool response to the handoff tool call. You can also access the source/target agents from the item.
 - [`ToolCallItem`][agents.items.ToolCallItem] indicates that the LLM invoked a tool.
 - [`ToolCallOutputItem`][agents.items.ToolCallOutputItem] indicates that a tool was called. The raw item is the tool response. You can also access the tool output from the item.
 - [`ReasoningItem`][agents.items.ReasoningItem] indicates a reasoning item from the LLM. The raw item is the reasoning generated.
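
These item classes are typically consumed with `isinstance` checks. A small sketch, assuming the `agents.items` exports referenced above and a `result` returned by a run:

```python
from agents.items import MessageOutputItem, ReasoningItem, ToolCallItem

def summarize(result) -> None:
    # Walk everything the run generated and report what each item was.
    for item in result.new_items:
        if isinstance(item, MessageOutputItem):
            print("message:", item.raw_item)
        elif isinstance(item, ToolCallItem):
            print("tool call:", item.raw_item)
        elif isinstance(item, ReasoningItem):
            print("reasoning:", item.raw_item)
```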

examples/agent_patterns/README.md (+1 −1)

@@ -51,4 +51,4 @@ You can definitely do this without any special Agents SDK features by using para
 
 This is really useful for latency: for example, you might have a very fast model that runs the guardrail and a slow model that runs the actual agent. You wouldn't want to wait for the slow model to finish, so guardrails let you quickly reject invalid inputs.
 
-See the [`guardrails.py`](./guardrails.py) file for an example of this.
+See the [`input_guardrails.py`](./input_guardrails.py) and [`output_guardrails.py`](./output_guardrails.py) files for examples.
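
For a flavor of what those files contain, a minimal input guardrail might look like the sketch below, assuming the `input_guardrail` decorator and `GuardrailFunctionOutput` type from this SDK (the keyword check stands in for a fast guardrail model):

```python
from agents import Agent, GuardrailFunctionOutput, input_guardrail

@input_guardrail
async def block_math_homework(context, agent, user_input) -> GuardrailFunctionOutput:
    # A cheap check runs alongside the (slower) main agent; tripping the
    # tripwire rejects the input before the slow model finishes.
    tripped = "solve for x" in str(user_input).lower()
    return GuardrailFunctionOutput(output_info=None, tripwire_triggered=tripped)

agent = Agent(
    name="Tutor",
    instructions="Help with general questions.",
    input_guardrails=[block_math_homework],
)
```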

examples/research_bot/README.md (+1 −1)

@@ -21,5 +21,5 @@ If you're building your own research bot, some ideas to add to this are:
 
 1. Retrieval: Add support for fetching relevant information from a vector store. You could use the File Search tool for this.
 2. Image and file upload: Allow users to attach PDFs or other files, as baseline context for the research.
-3. More planning and thinking: Models often produce better results given more time to think. Improve the planning process to come up with a better plan, and add an evaluation step so that the model can choose to improve it's results, search for more stuff, etc.
+3. More planning and thinking: Models often produce better results given more time to think. Improve the planning process to come up with a better plan, and add an evaluation step so that the model can choose to improve its results, search for more stuff, etc.
 4. Code execution: Allow running code, which is useful for data analysis.
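
The retrieval idea maps onto a hosted tool. A sketch of wiring it in, assuming the `FileSearchTool` exported by this SDK (the vector store ID is a placeholder you would create beforehand):

```python
from agents import Agent, FileSearchTool

researcher = Agent(
    name="Researcher",
    instructions="Ground your answers in the attached document store when relevant.",
    tools=[
        # Hosted File Search: retrieval over an OpenAI vector store.
        FileSearchTool(vector_store_ids=["vs_REPLACE_ME"], max_num_results=5),
    ],
)
```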
7 binary files not shown.

examples/tools/computer_use.py (+4 −3)

@@ -1,6 +1,5 @@
 import asyncio
 import base64
-import logging
 from typing import Literal, Union
 
 from playwright.async_api import Browser, Page, Playwright, async_playwright
@@ -16,8 +15,10 @@
     trace,
 )
 
-logging.getLogger("openai.agents").setLevel(logging.DEBUG)
-logging.getLogger("openai.agents").addHandler(logging.StreamHandler())
+# Uncomment to see very verbose logs
+# import logging
+# logging.getLogger("openai.agents").setLevel(logging.DEBUG)
+# logging.getLogger("openai.agents").addHandler(logging.StreamHandler())
 
 
 async def main():
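
The logger name in those commented-out lines is the hook if you do want the verbose output. A standalone sketch using only the standard library, mirroring what the example removed:

```python
import logging

# Opt into the SDK's logs: attach a handler and raise the level to DEBUG
# (use logging.INFO for something quieter).
logger = logging.getLogger("openai.agents")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())
```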

pyproject.toml (+2 −2)

@@ -1,6 +1,6 @@
 [project]
 name = "openai-agents"
-version = "0.0.2"
+version = "0.0.3"
 description = "OpenAI Agents SDK"
 readme = "README.md"
 requires-python = ">=3.9"
@@ -9,7 +9,7 @@ authors = [
     { name = "OpenAI", email = "[email protected]" },
 ]
 dependencies = [
-    "openai>=1.66.0",
+    "openai>=1.66.2",
     "pydantic>=2.10, <3",
     "griffe>=1.5.6, <2",
     "typing-extensions>=4.12.2, <5",

src/agents/_run_impl.py (+2 −2)

@@ -23,7 +23,7 @@
     ActionWait,
 )
 from openai.types.responses.response_input_param import ComputerCallOutput
-from openai.types.responses.response_output_item import Reasoning
+from openai.types.responses.response_reasoning_item import ResponseReasoningItem
 
 from . import _utils
 from .agent import Agent
@@ -288,7 +288,7 @@ def process_model_response(
             items.append(ToolCallItem(raw_item=output, agent=agent))
         elif isinstance(output, ResponseFunctionWebSearch):
             items.append(ToolCallItem(raw_item=output, agent=agent))
-        elif isinstance(output, Reasoning):
+        elif isinstance(output, ResponseReasoningItem):
             items.append(ReasoningItem(raw_item=output, agent=agent))
        elif isinstance(output, ResponseComputerToolCall):
             items.append(ToolCallItem(raw_item=output, agent=agent))

src/agents/agent_output.py (+1 −1)

@@ -138,7 +138,7 @@ def _type_to_str(t: type[Any]) -> str:
         # It's a simple type like `str`, `int`, etc.
         return t.__name__
     elif args:
-        args_str = ', '.join(_type_to_str(arg) for arg in args)
+        args_str = ", ".join(_type_to_str(arg) for arg in args)
         return f"{origin.__name__}[{args_str}]"
     else:
         return str(t)

src/agents/items.py (+3 −3)

@@ -19,7 +19,7 @@
     ResponseStreamEvent,
 )
 from openai.types.responses.response_input_item_param import ComputerCallOutput, FunctionCallOutput
-from openai.types.responses.response_output_item import Reasoning
+from openai.types.responses.response_reasoning_item import ResponseReasoningItem
 from pydantic import BaseModel
 from typing_extensions import TypeAlias
 
@@ -136,10 +136,10 @@ class ToolCallOutputItem(RunItemBase[Union[FunctionCallOutput, ComputerCallOutpu
 
 
 @dataclass
-class ReasoningItem(RunItemBase[Reasoning]):
+class ReasoningItem(RunItemBase[ResponseReasoningItem]):
     """Represents a reasoning item."""
 
-    raw_item: Reasoning
+    raw_item: ResponseReasoningItem
     """The raw reasoning item."""
 
     type: Literal["reasoning_item"] = "reasoning_item"

src/agents/model_settings.py (+1)

@@ -11,6 +11,7 @@ class ModelSettings:
     This class holds optional model configuration parameters (e.g. temperature,
     top_p, penalties, truncation, etc.).
     """
+
     temperature: float | None = None
     top_p: float | None = None
     frequency_penalty: float | None = None
src/agents/models/openai_responses.py (+1 −1)

@@ -361,7 +361,7 @@ def _convert_tool(cls, tool: Tool) -> tuple[ToolParam, IncludeLiteral | None]:
             includes = "file_search_call.results" if tool.include_search_results else None
         elif isinstance(tool, ComputerTool):
             converted_tool = {
-                "type": "computer-preview",
+                "type": "computer_use_preview",
                 "environment": tool.computer.environment,
                 "display_width": tool.computer.dimensions[0],
                 "display_height": tool.computer.dimensions[1],

src/agents/tool.py (+2)

@@ -284,3 +284,5 @@ def decorator(real_func: ToolFunction[...]) -> FunctionTool:
         return _create_function_tool(real_func)
 
     return decorator
+    return decorator
+    return decorator
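
The decorator this hunk touches backs the SDK's `function_tool`. A usage sketch, with a hypothetical `get_weather` function; the signature and docstring are what the SDK turns into the tool's schema:

```python
from agents import Agent, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Return a short weather report for a city."""
    return f"The weather in {city} is sunny."  # stub for illustration

agent = Agent(
    name="Weather agent",
    instructions="Answer weather questions with the tool.",
    tools=[get_weather],
)
```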

tests/LICENSE (−21)

This file was deleted.

tests/Makefile (−37)

This file was deleted.

tests/README.md (−174)

This file was deleted.
