
Commit 9b118a1

Merge branch 'main' into hironow/chore-doc-type
# Conflicts:
#	tests/docs/config.md
#	tests/docs/guardrails.md
#	tests/docs/tracing.md
2 parents: c827ecb + b6c9572

File tree

164 files changed: +48 additions, -13342 deletions


.gitignore

Lines changed: 1 addition & 0 deletions
@@ -3,6 +3,7 @@
 
 # Byte-compiled / optimized / DLL files
 __pycache__/
+**/__pycache__/
 *.py[cod]
 *$py.class

README.md

Lines changed: 1 addition & 1 deletion
@@ -118,7 +118,7 @@ When you call `Runner.run()`, we run a loop until we get a final output.
 2. The LLM returns a response, which may include tool calls.
 3. If the response has a final output (see below for the more on this), we return it and end the loop.
 4. If the response has a handoff, we set the agent to the new agent and go back to step 1.
-5. We process the tool calls (if any) and append the tool responses messsages. Then we go to step 1.
+5. We process the tool calls (if any) and append the tool responses messages. Then we go to step 1.
 
 There is a `max_turns` parameter that you can use to limit the number of times the loop executes.
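
The loop in this hunk is what `Runner.run()` drives. As a minimal usage sketch — assuming the SDK's documented `Agent`/`Runner` API and an `OPENAI_API_KEY` in the environment — the `max_turns` cap mentioned in the context lines might be used like this:

```python
import asyncio

from agents import Agent, Runner

async def main() -> None:
    agent = Agent(name="Assistant", instructions="Reply concisely.")
    # max_turns bounds how many iterations of the loop above may run
    # before the SDK aborts the run with an error instead of looping forever.
    result = await Runner.run(agent, "What is the capital of France?", max_turns=5)
    print(result.final_output)

asyncio.run(main())
```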

docs/multi_agent.md

Lines changed: 2 additions & 2 deletions
@@ -27,11 +27,11 @@ This pattern is great when the task is open-ended and you want to rely on the in
 ## Orchestrating via code
 
-While orchestrating via LLM is powerful, orchestrating via LLM makes tasks more deterministic and predictable, in terms of speed, cost and performance. Common patterns here are:
+While orchestrating via LLM is powerful, orchestrating via code makes tasks more deterministic and predictable, in terms of speed, cost and performance. Common patterns here are:
 
 - Using [structured outputs](https://platform.openai.com/docs/guides/structured-outputs) to generate well formed data that you can inspect with your code. For example, you might ask an agent to classify the task into a few categories, and then pick the next agent based on the category.
 - Chaining multiple agents by transforming the output of one into the input of the next. You can decompose a task like writing a blog post into a series of steps - do research, write an outline, write the blog post, critique it, and then improve it.
 - Running the agent that performs the task in a `while` loop with an agent that evaluates and provides feedback, until the evaluator says the output passes certain criteria.
 - Running multiple agents in parallel, e.g. via Python primitives like `asyncio.gather`. This is useful for speed when you have multiple tasks that don't depend on each other.
 
-We have a number of examples in [`examples/agent_patterns`](https://github.com/openai/openai-agents-python/examples/agent_patterns).
+We have a number of examples in [`examples/agent_patterns`](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns).
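
To make the parallel pattern from that list concrete, here is a hedged sketch of the `asyncio.gather` approach; the agent names and prompts are hypothetical:

```python
import asyncio

from agents import Agent, Runner

async def main() -> None:
    # The two tasks are independent, so the runs can proceed concurrently.
    translator = Agent(name="Translator", instructions="Translate the input to French.")
    summarizer = Agent(name="Summarizer", instructions="Summarize the input in one line.")
    text = "Orchestrating via code keeps multi-agent workflows predictable."
    translated, summary = await asyncio.gather(
        Runner.run(translator, text),
        Runner.run(summarizer, text),
    )
    print(translated.final_output)
    print(summary.final_output)

asyncio.run(main())
```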

docs/results.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ The [`new_items`][agents.result.RunResultBase.new_items] property contains the n
 - [`MessageOutputItem`][agents.items.MessageOutputItem] indicates a message from the LLM. The raw item is the message generated.
 - [`HandoffCallItem`][agents.items.HandoffCallItem] indicates that the LLM called the handoff tool. The raw item is the tool call item from the LLM.
-- [`HandoffOutputItem`][agents.items.HandoffOutputItem] indicates that a handoff occured. The raw item is the tool response to the handoff tool call. You can also access the source/target agents from the item.
+- [`HandoffOutputItem`][agents.items.HandoffOutputItem] indicates that a handoff occurred. The raw item is the tool response to the handoff tool call. You can also access the source/target agents from the item.
 - [`ToolCallItem`][agents.items.ToolCallItem] indicates that the LLM invoked a tool.
 - [`ToolCallOutputItem`][agents.items.ToolCallOutputItem] indicates that a tool was called. The raw item is the tool response. You can also access the tool output from the item.
 - [`ReasoningItem`][agents.items.ReasoningItem] indicates a reasoning item from the LLM. The raw item is the reasoning generated.
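
As a rough illustration of the item taxonomy in that hunk, a run's `new_items` can be filtered with `isinstance` checks; a minimal sketch, assuming the import paths from the doc's own reference links:

```python
import asyncio

from agents import Agent, Runner
from agents.items import MessageOutputItem, ToolCallItem, ToolCallOutputItem

async def main() -> None:
    agent = Agent(name="Assistant", instructions="Answer briefly.")
    result = await Runner.run(agent, "Say hello.")
    # Each generated item is typed, so you can branch on its kind.
    for item in result.new_items:
        if isinstance(item, MessageOutputItem):
            print("LLM message:", item.raw_item)
        elif isinstance(item, ToolCallItem):
            print("tool call:", item.raw_item)
        elif isinstance(item, ToolCallOutputItem):
            print("tool output:", item.output)

asyncio.run(main())
```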

examples/agent_patterns/README.md

Lines changed: 1 addition & 1 deletion
@@ -51,4 +51,4 @@ You can definitely do this without any special Agents SDK features by using para
 This is really useful for latency: for example, you might have a very fast model that runs the guardrail and a slow model that runs the actual agent. You wouldn't want to wait for the slow model to finish, so guardrails let you quickly reject invalid inputs.
 
-See the [`guardrails.py`](./guardrails.py) file for an example of this.
+See the [`input_guardrails.py`](./input_guardrails.py) and [`output_guardrails.py`](./output_guardrails.py) files for examples.
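
For readers following the renamed files, here is a hedged sketch of the input-guardrail pattern the paragraph describes. The blank-input check is a trivial stand-in for the "very fast model", and the type and decorator names follow the SDK's guardrail docs as I understand them:

```python
import asyncio

from agents import (
    Agent,
    GuardrailFunctionOutput,
    InputGuardrailTripwireTriggered,
    Runner,
    input_guardrail,
)

@input_guardrail
async def reject_blank_input(context, agent, user_input) -> GuardrailFunctionOutput:
    # Stand-in for a fast guardrail model: trip the wire on empty input.
    return GuardrailFunctionOutput(
        output_info=None,
        tripwire_triggered=not str(user_input).strip(),
    )

agent = Agent(
    name="Assistant",
    instructions="Help the user.",
    input_guardrails=[reject_blank_input],
)

async def main() -> None:
    try:
        await Runner.run(agent, "   ")
    except InputGuardrailTripwireTriggered:
        # The slow agent never finishes; the invalid input is rejected early.
        print("Input rejected by guardrail.")

asyncio.run(main())
```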

examples/research_bot/README.md

Lines changed: 1 addition & 1 deletion
@@ -21,5 +21,5 @@ If you're building your own research bot, some ideas to add to this are:
 1. Retrieval: Add support for fetching relevant information from a vector store. You could use the File Search tool for this.
 2. Image and file upload: Allow users to attach PDFs or other files, as baseline context for the research.
-3. More planning and thinking: Models often produce better results given more time to think. Improve the planning process to come up with a better plan, and add an evaluation step so that the model can choose to improve it's results, search for more stuff, etc.
+3. More planning and thinking: Models often produce better results given more time to think. Improve the planning process to come up with a better plan, and add an evaluation step so that the model can choose to improve its results, search for more stuff, etc.
 4. Code execution: Allow running code, which is useful for data analysis.
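
For idea 1 in that list, a minimal sketch of wiring in retrieval via the hosted File Search tool — assuming the SDK's `FileSearchTool`; the vector store ID is a placeholder you would create and populate first:

```python
from agents import Agent, FileSearchTool

research_agent = Agent(
    name="Researcher",
    instructions="Ground your answers in the retrieved documents.",
    tools=[
        # Placeholder ID: create a vector store and upload files beforehand.
        FileSearchTool(vector_store_ids=["VECTOR_STORE_ID"]),
    ],
)
```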
4 binary files not shown.
