Commit 2d8d01e

tightened up language to make the responses cookbooks more readable. (#1847)
1 parent 56f72c5 commit 2d8d01e

File tree

3 files changed

+21
-16
lines changed


examples/responses_api/reasoning_items.ipynb

Lines changed: 10 additions & 7 deletions
@@ -287,7 +287,8 @@
 "metadata": {},
 "source": [
 "## Caching\n",
-"As illustrated above, reasoning models produce both reasoning tokens and completion tokens that are treated differently in the API today. This also has implications for cache utilization and latency. To illustrate the point, we include this helpful sketch.\n",
+"\n",
+"As shown above, reasoning models generate both reasoning tokens and completion tokens, which the API handles differently. This distinction affects how caching works and impacts both performance and latency. The following diagram illustrates these concepts:\n",
 "\n",
 "![reasoning-context](../../images/responses-diagram.png)"
 ]
@@ -296,7 +297,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"In turn 2, reasoning items from turn 1 are ignored and stripped, since the model doesn't reuse reasoning items from previous turns. This makes it impossible to get a full cache hit on the fourth API call in the diagram above, as the prompt now omits those reasoning items. However, including them does no harm—the API will automatically remove any reasoning items that aren't relevant for the current turn. Note that caching only matters for prompts longer than 1024 tokens. In our tests, switching from Completions to the Responses API increased cache utilization from 40% to 80%. Better cache utilization means better economics, since cached tokens are billed much less: for `o4-mini`, cached input tokens are 75% cheaper than uncached ones. Latency also improves."
+"In turn 2, any reasoning items from turn 1 are ignored and removed, since the model does not reuse reasoning items from previous turns. As a result, the fourth API call in the diagram cannot achieve a full cache hit, because those reasoning items are missing from the prompt. However, including them is harmless—the API will simply discard any reasoning items that aren't relevant for the current turn. Keep in mind that caching only impacts prompts longer than 1024 tokens. In our tests, switching from the Completions API to the Responses API boosted cache utilization from 40% to 80%. Higher cache utilization leads to lower costs (for example, cached input tokens for `o4-mini` are 75% cheaper than uncached ones) and improved latency."
 ]
 },
 {
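The cache-economics figures in the cell above can be sanity-checked with simple arithmetic. A minimal sketch, assuming a placeholder price of $1 per million input tokens; only the 75% cached-token discount and the 40%-to-80% utilization figures come from the text:

```python
def input_cost(total_tokens, cache_hit_rate, price_per_token, cached_discount=0.75):
    """Blended input-token cost for a given cache hit rate.

    Cached tokens are billed at (1 - cached_discount) of the normal price,
    e.g. 75% cheaper, the o4-mini figure quoted in the text.
    """
    cached = total_tokens * cache_hit_rate
    uncached = total_tokens - cached
    return uncached * price_per_token + cached * price_per_token * (1 - cached_discount)

# Hypothetical price: $1 per 1M input tokens, applied to a 1M-token prompt.
price = 1.0 / 1_000_000
before = input_cost(1_000_000, 0.40, price)  # 40% utilization (Completions)
after = input_cost(1_000_000, 0.80, price)   # 80% utilization (Responses API)
print(f"${before:.2f} -> ${after:.2f}")      # prints "$0.70 -> $0.40"
```

Under these assumptions, doubling cache utilization cuts the blended input cost by over 40%, before counting the latency benefit.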
@@ -305,13 +306,15 @@
 "source": [
 "## Encrypted Reasoning Items\n",
 "\n",
-"For organizations that can't use the Responses API statefully due to compliance or data retention requirements (such as [Zero Data Retention](https://openai.com/enterprise-privacy/)), we've introduced [encrypted reasoning items](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#encrypted-reasoning-items). This lets you get all the benefits of reasoning items while keeping your workflow stateless.\n",
+"Some organizations—such as those with [Zero Data Retention (ZDR)](https://openai.com/enterprise-privacy/) requirements—cannot use the Responses API in a stateful way due to compliance or data retention policies. To support these cases, OpenAI offers [encrypted reasoning items](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#encrypted-reasoning-items), allowing you to keep your workflow stateless while still benefiting from reasoning items.\n",
 "\n",
-"To use this, simply add `[\"reasoning.encrypted_content\"]` to the `include` field. You'll receive an encrypted version of the reasoning tokens, which you can pass back to the API just as you would with regular reasoning items.\n",
+"To use encrypted reasoning items:\n",
+"- Add `[\"reasoning.encrypted_content\"]` to the `include` field in your API call.\n",
+"- The API will return an encrypted version of the reasoning tokens, which you can pass back in future requests just like regular reasoning items.\n",
 "\n",
-"For Zero Data Retention (ZDR) organizations, OpenAI enforces `store=false` at the API level. When a request arrives, the API checks for any `encrypted_content` in the payload. If present, it's decrypted in-memory using keys only OpenAI can access. This decrypted reasoning (chain-of-thought) is never written to disk and is used only for generating the next response. Any new reasoning tokens are immediately encrypted and returned to you. All transient data—including decrypted inputs and model outputs—is securely discarded after the response, with no intermediate state persisted, ensuring full ZDR compliance.\n",
+"For ZDR organizations, OpenAI enforces `store=false` automatically. When a request includes `encrypted_content`, it is decrypted in-memory (never written to disk), used for generating the next response, and then securely discarded. Any new reasoning tokens are immediately encrypted and returned to you, ensuring no intermediate state is ever persisted.\n",
 "\n",
-"Here’s a quick update to the earlier code snippet to show how this works:"
+"Here’s a quick code update to show how this works:"
 ]
 },
 {
@@ -451,7 +454,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Reasoning summary text enables you to design user experiences where users can peek into the model's thought process. For example, in conversations involving multiple function calls, users can see not only which function calls are made, but also the reasoning behind each tool call—without having to wait for the final assistant message. This provides greater transparency and interactivity in your application's UX."
+"Reasoning summary text lets you give users a window into the model's thought process. For example, during conversations with multiple function calls, users can see both which functions were called and the reasoning behind each call—without waiting for the final assistant message. This adds transparency and interactivity to your application’s user experience."
 ]
 },
 {
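The stateless flow described in these cells can be sketched as plain request payloads. This is only a shape sketch: the ciphertext and assistant answer below are fabricated placeholders, and a real call would go through `client.responses.create(...)` in the `openai` SDK (which needs an API key), so no network request is made here:

```python
def build_request(input_items):
    """Assemble a Responses-API-style request for a stateless ZDR workflow."""
    return {
        "model": "o4-mini",
        "input": input_items,
        "store": False,  # ZDR organizations have this enforced automatically
        "include": ["reasoning.encrypted_content"],  # ask for encrypted reasoning
    }

# Turn 1: a plain user message.
turn1 = build_request([{"role": "user", "content": "What is 9 * 7?"}])

# Pretend output from turn 1: the API would return output items, including a
# reasoning item whose encrypted_content is an opaque blob (placeholder here).
fake_turn1_output = [
    {"type": "reasoning", "encrypted_content": "gAAAA-placeholder-ciphertext"},
    {"type": "message", "role": "assistant", "content": "63"},
]

# Turn 2: pass the prior output back verbatim—encrypted reasoning item
# included—plus the new user message. The API decrypts it in-memory.
turn2 = build_request(
    fake_turn1_output + [{"role": "user", "content": "Now divide that by 3."}]
)
```

The key point is that the encrypted item is treated as an opaque token: you never decrypt it client-side, you just echo it back on the next turn.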

examples/responses_api/responses_example.ipynb

Lines changed: 10 additions & 9 deletions
@@ -4,18 +4,19 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## What is the Responses API\n",
+"## What is the Responses API?\n",
 "\n",
-"The Responses API is a new API that focuses on greater simplicity and greater expressivity when using our APIs. It is designed for multiple tools, multiple turns, and multiple modalities — as opposed to current APIs, which either have these features bolted onto an API designed primarily for text in and out (chat completions) or need a lot bootstrapping to perform simple actions (assistants api).\n",
+"The Responses API is a new way to interact with OpenAI models, designed to be simpler and more flexible than previous APIs. It makes it easy to build advanced AI applications that use multiple tools, handle multi-turn conversations, and work with different types of data (not just text).\n",
 "\n",
-"Here I will show you a couple of new features that the Responses API has to offer and tie it all together at the end.\n",
-"`responses` solves for a number of user painpoints with our current set of APIs. During our time with the completions API, we found that folks wanted:\n",
+"Unlike older APIs—such as Chat Completions, which were built mainly for text, or the Assistants API, which can require a lot of setup—the Responses API is built from the ground up for:\n",
 "\n",
-"- the ability to easily perform multi-turn model interactions in a single API call\n",
-"- to have access to our hosted tools (file_search, web_search, code_interpreter)\n",
-"- granular control over the context sent to the model\n",
+"- Seamless multi-turn interactions (carry on a conversation across several steps in a single API call)\n",
+"- Easy access to powerful hosted tools (like file search, web search, and code interpreter)\n",
+"- Fine-grained control over the context you send to the model\n",
 "\n",
-"As models start to develop longer running reasoning and thinking capabilities, users will want an async-friendly and stateful primitive. Response solves for this. \n"
+"As AI models become more capable of complex, long-running reasoning, developers need an API that is both asynchronous and stateful. The Responses API is designed to meet these needs.\n",
+"\n",
+"In this guide, you'll see some of the new features the Responses API offers, along with practical examples to help you get started."
 ]
 },
 {
@@ -181,7 +182,7 @@
 "\n",
 "Another benefit of the Responses API is that it adds support for hosted tools like `file_search` and `web_search`. Instead of manually calling the tools, simply pass in the tools and the API will decide which tool to use and use it.\n",
 "\n",
-"Here is an example of using the `web_search` tool to incorporate web search results into the response. You may already be familiar with how ChatGPT can search the web. You can now build similar experiences too! The web search tool uses the OpenAI Index, the one that powers the web search in ChatGPT, having being optimized for chat applications.\n"
+"Here is an example of using the `web_search` tool to incorporate web search results into the response."
 ]
 },
 {
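The features listed in the rewritten intro—multi-turn chaining and hosted tools—can be sketched as request shapes. `previous_response_id` and the `web_search_preview` tool type come from the Responses API docs at the time of this commit (the tool type name may differ in later versions); the function below is a stand-in for `client.responses.create(...)` so that no live API call is needed:

```python
def create_response(input_text, previous_response_id=None, tools=None):
    """Build the request that client.responses.create(...) would send.

    The real call also executes the request and returns output items plus a
    response id; here we only assemble the payload.
    """
    request = {"model": "gpt-4o", "input": input_text}
    if previous_response_id is not None:
        # Server-side state: the API stitches in the prior turn's context,
        # so you don't resend the whole conversation.
        request["previous_response_id"] = previous_response_id
    if tools is not None:
        request["tools"] = tools  # the API decides when to invoke a tool
    return request

# Turn 1: let the hosted web-search tool gather results.
turn1 = create_response(
    "Find a positive news story from today.",
    tools=[{"type": "web_search_preview"}],
)

# Turn 2: chain off the prior turn by id ("resp_123" is a placeholder for
# the id the real API would have returned).
turn2 = create_response(
    "Summarize it in one sentence.",
    previous_response_id="resp_123",
)
```

Passing the tool list and a prior response id is all the plumbing required; tool selection and context reconstruction happen server-side.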

registry.yaml

Lines changed: 1 addition & 0 deletions
@@ -4,6 +4,7 @@
 # should build pages for, and indicates metadata such as tags, creation date and
 # authors for each page.
 
+
 - title: Better performance from reasoning models using the Responses API
   path: examples/responses_api/reasoning_items.ipynb
   date: 2025-05-11
