Confirm this is an issue with the Python library and not an underlying OpenAI API
- [x] This is an issue with the Python library
Describe the bug
Description
I am currently using Python 3.11.9 and the OpenAI Python SDK version 1.53.0. In my service, I am leveraging the beta feature of GPT structured output with streaming mode. However, I have encountered an intermittent error:
```
EOF while parsing a value at line 3 column 0
```
This error occurs during the for loop in the following code snippet:
Upon investigation, I discovered that this issue is caused by leading whitespace (`\n\n`) in the JSON content returned by the API. Instead of receiving a clean JSON object like `{"foo": "bar"}`, the response sometimes includes leading newlines, such as `\n\n{"foo": "bar"}`. This appears to trigger the EOF error during streaming.
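The "line 3 column 0" in the error is a clue: at the moment the parse runs, the accumulated content is just the two newlines, so the parser hits end of input on line 3 without finding a value. Python's stdlib `json` module (used here only as a stand-in for the SDK's internal JSON parser) reports essentially the same position:

```python
import json

# Once the real payload has arrived, leading newlines are harmless:
json.loads('\n\n{"foo": "bar"}')  # -> {'foo': 'bar'}

# But a buffer that so far contains ONLY the leading newlines has no
# value to parse -- the parser reaches end-of-input on line 3:
try:
    json.loads("\n\n")
except json.JSONDecodeError as exc:
    print(exc)  # Expecting value: line 3 column 1 (char 2)
```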
The root cause seems to lie in the following code from `openai/lib/streaming/chat/_completions.py`:

```python
if (
    choice_snapshot.message.content
    and not choice_snapshot.message.refusal
    and is_given(self._rich_response_format)
):
    choice_snapshot.message.parsed = from_json(
        bytes(choice_snapshot.message.content, "utf-8"),
        partial_mode=True,
    )
```
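Replaying a stream by hand illustrates the failure mode. This is a self-contained simulation, not the SDK's actual code path: the chunk list is invented, and stdlib `json.loads` stands in for the `from_json(..., partial_mode=True)` call above (a real partial-mode parser tolerates truncated JSON such as `{"foo"`, but a whitespace-only buffer still contains no value at all):

```python
import json

# Invented deltas: the model's output sometimes begins with "\n\n".
chunks = ["\n", "\n", '{"foo"', ': "bar"}']

buffer = ""
parsed = None
for delta in chunks:
    buffer += delta
    # The snapshot is re-parsed after every chunk; for the first two
    # chunks the buffer is whitespace-only, so no parser can succeed.
    try:
        parsed = json.loads(buffer)
    except json.JSONDecodeError as exc:
        print(f"buffer={buffer!r}: {exc}")

print(parsed)  # {'foo': 'bar'} once the full object has arrived
```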
Steps Taken
To address this issue, I tried modifying my prompt to explicitly enforce strict JSON formatting. My prompt included the following instructions:
```text
Note: You MUST generate structured JSON responses.
- Respond ONLY with a valid JSON object.
- DO NOT include any leading or trailing whitespace, newlines (`\n`), or characters outside the JSON object.
- The response MUST start with `{{` and end with `}}`.
```
Unfortunately, this did not resolve the issue.
Proposed Solution
I created a pull request to address this problem. The PR ensures that leading whitespace (e.g., \n\n) is handled properly during streaming to prevent this EOF error.
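For reference, the kind of guard the fix amounts to can be sketched in a few lines. This is an illustration, not the PR itself: `safe_parse` is a hypothetical helper, and stdlib `json.loads` again stands in for the SDK's partial-mode parser:

```python
import json

def safe_parse(content: str):
    """Hypothetical guard: skip parsing until non-whitespace has arrived."""
    stripped = content.lstrip()
    if not stripped:
        return None  # buffer holds only whitespace so far -- nothing to parse
    return json.loads(stripped)

print(safe_parse("\n\n"))                # None -- no EOF error raised
print(safe_parse('\n\n{"foo": "bar"}'))  # {'foo': 'bar'}
```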
I have tested this solution locally, and it resolves the issue in my environment. However, if there are any potential side effects or areas that might be impacted by this change, I would appreciate feedback from the team.
Thank you!
To Reproduce
1. Use the beta structured output feature in combination with streaming mode.
2. Observe that the issue occurs intermittently: sometimes the streaming works as expected, but other times it fails with the `EOF while parsing a value` error shown above.
I came across this issue and reviewed it, but my situation seems to be slightly different. In my case, I am using structured output in combination with streaming mode, which doesn't appear to be the setup described in this issue. For this reason, I have opened a separate issue to address my specific scenario.
Code snippets
OS
macOS Sequoia 15.2 (24C101)
Python version
Python v3.11.9
Library version
openai v1.53.0