Responses API Streaming: token too long error #368
Comments
We've also seen this issue. It happens when a single SSE event is larger than `bufio.MaxScanTokenSize` (64 KiB), the default maximum token size for a `bufio.Scanner`. This is the construction site for the Scanner instance being overflowed: openai-go/packages/ssestream/ssestream.go, lines 24 to 38 in c0414f1.
Would you be able to add an option to pass a desired max size as a RequestOption?
Experiencing the same issue here with GPT-4.1. For now, just doing:

```go
// Check for any stream errors
if err := stream.Err(); err != nil {
	// Ignore the spurious error when the last output item already completed.
	if stream.Current().Type == string(constant.ResponseOutputItemDone("").Default()) {
		return
	}
	logger.Error(ctx, "Streaming error: %v", err)
	errorChannel <- fmt.Errorf("streaming error: %w", err)
}
```
When using the `client.Responses.NewStreaming()` method, I have noticed intermittent errors from the underlying `bufio.Scanner`, resulting in this error:

```
bufio.Scanner: token too long
```