fix(bigquery): Prevent Job.waitFor() from hanging on failed query jobs #3982
base: main
Conversation
Force-pushed from 9809f7d to 53d6cd8
When a query job fails and the subsequent `getQueryResults` API call also fails with a retryable error (e.g., `rateLimitExceeded`), the `Job.waitFor()` method would enter a retry loop without checking the underlying job's status. This caused the client to hang indefinitely, only ending when the total timeout was reached.

This fix addresses the issue by intercepting the retryable exception within the `waitForQueryResults` polling loop. Before proceeding with a retry, the code now makes an additional `getJob()` call to check the job's actual status. If the job is already in a terminal `DONE` state, the retry loop is immediately terminated, and the final job status is returned to the user.

A regression test has been added to simulate this specific failure scenario, ensuring the client no longer hangs and correctly returns the failed job.

Fixes: b/451741841
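For illustration, here is a minimal, self-contained sketch of the guard described above. The actual change (the start of which appears in the diff below) is wired into `Job`'s existing polling loop; the method name `pollUntilDone`, the `pollMillis` parameter, and the fixed-interval backoff are hypothetical simplifications:

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.JobStatus;
import com.google.cloud.bigquery.QueryResponse;

public class QueryPollSketch {
  // Poll getQueryResults, but on a retryable failure consult the job's own
  // status so a job that has already finished (possibly with an error) ends
  // the loop instead of retrying until the total timeout expires.
  static Job pollUntilDone(BigQuery bigquery, JobId jobId, long pollMillis)
      throws InterruptedException {
    while (true) {
      try {
        QueryResponse response = bigquery.getQueryResults(jobId);
        if (response.getCompleted()) {
          return bigquery.getJob(jobId); // query finished; return the final job
        }
      } catch (BigQueryException e) {
        if (!e.isRetryable()) {
          throw e;
        }
        // Retryable error (e.g. rateLimitExceeded): check the job itself
        // before looping again. A DONE job cannot change, so stop retrying.
        Job job = bigquery.getJob(jobId);
        if (job != null
            && job.getStatus() != null
            && job.getStatus().getState() == JobStatus.State.DONE) {
          return job; // terminal state; surfaces the failed job to the caller
        }
      }
      Thread.sleep(pollMillis); // simplified fixed-interval backoff
    }
  }
}
```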
```diff
 public QueryResponse call() {
-  return bigquery.getQueryResults(getJobId(), resultsOptions);
+  try {
+    return bigquery.getQueryResults(getJobId(), resultsOptions);
```
I think code-wise this probably will work, but I'm not sure I'm the best person to review for logic (yet). I think for this PR someone more familiar with bigquery's internals would be better since this impacts all the RPCs.
I don't fully understand jobs vs. queries yet, and to me this seems odd. Shouldn't getQueryResults be checking the job status to even get the query results? How does the query get results if the job didn't finish? It seems odd that when the server returns a rate limit exception, we attempt to bypass the response, directly poll for the job result, and return a dummy value.
If there is a RateLimitException from the server, I think the first thing would be to enforce stronger backoff requirements so we ease quota/load. I also think the default 12-hour timeout is too long (unless BQ has long-running jobs, which I'm not familiar enough to know).
If we do support user configuration, then perhaps the user should shorten the total timeout and increase the backoff so they're not sitting waiting 12+ hours (see the sketch after this comment).
Perhaps Phong could give us more insight?
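For reference, a user can already tighten these bounds today via `RetryOption` when calling `Job.waitFor()`. The specific durations below are illustrative, not recommendations, and depending on the client version `Duration` comes from `org.threeten.bp` or `java.time`:

```java
import com.google.cloud.RetryOption;
import com.google.cloud.bigquery.Job;
// Adjust this import to java.time.Duration on newer client versions.
import org.threeten.bp.Duration;

public class WaitForSketch {
  static Job waitWithTighterBounds(Job job) throws InterruptedException {
    // Cap the overall wait well below the 12-hour default, and stretch the
    // backoff so repeated rateLimitExceeded responses ease off the quota.
    return job.waitFor(
        RetryOption.totalTimeout(Duration.ofMinutes(30)),
        RetryOption.initialRetryDelay(Duration.ofSeconds(5)),
        RetryOption.retryDelayMultiplier(2.0),
        RetryOption.maxRetryDelay(Duration.ofMinutes(2)));
  }
}
```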
Thank you, Lawrence, for the feedback. @PhongChuong, could you please take a look?
I believe this should work, but I think we should be reluctant to add even more logic to our already complicated retry logic.
Can we verify a few things first before proceeding? I'm thinking out loud:
- It seems weird to me that shouldRetry on 494 should be True when, semantically, we expect prevResponse.getCompleted() to be true since the Job == DONE. The value is set here based on GetQueryResultsResponse. Can you verify whether the server responds with a JobStatus that matches that of getJob?
- I wonder if we should add the getJob + jobStatus check logic into the shouldRetry section instead (sketched after this comment).
Regarding the timeout, it is currently possible to set that value. However, our default is, as @lqiu96 said, extremely long at 12 hours. IIRC, we had a brief discussion about changing this value, but there was no consensus on moving forward. It might be useful to bring this up again during the next BigQuery meeting.
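If the check were moved into the retry predicate as suggested above, it might look roughly like the following. This shape — a standalone shouldRetry taking the BigQuery client and a JobId — is hypothetical and does not match the client's actual internal retry interface:

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.JobStatus;

public class ShouldRetrySketch {
  // Hypothetical predicate: even a retryable error should not be retried
  // once the job itself is terminal, since the outcome can no longer change.
  static boolean shouldRetry(BigQueryException error, BigQuery bigquery, JobId jobId) {
    if (!error.isRetryable()) {
      return false;
    }
    Job job = bigquery.getJob(jobId);
    return job == null
        || job.getStatus() == null
        || job.getStatus().getState() != JobStatus.State.DONE;
  }
}
```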