
Conversation

@jinseopkim0 jinseopkim0 (Contributor) commented Oct 20, 2025

When a query job fails and the subsequent `getQueryResults` API call also fails with a retryable error (e.g., `rateLimitExceeded`), the `Job.waitFor()` method would enter a retry loop without checking the underlying job's status. This caused the client to hang indefinitely, only ending when the total timeout was reached.

This fix addresses the issue by intercepting the retryable exception within the `waitForQueryResults` polling loop. Before proceeding with a retry, the code now makes an additional `getJob()` call to check the job's actual status. If the job is already in a terminal `DONE` state, the retry loop is immediately terminated, and the final job status is returned to the user.
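
Roughly, the new behavior looks like the following simplified sketch (illustrative only, not the exact patch; the helper name and standalone loop are placeholders for the client's internal polling code):

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQuery.QueryResultsOption;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobStatus;
import com.google.cloud.bigquery.QueryResponse;

final class WaitForSketch {

  /**
   * Simplified polling loop: when getQueryResults fails with a retryable
   * error, consult the job itself before retrying.
   */
  static Job waitForQueryJob(BigQuery bigquery, Job job, QueryResultsOption... options) {
    while (true) {
      try {
        QueryResponse response = bigquery.getQueryResults(job.getJobId(), options);
        if (response.getCompleted()) {
          // Query finished; reload the job so the caller sees its final status.
          return bigquery.getJob(job.getJobId());
        }
      } catch (BigQueryException e) {
        if (!e.isRetryable()) {
          throw e; // non-retryable errors surface immediately
        }
        // The added check: if the job is already DONE (e.g. it failed),
        // retrying getQueryResults would only spin until the total timeout,
        // so stop and hand the terminal job back to the caller instead.
        Job current = bigquery.getJob(job.getJobId());
        if (current != null
            && current.getStatus() != null
            && JobStatus.State.DONE.equals(current.getStatus().getState())) {
          return current;
        }
      }
      // Back off before the next poll (backoff/jitter elided for brevity).
    }
  }
}
```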

A regression test has been added to simulate a similar failure scenario, ensuring the client no longer hangs and correctly returns the failed job.

Fixes: b/451741841

@product-auto-label product-auto-label bot added the size: m and api: bigquery labels Oct 20, 2025
@jinseopkim0 jinseopkim0 force-pushed the job-wait branch 6 times, most recently from 9809f7d to 53d6cd8 Compare October 20, 2025 21:39
@jinseopkim0 jinseopkim0 added the kokoro:force-run and kokoro:run labels Oct 20, 2025
@yoshi-kokoro yoshi-kokoro removed the kokoro:run and kokoro:force-run labels Oct 20, 2025
@jinseopkim0 jinseopkim0 marked this pull request as ready for review October 20, 2025 23:15
@jinseopkim0 jinseopkim0 requested a review from a team as a code owner October 20, 2025 23:15
@jinseopkim0 jinseopkim0 requested review from lqiu96 and suzmue October 20, 2025 23:15
```diff
   public QueryResponse call() {
-    return bigquery.getQueryResults(getJobId(), resultsOptions);
+    try {
+      return bigquery.getQueryResults(getJobId(), resultsOptions);
```
@lqiu96 lqiu96 (Member) commented Oct 21, 2025

I think code-wise this probably will work, but I'm not sure I'm the best person to review for logic (yet). I think for this PR someone more familiar with bigquery's internals would be better since this impacts all the RPCs.

I don't fully understand jobs vs. queries yet, and to me this seems odd. Shouldn't `getQueryResults` be checking the job status to even get the query results? How does the query get results if the job didn't finish? It seems odd that if the server returns a rate limit exception, we bypass the response, directly poll for the job result, and return a dummy value.

If there is a RateLimitException from the server, I think the first thing would be to have stronger backoff requirements so we ease quota/load. I think the default 12-hour timeout is too much (unless BQ has long-running jobs, which I'm not familiar enough with to know).

I think if we do support user configurations, then perhaps the user should shorten the total timeout and increase the backoff so they're not sitting waiting 12+ hours.
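
For reference, callers can already tighten both knobs when waiting; a minimal sketch, assuming the `com.google.cloud.RetryOption` options accepted by `Job.waitFor()` (shown with the threeten-bp `Duration` overloads; newer releases also expose `java.time` variants):

```java
import com.google.cloud.RetryOption;
import com.google.cloud.bigquery.Job;
import org.threeten.bp.Duration;

final class WaitBudgetSketch {

  /** Waits for a job with a much tighter budget than the 12-hour default. */
  static Job waitWithTighterBudget(Job job) throws InterruptedException {
    return job.waitFor(
        RetryOption.initialRetryDelay(Duration.ofSeconds(2)),
        RetryOption.retryDelayMultiplier(2.0),
        RetryOption.maxRetryDelay(Duration.ofMinutes(1)),
        RetryOption.totalTimeout(Duration.ofMinutes(10)));
  }
}
```

Here `waitFor` returns the job in its terminal state (or `null` if the job no longer exists), so a failed query is reported through its `JobStatus` rather than by an exception.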

Perhaps Phong could give us more insight?

@jinseopkim0 jinseopkim0 (Contributor, Author) commented
Thank you Lawrence for the feedback. @PhongChuong, could you please take a look?

@PhongChuong PhongChuong (Contributor) commented Nov 13, 2025

I believe this should work, but I think we should be reluctant to add even more logic to our already complicated retry handling.

Can we verify a few things first before proceeding? I'm thinking out loud:

  1. It seems weird to me that `shouldRetry` on 494 is true when, semantically, we expect `prevResponse.getCompleted()` to be true since the job is `DONE`. The value is set here based on `GetQueryResultsResponse`. Can you verify whether the server responds with the correct `JobStatus`, matching that of `getJob`?
  2. I wonder if we should add the `getJob` + `jobStatus` check into the `shouldRetry` decision itself instead (roughly as sketched below).
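
A rough sketch of what folding that check into the retry decision could look like (a hypothetical standalone predicate, not the library's actual `shouldRetry` hook):

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.JobStatus;

final class RetryDecisionSketch {

  /**
   * Retry getQueryResults only if the error is retryable AND the underlying
   * job has not already reached a terminal DONE state.
   */
  static boolean shouldRetryGetQueryResults(BigQuery bigquery, JobId jobId, Throwable error) {
    if (!(error instanceof BigQueryException) || !((BigQueryException) error).isRetryable()) {
      return false; // non-retryable errors surface immediately
    }
    Job current = bigquery.getJob(jobId);
    if (current != null
        && current.getStatus() != null
        && JobStatus.State.DONE.equals(current.getStatus().getState())) {
      return false; // the job is already terminal; retrying cannot change the outcome
    }
    return true;
  }
}
```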

Regarding the timeout, it is currently possible to set that value. However, our default is, as @lqiu96 said, extremely long at 12 hours. IIRC, we had a brief discussion about changing this value, but there was no consensus on moving forward. It might be useful to bring this up again during the next BigQuery meeting.
