test_quantized_training is flaky #6703
Comments
This is perhaps due to nondeterministic results across different environments. I'll look into this and provide a more deterministic check for this test case.
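As a rough illustration of what a more environment-tolerant check could look like (a minimal sketch, not the actual test; the dataset, parameters, and threshold below are placeholder assumptions), the assertion could bound the model's error loosely instead of comparing against values that may differ slightly across environments:

```python
# Minimal sketch (placeholder data/parameters, not LightGBM's actual test):
# train with quantized gradients and check error against a loose, relative bound
# so small per-environment floating-point differences cannot flip the result.
import numpy as np
import lightgbm as lgb
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=1_000, n_features=10, noise=0.1, random_state=42)
train_data = lgb.Dataset(X, label=y)

params = {
    "objective": "regression",
    "use_quantized_grad": True,   # enable quantized training (LightGBM >= 4.0)
    "num_grad_quant_bins": 4,     # number of bins for gradient quantization
    "seed": 42,
    "verbose": -1,
}
booster = lgb.train(params, train_data, num_boost_round=50)

rmse = np.sqrt(mean_squared_error(y, booster.predict(X)))
# Relative bound rather than an exact expected value; the 0.5 factor is arbitrary.
assert rmse < 0.5 * np.std(y), f"quantized training RMSE unexpectedly high: {rmse}"
```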
The second failure just happened: https://github.com/microsoft/LightGBM/actions/runs/12387337319/job/34576641552?pr=6761#step:5:7146
Interestingly, the numbers are the same.
Two more:
Here's another one:
CUDA again, same numbers:
Here's another one, exact same numbers: https://github.com/microsoft/LightGBM/actions/runs/13105210299/job/36558841829?pr=6808#step:5:7777. I think it's really interesting that the numbers are identical out to 10+ significant digits.
Just noticed the test failure in master with the cuda 11.8.0 pip (ubuntu20.04, clang, Python 3.11) job.
LightGBM/tests/python_package_test/test_engine.py
Lines 4564 to 4579 in 4a60a53
I don't think we should do anything right now. Just posting this issue to count future failures, similarly to how we do in #4074.
cc @shiyu1994 @jameslamb
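For anyone trying to reproduce this locally, one option is to rerun just this test repeatedly from the repository root (a sketch under assumptions: the `-k` expression is inferred from the test name in the title, and `--count` requires the pytest-repeat plugin):

```python
# Sketch: rerun only the quantized-training test repeatedly to surface flakiness.
# The -k expression is inferred from the test name; --count needs pytest-repeat.
import pytest

pytest.main([
    "tests/python_package_test/test_engine.py",
    "-k", "quantized_training",
    "-x",           # stop at the first failure
    "--count=50",   # repeat each selected test 50 times (pytest-repeat plugin)
])
```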