Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##              main        #5   +/-  ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files            3         3
  Lines          281       372   +91
=========================================
+ Hits           281       372   +91
Tests & coverage passing; ready for review or merge. @oscardssmith I have contributor rights on the repo, obviously, but I think it's a good idea to have a second pair of eyes on the code before merging.
```julia
end
end

function _loggamma(x::Float32)
```
Can we unify this (ideally with both the Float64 and BigFloat versions)? Nothing that we're doing here should be Float32-specific.
Yes, we can. I split it because the implementations may diverge in the future, but I'm happy to merge them with type-based dispatch if you prefer that.
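For what it's worth, a minimal sketch of what the type-based dispatch could look like, with only the coefficient table varying per type. All names here are hypothetical, and the coefficients are just the leading asymptotic-series terms, not the PR's actual hard-coded constants:

```julia
# Illustrative sketch: one generic method, per-type coefficient tables.
# `_stirling_coeffs` and `_loggamma_unified` are made-up names, and the
# coefficients are the leading Stirling-series terms, not the fitted
# constants a real implementation would use.
_stirling_coeffs(::Type{Float64}) = (1 / 12, -1 / 360, 1 / 1260)
_stirling_coeffs(::Type{Float32}) = (1f0 / 12, -1f0 / 360)  # fewer terms suffice

function _loggamma_unified(x::T) where {T<:Union{Float32,Float64}}
    isnan(x) && return x
    # (shared pole / reflection handling for x <= 0 would go here)
    z = one(T) / x
    tail = z * evalpoly(z^2, _stirling_coeffs(T))  # Stirling correction
    return (x - T(0.5)) * log(x) - x + T(0.5) * log(2 * T(pi)) + tail
end
```

The point is that the body stays identical and only `_stirling_coeffs(T)` differs, so if the implementations diverge later, only the table splits, not the method.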
BigFloat we can't, because the hard-coded polynomial approximations don't scale to arbitrary precision; that's why different approaches have to be used for BigFloat.
Most of what I think could be unified with the BigFloat version is the negative-integer/non-finite checks.
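One hedged sketch of factoring those shared checks out; the helper name and the `(value, sign)` tuple convention are modeled on the diff below but are illustrative:

```julia
# Hypothetical helper shared by the Float and BigFloat paths: handles
# NaN, ±Inf and the poles at the nonpositive integers, and returns
# `nothing` when no special case applies (the caller then falls through
# to the series). The (value, sign) tuples follow the logabsgamma-style
# convention visible in the diff.
function _logabsgamma_special(x::T) where {T<:AbstractFloat}
    isnan(x) && return (x, 1)
    if isinf(x)
        return x > 0 ? (x, 1) : (T(NaN), 1)
    end
    if x <= 0 && isinteger(x)
        return (T(Inf), 1)  # gamma has poles at 0, -1, -2, ...
    end
    return nothing
end
```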
```julia
elseif isinf(x)
    return x > 0 ? (x, 1) : (BigFloat(NaN), 1)
elseif x > 0
    return real(_loggamma_complex_bigfloat(Complex{BigFloat}(x, zero(BigFloat)))), 1
```
Any easy optimizations for real gamma for BigFloat? I'd think that at the very least the polynomials should be nicer...
Yes. Squeezing out an optimal implementation for BigFloat is hard, but doing better than routing through complex BigFloat is easy. I'll give it a shot.
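As a rough illustration of the "better than complex routing" direction (not the PR's code: the helper names, shift threshold, and term count are all assumptions of this sketch), a real-only BigFloat path via argument shifting plus Stirling's series might look like:

```julia
# Hypothetical real-only BigFloat loggamma for x > 0, avoiding the
# detour through complex arithmetic: shift x upward until Stirling's
# series is accurate, then subtract the accumulated log terms.
# The shift threshold and `nterms` are rough heuristics for this sketch.

# B_0 .. B_n via the standard recurrence sum_{k=0}^{m} C(m+1, k) B_k = 0
function _bernoulli(n::Int)
    B = Vector{Rational{BigInt}}(undef, n + 1)
    for m in 0:n
        B[m+1] = m == 0 ? 1 // 1 :
            -sum(binomial(big(m + 1), k) * B[k+1] for k in 0:m-1) // (m + 1)
    end
    return B
end

function _loggamma_real_big(x::BigFloat; nterms::Int = 30)
    @assert x > 0
    shift = zero(BigFloat)
    while x < 10 + precision(x) ÷ 8  # push x into the asymptotic regime
        shift += log(x)              # loggamma(x) = loggamma(x + 1) - log(x)
        x += 1
    end
    s = (x - big(1) / 2) * log(x) - x + log(2 * BigFloat(pi)) / 2
    B = _bernoulli(2nterms)
    for n in 1:nterms                # Stirling: sum B_{2n} / (2n(2n-1) x^{2n-1})
        s += BigFloat(B[2n+1]) / ((2n) * (2n - 1) * x^(2n - 1))
    end
    return s - shift
end
```

This is all real arithmetic, so it skips the complex `log`/`atan` work, but the term count would need to track the working precision to be truly arbitrary-precision.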
```julia
    zinv * @evalpoly(t, _STIRLING_COEFFS_32...)
end

function _loggamma(z::Complex{Float32})
```
Similarly to the above, can this be unified with the other Complex{T} methods?
Perhaps something more sophisticated could be done, but this is just the same method without upconverting to Float64: it drops some terms in the series evaluations and creates the constants as Float32 literals.

I tried doing this for Float16 too, but Float16 has no convenient constructors, and it was much more subtle what a good-quality approximation should return, so I left it for now, though I did change it to upconvert to Float32 instead of Float64.

I also significantly hardened the test cases: we now test 10k+ cases for real and complex values of Float16, Float32, and Float64, which should help prevent accuracy regressions in the future.
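On the hardened tests, a sketch of the kind of randomized ulp-accuracy harness this could use. `log` stands in for `_loggamma` so the sketch is self-contained, and the input range and ulp bound are illustrative:

```julia
# Hypothetical harness: evaluate a Float32 implementation on many random
# inputs, compare against a Float64 reference at the same points, and
# report the worst error measured in Float32 ulps. `log` is used here
# only as a self-contained stand-in for the PR's `_loggamma`.
function max_ulp_error(f32, fref; n = 10_000, lo = 2f0, hi = 1f3)
    worst = 0.0
    for _ in 1:n
        x = lo + rand(Float32) * (hi - lo)
        y32  = f32(x)            # value under test
        yref = fref(Float64(x))  # higher-precision reference
        # error in units of the Float32 spacing at the reference value
        worst = max(worst, abs(Float64(y32) - yref) / eps(Float32(yref)))
    end
    return worst
end
```

A regression gate then becomes a single assertion per type and range, e.g. requiring the worst case to stay under a few ulps.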