Right now NFFT.jl gives the user no hint on how to choose the kernel size and the oversampling parameter. This is deliberate, since I don't want too much "magic" within the code. Keeping `m` and `sigma` static has the additional advantage that it allows comparing different libraries at a fine-grained level and understanding bottlenecks better.
However, for the end user we need automatic parameter selection. The idea is that the user specifies the desired accuracy, and `m` and `sigma` are then chosen to give the best performance. The tasks to be implemented are:
- Accuracy model: There currently is the function
https://github.com/JuliaMath/NFFT.jl/blob/master/AbstractNFFTs/src/misc.jl#L55, which, however, needs to be adapted. I am not sure whether we can / should take accuracy bounds from the literature here, or instead develop a generic error model of the form `gamma * (alpha / sigma)^(beta * m)` and tune the parameters `gamma`, `alpha`, and `beta` in a data-driven fashion.
- Performance model: We then need a performance model. Here I am currently not sure how much should be heuristics and how much measurements. A measurement would sweep over the kernel size and automatically choose the appropriate oversampling factor. This would be similar to
`FFTW_MEASURE`. I would, however, also like some heuristic, `FFTW_ESTIMATE`-like code.
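A data-driven fit of such an error model could look like the following sketch (Python/NumPy just to illustrate the idea, not a proposed implementation). The trick is that `log(err) = log(gamma) + beta*log(alpha)*m - beta*m*log(sigma)` is linear in `(log(gamma), beta*log(alpha), beta)`, so ordinary least squares suffices. All parameter values below are made up for the synthetic example:

```python
import numpy as np

def fit_accuracy_model(ms, sigmas, errors):
    """Fit err(m, sigma) = gamma * (alpha / sigma)^(beta * m)
    by linear least squares in log space."""
    ms = np.asarray(ms, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    y = np.log(np.asarray(errors, dtype=float))
    # design matrix columns: 1, m, -m*log(sigma)
    A = np.column_stack([np.ones_like(ms), ms, -ms * np.log(sigmas)])
    a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]
    gamma, beta, alpha = np.exp(a), c, np.exp(b / c)
    return gamma, alpha, beta

# synthetic "measured" errors generated from known (made-up) parameters
true_gamma, true_alpha, true_beta = 4.0, 1.0, 2.0
ms = np.repeat(np.arange(2, 9), 4)
sigmas = np.tile([1.25, 1.5, 2.0, 2.5], 7)
errs = true_gamma * (true_alpha / sigmas) ** (true_beta * ms)

gamma, alpha, beta = fit_accuracy_model(ms, sigmas, errs)
print(gamma, alpha, beta)
```

In practice `errs` would come from measuring the NFFT error against a direct NDFT for a sweep of `(m, sigma)` pairs; the fit itself stays a cheap linear solve.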
These are my initial thoughts. ducc has an implementation for this here: https://gitlab.mpcdf.mpg.de/mtr/ducc/-/blob/ducc0/src/ducc0/nufft/nufft.h#L147
Here is a small explanation from @mreineck regarding their idea:
> To determine the oversampling factor I'm using a simple cost model: the higher the oversampling factor, the more expensive the FFT, but the lower the kernel support and therefore the gridding/degridding cost. For all the possible oversampling factors I estimate the computation time using heuristic formulas and pick the one that is expected to run fastest.
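That kind of `FFTW_ESTIMATE`-style selection could be sketched roughly as follows (a 1-D toy model in Python; the error-model constants `gamma`, `alpha`, `beta` and the cost weights `c_fft`, `c_grid` are placeholders that would have to come from the fitted accuracy model and from benchmarks, and are not taken from ducc). For each candidate `sigma`, invert the error model to get the smallest `m` reaching the target accuracy, estimate the total cost, and pick the minimum:

```python
import math

def required_m(reltol, sigma, gamma=4.0, alpha=1.0, beta=2.0):
    """Smallest m with gamma * (alpha / sigma)^(beta * m) <= reltol."""
    return math.ceil(math.log(reltol / gamma) / (beta * math.log(alpha / sigma)))

def estimated_cost(N, M, sigma, m, c_fft=5.0, c_grid=1.0):
    """Heuristic 1-D cost: oversampled FFT plus (de)gridding over M points."""
    n = sigma * N
    return c_fft * n * math.log2(n) + c_grid * (2 * m + 1) * M

def choose_params(N, M, reltol, sigmas=(1.25, 1.5, 1.75, 2.0, 2.5, 3.0)):
    """Pick the (sigma, m) pair with the lowest estimated cost."""
    best = min(sigmas, key=lambda s: estimated_cost(N, M, s, required_m(reltol, s)))
    return best, required_m(reltol, best)

sigma, m = choose_params(N=4096, M=4096, reltol=1e-6)
print(sigma, m)
```

The measurement-based (`FFTW_MEASURE`-like) variant would have the same loop structure, but would replace `estimated_cost` with actual timings of a planned transform for each candidate pair.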