Description
First of all, thanks for making a translation of NeuralFoil from Python to Julia. It is really nice to be able to run NeuralFoil natively from Julia.
Problem description
I started to play around with NeuralFoil.jl and compared its results against the original Python package. Unfortunately, NeuralFoil.jl almost always gives predictions that differ from those of the official Python package. In many cases, the difference is not just numerical noise; it is genuinely meaningful, sometimes on the order of unity.
In my opinion, NeuralFoil.jl should produce the same predictions as the Python package, ideally up to floating-point tolerance, though a small tolerance (~1e-3) would also be acceptable. Differences of around one unit, however, may indicate something more fundamental. What is somewhat strange is that NeuralFoil.jl passes its own tests (even though they cover only a single airfoil). To test NeuralFoil.jl more thoroughly, I set up a small playground (https://github.com/gabrielbdsantos/TestNeuralFoil.jl). It covers the entire UIUC airfoil database (around 1600 samples).
From the test results, it seems that the central issue is the way NeuralFoil.jl computes the Kulfan parameters. Even though the method is correctly implemented, it does not produce the same parameters as the Python package. I understand that finding the Kulfan parameters requires an optimization, and that different solvers are likely to yield different results. However, the network was trained to output the aerodynamic coefficients considering the "Python package way" of computing the Kulfan parameters for an airfoil. Unfortunately, as one of the tests shows, the slightest differences in the Kulfan parameters can significantly affect the network predictions.
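To make the issue concrete, here is a minimal, hypothetical sketch of how Kulfan (CST) weights can be fit to one airfoil surface by linear least squares. It deliberately omits details the real packages handle (trailing-edge thickness, the leading-edge modification term, coordinate resampling), which are exactly the kind of details where two implementations can diverge. Names like `fit_kulfan_weights` are illustrative, not the actual API of either package.

```python
import numpy as np
from math import comb

def cst_matrix(x, n_weights, N1=0.5, N2=1.0):
    """Design matrix: column i is the class function times the i-th Bernstein basis."""
    n = n_weights - 1
    C = x**N1 * (1 - x)**N2                       # class function (round nose, sharp TE)
    return np.column_stack([
        C * comb(n, i) * x**i * (1 - x)**(n - i)  # Bernstein shape-function basis
        for i in range(n_weights)
    ])

def fit_kulfan_weights(x, y, n_weights=8):
    """Least-squares fit of CST (Kulfan) weights to one airfoil surface."""
    A = cst_matrix(x, n_weights)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

# Synthetic round trip: generate a surface from known weights, then recover them.
x = np.linspace(0.0, 1.0, 200)
w_true = np.array([0.20, 0.25, 0.22, 0.28, 0.24, 0.26, 0.21, 0.23])
y = cst_matrix(x, len(w_true)) @ w_true
w_fit = fit_kulfan_weights(x, y)
print(np.max(np.abs(w_fit - w_true)))  # recovered to near machine precision
```

Even with an identical basis, differences in preprocessing (how coordinates are resampled, how the trailing edge is closed, whether extra basis terms are included) change the least-squares system and hence the fitted weights, which could explain the discrepancies the tests reveal.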
In my opinion, there are perhaps two possible solutions:
- Tweak the way NeuralFoil.jl computes the Kulfan parameters to output almost the same ones as the Python package. The bad news is that, to achieve consistent results, the Kulfan parameters of both packages should differ by no more than ~1e-6.
- If that fails, the other possible solution would be to retrain the neural network using the Kulfan parameters computed by NeuralFoil.jl.
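For the first option, the success criterion is easy to encode. Below is a hypothetical helper (the name `params_match` and the example vectors are made up for illustration) that checks two Kulfan parameter vectors against the ~1e-6 tolerance; a playground like the one linked above could run it per airfoil on vectors exported from each package.

```python
import numpy as np

def params_match(w_a, w_b, atol=1e-6):
    """Return (within_tolerance, max_abs_difference) for two Kulfan parameter vectors."""
    diff = float(np.max(np.abs(np.asarray(w_a) - np.asarray(w_b))))
    return diff <= atol, diff

# Made-up vectors: a single 1e-4 discrepancy already fails the ~1e-6 criterion.
w_py = np.array([0.20, 0.25, 0.22, 0.28])
w_jl = w_py + np.array([0.0, 1e-4, 0.0, 0.0])
ok, diff = params_match(w_py, w_jl)
print(ok, diff)  # expect False, with diff ≈ 1e-4
```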
It should perhaps be noted, though, that the original neural network is itself an approximation, so minor differences would in essence fall within the uncertainty associated with its predictions. However, in many cases the network confidence is quite high (> 0.9), and in those cases any discrepancies are not just a matter of approximation.
My hope here is to contribute towards a drop-in replacement for the original Python package. I would be happy to help if necessary.