Relatively high errors in gradients #956
Comments
Adding uniform noise of magnitude (in front of arrow) results in magnitude (behind arrow): 1e-11 -> ~3e-09 (same trend for eager/lazy/numpy).

More data:

IF ( 1e-11 -> ~3e-09 AND same trend for eager/lazy/numpy ) THEN @jbfreund, you might be a believer? 🙂

Evaluating your conditional: Assert(@jbfreund is a believer). I do think we're seeing roundoff errors.

A possible explanation for the lower errors near the origin is that the Jacobians arise from differentiating the nodal field (i.e., the interpolant of the local-to-global map). Those values increase as you get away from the origin (that's their job), and so the Jacobians pick up increasing amounts of cancellation error as you get away from the origin.
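This mechanism can be seen in isolation with a minimal numpy sketch (not grudge code; the element size, polynomial order, and offsets below are arbitrary choices). Differentiating the nodal coordinates of a single 1D element through a nodal differentiation matrix should give the constant Jacobian $h/2$, but the floating-point error in that result grows with the element's distance from the origin:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def diff_matrix(r):
    """Nodal differentiation matrix on [-1, 1]: D = Vr @ inv(V),
    using a Legendre modal basis, V[i, j] = P_j(r_i)."""
    n = len(r)
    V = np.stack([leg.legval(r, np.eye(n)[j]) for j in range(n)], axis=1)
    Vr = np.stack([leg.legval(r, leg.legder(np.eye(n)[j])) for j in range(n)],
                  axis=1)
    return Vr @ np.linalg.inv(V)

p = 4
r = np.linspace(-1.0, 1.0, p + 1)       # reference nodes
D = diff_matrix(r)
h = 1.0 / 16                            # element size

for x0 in [0.0, 0.5, 8.0, 128.0]:
    x = x0 + 0.5 * h * (r + 1.0)        # physical nodes of an element at offset x0
    jac = D @ x                         # dx/dr; exactly h/2 in exact arithmetic
    err = np.abs(jac - 0.5 * h).max()
    print(f"offset {x0:7.1f}: max Jacobian roundoff error {err:.1e}")
```

The offsets here are exaggerated relative to the $[-0.5, 0.5]^2$ mesh, but the trend (absolute error proportional to the coordinate magnitude) is the same one that would show up as a spatial dependence in the computed gradients.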
We have noticed that there seem to be some numerical artifacts in the result of the gradient (or any weak differentiation operator) which we import from grudge. The example below prescribes a uniform constant field $A$ on a periodic 2d mesh [-0.5, 0.5] x [-0.5, 0.5], and then computes the gradient with weak operators using a central flux ($\frac{A^- + A^+}{2}\hat{\mathbf{n}}$). The exact answer is 0, so we expect the computed answer to be 0(ish). For weak operators, the results seem to have abnormally high magnitude and an unexplained spatial dependence. Strong form gradients appear to have nominal behavior.

The magnitude of the weak operator errors increases with increasing distance from the absolute physical space origin (0, 0, 0). The error behaves as roundoff, proportional to the magnitude of the input field and to the inverse of the grid spacing.
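For reference, the weak gradient with a central flux has the standard elementwise DG form (a generic sketch of the formulation, not necessarily grudge's exact discrete implementation):

$$
\int_E \nabla A \,\phi_i \,d\mathbf{x} \;=\; -\int_E A \,\nabla\phi_i \,d\mathbf{x} \;+\; \oint_{\partial E} \frac{A^- + A^+}{2}\,\hat{\mathbf{n}}\,\phi_i \,dS .
$$

For constant $A$ the right-hand side vanishes identically by the divergence theorem, but in floating point the volume and surface terms, each of size $O(A\,h^{d-1})$, must cancel; after applying the inverse mass matrix ($O(h^{-d})$), a relative roundoff of $\varepsilon$ is left as an absolute error of order $\varepsilon A/h$, consistent with the scaling described above.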
As a consequence of these errors, comparisons of results computed between different array contexts (eager vs. lazy vs. numpy) often fail when the compared quantities involve derivatives.
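A hypothetical illustration of why such comparisons trip on fixed absolute tolerances (the values below are invented; only the $\varepsilon A/h$ error scale is taken from the discussion above):

```python
import numpy as np

# Hypothetical gradient-of-a-constant results from two array contexts.
eps = np.finfo(np.float64).eps
A, h = 1.0e5, 1.0 / 16
grad_eager = np.random.default_rng(0).uniform(-1, 1, 100) * eps * A / h
grad_lazy = np.random.default_rng(1).uniform(-1, 1, 100) * eps * A / h

# A fixed absolute tolerance flags the two results as different ...
print(np.allclose(grad_eager, grad_lazy, atol=1e-12))              # False

# ... while a tolerance scaled to the expected roundoff level does not.
print(np.allclose(grad_eager, grad_lazy, atol=100 * eps * A / h))  # True
```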
Snapshot of the computed weak form gradient ($\nabla A$, $A=10^5$):
[Figure: weak-form gradient magnitude (Screenshot 2023-08-22 at 12 23 28 PM)]
Snapshot of the computed strong form gradient ($\nabla A$, $A=10^5$):
[Figure: strong-form gradient magnitude (Screenshot 2023-08-22 at 11 44 20 AM)]
The code to reproduce:
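The original reproducer script is not included above. Below is a minimal sketch of what such a reproducer might look like, assuming grudge's circa-2023 API (`DiscretizationCollection`, `grudge.op.weak_local_grad`/`local_grad`/`inverse_mass`/`face_mass`/`interior_trace_pairs`, `grudge.geometry.normal`, and meshmode's `generate_regular_rect_mesh` with `periodic=`); exact names and signatures may differ from the version used in the issue:

```python
# Sketch only: not the script attached to the issue.
import pyopencl as cl

from meshmode.mesh.generation import generate_regular_rect_mesh
from grudge.array_context import PyOpenCLArrayContext
from grudge import DiscretizationCollection
import grudge.op as op
import grudge.geometry as geo

ctx = cl.create_some_context()
actx = PyOpenCLArrayContext(cl.CommandQueue(ctx))

# Periodic 2d mesh on [-0.5, 0.5] x [-0.5, 0.5]
mesh = generate_regular_rect_mesh(
    a=(-0.5, -0.5), b=(0.5, 0.5), nelements_per_axis=(16, 16),
    periodic=(True, True))
dcoll = DiscretizationCollection(actx, mesh, order=3)

A = 1e5
u = dcoll.zeros(actx) + A                     # uniform constant field

# Central flux {u} n_hat on interior faces, projected to all faces
flux = sum(
    op.project(dcoll, tpair.dd, "all_faces",
               tpair.avg * geo.normal(actx, dcoll, tpair.dd))
    for tpair in op.interior_trace_pairs(dcoll, u))

# Weak-form gradient: M^{-1} ( -S^T u + face mass of the surface flux )
grad_weak = op.inverse_mass(
    dcoll,
    -op.weak_local_grad(dcoll, u) + op.face_mass(dcoll, "all_faces", flux))

# Strong-form gradient for comparison
grad_strong = op.local_grad(dcoll, u)

print("max |weak grad_x|:  ",
      actx.to_numpy(op.nodal_max(dcoll, "vol", actx.np.abs(grad_weak[0]))))
print("max |strong grad_x|:",
      actx.to_numpy(op.nodal_max(dcoll, "vol", actx.np.abs(grad_strong[0]))))
```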
From offline discussions:
blah blah blah
Open questions:
HWCode = Codes-1.1 (the code that goes with the Hesthaven/Warburton Nodal DG book) appears to be subject to roundoff, but not like mirgecom.

Gradient (HWCode/Grad2D) magnitude for a constant ($A = 10^5$) scalar field:

HWCode/Grad2D:
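The Grad2D listing itself is not shown above. From memory of the Codes-1.1 scripts (treat the exact form as an assumption), it evaluates the strong-form gradient through the chain rule with precomputed geometric factors:

$$
u_x = r_x\,(D_r u) + s_x\,(D_s u), \qquad
u_y = r_y\,(D_r u) + s_y\,(D_s u),
$$

so for a constant field the roundoff there presumably comes mainly from the (analytically zero) row sums of the reference differentiation matrices scaled by $u$, rather than from a volume/surface cancellation of the kind discussed above.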