Relatively high errors in gradients #956

Open

MTCam opened this issue Aug 18, 2023 · 5 comments

MTCam (Member) commented Aug 18, 2023

We have noticed what seem to be numerical artifacts in the result of the gradient (or any weak differentiation operator) that we import from grudge. The example below prescribes a uniform constant field $A$ on a periodic 2D mesh $[-0.5, 0.5] \times [-0.5, 0.5]$ and computes the gradient with weak operators using a central flux $\frac{A^- + A^+}{2}\hat{\mathbf{n}}$. The exact answer is 0, so we expect the computed answer to be 0 up to roundoff. For weak operators, the results have abnormally high magnitude and an unexplained spatial dependence; strong-form gradients appear to behave nominally.
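
For reference, the discrete weak form being assembled (matching the `bnd_term - weak_local_grad` structure in the code below) should be, per element $E$ and test function $\phi$:

$$\int_E \phi\,\nabla u\,\mathrm{d}x = \oint_{\partial E} \phi\,\frac{u^- + u^+}{2}\,\hat{\mathbf{n}}\,\mathrm{d}s - \int_E u\,\nabla\phi\,\mathrm{d}x$$

For constant $u = A$ the two right-hand-side terms are analytically equal, so whatever remains is cancellation error, which scales with $A$.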

The magnitude of the weak-operator error increases with distance from the physical-space origin $(0, 0)$. The error behaves like roundoff: it is proportional to the magnitude of the input field and to the inverse of the grid spacing.
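
In other words, empirically the error looks like

$$\|\nabla A\|_{\text{computed}} \;\sim\; \varepsilon_{\text{mach}}\,\frac{|A|\,\|\mathbf{x}\|}{h}$$

with $h$ the grid spacing and $\|\mathbf{x}\|$ the distance from the origin (a reading of the observations above, not a derived bound).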

As a consequence, comparisons between results computed with different array contexts (eager vs. lazy vs. numpy) often fail when the compared quantities involve derivatives.
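
One practical consequence for cross-actx tests: a fixed absolute tolerance will not work, since the roundoff floor scales with the field and the mesh. A minimal sketch of a scaled comparison (the helper name and the safety factor are illustrative, not mirgecom API):

```python
import numpy as np


def grads_agree(grad_a, grad_b, a_mag, h, safety=100.0):
    """Compare two gradient fields (flattened to numpy arrays) against an
    error floor that scales like eps * |A| / h, per the observations above.
    """
    floor = safety * np.finfo(np.float64).eps * a_mag / h
    return float(np.max(np.abs(grad_a - grad_b))) <= floor
```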

Snapshot of the computed weak-form gradient ($\nabla{A}$, $A=10^5$):
[screenshot]

Snapshot of the computed strong-form gradient ($\nabla{A}$, $A=10^5$):
[screenshot]

The code to reproduce:

```python
"""Simple periodic gradient example."""
import logging
from grudge.trace_pair import interior_trace_pairs
import grudge.op as op
import grudge.geometry as geo
from grudge.shortcuts import make_visualizer
from mirgecom.discretization import create_discretization_collection


def _grad_flux_central(dcoll, u_tpair):
    """Central numerical flux for the gradient: (u^- + u^+)/2 * n."""
    actx = u_tpair.int.array_context
    normal = geo.normal(actx, dcoll, u_tpair.dd)

    flux_weak = normal*u_tpair.avg

    return op.project(dcoll, u_tpair.dd, "all_faces", flux_weak)


class _GradTag:
    """Communication tag for the gradient's trace exchange."""


def grad_operator(dcoll, u, *, comm_tag=None):
    """Compute a simple weak-form gradient with a central flux."""
    # Surface contribution: central flux on all interior faces
    bnd_term = op.face_mass(
        dcoll,
        sum(_grad_flux_central(dcoll, u_tpair=tpair)
            for tpair in interior_trace_pairs(
                dcoll, u, comm_tag=(_GradTag, comm_tag))))

    # grad u = M^{-1} (surface term - weak volume term)
    return op.inverse_mass(dcoll, bnd_term - op.weak_local_grad(dcoll, u))


def main(actx_class):
    """Drive the example."""
    from mirgecom.array_context import initialize_actx
    actx = initialize_actx(actx_class, None)

    dim = 2
    nel_1d = 16
    from meshmode.mesh.generation import generate_regular_rect_mesh

    mesh = generate_regular_rect_mesh(
        a=(-0.5,)*dim,
        b=(0.5,)*dim,
        nelements_per_axis=(nel_1d,)*dim,
        periodic=(True,)*dim)

    order = 3

    dcoll = create_discretization_collection(actx, mesh, order=order)
    nodes = actx.thaw(dcoll.nodes())
    zeros = actx.np.zeros_like(nodes[0])
    ones = zeros + 1.0

    print("%d elements" % mesh.nelements)

    u_val = 10000.0  # constant field magnitude A
    u = u_val * ones

    vis = make_visualizer(dcoll)

    grad_u = grad_operator(dcoll, u)
    vis.write_vtk_file("grad.vtu",
                       [("u", u), ("grad_u", grad_u)], overwrite=True)


if __name__ == "__main__":
    logging.basicConfig(format="%(message)s", level=logging.INFO)

    import argparse
    parser = argparse.ArgumentParser(description="Simple Periodic Gradient")
    parser.add_argument("--lazy", action="store_true",
        help="switch to a lazy computation mode")
    parser.add_argument("--numpy", action="store_true",
        help="use numpy-based eager actx.")
    args = parser.parse_args()

    from mirgecom.array_context import get_reasonable_array_context_class
    actx_class = get_reasonable_array_context_class(lazy=args.lazy,
                                                    distributed=False,
                                                    profiling=False,
                                                    numpy=args.numpy)

    main(actx_class)
```
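
To run: save the script under any name and invoke it directly; `--lazy` or `--numpy` switch the array context (eager is the default), and each run writes `grad.vtu`. Varying `u_val` corresponds to the different $A$ rows in the table below.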

From offline discussions:

  • Have a look at the metrics and diff ops in grudge.op (the Jacobian seems OK)
  • Try a different code (the nudge code is non-zero, but with nominal errors)


| $A$ | weak $\nabla{A}$ | strong $\nabla{A}$ |
| --- | --- | --- |
| $10^0$ | $4.3\times10^{-13}$ | $6.4\times10^{-14}$ |
| $10^1$ | $4.3\times10^{-12}$ | $6.9\times10^{-13}$ |
| $10^2$ | $4.5\times10^{-11}$ | $7.2\times10^{-12}$ |
| $10^3$ | $4.5\times10^{-10}$ | $6.6\times10^{-11}$ |
| $10^4$ | $4.3\times10^{-9}$ | $7.0\times10^{-10}$ |

Open questions:

  • Why is the error smaller near the origin?
  • Why does the result differ between eager and lazy?
  • Is it really just cancellation/roundoff? (It seems so, yes.)
  • Do other DG codes show the same behavior?
    • The Grad2D function from HWCode (Codes-1.1, the code accompanying the Hesthaven/Warburton Nodal DG book) is also subject to roundoff, but not at the level seen in mirgecom.
    • HWCode does not show the same spatial dependence; its errors appear roughly uniformly distributed in space (see below).

Gradient magnitude (HWCode/Grad2D) for a constant scalar field ($A = 10^5$):

[screenshot]

HWCode/Grad2D:

| $A$ | $\nabla{A}$ |
| --- | --- |
| $10^0$ | $1.21\times10^{-13}$ |
| $10^1$ | $1.26\times10^{-12}$ |
| $10^2$ | $1.22\times10^{-11}$ |
| $10^3$ | $1.16\times10^{-10}$ |
| $10^4$ | $1.16\times10^{-9}$ |

inducer (Contributor) commented Aug 18, 2023

Adding uniform noise of the magnitude before the arrow to the field results in a gradient magnitude given after the arrow:

    0     -> 1.4e-11
    1e-14 -> 1.7e-11
    1e-13 -> 3.5e-11
    1e-12 -> 2.9e-10
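
For reference, a sketch of how such a perturbation might be applied with the numpy-backed actx (names and mechanics illustrative; not the actual diff used here):

```python
import numpy as np

rng = np.random.default_rng(seed=42)


def add_uniform_noise(u_np, mag):
    # Perturb a numpy DOF array with uniform noise in [-mag, mag].
    return u_np + mag*(2.0*rng.random(u_np.shape) - 1.0)
```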

jbfreund commented

IF ( 1e-11 -> ~3e-09 AND same trend for eager/lazy/numpy ) THEN
“I might be a believer”
END IF

inducer (Contributor) commented Aug 21, 2023

More data (resulting gradient magnitude vs. added noise magnitude):

| added noise | eager | lazy |
| --- | --- | --- |
| 0 | 1.4e-11 | 1.1e-11 |
| 1e-14 | 1.7e-11 | 1.3e-11 |
| 1e-13 | 3.5e-11 | 3e-11 |
| 1e-12 | 2.9e-10 | 3.1e-10 |
| 1e-11 | 3e-9 | 2.8e-9 |

Evaluating your conditional @jbfreund, you might be a believer? 🙂

jbfreund commented

Assert(@jbfreund is a believer)

I do think we're seeing roundoff errors.

inducer (Contributor) commented Aug 25, 2023

A possible explanation for the lower errors near the origin: the Jacobians arise from differentiating the nodal coordinate field (i.e., the interpolant of the local-to-global map). Those coordinate values increase with distance from the origin (that's their job), and so the Jacobians pick up increasing amounts of cancellation error as you move away from the origin.
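
A standalone illustration of that mechanism (pure numpy with Chebyshev nodes and a Vandermonde-based differentiation matrix; a sketch, not grudge's actual metric computation): differentiating the interpolant of the shifted coordinate $x + c$, whose exact derivative is 1, loses accuracy roughly in proportion to the offset $c$.

```python
import numpy as np

n = 4                                      # polynomial order
x = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev-Gauss-Lobatto nodes on [-1, 1]

# Differentiation matrix via the Vandermonde route: D = V' V^{-1}
V = np.vander(x, increasing=True)               # V[i, j] = x_i**j
Vp = np.zeros_like(V)
Vp[:, 1:] = V[:, :-1] * np.arange(1, n + 1)     # d/dx x**j = j * x**(j-1)
D = Vp @ np.linalg.inv(V)

# Differentiate the interpolant of the coordinate map x -> x + c.
# The exact derivative is 1; the error grows with the offset c.
for c in [0.0, 1e0, 1e2, 1e4, 1e6]:
    err = np.abs(D @ (x + c) - 1.0).max()
    print(f"offset {c:8.0e}: max error in d(x+c)/dx = {err:.1e}")
```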
