WIP: ENH/TST: xp_assert_ enhancements #267

Draft · wants to merge 7 commits into base: main
218 changes: 134 additions & 84 deletions src/array_api_extra/_lib/_testing.py
@@ -5,27 +5,37 @@
See also ..testing for public testing utilities.
"""

from __future__ import annotations

import math
from types import ModuleType
from typing import cast
from typing import Any, cast

import numpy as np
Contributor Author: Is this OK? Sometimes it was imported within test functions below.

Contributor: Today it is OK, as this module is not imported automatically from the outer scope. In the long run, though, we want to move this module to public, at which point this won't be a good design anymore (although it remains to be seen whether any array library can, in practice, avoid having NumPy as a hard dependency...).

Member: Yes, I think it would be fine to make these public API once they are ready, with the caveat that NumPy is required. We are really striving for minimal runtime dependencies rather than test-time dependencies downstream, at least for now.

Contributor (@crusaderky, Apr 21, 2025): This module heavily relies on `np.testing.assert_*` anyway. We'll just need to add a test that `import array_api_extra` doesn't import numpy.
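A minimal sketch of such a test (hypothetical name and location, not part of this PR; it runs the import in a fresh interpreter so modules already loaded by pytest don't affect the check):

```python
# Hypothetical sketch: verify that importing the top-level package does not pull
# in NumPy. Run in a subprocess so that sys.modules starts out clean.
import subprocess
import sys


def test_import_without_numpy():
    code = (
        "import sys; "
        "import array_api_extra; "
        "assert 'numpy' not in sys.modules, 'array_api_extra imported numpy'"
    )
    subprocess.run([sys.executable, "-c", code], check=True)
```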

import pytest

from ._utils._compat import (
array_namespace,
is_array_api_strict_namespace,
is_cupy_namespace,
is_dask_namespace,
is_jax_namespace,
is_numpy_namespace,
is_pydata_sparse_namespace,
is_torch_namespace,
to_device,
)
from ._utils._typing import Array
from ._utils._typing import Array, Device

__all__ = ["xp_assert_close", "xp_assert_equal"]
__all__ = ["as_numpy_array", "xp_assert_close", "xp_assert_equal", "xp_assert_less"]


def _check_ns_shape_dtype(
actual: Array, desired: Array
actual: Array,
desired: Array,
check_dtype: bool,
check_shape: bool,
check_scalar: bool,
) -> ModuleType: # numpydoc ignore=RT03
"""
Assert that namespace, shape and dtype of the two arrays match.
@@ -47,43 +57,67 @@ def _check_ns_shape_dtype(
msg = f"namespaces do not match: {actual_xp} != f{desired_xp}"
assert actual_xp == desired_xp, msg

actual_shape = actual.shape
desired_shape = desired.shape
if is_dask_namespace(desired_xp):
# Dask uses nan instead of None for unknown shapes
if any(math.isnan(i) for i in cast(tuple[float, ...], actual_shape)):
actual_shape = actual.compute().shape # type: ignore[attr-defined] # pyright: ignore[reportAttributeAccessIssue]
if any(math.isnan(i) for i in cast(tuple[float, ...], desired_shape)):
desired_shape = desired.compute().shape # type: ignore[attr-defined] # pyright: ignore[reportAttributeAccessIssue]

msg = f"shapes do not match: {actual_shape} != f{desired_shape}"
assert actual_shape == desired_shape, msg

msg = f"dtypes do not match: {actual.dtype} != {desired.dtype}"
assert actual.dtype == desired.dtype, msg
if check_shape:
actual_shape = actual.shape
desired_shape = desired.shape
Comment on lines +61 to +62
Contributor: This may fail if we start using it in scipy, because scipy overrides array_namespace to return numpy for scalars and lists. Maybe out of scope for this PR, though?

Contributor Author: Huh. Yeah, let's consider that in a SciPy PR that attempts to use this there. Then we can decide whether/what changes are needed.

if is_dask_namespace(desired_xp):
# Dask uses nan instead of None for unknown shapes
if any(math.isnan(i) for i in cast(tuple[float, ...], actual_shape)):
actual_shape = actual.compute().shape # type: ignore[attr-defined] # pyright: ignore[reportAttributeAccessIssue]
if any(math.isnan(i) for i in cast(tuple[float, ...], desired_shape)):
desired_shape = desired.compute().shape # type: ignore[attr-defined] # pyright: ignore[reportAttributeAccessIssue]

msg = f"shapes do not match: {actual_shape} != f{desired_shape}"
assert actual_shape == desired_shape, msg

if check_dtype:
msg = f"dtypes do not match: {actual.dtype} != {desired.dtype}"
assert actual.dtype == desired.dtype, msg

if is_numpy_namespace(actual_xp) and check_scalar:
Contributor Author (@mdhaber, Apr 16, 2025):

Suggested change:

    if is_numpy_namespace(actual_xp) and check_scalar:
    if is_numpy_namespace(actual_xp) and check_arrayness:

? I seem to remember some discussion about naming this parameter when adding to SciPy (`check_0d`). I don't really care what it is called. `check_type` is also on the table.

Contributor: scalar sounds fine to me.
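For context on what this flag catches (editorial illustration, not part of the diff): a NumPy scalar and a 0-d array of the same dtype compare equal in value, so only a type-level check such as `np.isscalar` distinguishes them.

```python
import numpy as np

scalar = np.float64(1.0)   # NumPy scalar
zero_d = np.asarray(1.0)   # 0-d array with the same value and dtype

assert scalar == zero_d          # values are equal...
assert np.isscalar(scalar)       # ...but only this one is a scalar
assert not np.isscalar(zero_d)   # the 0-d array is not, which is what check_scalar detects
```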

# only NumPy distinguishes between scalars and arrays; we do if check_scalar.
_msg = (
"array-ness does not match:\n Actual: "
f"{type(actual)}\n Desired: {type(desired)}"
)
assert np.isscalar(actual) == np.isscalar(desired), _msg

return desired_xp


def _prepare_for_test(array: Array, xp: ModuleType) -> Array:
def as_numpy_array(array: Array, *, xp: ModuleType) -> np.typing.NDArray[Any]: # type: ignore[explicit-any]
"""
Ensure that the array can be compared with xp.testing or np.testing.

This involves transferring it from GPU to CPU memory, densifying it, etc.
Convert array to NumPy, bypassing GPU-CPU transfer guards and densification guards.
"""
if is_torch_namespace(xp):
return array.cpu() # type: ignore[attr-defined] # pyright: ignore[reportAttributeAccessIssue]
if is_cupy_namespace(xp):
return xp.asnumpy(array)
if is_pydata_sparse_namespace(xp):
return array.todense() # type: ignore[attr-defined] # pyright: ignore[reportAttributeAccessIssue]

if is_torch_namespace(xp):
array = to_device(array, "cpu")
if is_array_api_strict_namespace(xp):
# Note: we deliberately did not add a `.to_device` method in _typing.pyi
# even if it is required by the standard as many backends don't support it
return array.to_device(xp.Device("CPU_DEVICE")) # type: ignore[attr-defined] # pyright: ignore[reportAttributeAccessIssue]
# Note: nothing to do for CuPy, because it uses a bespoke test function
return array
cpu: Device = xp.Device("CPU_DEVICE")
array = to_device(array, cpu)
if is_jax_namespace(xp):
import jax

# Note: only needed if the transfer guard is enabled
cpu = cast(Device, jax.devices("cpu")[0])
array = to_device(array, cpu)

def xp_assert_equal(actual: Array, desired: Array, err_msg: str = "") -> None:
return np.asarray(array)


def xp_assert_equal(
actual: Array,
desired: Array,
*,
err_msg: str = "",
check_dtype: bool = True,
check_shape: bool = True,
check_scalar: bool = False,
) -> None:
"""
Array-API compatible version of `np.testing.assert_array_equal`.

@@ -95,34 +129,56 @@ def xp_assert_equal(actual: Array, desired: Array, err_msg: str = "") -> None:
The expected array (typically hardcoded).
err_msg : str, optional
Error message to display on failure.
check_dtype, check_shape : bool, default: True
Whether to check agreement between actual and desired dtypes and shapes
check_scalar : bool, default: False
NumPy only: whether to check agreement between actual and desired types -
0d array vs scalar.
Comment on lines +134 to +136
Contributor: The default for this is the opposite of the one in scipy.

Contributor Author (@mdhaber, Apr 21, 2025): I meant to mention that, so thanks for bringing it up. I think it should be True, but that would make a lot of tests fail internally. I figured that could be fixed in a follow-up.

Member: array-api-extra tests? If so, yes, that sounds fine, but we should open an issue for that before merging this.

Contributor Author (@mdhaber, Apr 21, 2025): Yes. Sure, I'll open an issue.

See Also
--------
xp_assert_close : Similar function for inexact equality checks.
numpy.testing.assert_array_equal : Similar function for NumPy arrays.
"""
xp = _check_ns_shape_dtype(actual, desired)
actual = _prepare_for_test(actual, xp)
desired = _prepare_for_test(desired, xp)
xp = _check_ns_shape_dtype(actual, desired, check_dtype, check_shape, check_scalar)
actual_np = as_numpy_array(actual, xp=xp)
desired_np = as_numpy_array(desired, xp=xp)
np.testing.assert_array_equal(actual_np, desired_np, err_msg=err_msg)

if is_cupy_namespace(xp):
xp.testing.assert_array_equal(actual, desired, err_msg=err_msg)
elif is_torch_namespace(xp):
# PyTorch recommends using `rtol=0, atol=0` like this
# to test for exact equality
xp.testing.assert_close(
actual,
desired,
rtol=0,
atol=0,
equal_nan=True,
check_dtype=False,
msg=err_msg or None,
)
else:
import numpy as np # pylint: disable=import-outside-toplevel

np.testing.assert_array_equal(actual, desired, err_msg=err_msg)
def xp_assert_less(
x: Array,
y: Array,
*,
err_msg: str = "",
check_dtype: bool = True,
check_shape: bool = True,
check_scalar: bool = False,
) -> None:
"""
Array-API compatible version of `np.testing.assert_array_less`.

Parameters
----------
x, y : Array
The arrays to compare according to ``x < y`` (elementwise).
err_msg : str, optional
Error message to display on failure.
check_dtype, check_shape : bool, default: True
Whether to check agreement between actual and desired dtypes and shapes
check_scalar : bool, default: False
NumPy only: whether to check agreement between actual and desired types -
0d array vs scalar.

See Also
--------
xp_assert_close : Similar function for inexact equality checks.
numpy.testing.assert_array_equal : Similar function for NumPy arrays.
"""
xp = _check_ns_shape_dtype(x, y, check_dtype, check_shape, check_scalar)
x_np = as_numpy_array(x, xp=xp)
y_np = as_numpy_array(y, xp=xp)
np.testing.assert_array_less(x_np, y_np, err_msg=err_msg)


def xp_assert_close(
@@ -132,6 +188,9 @@ def xp_assert_close(
rtol: float | None = None,
atol: float = 0,
err_msg: str = "",
check_dtype: bool = True,
check_shape: bool = True,
check_scalar: bool = False,
) -> None:
"""
Array-API compatible version of `np.testing.assert_allclose`.
@@ -148,6 +207,11 @@
Absolute tolerance. Default: 0.
err_msg : str, optional
Error message to display on failure.
check_dtype, check_shape : bool, default: True
Whether to check agreement between actual and desired dtypes and shapes
check_scalar : bool, default: False
NumPy only: whether to check agreement between actual and desired types -
0d array vs scalar.

See Also
--------
@@ -159,40 +223,26 @@
-----
The default `atol` and `rtol` differ from `xp.all(xpx.isclose(a, b))`.
"""
xp = _check_ns_shape_dtype(actual, desired)

floating = xp.isdtype(actual.dtype, ("real floating", "complex floating"))
if rtol is None and floating:
# multiplier of 4 is used as for `np.float64` this puts the default `rtol`
# roughly half way between sqrt(eps) and the default for
# `numpy.testing.assert_allclose`, 1e-7
rtol = xp.finfo(actual.dtype).eps ** 0.5 * 4
elif rtol is None:
rtol = 1e-7

actual = _prepare_for_test(actual, xp)
desired = _prepare_for_test(desired, xp)

if is_cupy_namespace(xp):
xp.testing.assert_allclose(
actual, desired, rtol=rtol, atol=atol, err_msg=err_msg
)
elif is_torch_namespace(xp):
xp.testing.assert_close(
actual, desired, rtol=rtol, atol=atol, equal_nan=True, msg=err_msg or None
)
else:
import numpy as np # pylint: disable=import-outside-toplevel

# JAX/Dask arrays work directly with `np.testing`
assert isinstance(rtol, float)
Contributor Author: This was probably added to avoid the pyright error below? I don't think pyright should make us do this sort of thing.

Contributor: Indeed, pyright is failing to narrow the type of rtol after isinstance. I don't think we should bend over backward to make it happy, either.

np.testing.assert_allclose( # type: ignore[call-overload] # pyright: ignore[reportCallIssue]
actual, # pyright: ignore[reportArgumentType]
desired, # pyright: ignore[reportArgumentType]
rtol=rtol,
atol=atol,
err_msg=err_msg,
)
xp = _check_ns_shape_dtype(actual, desired, check_dtype, check_shape, check_scalar)

if rtol is None:
if xp.isdtype(actual.dtype, ("real floating", "complex floating")):
# multiplier of 4 is used as for `np.float64` this puts the default `rtol`
# roughly half way between sqrt(eps) and the default for
# `numpy.testing.assert_allclose`, 1e-7
rtol = xp.finfo(actual.dtype).eps ** 0.5 * 4
else:
rtol = 1e-7

actual_np = as_numpy_array(actual, xp=xp)
desired_np = as_numpy_array(desired, xp=xp)
np.testing.assert_allclose( # pyright: ignore[reportCallIssue]
actual_np,
desired_np,
rtol=rtol, # pyright: ignore[reportArgumentType]
atol=atol,
err_msg=err_msg,
)
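As a quick sanity check of the comment about the default `rtol` (editorial illustration, not part of the diff), for `float64` the chosen value sits between `sqrt(eps)` and the `assert_allclose` default of 1e-7:

```python
import numpy as np

eps = np.finfo(np.float64).eps   # ~2.22e-16
print(eps**0.5)                  # ~1.49e-08  (sqrt(eps))
print(4 * eps**0.5)              # ~5.96e-08  (default rtol chosen above)
print(1e-7)                      # numpy.testing.assert_allclose default rtol
```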


def xfail(
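To make the new keyword arguments concrete, here is a hypothetical usage sketch (editorial, not part of the diff); the import path assumes the private module shown above, whereas downstream code would use whatever public entry point eventually re-exports these helpers:

```python
import numpy as np
import pytest

from array_api_extra._lib._testing import xp_assert_close, xp_assert_less

# Dtype mismatch (float32 vs float64) is tolerated when check_dtype=False.
xp_assert_close(
    np.asarray([1.0, 2.0], dtype=np.float32),
    np.asarray([1.0, 2.0], dtype=np.float64),
    check_dtype=False,
)

# New elementwise "less than" assertion.
xp_assert_less(np.asarray([0.0, 1.0]), np.asarray([1.0, 2.0]))

# check_scalar=True (NumPy only) rejects 0-d array vs scalar mismatches,
# assuming array_namespace accepts NumPy scalars as array-api-compat does.
with pytest.raises(AssertionError):
    xp_assert_close(np.asarray(1.0), np.float64(1.0), check_scalar=True)
```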
1 change: 0 additions & 1 deletion tests/test_funcs.py
@@ -196,7 +196,6 @@ def test_device(self, xp: ModuleType, device: Device):
y = apply_where(x % 2 == 0, x, self.f1, fill_value=x)
assert get_device(y) == device

@pytest.mark.xfail_xp_backend(Backend.SPARSE, reason="no isdtype")
@pytest.mark.filterwarnings("ignore::RuntimeWarning") # overflows, etc.
@hypothesis.settings(
# The xp and library fixtures are not regenerated between hypothesis iterations