diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml new file mode 100644 index 00000000..07603e5a --- /dev/null +++ b/.github/workflows/docs.yml @@ -0,0 +1,31 @@ +name: Docs +on: [push, pull_request] +jobs: + docs: + runs-on: ubuntu-latest + strategy: + fail-fast: false + environment: + name: docs-build-and-deploy + steps: + - uses: actions/checkout@v2 + - uses: actions/setup-python@v2 + - name: Install Dependencies + run: | + python -m pip install -r docs/requirements.txt + + - name: Build Docs + run: | + cd docs + make html + + # Note, the gh-pages deployment requires setting up a SSH deploy key. + # See + # https://github.com/JamesIves/github-pages-deploy-action/tree/dev#using-an-ssh-deploy-key- + - name: Deploy + uses: JamesIves/github-pages-deploy-action@v4 + if: ${{ github.ref == 'refs/heads/main' }} + with: + folder: docs/_build/html + ssh-key: ${{ secrets.DEPLOY_KEY }} + force: no diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 00000000..d2469f4a --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,7 @@ +Contributions to array-api-compat are welcome, so long as they are [in +scope](https://data-apis.org/array-api-compat/index.html#scope). + +Contributors are encouraged to read through the [development +notes](https://data-apis.org/array-api-compat/dev/index.html) for the package +to get full context on some of the design decisions and implementation +details used in the codebase. diff --git a/README.md b/README.md index 784197dc..8f567606 100644 --- a/README.md +++ b/README.md @@ -6,394 +6,4 @@ NumPy, CuPy, PyTorch, Dask, and JAX are supported. If you want support for other libraries, or if you encounter any issues, please [open an issue](https://github.com/data-apis/array-api-compat/issues). -Note that some of the functionality in this library is backwards incompatible -with the corresponding wrapped libraries. 
The end-goal is to eventually make -each array library itself fully compatible with the array API, but this -requires making backwards incompatible changes in many cases, so this will -take some time. - -Currently all libraries here are implemented against the [2022.12 -version](https://data-apis.org/array-api/2022.12/) of the standard. - -## Install - -`array-api-compat` is available on both [PyPI](https://pypi.org/project/array-api-compat/) - -``` -python -m pip install array-api-compat -``` - -and [Conda-forge](https://anaconda.org/conda-forge/array-api-compat) - -``` -conda install --channel conda-forge array-api-compat -``` - -## Usage - -The typical usage of this library will be to get the corresponding array API -compliant namespace from the input arrays using `array_namespace()`, like - -```py -def your_function(x, y): - xp = array_api_compat.array_namespace(x, y) - # Now use xp as the array library namespace - return xp.mean(x, axis=0) + 2*xp.std(y, axis=0) -``` - -If you wish to have library-specific code-paths, you can import the -corresponding wrapped namespace for each library, like - -```py -import array_api_compat.numpy as np -``` - -```py -import array_api_compat.cupy as cp -``` - -```py -import array_api_compat.torch as torch -``` - -```py -import array_api_compat.dask as da -``` - -> [!NOTE] -> There is no `array_api_compat.jax` submodule. JAX support is contained -> in JAX itself in the `jax.experimental.array_api` module. array-api-compat simply -> wraps that submodule. The main JAX support in this module consists of -> supporting it in the [helper functions](#helper-functions) defined below. - -Each will include all the functions from the normal NumPy/CuPy/PyTorch/dask.array -namespace, except that functions that are part of the array API are wrapped so -that they have the correct array API behavior. In each case, the array object -used will be the same array object from the wrapped library. 
- -## Difference between `array_api_compat` and `array_api_strict` - -`array_api_strict` is a strict minimal implementation of the array API standard, formerly -known as `numpy.array_api` (see -[NEP 47](https://numpy.org/neps/nep-0047-array-api-standard.html)). For -example, `array_api_strict` does not include any functions that are not part of -the array API specification, and will explicitly disallow behaviors that are -not required by the spec (e.g., [cross-kind type -promotions](https://data-apis.org/array-api/latest/API_specification/type_promotion.html)). -(`cupy.array_api` is similar to `array_api_strict`) - -`array_api_compat`, on the other hand, is just an extension of the -corresponding array library namespaces with changes needed to be compliant -with the array API. It includes all additional library functions not mentioned -in the spec, and allows any library behaviors not explicitly disallowed by it, -such as cross-kind casting. - -In particular, unlike `array_api_strict`, this package does not use a separate -`Array` object, but rather just uses the corresponding array library array -objects (`numpy.ndarray`, `cupy.ndarray`, `torch.Tensor`, etc.) directly. This -is because those are the objects that are going to be passed as inputs to -functions by end users. This does mean that a few behaviors cannot be wrapped -(see below), but most of the array API functional, so this does not affect -most things. - -Array consuming library authors coding against the array API may wish to test -against `array_api_strict` to ensure they are not using functionality outside -of the standard, but prefer this implementation for the default behavior for -end-users. 
- -## Helper Functions - -In addition to the wrapped library namespaces and functions in the array API -specification, there are several helper functions included here that aren't -part of the specification but which are useful for using the array API: - -- `is_array_api_obj(x)`: Return `True` if `x` is an array API compatible array - object. - -- `is_numpy_array(x)`, `is_cupy_array(x)`, `is_torch_array(x)`, - `is_dask_array(x)`, `is_jax_array(x)`: return `True` if `x` is an array from - the corresponding library. These functions do not import the underlying - library if it has not already been imported, so they are cheap to use. - -- `array_namespace(*xs)`: Get the corresponding array API namespace for the - arrays `xs`. For example, if the arrays are NumPy arrays, the returned - namespace will be `array_api_compat.numpy`. Note that this function will - also work for namespaces that aren't supported by this compat library but - which do support the array API (i.e., arrays that have the - `__array_namespace__` attribute). - -- `device(x)`: Equivalent to - [`x.device`](https://data-apis.org/array-api/latest/API_specification/generated/signatures.array_object.array.device.html) - in the array API specification. Included because `numpy.ndarray` does not - include the `device` attribute and this library does not wrap or extend the - array object. Note that for NumPy and dask, `device(x)` is always `"cpu"`. - -- `to_device(x, device, /, *, stream=None)`: Equivalent to - [`x.to_device`](https://data-apis.org/array-api/latest/API_specification/generated/signatures.array_object.array.to_device.html). - Included because neither NumPy's, CuPy's, Dask's, nor PyTorch's array objects - include this method. 
For NumPy, this function effectively does nothing since - the only supported device is the CPU, but for CuPy, this method supports - CuPy CUDA - [Device](https://docs.cupy.dev/en/stable/reference/generated/cupy.cuda.Device.html) - and - [Stream](https://docs.cupy.dev/en/stable/reference/generated/cupy.cuda.Stream.html) - objects. For PyTorch, this is the same as - [`x.to(device)`](https://pytorch.org/docs/stable/generated/torch.Tensor.to.html) - (the `stream` argument is not supported in PyTorch). - -- `size(x)`: Equivalent to - [`x.size`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.array.size.html#array_api.array.size), - i.e., the number of elements in the array. Included because PyTorch's - `Tensor` defines `size` as a method which returns the shape, and this cannot - be wrapped because this compat library doesn't wrap or extend the array - objects. - -## Known Differences from the Array API Specification - -There are some known differences between this library and the array API -specification: - -### NumPy and CuPy - -- The array methods `__array_namespace__`, `device` (for NumPy), `to_device`, - and `mT` are not defined. This reuses `np.ndarray` and `cp.ndarray` and we - don't want to monkeypatch or wrap it. The helper functions `device()` and - `to_device()` are provided to work around these missing methods (see above). - `x.mT` can be replaced with `xp.linalg.matrix_transpose(x)`. - `array_namespace(x)` should be used instead of `x.__array_namespace__`. - -- Value-based casting for scalars will be in effect unless explicitly disabled - with the environment variable `NPY_PROMOTION_STATE=weak` or - `np._set_promotion_state('weak')` (requires NumPy 1.24 or newer, see [NEP - 50](https://numpy.org/neps/nep-0050-scalar-promotion.html) and - https://github.com/numpy/numpy/issues/22341) - -- `asarray()` does not support `copy=False`. - -- Functions which are not wrapped may not have the same type annotations - as the spec. 
- -- Functions which are not wrapped may not use positional-only arguments. - -The minimum supported NumPy version is 1.21. However, this older version of -NumPy has a few issues: - -- `unique_*` will not compare nans as unequal. -- `finfo()` has no `smallest_normal`. -- No `from_dlpack` or `__dlpack__`. -- `argmax()` and `argmin()` do not have `keepdims`. -- `qr()` doesn't support matrix stacks. -- `asarray()` doesn't support `copy=True` (as noted above, `copy=False` is not - supported even in the latest NumPy). -- Type promotion behavior will be value based for 0-D arrays (and there is no - `NPY_PROMOTION_STATE=weak` to disable this). - -If any of these are an issue, it is recommended to bump your minimum NumPy -version. - -### PyTorch - -- Like NumPy/CuPy, we do not wrap the `torch.Tensor` object. It is missing the - `__array_namespace__` and `to_device` methods, so the corresponding helper - functions `array_namespace()` and `to_device()` in this library should be - used instead (see above). - -- The `x.size` attribute on `torch.Tensor` is a function that behaves - differently from - [`x.size`](https://data-apis.org/array-api/draft/API_specification/generated/array_api.array.size.html) - in the spec. Use the `size(x)` helper function as a portable workaround (see - above). - -- PyTorch does not have unsigned integer types other than `uint8`, and no - attempt is made to implement them here. - -- PyTorch has type promotion semantics that differ from the array API - specification for 0-D tensor objects. The array functions in this wrapper - library do work around this, but the operators on the Tensor object do not, - as no operators or methods on the Tensor object are modified. If this is a - concern, use the functional form instead of the operator form, e.g., `add(x, - y)` instead of `x + y`. 
- -- [`unique_all()`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.unique_all.html#array_api.unique_all) - is not implemented, due to the fact that `torch.unique` does not support - returning the `indices` array. The other - [`unique_*`](https://data-apis.org/array-api/latest/API_specification/set_functions.html) - functions are implemented. - -- Slices do not support negative steps. - -- [`std()`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.std.html#array_api.std) - and - [`var()`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.var.html#array_api.var) - do not support floating-point `correction`. - -- The `stream` argument of the `to_device()` helper (see above) is not - supported. - -- As with NumPy, type annotations and positional-only arguments may not - exactly match the spec for functions that are not wrapped at all. - -The minimum supported PyTorch version is 1.13. - -### JAX - -Unlike the other libraries supported here, JAX array API support is contained -entirely in the JAX library. The JAX array API support is tracked at -https://github.com/google/jax/issues/18353. - -## Dask - -If you're using dask with numpy, many of the same limitations that apply to numpy -will also apply to dask. Besides those differences, other limitations include missing -sort functionality (no `sort` or `argsort`), and limited support for the optional `linalg` -and `fft` extensions. - -In particular, the `fft` namespace is not compliant with the array API spec. Any functions -that you find under the `fft` namespace are the original, unwrapped functions under [`dask.array.fft`](https://docs.dask.org/en/latest/array-api.html#fast-fourier-transforms), which may or may not be Array API compliant. Use at your own risk! 
- -For `linalg`, several methods are missing, for example: -- `cross` -- `det` -- `eigh` -- `eigvalsh` -- `matrix_power` -- `pinv` -- `slogdet` -- `matrix_norm` -- `matrix_rank` -Other methods may only be partially implemented or return incorrect results at times. - -The minimum supported Dask version is 2023.12.0. - -## Vendoring - -This library supports vendoring as an installation method. To vendor the -library, simply copy `array_api_compat` into the appropriate place in the -library, like - -``` -cp -R array_api_compat/ mylib/vendored/array_api_compat -``` - -You may also rename it to something else if you like (nowhere in the code -references the name "array_api_compat"). - -Alternatively, the library may be installed as dependency on PyPI. - -## Implementation Notes - -As noted before, the goal of this library is to reuse the NumPy and CuPy array -objects, rather than wrapping or extending them. This means that the functions -need to accept and return `np.ndarray` for NumPy and `cp.ndarray` for CuPy. - -Each namespace (`array_api_compat.numpy`, `array_api_compat.cupy`, and -`array_api_compat.torch`) is populated with the normal library namespace (like -`from numpy import *`). Then specific functions are replaced with wrapped -variants. - -Since NumPy and CuPy are nearly identical in behavior, most wrapping logic can -be shared between them. Wrapped functions that have the same logic between -NumPy and CuPy are in `array_api_compat/common/`. -These functions are defined like - -```py -# In array_api_compat/common/_aliases.py - -def acos(x, /, xp): - return xp.arccos(x) -``` - -The `xp` argument refers to the original array namespace (either `numpy` or -`cupy`). 
Then in the specific `array_api_compat/numpy/` and -`array_api_compat/cupy/` namespaces, the `@get_xp` decorator is applied to -these functions, which automatically removes the `xp` argument from the -function signature and replaces it with the corresponding array library, like - -```py -# In array_api_compat/numpy/_aliases.py - -from ..common import _aliases - -import numpy as np - -acos = get_xp(np)(_aliases.acos) -``` - -This `acos` now has the signature `acos(x, /)` and calls `numpy.arccos`. - -Similarly, for CuPy: - -```py -# In array_api_compat/cupy/_aliases.py - -from ..common import _aliases - -import cupy as cp - -acos = get_xp(cp)(_aliases.acos) -``` - -Since NumPy and CuPy are nearly identical in their behaviors, this allows -writing the wrapping logic for both libraries only once. - -PyTorch uses a similar layout in `array_api_compat/torch/`, but it differs -enough from NumPy/CuPy that very few common wrappers for those libraries are -reused. - -See https://numpy.org/doc/stable/reference/array_api.html for a full list of -changes from the base NumPy (the differences for CuPy are nearly identical). A -corresponding document does not yet exist for PyTorch, but you can examine the -various comments in the -[implementation](https://github.com/data-apis/array-api-compat/blob/main/array_api_compat/torch/_aliases.py) -to see what functions and behaviors have been wrapped. - - -## Releasing - -To release, first note that CuPy must be tested manually (it isn't tested on -CI). Use the script - -``` -./test_cupy.sh -``` - -on a machine with a CUDA GPU. - -Once you are ready to release, create a PR with a release branch, so that you -can verify that CI is passing. You must edit - -``` -array_api_compat/__init__.py -``` - -and update the version (the version is not computed from the tag because that -would break vendorability). You should also edit - -``` -CHANGELOG.md -``` - -with the changes for the release. 
- -Then create a tag - -``` -git tag -a -``` - -and push it to GitHub - -``` -git push origin -``` - -Check that the `publish distributions` action works. Note that this action -will run even if the other CI fails, so you must make sure that CI is passing -*before* tagging. - -This does mean you can ignore CI failures, but ideally you should fix any -failures or update the `*-xfails.txt` files before tagging, so that CI and the -cupy tests pass. Otherwise it will be hard to tell what things are breaking in -the future. It's also a good idea to remove any xpasses from those files (but -be aware that some xfails are from flaky failures, so unless you know the -underlying issue has been fixed, a xpass test is probably still xfail). +See the documentation for more details https://data-apis.org/array-api-compat/ diff --git a/array_api_compat/common/_helpers.py b/array_api_compat/common/_helpers.py index ca9b746d..25419c01 100644 --- a/array_api_compat/common/_helpers.py +++ b/array_api_compat/common/_helpers.py @@ -19,6 +19,24 @@ import warnings def is_numpy_array(x): + """ + Return True if `x` is a NumPy array. + + This function does not import NumPy if it has not already been imported + and is therefore cheap to use. + + This also returns True for `ndarray` subclasses and NumPy scalar objects. + + See Also + -------- + + array_namespace + is_array_api_obj + is_cupy_array + is_torch_array + is_dask_array + is_jax_array + """ # Avoid importing NumPy if it isn't already if 'numpy' not in sys.modules: return False @@ -29,6 +47,24 @@ def is_numpy_array(x): return isinstance(x, (np.ndarray, np.generic)) def is_cupy_array(x): + """ + Return True if `x` is a CuPy array. + + This function does not import CuPy if it has not already been imported + and is therefore cheap to use. + + This also returns True for `cupy.ndarray` subclasses and CuPy scalar objects. 
+ + See Also + -------- + + array_namespace + is_array_api_obj + is_numpy_array + is_torch_array + is_dask_array + is_jax_array + """ # Avoid importing NumPy if it isn't already if 'cupy' not in sys.modules: return False @@ -39,6 +75,22 @@ def is_cupy_array(x): return isinstance(x, (cp.ndarray, cp.generic)) def is_torch_array(x): + """ + Return True if `x` is a PyTorch tensor. + + This function does not import PyTorch if it has not already been imported + and is therefore cheap to use. + + See Also + -------- + + array_namespace + is_array_api_obj + is_numpy_array + is_cupy_array + is_dask_array + is_jax_array + """ # Avoid importing torch if it isn't already if 'torch' not in sys.modules: return False @@ -49,6 +101,22 @@ def is_torch_array(x): return isinstance(x, torch.Tensor) def is_dask_array(x): + """ + Return True if `x` is a dask.array Array. + + This function does not import dask if it has not already been imported + and is therefore cheap to use. + + See Also + -------- + + array_namespace + is_array_api_obj + is_numpy_array + is_cupy_array + is_torch_array + is_jax_array + """ # Avoid importing dask if it isn't already if 'dask.array' not in sys.modules: return False @@ -58,6 +126,23 @@ def is_dask_array(x): return isinstance(x, dask.array.Array) def is_jax_array(x): + """ + Return True if `x` is a JAX array. + + This function does not import JAX if it has not already been imported + and is therefore cheap to use. + + + See Also + -------- + + array_namespace + is_array_api_obj + is_numpy_array + is_cupy_array + is_torch_array + is_dask_array + """ # Avoid importing jax if it isn't already if 'jax' not in sys.modules: return False @@ -68,7 +153,17 @@ def is_jax_array(x): def is_array_api_obj(x): """ - Check if x is an array API compatible array object. + Return True if `x` is an array API compatible array object. 
+ + See Also + -------- + + array_namespace + is_numpy_array + is_cupy_array + is_torch_array + is_dask_array + is_jax_array """ return is_numpy_array(x) \ or is_cupy_array(x) \ @@ -87,17 +182,57 @@ def array_namespace(*xs, api_version=None, _use_compat=True): """ Get the array API compatible namespace for the arrays `xs`. - `xs` should contain one or more arrays. + Parameters + ---------- + xs: arrays + one or more arrays. + + api_version: str + The newest version of the spec that you need support for (currently + the compat library wrapped APIs support v2022.12). + + Returns + ------- + + out: namespace + The array API compatible namespace corresponding to the arrays in `xs`. + + Raises + ------ + TypeError + If `xs` contains arrays from different array libraries or contains a + non-array. + - Typical usage is + Typical usage is to pass the arguments of a function to + `array_namespace()` at the top of a function to get the corresponding + array API namespace: - def your_function(x, y): - xp = array_api_compat.array_namespace(x, y) - # Now use xp as the array library namespace - return xp.mean(x, axis=0) + 2*xp.std(y, axis=0) + .. code:: python + + def your_function(x, y): + xp = array_api_compat.array_namespace(x, y) + # Now use xp as the array library namespace + return xp.mean(x, axis=0) + 2*xp.std(y, axis=0) + + + Wrapped array namespaces can also be imported directly. For example, + `array_namespace(np.array(...))` will return `array_api_compat.numpy`. + This function will also work for any array library not wrapped by + array-api-compat if it explicitly defines `__array_namespace__ + `__ + (the wrapped namespace is always preferred if it exists). + + See Also + -------- + + is_array_api_obj + is_numpy_array + is_cupy_array + is_torch_array + is_dask_array + is_jax_array - api_version should be the newest version of the spec that you need support - for (currently the compat library wrapped APIs only support v2021.12). 
""" namespaces = set() for x in xs: @@ -181,15 +316,33 @@ def device(x: Array, /) -> Device: """ Hardware device the array data resides on. + This is equivalent to `x.device` according to the `standard + `__. + This helper is included because some array libraries either do not have + the `device` attribute or include it with an incompatible API. + Parameters ---------- x: array - array instance from NumPy or an array API compatible library. + array instance from an array API compatible library. Returns ------- out: device - a ``device`` object (see the "Device Support" section of the array API specification). + a ``device`` object (see the `Device Support `__ + section of the array API specification). + + Notes + ----- + + For NumPy the device is always `"cpu"`. For Dask, the device is always a + special `DASK_DEVICE` object. + + See Also + -------- + + to_device : Move array data to a different device. + """ if is_numpy_array(x): return "cpu" @@ -262,22 +415,50 @@ def to_device(x: Array, device: Device, /, *, stream: Optional[Union[int, Any]] """ Copy the array from the device on which it currently resides to the specified ``device``. + This is equivalent to `x.to_device(device, stream=stream)` according to + the `standard + `__. + This helper is included because some array libraries do not have the + `to_device` method. + Parameters ---------- + x: array - array instance from NumPy or an array API compatible library. + array instance from an array API compatible library. + device: device - a ``device`` object (see the "Device Support" section of the array API specification). + a ``device`` object (see the `Device Support `__ + section of the array API specification). + stream: Optional[Union[int, Any]] - stream object to use during copy. In addition to the types supported in ``array.__dlpack__``, implementations may choose to support any library-specific stream object with the caveat that any code using such an object would not be portable. 
+ stream object to use during copy. In addition to the types supported + in ``array.__dlpack__``, implementations may choose to support any + library-specific stream object with the caveat that any code using + such an object would not be portable. Returns ------- + out: array - an array with the same data and data type as ``x`` and located on the specified ``device``. + an array with the same data and data type as ``x`` and located on the + specified ``device``. + + Notes + ----- + + For NumPy, this function effectively does nothing since the only supported + device is the CPU. For CuPy, this method supports CuPy CUDA + :external+cupy:class:`Device ` and + :external+cupy:class:`Stream ` objects. For PyTorch, + this is the same as :external+torch:meth:`x.to(device) ` + (the ``stream`` argument is not supported in PyTorch). + + See Also + -------- + + device : Hardware device the array data resides on. - .. note:: - If ``stream`` is given, the copy operation should be enqueued on the provided ``stream``; otherwise, the copy operation should be enqueued on the default stream/queue. Whether the copy is performed synchronously or asynchronously is implementation-dependent. Accordingly, if synchronization is required to guarantee data safety, this must be clearly explained in a conforming library's documentation. """ if is_numpy_array(x): if stream is not None: @@ -305,7 +486,13 @@ def to_device(x: Array, device: Device, /, *, stream: Optional[Union[int, Any]] def size(x): """ - Return the total number of elements of x + Return the total number of elements of x. + + This is equivalent to `x.size` according to the `standard + `__. + This helper is included because PyTorch defines `size` in an + :external+torch:meth:`incompatible way `. 
+ """ if None in x.shape: return None diff --git a/docs/Makefile b/docs/Makefile new file mode 100644 index 00000000..11356c4f --- /dev/null +++ b/docs/Makefile @@ -0,0 +1,23 @@ +# Minimal makefile for Sphinx documentation +# + +# You can set these variables from the command line, and also +# from the environment for the first two. +SPHINXOPTS ?= +SPHINXBUILD ?= sphinx-build +SOURCEDIR = . +BUILDDIR = _build + +# Put it first so that "make" without argument is like "make help". +help: + @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) + +.PHONY: help Makefile + +# Catch-all target: route all unknown targets to Sphinx using the new +# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). +%: Makefile + @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) + +livehtml: + sphinx-autobuild --open-browser --watch .. --port 0 -b html $(SOURCEDIR) $(ALLSPHINXOPTS) $(BUILDDIR)/html diff --git a/docs/_static/custom.css b/docs/_static/custom.css new file mode 100644 index 00000000..bac04989 --- /dev/null +++ b/docs/_static/custom.css @@ -0,0 +1,12 @@ +/* Makes the text look better on Mac retina displays (the Furo CSS disables*/ +/* subpixel antialiasing). */ +body { + -webkit-font-smoothing: auto; + -moz-osx-font-smoothing: auto; +} + +/* Disable the fancy scrolling behavior when jumping to headers (this is too + slow for long pages) */ +html { + scroll-behavior: auto; +} diff --git a/docs/_static/favicon.png b/docs/_static/favicon.png new file mode 100644 index 00000000..49b7d9d6 Binary files /dev/null and b/docs/_static/favicon.png differ diff --git a/docs/conf.py b/docs/conf.py new file mode 100644 index 00000000..d8d5c2da --- /dev/null +++ b/docs/conf.py @@ -0,0 +1,86 @@ +# Configuration file for the Sphinx documentation builder. 
+# +# For the full list of built-in configuration values, see the documentation: +# https://www.sphinx-doc.org/en/master/usage/configuration.html + +# -- Project information ----------------------------------------------------- +# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information + +import sys +import os +sys.path.insert(0, os.path.abspath('..')) + +project = 'array-api-compat' +copyright = '2024, Consortium for Python Data API Standards' +author = 'Consortium for Python Data API Standards' + +import array_api_compat +release = array_api_compat.__version__ + +# -- General configuration --------------------------------------------------- +# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration + +extensions = [ + 'myst_parser', + 'sphinx.ext.autodoc', + 'sphinx.ext.napoleon', + 'sphinx.ext.intersphinx', + 'sphinx_copybutton', +] + +intersphinx_mapping = { + 'cupy': ('https://docs.cupy.dev/en/stable', None), + 'torch': ('https://pytorch.org/docs/stable/', None), +} +# Require :external: to reference intersphinx. +intersphinx_disabled_reftypes = ['*'] + +templates_path = ['_templates'] +exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] + +myst_enable_extensions = ["dollarmath", "linkify"] + +napoleon_use_rtype = False +napoleon_use_param = False + +# Make sphinx give errors for bad cross-references +nitpicky = True +# autodoc wants to make cross-references for every type hint. But a lot of +# them don't actually refer to anything that we have a document for. 
+nitpick_ignore = [ + ("py:class", "Array"), + ("py:class", "Device"), +] + +# Lets us use single backticks for code in RST +default_role = 'code' + +# -- Options for HTML output ------------------------------------------------- +# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output + +html_theme = 'furo' +html_static_path = ['_static'] + +html_css_files = ['custom.css'] + +html_theme_options = { + # See https://pradyunsg.me/furo/customisation/footer/ + "footer_icons": [ + { + "name": "GitHub", + "url": "https://github.com/data-apis/array-api-compat", + "html": """ + + + + """, + "class": "", + }, + ], +} + +# Logo + +html_favicon = "_static/favicon.png" + +# html_logo = "_static/logo.svg" diff --git a/docs/dev/implementation-notes.md b/docs/dev/implementation-notes.md new file mode 100644 index 00000000..48fb01f1 --- /dev/null +++ b/docs/dev/implementation-notes.md @@ -0,0 +1,49 @@ +# Implementation Notes + +Since NumPy, CuPy, and to a degree, Dask, are nearly identical in behavior, +most wrapping logic can be shared between them. Wrapped functions that have +the same logic between multiple libraries are in `array_api_compat/common/`. +These functions are defined like + +```py +# In array_api_compat/common/_aliases.py + +def acos(x, /, xp): + return xp.arccos(x) +``` + +The `xp` argument refers to the original array namespace (e.g., `numpy` or +`cupy`). Then in the specific `array_api_compat/numpy/` and +`array_api_compat/cupy/` namespaces, the `@get_xp` decorator is applied to +these functions, which automatically removes the `xp` argument from the +function signature and replaces it with the corresponding array library, like + +```py +# In array_api_compat/numpy/_aliases.py + +from ..common import _aliases + +import numpy as np + +acos = get_xp(np)(_aliases.acos) +``` + +This `acos` now has the signature `acos(x, /)` and calls `numpy.arccos`. 
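The `get_xp` decorator itself is not shown in this diff. A minimal sketch of how such a decorator could work is below — this is purely illustrative, not the actual array-api-compat implementation (which, as described here, also removes `xp` from the visible function signature); `_FakeXP` is a hypothetical stand-in namespace so the sketch runs without NumPy or CuPy installed.

```python
import math
from functools import wraps


def get_xp(xp):
    """Illustrative sketch: bind the ``xp`` namespace parameter of a
    common wrapper function to a concrete array library."""
    def decorator(f):
        @wraps(f)
        def wrapped(*args, **kwargs):
            # Inject the array namespace so callers never pass it themselves.
            return f(*args, xp=xp, **kwargs)
        return wrapped
    return decorator


# A common wrapper written against a generic namespace, as in the diff:
def acos(x, /, xp):
    return xp.arccos(x)


class _FakeXP:
    # Stand-in for numpy/cupy so this example is self-contained.
    @staticmethod
    def arccos(x):
        return math.acos(x)


acos_bound = get_xp(_FakeXP)(acos)
```

After binding, `acos_bound(x)` is called without any `xp` argument, which is the effect the real decorator achieves for `array_api_compat.numpy.acos` and `array_api_compat.cupy.acos`.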
+
+Similarly, for CuPy:
+
+```py
+# In array_api_compat/cupy/_aliases.py
+
+from ..common import _aliases
+
+import cupy as cp
+
+acos = get_xp(cp)(_aliases.acos)
+```
+
+Most NumPy and CuPy functions are defined in this way, since their behaviors
+are nearly identical. PyTorch uses a similar layout in `array_api_compat/torch/`,
+but it differs enough from NumPy/CuPy that very few common wrappers for those
+libraries are reused. Dask is close to NumPy in behavior and so most Dask
+functions also reuse the NumPy/CuPy common wrappers.
diff --git a/docs/dev/index.md b/docs/dev/index.md
new file mode 100644
index 00000000..c7eb6c08
--- /dev/null
+++ b/docs/dev/index.md
@@ -0,0 +1,13 @@
+# Development Notes
+
+This is internal documentation related to the development of array-api-compat.
+It is recommended that contributors read through this documentation.
+
+```{toctree}
+:titlesonly:
+
+special-considerations.md
+implementation-notes.md
+tests.md
+releasing.md
+```
diff --git a/docs/dev/releasing.md b/docs/dev/releasing.md
new file mode 100644
index 00000000..5d2cbea3
--- /dev/null
+++ b/docs/dev/releasing.md
@@ -0,0 +1,59 @@
+# Releasing
+
+To release, first make sure that all CI tests are passing on `main`.
+
+Note that CuPy must be tested manually (it isn't tested on CI). Use the script
+
+```
+./test_cupy.sh
+```
+
+on a machine with a CUDA GPU.
+
+Once you are ready to release, create a PR with a release branch, so that you
+can verify that CI is passing. You must edit
+
+```
+array_api_compat/__init__.py
+```
+
+and update the version (the version is not computed from the tag because that
+would break vendorability). You should also edit
+
+```
+CHANGELOG.md
+```
+
+with the changes for the release.
+
+Once everything is ready, create a tag
+
+```
+git tag -a
+```
+
+(note that tag names are not prefixed; for instance, the tag for version 1.5
+is just `1.5`)
+
+and push it to GitHub
+
+```
+git push origin
+```
+
+Check that the `publish distributions` action on the tag build works. Note
+that this action will run even if the other CI fails, so you must make sure
+that CI is passing *before* tagging.
+
+Because the publish action is not gated on the rest of CI, it is technically
+possible to ignore CI failures, but ideally you should fix any failures or
+update the `*-xfails.txt` files before tagging, so that CI and the CuPy tests
+pass. Otherwise it will be hard to tell what things are breaking in the
+future. It's also a good idea to remove any xpasses from those files (but be
+aware that some xfails are from flaky failures, so unless you know the
+underlying issue has been fixed, an xpassing test should probably remain
+xfailed).
+
+If the publish action fails for some reason and didn't upload the release to
+PyPI, you will need to delete the tag and try again.
+
+After the PyPI package is published, the conda-forge bot should update the
+feedstock automatically.
diff --git a/docs/dev/special-considerations.md b/docs/dev/special-considerations.md
new file mode 100644
index 00000000..da868f31
--- /dev/null
+++ b/docs/dev/special-considerations.md
@@ -0,0 +1,83 @@
+# Special Considerations
+
+array-api-compat requires some special development considerations that are
+different from most other Python libraries. The goal of array-api-compat is to
+be a small library that packages can either vendor or add as a dependency to
+implement array API support. Consequently, certain design considerations
+should be taken into account:
+
+- **No Hard Dependencies.** Although array-api-compat "depends" on NumPy, CuPy,
+  PyTorch, etc., it does not hard depend on them. These libraries are not
+  imported unless either an array object is passed to
+  {func}`~.array_namespace()`, or the specific `array_api_compat.<library>`
+  sub-namespace is explicitly imported.
+
+- **Vendorability.** array-api-compat should be [vendorable](vendoring). This
+  means that, for instance, all imports in the library are relative imports.
+  No code in the package specifically references the name `array_api_compat`
+  (we also support renaming the package to something else).
+  Vendorability support is tested in `tests/test_vendoring.py`.
+
+- **Pure Python.** To make array-api-compat as easy as possible to add as a
+  dependency, the code is all pure Python.
+
+- **Minimal Wrapping Only.** The wrapping functionality is minimal. This means
+  that if something is difficult to wrap using pure Python, or if trying to
+  support some array API behavior would require a significant amount of code,
+  we prefer to leave the behavior as an upstream issue for the array library,
+  and [document it as a known difference](../supported-array-libraries.md).
+
+  This also means that we do not at this point in time implement anything
+  other than wrappers for functions in the standard, and basic [helper
+  functions](../helper-functions.rst) that would be useful for most users of
+  array-api-compat. The addition of functions that are not part of the array
+  API standard is currently out-of-scope for this package (see the
+  [Scope](scope) section of the documentation).
+
+- **No Side-Effects.** array-api-compat behavior should be localized to only
+  the specific code that imports and uses it. It should be invisible to
+  end-users and to users of dependent libraries. This in particular implies
+  the next two points.
+
+- **No Monkey Patching.** `array-api-compat` should not attempt to modify
+  anything about the underlying library. It is a *wrapper* library only.
+
+- **No Modifying the Array Object.** The array (or tensor) object of the array
+  library cannot be modified. This also precludes the creation of array
+  subclasses or wrapper classes.
+
+  Any non-standard behavior that is built-in to the array object, such as the
+  behavior of [array
+  methods](https://data-apis.org/array-api/latest/API_specification/array_object.html),
+  is therefore left unwrapped. Users can work around issues by using
+  corresponding [elementwise
+  functions](https://data-apis.org/array-api/latest/API_specification/elementwise_functions.html)
+  instead of
+  [operators](https://data-apis.org/array-api/latest/API_specification/array_object.html#operators),
+  and by using the [helper functions](../helper-functions.rst) provided by
+  array-api-compat instead of attributes or methods like `x.to_device()`.
+
+- **Avoid Restricting Behavior that is Outside the Scope of the Standard.** All
+  array libraries have functions and behaviors that are outside of the scope
+  of what is specified by the standard. These behaviors should be left intact
+  whenever possible, unless the standard explicitly disallows something. This
+  means
+
+  - All namespaces are *extended* with wrapper functions. You may notice the
+    extensive use of `import *` in various files in `array_api_compat`. While
+    this would normally be questionable, this is the [one actual legitimate
+    use-case for `import *`](https://peps.python.org/pep-0008/#imports), to
+    re-export names from an external namespace.
+
+  - All wrapper functions pass `**kwargs` through to the wrapped function.
+
+  - Input types not supported by the standard should work if they work in the
+    underlying wrapped function (for instance, Python scalars or `np.ndarray`
+    subclasses).
+
+  By keeping underlying behaviors intact, it is easier for libraries to swap
+  out NumPy or other array libraries for array-api-compat, and it is easier
+  for libraries to write array library-specific code paths.
+
+  The onus is on users of array-api-compat to ensure their array API code is
+  portable, e.g., by testing against [array-api-strict](array-api-strict).
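The "No Hard Dependencies" point above is typically achieved with a lazy-import pattern: check `sys.modules` before doing anything library-specific, so that merely calling a helper never triggers an import. A simplified sketch in the spirit of the inspection helpers (not the exact implementation):

```python
import sys

def is_numpy_array(x):
    # If the caller's process never imported numpy, x cannot possibly
    # be an ndarray, so we can answer False without importing numpy.
    if 'numpy' not in sys.modules:
        return False
    import numpy as np  # cheap: the module is already in sys.modules
    return isinstance(x, np.ndarray)
```

The same guard pattern works for CuPy, PyTorch, and the other wrapped libraries, which is what keeps them soft dependencies.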
diff --git a/docs/dev/tests.md b/docs/dev/tests.md
new file mode 100644
index 00000000..6a1ee2f7
--- /dev/null
+++ b/docs/dev/tests.md
@@ -0,0 +1,29 @@
+# Tests
+
+The majority of the behavior for array-api-compat is tested by the
+[array-api-tests](https://github.com/data-apis/array-api-tests) test suite for
+the array API standard. There are also array-api-compat specific tests in
+[`tests/`](https://github.com/data-apis/array-api-compat/tree/main/tests).
+These tests should be limited to things that are not tested by the test suite,
+e.g., tests for [helper functions](../helper-functions.rst) or for behavior
+that is not strictly required by the standard.
+
+array-api-tests is run against all supported libraries on CI
+([except for JAX](jax-support)). This is achieved by a [reusable GitHub Actions
+Workflow](https://github.com/data-apis/array-api-compat/blob/main/.github/workflows/array-api-tests.yml).
+Most libraries have tests that must be xfailed or skipped for various reasons.
+These are defined in specific `<library>-xfails.txt` files and are
+automatically forwarded to array-api-tests.
+
+You may often need to update these xfail files, either to add new xfails
+(e.g., because of new test suite features, or because a test that was
+previously thought to be passing is actually flaky). Try to keep the xfails
+files organized, with comments pointing to upstream issues whenever possible.
+
+From time to time, xpass tests should be removed from the xfail files, but be
+aware that many xfail tests are flaky, so an xpass should only be removed if
+you know that the underlying issue has been fixed.
+
+Array libraries that require a GPU to run (currently only CuPy) cannot be
+tested on CI. There is a helper script `test_cupy.sh` that can be used to
+manually test CuPy on a machine with a CUDA GPU.
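For illustration, an xfails file is roughly a list of array-api-tests test IDs, one per line, with `#` comments tying entries to upstream issues. The entries below are hypothetical, not taken from the real files:

```
# Hypothetical example entry; see the upstream issue tracker
array_api_tests/test_signatures.py::test_func_signature[std]
```

Keeping a comment next to each entry makes it much easier to decide later whether an xfail can be removed.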
diff --git a/docs/helper-functions.rst b/docs/helper-functions.rst new file mode 100644 index 00000000..dcaa2e44 --- /dev/null +++ b/docs/helper-functions.rst @@ -0,0 +1,51 @@ +Helper Functions +================ + +.. currentmodule:: array_api_compat + +In addition to the wrapped library namespaces and functions in the array API +specification, there are several helper functions included here that aren't +part of the specification but which are useful for using the array API: + +Entry-point Helpers +------------------- + +The `array_namespace()` function is the primary entry-point for array API +consuming libraries. + + +.. autofunction:: array_namespace +.. autofunction:: is_array_api_obj + +Array Method Helpers +-------------------- + +array-api-compat does not attempt to wrap or monkey patch the array object for +any library. Consequently, any API differences for the `array object +`__ +cannot be directly wrapped. Some libraries do not define some of these methods +or define them differently. For these, helper functions are provided which can +be used instead. + +Note that if you have a compatibility issue with an operator method (like +`__add__`, i.e., `+`) you can prefer to use the corresponding `elementwise +function +`__ +instead, which would be wrapped. + +.. autofunction:: device +.. autofunction:: to_device +.. autofunction:: size + +Inspection Helpers +------------------ + +These convenience functions can be used to test if an array comes from a +specific library without importing that library if it hasn't been imported +yet. + +.. autofunction:: is_numpy_array +.. autofunction:: is_cupy_array +.. autofunction:: is_torch_array +.. autofunction:: is_dask_array +.. 
autofunction:: is_jax_array diff --git a/docs/index.md b/docs/index.md new file mode 100644 index 00000000..287c7a12 --- /dev/null +++ b/docs/index.md @@ -0,0 +1,186 @@ +# Array API compatibility library + +This is a small wrapper around common array libraries that is compatible with +the [Array API standard](https://data-apis.org/array-api/latest/). Currently, +NumPy, CuPy, PyTorch, Dask, and JAX are supported. If you want support for other array +libraries, or if you encounter any issues, please [open an +issue](https://github.com/data-apis/array-api-compat/issues). + +Note that some of the functionality in this library is backwards incompatible +with the corresponding wrapped libraries. The end-goal is to eventually make +each array library itself fully compatible with the array API, but this +requires making backwards incompatible changes in many cases, so this will +take some time. + +Currently all libraries here are implemented against the [2022.12 +version](https://data-apis.org/array-api/2022.12/) of the standard. 
+ +## Installation + +`array-api-compat` is available on both [PyPI](https://pypi.org/project/array-api-compat/) + +``` +python -m pip install array-api-compat +``` + +and [conda-forge](https://anaconda.org/conda-forge/array-api-compat) + +``` +conda install --channel conda-forge array-api-compat +``` + +## Usage + +The typical usage of this library will be to get the corresponding array API +compliant namespace from the input arrays using {func}`~.array_namespace()`, like + +```py +def your_function(x, y): + xp = array_api_compat.array_namespace(x, y) + # Now use xp as the array library namespace + return xp.mean(x, axis=0) + 2*xp.std(y, axis=0) +``` + +If you wish to have library-specific code-paths, you can import the +corresponding wrapped namespace for each library, like + +```py +import array_api_compat.numpy as np +``` + +```py +import array_api_compat.cupy as cp +``` + +```py +import array_api_compat.torch as torch +``` + +```py +import array_api_compat.dask as da +``` + +```{note} +There is no `array_api_compat.jax` submodule. JAX support is contained in JAX +itself in the `jax.experimental.array_api` module. array-api-compat simply +wraps that submodule. The main JAX support in this module consists of +supporting it in the [helper functions](helper-functions). +``` + +Each will include all the functions from the normal NumPy/CuPy/PyTorch/dask.array +namespace, except that functions that are part of the array API are wrapped so +that they have the correct array API behavior. In each case, the array object +used will be the same array object from the wrapped library. + +(array-api-strict)= +## Difference between `array_api_compat` and `array_api_strict` + +[`array_api_strict`](https://github.com/data-apis/array-api-strict) is a +strict minimal implementation of the array API standard, formerly known as +`numpy.array_api` (see [NEP +47](https://numpy.org/neps/nep-0047-array-api-standard.html)). 
For example,
`array_api_strict` does not include any functions that are not part of the
array API specification, and will explicitly disallow behaviors that are not
required by the spec (e.g., [cross-kind type
promotions](https://data-apis.org/array-api/latest/API_specification/type_promotion.html)).
(`cupy.array_api` is similar to `array_api_strict`.)
+
+`array_api_compat`, on the other hand, is just an extension of the
+corresponding array library namespaces with changes needed to be compliant
+with the array API. It includes all additional library functions not mentioned
+in the spec, and allows any library behaviors not explicitly disallowed by it,
+such as cross-kind casting.
+
+In particular, unlike `array_api_strict`, this package does not use a separate
+`Array` object, but rather just uses the corresponding array library array
+objects (`numpy.ndarray`, `cupy.ndarray`, `torch.Tensor`, etc.) directly. This
+is because those are the objects that are going to be passed as inputs to
+functions by end users. This does mean that a few behaviors cannot be wrapped
+(see below), but most of the array API is functional, so this does not affect
+most things.
+
+Array consuming library authors coding against the array API may wish to test
+against `array_api_strict` to ensure they are not using functionality outside
+of the standard, but prefer this implementation as the default behavior for
+end-users.
+
+(vendoring)=
+## Vendoring
+
+This library supports vendoring as an installation method. To vendor the
+library, simply copy `array_api_compat` into the appropriate place in the
+library, like
+
+```
+cp -R array_api_compat/ mylib/vendored/array_api_compat
+```
+
+You may also rename it to something else if you like (nothing in the code
+references the name "array_api_compat").
+
+Alternatively, the library may be installed as a dependency from PyPI.
+ +(scope)= +## Scope + +At this time, the scope of array-api-compat is limited to wrapping array +libraries so that they can comply with the [array API +standard](https://data-apis.org/array-api/latest/API_specification/index.html). +This includes a small set of [helper functions](helper-functions.rst) which may +be useful to most users of array-api-compat, for instance, functions that +provide meta-functionality to aid in supporting the array API, or functions +that are necessary to work around wrapping limitations for certain libraries. + +Things that are out-of-scope include: + +- functions that have not yet been +standardized (although note that functions that are in a draft version of the +standard are *in scope*), + +- functions that are complicated to implement correctly/maintain, + +- anything that requires the use of non-Python code. + +If you want a function that is not in array-api-compat that isn't part of the +standard, you should request it either for [inclusion in the +standard](https://github.com/data-apis/array-api/issues) or in specific array +libraries. + +Why is the scope limited in this way? Firstly, we want to keep +array-api-compat as primarily a +[polyfill](https://en.wikipedia.org/wiki/Polyfill_(programming)) compatibility +shim. The goal is to let consuming libraries use the array API today, even +with array libraries that do not yet fully support it. In an ideal world---one that we hope to eventually see in the future---array-api-compat would be +unnecessary, because every array library would fully support the standard. + +The inclusion of non-standardized functions in array-api-compat would +undermine this goal. But much more importantly, it would also undermine the +goals of the [Data APIs Consortium](https://data-apis.org/). The Consortium +creates the array API standard via the consensus of stakeholders from various +array libraries and users. 
If a not-yet-standardized function were included in
+array-api-compat, it would become a *de facto* standard, bypassing the
+decision-making process of the Consortium.
+
+Secondly, we want to keep array-api-compat as minimal as possible, so that it
+is easy for libraries to add as a (possibly vendored) dependency.
+
+Thirdly, array-api-compat has a relatively small development team. Pull
+requests to array-api-compat would not necessarily receive the same stringent
+level of scrutiny that changes to established array libraries like NumPy or
+PyTorch would. For wrapped standard functions, this is fine, since the
+wrappers typically just clean up a few small inconsistencies from the
+standard, leaving the complexity of the implementation to the base array
+library function. Furthermore, standard functions are tested by the rigorous
+[array-api-tests](https://github.com/data-apis/array-api-tests) test suite.
+For this reason, functions that require complex implementations are generally
+out-of-scope and are better implemented in upstream array libraries.
+
+```{toctree}
+:titlesonly:
+:hidden:
+
+helper-functions.rst
+supported-array-libraries.md
+dev/index.md
+```
diff --git a/docs/make.bat b/docs/make.bat
new file mode 100644
index 00000000..32bb2452
--- /dev/null
+++ b/docs/make.bat
@@ -0,0 +1,35 @@
+@ECHO OFF
+
+pushd %~dp0
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+	set SPHINXBUILD=sphinx-build
+)
+set SOURCEDIR=.
+set BUILDDIR=_build
+
+%SPHINXBUILD% >NUL 2>NUL
+if errorlevel 9009 (
+	echo.
+	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
+	echo.installed, then set the SPHINXBUILD environment variable to point
+	echo.to the full path of the 'sphinx-build' executable. Alternatively you
+	echo.may add the Sphinx directory to PATH.
+	echo.
+ echo.If you don't have Sphinx installed, grab it from + echo.https://www.sphinx-doc.org/ + exit /b 1 +) + +if "%1" == "" goto help + +%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% +goto end + +:help +%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% + +:end +popd diff --git a/docs/requirements.txt b/docs/requirements.txt new file mode 100644 index 00000000..dbec7740 --- /dev/null +++ b/docs/requirements.txt @@ -0,0 +1,6 @@ +furo +linkify-it-py +myst-parser +sphinx +sphinx-copybutton +sphinx-autobuild diff --git a/docs/supported-array-libraries.md b/docs/supported-array-libraries.md new file mode 100644 index 00000000..861b74bd --- /dev/null +++ b/docs/supported-array-libraries.md @@ -0,0 +1,134 @@ +# Supported Array Libraries + +The following array libraries are supported. This page outlines the known +differences between this library and the array API specification for the +supported packages. + +Note that the {func}`~.array_namespace()` helper will also support any array +library that explicitly supports the array API by defining +[`__array_namespace__`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.array.__array_namespace__.html). + +Any reasonably popular array library is in-scope for array-api-compat, +assuming it is possible to wrap it to support the array API without too much +complexity. If your favorite library is not supported, feel free to open an +[issue or pull request](https://github.com/data-apis/array-api-compat/issues). + +## [NumPy](https://numpy.org/) and [CuPy](https://cupy.dev/) + +NumPy 2.0 has full array API compatibility. This package is not strictly +necessary for NumPy 2.0 support, but may still be useful for the support of +other libraries, as well as for the [helper functions](helper-functions.rst). 
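The reason this package is largely unnecessary for NumPy 2.0 is that the standard's dispatch protocol works out of the box there: a namespace-lookup helper can simply return whatever the array advertises via `__array_namespace__`. A simplified sketch of that dispatch rule follows; the names are illustrative, and the real {func}`~.array_namespace()` additionally recognizes the wrapped libraries and returns the compat namespaces for them:

```python
def array_namespace_sketch(*xs):
    # Simplified dispatch: prefer the namespace the arrays advertise
    # themselves, via the standard's __array_namespace__ protocol.
    # (Real code also special-cases numpy < 2.0, cupy, torch, etc.)
    namespaces = {x.__array_namespace__() for x in xs
                  if hasattr(x, "__array_namespace__")}
    if len(namespaces) != 1:
        raise TypeError("expected arrays from exactly one array library")
    return namespaces.pop()

class FakeArray:
    # Minimal stand-in implementing the protocol, for illustration
    def __array_namespace__(self, api_version=None):
        return "fake-namespace"
```

Any library whose arrays define `__array_namespace__` is therefore picked up automatically, with no wrapping needed.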
+
+For NumPy 1.26, as well as corresponding versions of CuPy, the following
+deviations from the standard should be noted:
+
+- The array methods `__array_namespace__`, `device` (for NumPy), `to_device`,
+  and `mT` are not defined. This is because this library reuses the existing
+  `np.ndarray` and `cp.ndarray` objects, which we don't want to monkey patch
+  or wrap. The [helper functions](helper-functions.rst) {func}`~.device()`
+  and {func}`~.to_device()` are provided to work around these missing
+  methods. `x.mT` can be replaced with `xp.linalg.matrix_transpose(x)`.
+  {func}`~.array_namespace()` should be used instead of
+  `x.__array_namespace__`.
+
+- Value-based casting for scalars will be in effect unless explicitly disabled
+  with the environment variable `NPY_PROMOTION_STATE=weak` or
+  `np._set_promotion_state('weak')` (requires NumPy 1.24 or newer; see [NEP
+  50](https://numpy.org/neps/nep-0050-scalar-promotion.html) and
+  https://github.com/numpy/numpy/issues/22341).
+
+- `asarray()` does not support `copy=False`.
+
+- Functions which are not wrapped may not have the same type annotations
+  as the spec.
+
+- Functions which are not wrapped may not use positional-only arguments.
+
+The minimum supported NumPy version is 1.21. However, this older version of
+NumPy has a few issues:
+
+- `unique_*` will not compare nans as unequal.
+- `finfo()` has no `smallest_normal`.
+- No `from_dlpack` or `__dlpack__`.
+- `argmax()` and `argmin()` do not have `keepdims`.
+- `qr()` doesn't support matrix stacks.
+- `asarray()` doesn't support `copy=True` (as noted above, `copy=False` is not
+  supported even in the latest NumPy).
+- Type promotion behavior will be value-based for 0-D arrays (and there is no
+  `NPY_PROMOTION_STATE=weak` to disable this).
+
+If any of these are an issue, it is recommended to bump your minimum NumPy
+version.
+
+## [PyTorch](https://pytorch.org/)
+
+- Like NumPy/CuPy, we do not wrap the `torch.Tensor` object.
It is missing the
+  `__array_namespace__` and `to_device` methods, so the corresponding helper
+  functions {func}`~.array_namespace()` and {func}`~.to_device()` in this
+  library should be used instead.
+
+- The {external+torch:meth}`x.size() <torch.Tensor.size>` attribute on
+  `torch.Tensor` is a method that behaves differently from the
+  [`x.size`](https://data-apis.org/array-api/draft/API_specification/generated/array_api.array.size.html)
+  attribute in the spec. Use the {func}`~.size()` helper function as a
+  portable workaround.
+
+- PyTorch does not have unsigned integer types other than `uint8`, and no
+  attempt is made to implement them here.
+
+- PyTorch has type promotion semantics that differ from the array API
+  specification for 0-D tensor objects. The array functions in this wrapper
+  library do work around this, but the operators on the Tensor object do not,
+  as no operators or methods on the Tensor object are modified. If this is a
+  concern, use the functional form instead of the operator form, e.g., `add(x,
+  y)` instead of `x + y`.
+
+- [`unique_all()`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.unique_all.html#array_api.unique_all)
+  is not implemented, due to the fact that `torch.unique` does not support
+  returning the `indices` array. The other
+  [`unique_*`](https://data-apis.org/array-api/latest/API_specification/set_functions.html)
+  functions are implemented.
+
+- Slices do not support negative steps.
+
+- [`std()`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.std.html#array_api.std)
+  and
+  [`var()`](https://data-apis.org/array-api/latest/API_specification/generated/array_api.var.html#array_api.var)
+  do not support floating-point `correction`.
+
+- The `stream` argument of the {func}`~.to_device()` helper is not supported.
+
+- As with NumPy, type annotations and positional-only arguments may not
+  exactly match the spec for functions that are not wrapped at all.
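For instance, a portable element-count helper along the lines of {func}`~.size()` can be sketched as follows. This is a simplified, illustrative version only, and `FakeArray` is a stand-in used purely for demonstration:

```python
import math

def size(x):
    # Portable element count for any object with a spec-style .shape
    # tuple. (torch.Tensor.size() is a shape-returning *method*, while
    # the spec's x.size is an element-count *attribute*, so a helper
    # function smooths over the difference.)
    if None in x.shape:  # unknown dimension in a lazy array
        return None
    return math.prod(x.shape)

class FakeArray:
    # Minimal stand-in with a spec-style .shape, for illustration
    def __init__(self, shape):
        self.shape = shape
```

Consuming code calls `size(x)` instead of touching `x.size` directly, and gets the same answer regardless of which library produced `x`.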
+
+The minimum supported PyTorch version is 1.13.
+
+(jax-support)=
+## [JAX](https://jax.readthedocs.io/en/latest/)
+
+Unlike the other libraries supported here, JAX array API support is contained
+entirely in the JAX library. The JAX array API support is tracked at
+https://github.com/google/jax/issues/18353.
+
+## [Dask](https://www.dask.org/)
+
+If you're using Dask with NumPy, many of the same limitations that apply to
+NumPy will also apply to Dask. Beyond those, other limitations include missing
+sort functionality (no `sort` or `argsort`), and limited support for the
+optional `linalg` and `fft` extensions.
+
+In particular, the `fft` namespace is not compliant with the array API spec.
+Any functions that you find under the `fft` namespace are the original,
+unwrapped functions under
+[`dask.array.fft`](https://docs.dask.org/en/latest/array-api.html#fast-fourier-transforms),
+which may or may not be array API compliant. Use at your own risk!
+
+For `linalg`, several methods are missing, for example:
+
+- `cross`
+- `det`
+- `eigh`
+- `eigvalsh`
+- `matrix_power`
+- `pinv`
+- `slogdet`
+- `matrix_norm`
+- `matrix_rank`
+
+Other methods may only be partially implemented or return incorrect results
+at times.
+
+The minimum supported Dask version is 2023.12.0.
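Given that several `linalg` functions may be absent, consuming code can feature-test the namespace before relying on them. The following is a defensive sketch on the consumer side, not part of the array-api-compat API, and the stand-in namespaces exist only for demonstration:

```python
from types import SimpleNamespace

def has_linalg_func(xp, name):
    # Feature-test: does this namespace provide xp.linalg.<name>?
    linalg = getattr(xp, "linalg", None)
    return linalg is not None and hasattr(linalg, name)

# Stand-in namespaces, for illustration only
full_ns = SimpleNamespace(linalg=SimpleNamespace(det=lambda x: x))
partial_ns = SimpleNamespace(linalg=SimpleNamespace())
```

Code that needs, say, `det` on Dask can then fall back to another strategy (or raise a clear error) when the check fails.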