Out-of-tree extension autoloading in Python
===========================================

What is it?
-----------

The extension autoloading mechanism enables PyTorch to automatically
load out-of-tree backend extensions without explicit import statements.
On the one hand, it improves the user experience: users can follow the
familiar PyTorch device programming model without explicitly loading or
importing device-specific extensions. On the other hand, it enables
existing PyTorch applications to run on out-of-tree devices with zero
code changes. For further details, refer to the
`[RFC] Autoload Device Extension <https://github.com/pytorch/pytorch/issues/122468>`_.

.. note::

    This feature is enabled by default and can be disabled by setting
    ``export TORCH_DEVICE_BACKEND_AUTOLOAD=0``.
    If you get an error like "Failed to load the backend extension",
    the error is unrelated to PyTorch itself; disable this feature and
    ask the out-of-tree extension maintainer for help.
How to apply this mechanism to out-of-tree extensions?
------------------------------------------------------

For instance, suppose you have a backend named ``foo`` and a corresponding package named ``torch_foo``. Ensure that
your package is compatible with PyTorch 2.5+ and includes the following snippet in its ``__init__.py`` file:

.. code-block:: python

    def _autoload():
        print("No need to import torch_foo anymore! Check things are working with `torch.foo.is_available()`.")

Then the only thing you need to do is define an entry point within your Python package:

.. code-block:: python

    setup(
        name="torch_foo",
        version="1.0",
        entry_points={
            "torch.backends": [
                "torch_foo = torch_foo:_autoload",
            ],
        },
    )

Now the ``torch_foo`` module is imported automatically when running ``import torch``:

.. code-block:: python

    >>> import torch
    No need to import torch_foo anymore! Check things are working with `torch.foo.is_available()`.
    >>> torch.foo.is_available()
    True

You may encounter issues with circular imports; the following examples are intended to help you address them.

Examples
^^^^^^^^

Here we take Intel Gaudi HPU and Huawei Ascend NPU as examples to show how to
integrate your out-of-tree extension with PyTorch using the autoloading mechanism.

`habana_frameworks.torch`_ is a Python package that enables users to run
PyTorch programs on Intel Gaudi via the PyTorch ``HPU`` device key.

.. _habana_frameworks.torch: https://docs.habana.ai/en/latest/PyTorch/Getting_Started_with_PyTorch_and_Gaudi/Getting_Started_with_PyTorch.html

``habana_frameworks.torch`` is a submodule of ``habana_frameworks``, so we add an entry point for
``__autoload()`` in ``habana_frameworks/setup.py``:

.. code-block:: diff

     setup(
         name="habana_frameworks",
         version="2.5",
    +    entry_points={
    +        'torch.backends': [
    +            "device_backend = habana_frameworks:__autoload",
    +        ],
    +    }
     )

In ``habana_frameworks/__init__.py``, we use a global variable to track whether our module has been loaded:

.. code-block:: python

    is_loaded = False  # A module-level flag to track whether our module has been imported

    def __autoload():
        # This is an entry point for the PyTorch autoload mechanism.
        # If the flag is already set, our backend has been loaded, either
        # explicitly or by the autoload mechanism, and importing it again
        # should be skipped to avoid circular imports.
        global is_loaded
        if is_loaded:
            return
        import habana_frameworks.torch
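
The load-once guard above can be demonstrated with a small self-contained sketch.
The names are hypothetical, and ``_import_backend()`` merely stands in for
``import habana_frameworks.torch``, which sets the flag as a side effect of being imported:

```python
is_loaded = False  # flipped to True when the backend submodule is imported
load_count = 0     # counts how many times the (heavy) import actually runs

def _import_backend():
    # Stand-in for `import habana_frameworks.torch`; the real submodule
    # sets habana_frameworks.is_loaded = True at import time.
    global is_loaded, load_count
    is_loaded = True
    load_count += 1

def __autoload():
    # Entry point hook: skip the import if the backend was already loaded,
    # either explicitly by the user or by an earlier autoload call.
    if is_loaded:
        return
    _import_backend()

__autoload()  # performs the import
__autoload()  # no-op: the guard short-circuits
print(load_count)  # → 1
```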

In ``habana_frameworks/torch/__init__.py``, we prevent circular imports by updating the state of the global variable:

.. code-block:: python

    import habana_frameworks

    # Tell the autoload hook that the backend is already being loaded,
    # preventing the torch autoload mechanism from causing circular imports
    habana_frameworks.is_loaded = True

`torch_npu`_ enables users to run PyTorch programs on Huawei Ascend NPU. It
leverages the ``PrivateUse1`` device key and exposes the device name
as ``npu`` to the end users.

.. _torch_npu: https://github.com/Ascend/pytorch

We define an entry point in `torch_npu/setup.py`_:

.. _torch_npu/setup.py: https://github.com/Ascend/pytorch/blob/master/setup.py#L618

.. code-block:: diff

     setup(
         name="torch_npu",
         version="2.5",
    +    entry_points={
    +        'torch.backends': [
    +            'torch_npu = torch_npu:_autoload',
    +        ],
    +    }
     )

Unlike ``habana_frameworks``, ``torch_npu`` uses the environment variable ``TORCH_DEVICE_BACKEND_AUTOLOAD``
to control the autoloading process. For example, it can be set to ``0`` to disable autoloading and prevent circular imports:

.. code-block:: python

    import os

    # Disable autoloading before running 'import torch'
    os.environ['TORCH_DEVICE_BACKEND_AUTOLOAD'] = '0'

    import torch

How it works
------------

.. image:: ../_static/img/python_extension_autoload_impl.png
   :alt: Autoloading implementation
   :align: center

This mechanism is implemented on top of Python's `entry points
<https://packaging.python.org/en/latest/specifications/entry-points/>`_
mechanism. During ``import torch``, ``torch/__init__.py`` discovers and
loads all entry points in the ``torch.backends`` group that are defined
by out-of-tree extensions.

As shown above, after installing ``torch_foo``, your Python module is imported
when its entry point is loaded, and the registered hook is then called, where
you can perform any necessary initialization work.

See the implementation in this pull request: `[RFC] Add support for device extension autoloading
<https://github.com/pytorch/pytorch/pull/127074>`_.

Conclusion
----------

This tutorial has guided you through the out-of-tree extension autoloading
mechanism, including its usage and implementation.