Commit 6537199

[small] fix link (#2906)
1 parent a58f40f commit 6537199

File tree

1 file changed: +4 −4 lines


intermediate_source/process_group_cpp_extension_tutorial.rst (+4 −4)
@@ -1,7 +1,7 @@
 Customize Process Group Backends Using Cpp Extensions
 =====================================================
 
-**Author**: `Howard Huang <https://github.com/H-Huang>`, `Feng Tian <https://github.com/ftian1>`__, `Shen Li <https://mrshenli.github.io/>`__, `Min Si <https://minsii.github.io/>`__
+**Author**: `Howard Huang <https://github.com/H-Huang>`__, `Feng Tian <https://github.com/ftian1>`__, `Shen Li <https://mrshenli.github.io/>`__, `Min Si <https://minsii.github.io/>`__
 
 .. note::
    |edit| View and edit this tutorial in `github <https://github.com/pytorch/tutorials/blob/main/intermediate_source/process_group_cpp_extension_tutorial.rst>`__.

@@ -100,7 +100,7 @@ repository for the full implementation.
   // The collective communication APIs without a custom implementation
   // will error out if invoked by application code.
 };
-
+
 class WorkDummy : public Work {
  public:
   WorkDummy(

@@ -266,8 +266,8 @@ After installation, you can conveniently use the ``dummy`` backend when calling
 `init_process_group <https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group>`__
 as if it is an builtin backend.
 
-We can specify dispatching based on backend by changing the ``backend`` argument of ``init_process_group``. We
-can dispatch collective with CPU tensor to ``gloo`` backend and dispatch collective with CUDA tensor to ``dummy`` backend by
+We can specify dispatching based on backend by changing the ``backend`` argument of ``init_process_group``. We
+can dispatch collective with CPU tensor to ``gloo`` backend and dispatch collective with CUDA tensor to ``dummy`` backend by
 specifying ``cpu:gloo,cuda:dummy`` as the backend argument.
 
 To send all tensors to ``dummy`` backend, we can simply specify ``dummy`` as the backend argument.
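The final hunk above describes the ``cpu:gloo,cuda:dummy`` syntax accepted by the ``backend`` argument of ``init_process_group``. A minimal sketch of how such a string maps device types to backends is below; ``parse_backend_spec`` is a hypothetical helper written only for illustration, not part of ``torch.distributed``:

```python
# Hypothetical helper illustrating the "device:backend" mapping syntax
# that the tutorial passes to init_process_group's ``backend`` argument.
# This is an illustrative sketch, not the actual torch.distributed parser.
def parse_backend_spec(spec: str) -> dict:
    """Map "cpu:gloo,cuda:dummy" to {"cpu": "gloo", "cuda": "dummy"}.

    A bare backend name such as "dummy" applies to every device type.
    """
    if ":" not in spec:
        return {"cpu": spec, "cuda": spec}
    return dict(pair.split(":", 1) for pair in spec.split(","))

# CPU collectives dispatch to gloo, CUDA collectives to the dummy backend.
print(parse_backend_spec("cpu:gloo,cuda:dummy"))
# A single name routes all tensors to that one backend.
print(parse_backend_spec("dummy"))
```

With a mapping like this, the real call in the tutorial would look like ``dist.init_process_group(backend="cpu:gloo,cuda:dummy", ...)`` for split dispatch, or ``backend="dummy"`` to send every collective to the custom backend.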
