- If oneccl_bindings_for_pytorch is built without oneCCL and uses the oneCCL installed in the system, dynamically link oneCCL from the oneAPI Base Toolkit (recommended usage):
```bash
source $basekit_root/ccl/latest/env/vars.sh
```
Note: Make sure you have installed the [Base Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#base-kit) when using Intel® oneCCL Bindings for PyTorch\* on Intel® GPUs.
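As a concrete sketch, `basekit_root` can be set and the script sourced defensively. The `/opt/intel/oneapi` prefix below is an assumption (the common default on Linux); substitute your actual Base Toolkit install location:

```shell
# Assumed default oneAPI Base Toolkit prefix on Linux; adjust to your install.
basekit_root=${basekit_root:-/opt/intel/oneapi}
vars="$basekit_root/ccl/latest/env/vars.sh"

if [ -f "$vars" ]; then
    # Load the oneCCL environment (library paths, etc.) into this shell.
    source "$vars"
    echo "oneCCL environment loaded from $vars"
else
    echo "oneCCL vars.sh not found at $vars (is the Base Toolkit installed?)"
fi
```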
- If oneccl_bindings_for_pytorch is built with oneCCL from a third party or installed from a prebuilt wheel:
Dynamically link the oneCCL and Intel MPI libraries:
```bash
source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl; print(torch_ccl.cwd)")/env/setvars.sh
```
Dynamically link oneCCL only (without Intel MPI):
```bash
source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl; print(torch_ccl.cwd)")/env/vars.sh
```
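The `$(python -c …)` sub-shell above works because the package exposes its install directory as `torch_ccl.cwd`; the env scripts live under that directory. The same look-up-then-source pattern can be illustrated with any installed package. A minimal stand-in sketch, using the stdlib `json` package purely as an example since oneccl_bindings_for_pytorch may not be installed:

```python
import os
import json  # stand-in for oneccl_bindings_for_pytorch, which exposes torch_ccl.cwd

# Resolve the package's on-disk directory, just as the shell command
# $(python -c "...print(torch_ccl.cwd)") resolves the bindings' install path.
pkg_dir = os.path.dirname(os.path.abspath(json.__file__))
print(pkg_dir)

# For oneccl_bindings_for_pytorch, env/vars.sh and env/setvars.sh sit under
# that directory; here we only confirm the resolved path is a real directory.
assert os.path.isdir(pkg_dir)
```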
## Usage
```python
model = torch.nn.parallel.DistributedDataParallel(model, ...)
...
```
(oneccl_bindings_for_pytorch is built without oneCCL; it uses the oneCCL and Intel MPI (if needed) libraries installed in the system.)