Distributed Ranges
===================

.. image:: https://github.com/oneapi-src/distributed-ranges/actions/workflows/pr.yml/badge.svg
   :target: https://github.com/oneapi-src/distributed-ranges/actions/workflows/pr.yml

.. image:: https://www.bestpractices.dev/projects/8975/badge
   :target: https://www.bestpractices.dev/projects/8975

About
-----

Distributed Ranges is a C++ productivity library for distributed and partitioned memory based on C++ ranges.
It offers a collection of data structures, views, and algorithms for building generic abstractions
and provides interoperability with MPI, SHMEM, SYCL, and OpenMP, as well as portability across CPUs and GPUs.
NUMA-aware allocators and distributed data structures facilitate development of C++ applications
on heterogeneous nodes with multiple devices, achieving excellent performance and parallel scalability
by exploiting local compute and data access.

Main strength of the library
============================

In this model you can:

* create a `distributed data structure` that works with all our algorithms out of the box, and
* create an `algorithm` that works with all our distributed data structures out of the box.

Distributed Ranges is the `glue` that makes this possible; a minimal sketch of what this looks like is shown below.

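
The snippet below is illustrative only: the ``dr/mp.hpp`` header, the ``dr::mp`` namespace, and the ``init``/``iota``/``reduce`` calls are assumptions modelled on the `Distributed Ranges Tutorial`_ examples rather than API text taken from this README::

    // Illustrative sketch only: names and signatures are assumptions based on
    // the tutorial examples, not verbatim API from this document.
    #include <dr/mp.hpp>

    int main(int argc, char **argv) {
      dr::mp::init();                           // assumed runtime setup (MPI/SYCL)
      {
        dr::mp::distributed_vector<int> v(100); // one distributed data structure...
        dr::mp::iota(v, 1);                     // ...used by stock algorithms out of the box
        auto sum = dr::mp::reduce(v);           // reduction over the whole distributed range
        (void)sum;
      }                                         // container destroyed before finalize()
      dr::mp::finalize();
      return 0;
    }
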

Documentation
-------------

* Usage:

  * Introductory presentation: `Distributed Ranges, why you need it`_, 2024
  * Article: `Get Started with Distributed Ranges`_, 2023
  * Tutorial: `Distributed Ranges Tutorial`_

* Design / Implementation:

  * Conference paper: `Distributed Ranges, A Model for Distributed Data Structures, Algorithms, and Views`_, 2024
  * Talk: `CppCon 2023; Benjamin Brock; Distributed Ranges`_, 2023
  * Technical presentation: `Intel Innovation'23`_, 2023
  * `API specification`_


Requirements
------------

* Linux
* CMake >= 3.20
* `oneAPI HPC Toolkit`_ installed

Enable oneAPI by::

    source ~/intel/oneapi/setvars.sh

... or by::

    source /opt/intel/oneapi/setvars.sh

... or by sourcing the ``oneapi/setvars.sh`` script from wherever it is installed on your system.

Additional requirements for NVIDIA GPUs
=======================================

* `CUDA`_
* `oneAPI for NVIDIA GPUs`_ plugin

When enabling oneAPI, use the ``--include-intel-llvm`` option, e.g. call::

    source ~/intel/oneapi/setvars.sh --include-intel-llvm

... instead of ``source ~/intel/oneapi/setvars.sh``.


Build and run
-------------

Build for Intel GPU/CPU
=======================

All tests and examples can be built by::

    CXX=icpx cmake -B build
    cmake --build build -- -j


Build for NVIDIA GPU
====================

.. note::
   The Distributed Ranges library works in two models:

   - Multi Process (based on SYCL and MPI)
   - Single Process (based on pure SYCL)

   On NVIDIA GPUs, only the `Multi Process` model is currently supported.

To build multi-process tests, call::

    CXX=icpx cmake -B build -DENABLE_CUDA:BOOL=ON
    cmake --build build --target mp-all-tests -- -j


Run tests
=========

Run multi-process tests::

    ctest --test-dir build --output-on-failure -L MP -j 4

Run single-process tests::

    ctest --test-dir build --output-on-failure -L SP -j 4

Run all tests::

    ctest --test-dir build --output-on-failure -L TESTLABEL -j 4


Examples
--------

See the `Distributed Ranges Tutorial`_ for a few well-explained examples.

Adding Distributed Ranges to your project
-----------------------------------------

If your project uses CMake, add the following to your
``CMakeLists.txt`` to download the library::
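
    # The exact snippet is elided in this excerpt; the stanza below is a sketch.
    # The repository URL comes from this README's links, while the target name,
    # GIT_TAG, and use of FetchContent are assumptions.
    include(FetchContent)
    FetchContent_Declare(
      dr
      GIT_REPOSITORY https://github.com/oneapi-src/distributed-ranges.git
      GIT_TAG main)
    FetchContent_MakeAvailable(dr)
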

The above will define targets that can be included in your project::

    target_link_libraries(<application> MPI::MPI_CXX DR::mpi)

See the `Distributed Ranges Tutorial`_
for a live example of a CMake project that imports and uses Distributed Ranges.

Logging
-------

Add the code below to your ``main`` function to enable logging.

If using the `Single Process` model::

    std::ofstream logfile("dr.log");
    dr::drlog.set_file(logfile);

If using the `Multi Process` model::

    int my_mpi_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_mpi_rank);
    std::ofstream logfile(fmt::format("dr.{}.log", my_mpi_rank));
    dr::drlog.set_file(logfile);

Example of adding a custom log statement to your code::

    DRLOG("my debug message with varA:{} and varB:{}", a, b);


Contact us
----------

Contact us by writing a `new issue`_.

We seek collaboration opportunities and welcome feedback on ways to extend the library
according to developer needs.

See also
--------

* `CONTRIBUTING`_
* `Fuzz Testing`_ - Fuzz testing of Distributed Ranges APIs
* `Spec Editing`_ - Editing the API document
* `Print Type`_ - Print types at compile time
* `Testing`_ - Test system maintenance
* `Security`_ - Security policy
* `Doxygen`_

.. _`Security`: SECURITY.md
.. _`Testing`: doc/developer/testing
.. _`CONTRIBUTING`: CONTRIBUTING.md
.. _`Distributed Ranges, why you need it`: https://github.com/oneapi-src/distributed-ranges/blob/main/doc/presentations/Distributed%20Ranges%2C%20why%20you%20need%20it.pdf
.. _`Get Started with Distributed Ranges`: https://www.intel.com/content/www/us/en/developer/articles/guide/get-started-with-distributed-ranges.html
.. _`Distributed Ranges Tutorial`: https://github.com/oneapi-src/distributed-ranges-tutorial
.. _`Distributed Ranges, A Model for Distributed Data Structures, Algorithms, and Views`: https://dl.acm.org/doi/10.1145/3650200.3656632
.. _`CppCon 2023; Benjamin Brock; Distributed Ranges`: https://www.youtube.com/watch?v=X_dlJcV21YI
.. _`Intel Innovation'23`: https://github.com/oneapi-src/distributed-ranges/blob/main/doc/presentations/Distributed%20Ranges.pdf
.. _`API specification`: https://oneapi-src.github.io/distributed-ranges/spec/
.. _`Doxygen`: https://oneapi-src.github.io/distributed-ranges/doxygen/
.. _`new issue`: issues/new
.. _`oneAPI HPC Toolkit`: https://www.intel.com/content/www/us/en/developer/tools/oneapi/hpc-toolkit-download.html
.. _`oneAPI for NVIDIA GPUs`: https://developer.codeplay.com/products/oneapi/nvidia/home/
.. _`CUDA`: https://developer.nvidia.com/cuda-toolkit