A tool for building a scientific software stack from a recipe for vClusters on CSCS' Alps infrastructure.

Read the [documentation](https://eth-cscs.github.io/stackinator/) to get started.

## Bootstrapping

Use the `bootstrap.sh` script to install the necessary dependencies.
The dependencies are installed in the `external` directory at the root of the project.

## Basic usage

The tool generates the makefiles and spack configurations that build the spack environments that are packaged together in the spack stack.
It can be thought of as the equivalent of calling `cmake` or `configure` before running `make` to perform the configured build.

```bash
# configure the build
./bin/stack-config -b$BUILD_PATH -r$RECIPE_PATH

# build the spack stack
cd $BUILD_PATH
env --ignore-environment PATH=/usr/bin:/bin:`pwd`/spack/bin make modules store.squashfs -j64

# mount the stack
squashfs-run store.squashfs bash
```

* `-b, --build`: the path where the build stage is generated.
* `-r, --recipe`: the path with the recipe yaml files that describe the environment.
* `-d, --debug`: print detailed python error messages.

## Recipes

A recipe is the input provided to the tool. A recipe comprises the following yaml files in a directory:

* `config.yaml`: common configuration for the stack.
* `compilers.yaml`: the compilers provided by the stack.
* `environments.yaml`: environments that contain all the software packages.
* `modules.yaml`: _optional_ module generation rules
    * follows the spec for [spack module configuration](https://spack.readthedocs.io/en/latest/module_file_support.html)
* `packages.yaml`: _optional_ package rules.
    * follows the spec for [spack package configuration](https://spack.readthedocs.io/en/latest/build_settings.html)

### config

```yaml
name: nvgpu-basic
store: /user-environment
system: hohgant
spack:
    repo: https://github.com/spack/spack.git
    commit: 6408b51
modules: True
```

* `name`: a plain text name for the environment.
* `store`: the location where the environment will be mounted.
* `system`: the name of the vCluster on which the stack will be deployed.
    * one of `balfrin` or `hohgant`.
    * cluster-specific details such as the version and location of libfabric are used when configuring and building the stack.
* `spack`: which spack repository to use for installation.
* `mirrors`: _optional_ configure the use of build caches, see the [build cache documentation](docs/build-cache.md).
* `modules`: _optional_ enable/disable module file generation (default `True`).
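
As a sketch of how the optional `modules` field is used, the same recipe as above with module file generation switched off would look like this (all other values unchanged):

```yaml
# config.yaml -- sketch: the same stack, with module file generation disabled
name: nvgpu-basic
store: /user-environment
system: hohgant
spack:
    repo: https://github.com/spack/spack.git
    commit: 6408b51
modules: False
```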

### compilers

Take an example configuration:

```yaml
bootstrap:
  spec: gcc@11
gcc:
  specs:
  - gcc@11
llvm:
  requires: gcc@11
  specs:
  - llvm@14
```

The compilers are built in multiple stages:

1. *bootstrap*: A bootstrap gcc compiler is built using the system compiler (currently gcc 4.7.5).
    * `bootstrap:spec`: a single spec of the form `gcc@version`.
    * The selected version should have full support for the target architecture in order to build optimised gcc toolchains in step 2.
2. *gcc*: The bootstrap compiler is then used to build the gcc version(s) provided by the stack.
    * `gcc:specs`: a list of at least one spec of the form `gcc@version`.
3. *llvm*: (optional) The nvhpc and/or llvm toolchains are built using one of the gcc toolchains installed in step 2.
    * `llvm:specs`: a list of specs of the form `nvhpc@version` or `llvm@version`.
    * `llvm:requires`: the version of gcc from step 2 that is used to build the llvm compilers.

The first two steps are required, so that the simplest stack will provide at least one version of gcc compiled for the target architecture.

> **Note**
>
> Don't provide full specs, because the tool will insert "opinionated" specs for the target node type, for example:
> * `nvhpc@version` generates `nvhpc@version ~mpi~blas~lapack`
> * `llvm@14` generates `llvm@14 +clang targets=x86 ~gold ^ninja@kitware`
> * `gcc@11` generates `gcc@11 build_type=Release +profiled +strip`
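
The example above only adds `llvm@14` to the `llvm` toolchain. To also provide the NVIDIA HPC SDK compiler used by the `prgenv-nvidia` environment further below, an `nvhpc@version` spec can be appended to `llvm:specs`. This is a sketch only; the `nvhpc@22.7` version is illustrative, not taken from an actual recipe:

```yaml
# compilers.yaml -- sketch: provide nvhpc alongside llvm
bootstrap:
  spec: gcc@11
gcc:
  specs:
  - gcc@11
llvm:
  requires: gcc@11
  specs:
  - llvm@14
  - nvhpc@22.7   # illustrative version; any nvhpc@version spec fits here
```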

### environments

The software packages are configured as disjoint environments, each built with the same compiler, and configured with a single implementation of MPI.

#### example: a cpu-only gnu toolchain with MPI

```yaml
# environments.yaml
gcc-host:
  compiler:
      - toolchain: gcc
        spec: gcc@11
  unify: true
  specs:
  - hdf5 +mpi
  - fftw +mpi
  mpi:
      spec: cray-mpich
  gpu: false
```

An environment labelled `gcc-host` is built using `gcc@11` from the `gcc` compiler toolchain (**note**: the compiler spec must match a compiler from the toolchain that was installed via the `compilers.yaml` file).
The tool will generate a `spack.yaml` specification:

```yaml
# spack.yaml
spack:
  include:
  - compilers.yaml
  - config.yaml
  view: false
  concretizer:
    unify: True
  specs:
  - fftw +mpi
  - hdf5 +mpi
  - cray-mpich
  packages:
    all:
      compiler: [gcc@11]
    mpi:
      require: cray-mpich
```

> **Note**
>
> The `cray-mpich` spec is added to the list of package specs automatically.
> By setting `environments.ENV.mpi`, all packages in the environment `ENV` that use the virtual dependency `+mpi` will use the same `cray-mpich` implementation.

#### example: a gnu toolchain with MPI and NVIDIA GPU support

```yaml
# environments.yaml
gcc-nvgpu:
  compiler:
      - toolchain: gcc
        spec: gcc@11
  unify: true
  specs:
  - fftw +mpi
  - hdf5 +mpi
  mpi:
      spec: cray-mpich
  gpu: cuda
```

Setting `environments:gcc-nvgpu:gpu` to `cuda` will build `cray-mpich` with support for GPU-direct.

```yaml
# spack.yaml
spack:
  include:
  - compilers.yaml
  - config.yaml
  view: false
  concretizer:
    unify: True
  specs:
  - fftw +mpi
  - hdf5 +mpi
  - cray-mpich +cuda
  packages:
    all:
      compiler: [gcc@11]
    mpi:
      require: cray-mpich
```

#### example: a nvhpc toolchain with MPI

To build a toolchain with the NVIDIA HPC SDK, we provide two compiler toolchains:
- the `llvm:nvhpc` compiler;
- a version of gcc from the `gcc` toolchain, in order to build dependencies (like CMake) that can't be built with nvhpc. If a second compiler is not provided, Spack will fall back to the system gcc 4.7.5 and, as a result, will not generate zen2/zen3 optimized code.

```yaml
# environments.yaml
prgenv-nvidia:
  compiler:
      - toolchain: llvm
        spec: nvhpc
      - toolchain: gcc
        spec: gcc@11
  unify: true
  specs:
  - fftw%nvhpc +mpi
  - hdf5%nvhpc +mpi
  mpi:
      spec: cray-mpich
  gpu: cuda
```

The following `spack.yaml` is generated:

```yaml
# spack.yaml
spack:
  include:
  - compilers.yaml
  - config.yaml
  view: false
  concretizer:
    unify: True
  specs:
  - fftw%nvhpc +mpi
  - hdf5%nvhpc +mpi
  - cray-mpich +cuda
  packages:
    all:
      compiler: [nvhpc, gcc@11]
    mpi:
      require: cray-mpich
```

#### example: a gnu toolchain that provides some common tools

```yaml
# environments.yaml
tools:
  compiler:
      - toolchain: gcc
        spec: gcc@11
  unify: true
  specs:
  - cmake
  - tmux
  - reframe
  mpi: false
  gpu: false
```

```yaml
# spack.yaml
spack:
  include:
  - compilers.yaml
  - config.yaml
  view: false
  concretizer:
    unify: True
  specs:
  - cmake
  - tmux
  - reframe
  packages:
    all:
      compiler: [gcc@11]
```

### repo

New Spack packages or custom versions of a package can be added to the `alps` repo. If a `repo/` folder is provided, `stackinator` will copy all the Spack packages in `repo/packages/` into the `alps` repo (the same repo that provides `cray-mpich`). If the user provides a `repo.yaml` file in the `repo/` folder, the file will be ignored (and a warning is emitted).
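
As a sketch (with hypothetical package names), the recipe's `repo/` directory is a collection of standard Spack package directories, each containing a `package.py`:

```
repo/
└── packages/
    ├── mypackage/     # a new package not available upstream (hypothetical name)
    │   └── package.py
    └── hdf5/          # a customised version of an upstream package (hypothetical example)
        └── package.py
```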

### modules

Modules are generated for the installed compilers and packages by spack. The default module generation rules set by the version of spack specified in `config.yaml` are used if no `modules.yaml` file is provided.

To set rules for module generation, provide a `modules.yaml` file as per the [spack documentation](https://spack.readthedocs.io/en/latest/module_file_support.html).
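
A minimal sketch of what such rules could look like, following spack's `modules.yaml` schema (the values are illustrative, not taken from an actual recipe):

```yaml
# modules.yaml -- sketch: tune tcl module generation
modules:
  default:
    tcl:
      hash_length: 0       # drop the package hash from module names
      all:
        autoload: direct   # autoload direct dependencies when a module is loaded
```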

To disable module generation, set the field `config:modules:False` in `config.yaml`.

### packages

A spack `packages.yaml` file is provided by the tool for each target cluster. This file sets system dependencies, such as libfabric and slurm, which are expected to be provided by the cluster and not built by Spack. A recipe can provide a `packages.yaml` file, which is merged with the cluster-specific `packages.yaml`.

For example, to enforce that every compiler and environment uses the versions of perl and git installed on the system, add a file like the following (with appropriate version numbers and prefixes, of course):

```yaml
# packages.yaml
packages:
  perl:
    buildable: false
    externals:
    - spec: perl@5.26.1
      prefix: /usr
  git:
    buildable: false
    externals:
    - spec: git@2.35.3
      prefix: /usr
```

Create a ticket in our [GitHub issues](https://github.com/eth-cscs/stackinator/issues) if you find a bug, have a feature request or have a question.