Merging changes to allow tutorial to build locally

jennyfothergill committed Jan 28, 2025
1 parent aff84ce commit 9ace241
Showing 10 changed files with 248 additions and 5 deletions.
@@ -1 +1,21 @@
<!-- CTCMS does not use Modules -->
```
~~~ /cvmfs/pilot.eessi-hpc.org/2020.12/software/x86_64/amd/zen2/modules/all ~~~
Bazel/3.6.0-GCCcore-x.y.z NSS/3.51-GCCcore-x.y.z
Bison/3.5.3-GCCcore-x.y.z Ninja/1.10.0-GCCcore-x.y.z
Boost/1.72.0-gompi-2020a OSU-Micro-Benchmarks/5.6.3-gompi-2020a
CGAL/4.14.3-gompi-2020a-Python-3.x.y OpenBLAS/0.3.9-GCC-x.y.z
CMake/3.16.4-GCCcore-x.y.z OpenFOAM/v2006-foss-2020a
[removed most of the output here for clarity]
Where:
L: Module is loaded
Aliases: Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2"
will load foo/1.2.3
D: Default Module
Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".
```
{: .output}
@@ -0,0 +1,4 @@
```
No Modulefiles Currently Loaded.
```
{: .output}
@@ -1,4 +1,33 @@
If the `python3` command were unavailable, we would see output like

```
/usr/bin/which: no python3 in (/cm/shared/apps/slurm/current/sbin:/cm/shared/apps/slurm/current/bin:/cm/local/apps/gcc/9.2.0/bin:/cm/local/apps/environment-modules/4.4.0//bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/ibutils/bin:/sbin:/usr/sbin:/cm/local/apps/environment-modules/4.4.0/bin:/opt/dell/srvadmin/bin:/bsuhome/{{ site.remote.user }}/.local/bin:/bsuhome/{{ site.remote.user }}/bin)
```
{: .output}

Note that this wall of text is really a list, with values separated
by the `:` character. The output is telling us that the `which` command
searched the following directories for `python3`, without success:

```
/cm/shared/apps/slurm/current/sbin
/cm/shared/apps/slurm/current/bin
/cm/local/apps/gcc/9.2.0/bin
/cm/local/apps/environment-modules/4.4.0/bin
/usr/local/bin
/bin
/usr/bin
/usr/local/sbin
/usr/sbin
/opt/ibutils/bin
/sbin
/opt/dell/srvadmin/bin
/bsuhome/{{ site.remote.user }}/.local/bin
/bsuhome/{{ site.remote.user }}/bin
```
{: .output}
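To see your own search path in this list form, you can translate each `:` into a newline. This is a small sketch, not part of the lesson's scripts; it simply prints the current shell's `PATH`, one directory per line:

```
# Print each directory of the search path on its own line
echo "$PATH" | tr ':' '\n'
```
{: .language-bash}

This is handy whenever you want to check exactly which directories the shell (and `which`) will search, and in what order.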

However, in our case we do have an existing `python3` available so we see

```
/usr/bin/python3
```
{: .output}

We need a different Python than the system-provided one, though, so let's load
a module to access it.
@@ -1,4 +1,5 @@
```
{{ site.remote.prompt }} module load {{ site.remote.module_python3 }}
{{ site.remote.prompt }} which python3
```
{: .language-bash}
@@ -1 +1,87 @@
<!-- CTCMS does not use modules -->
To demonstrate, let's use `module list`, which shows all currently loaded
software modules.

```
{{ site.remote.prompt }} module list
```
{: .language-bash}

```
Currently Loaded Modules:
1) GCCcore/x.y.z 4) GMP/6.2.0-GCCcore-x.y.z
2) Tcl/8.6.10-GCCcore-x.y.z 5) libffi/3.3-GCCcore-x.y.z
3) SQLite/3.31.1-GCCcore-x.y.z 6) Python/3.x.y-GCCcore-x.y.z
```
{: .output}

```
{{ site.remote.prompt }} module load GROMACS
{{ site.remote.prompt }} module list
```
{: .language-bash}

```
Currently Loaded Modules:
1) GCCcore/x.y.z 14) libfabric/1.11.0-GCCcore-x.y.z
2) Tcl/8.6.10-GCCcore-x.y.z 15) PMIx/3.1.5-GCCcore-x.y.z
3) SQLite/3.31.1-GCCcore-x.y.z 16) OpenMPI/4.0.3-GCC-x.y.z
4) GMP/6.2.0-GCCcore-x.y.z 17) OpenBLAS/0.3.9-GCC-x.y.z
5) libffi/3.3-GCCcore-x.y.z 18) gompi/2020a
6) Python/3.x.y-GCCcore-x.y.z 19) FFTW/3.3.8-gompi-2020a
7) GCC/x.y.z 20) ScaLAPACK/2.1.0-gompi-2020a
8) numactl/2.0.13-GCCcore-x.y.z 21) foss/2020a
9) libxml2/2.9.10-GCCcore-x.y.z 22) pybind11/2.4.3-GCCcore-x.y.z-Pytho...
10) libpciaccess/0.16-GCCcore-x.y.z 23) SciPy-bundle/2020.03-foss-2020a-Py...
11) hwloc/2.2.0-GCCcore-x.y.z 24) networkx/2.4-foss-2020a-Python-3.8...
12) libevent/2.1.11-GCCcore-x.y.z 25) GROMACS/2020.1-foss-2020a-Python-3...
13) UCX/1.8.0-GCCcore-x.y.z
```
{: .output}

So in this case, loading the `GROMACS` module (a molecular dynamics software
package) also loaded its dependencies, such as `OpenMPI/4.0.3-GCC-x.y.z` and
`SciPy-bundle/2020.03-foss-2020a-Python-3.x.y`. Let's try unloading the
`GROMACS` package.

```
{{ site.remote.prompt }} module unload GROMACS
{{ site.remote.prompt }} module list
```
{: .language-bash}

```
Currently Loaded Modules:
1) GCCcore/x.y.z 13) UCX/1.8.0-GCCcore-x.y.z
2) Tcl/8.6.10-GCCcore-x.y.z 14) libfabric/1.11.0-GCCcore-x.y.z
3) SQLite/3.31.1-GCCcore-x.y.z 15) PMIx/3.1.5-GCCcore-x.y.z
4) GMP/6.2.0-GCCcore-x.y.z 16) OpenMPI/4.0.3-GCC-x.y.z
5) libffi/3.3-GCCcore-x.y.z 17) OpenBLAS/0.3.9-GCC-x.y.z
6) Python/3.x.y-GCCcore-x.y.z 18) gompi/2020a
7) GCC/x.y.z 19) FFTW/3.3.8-gompi-2020a
8) numactl/2.0.13-GCCcore-x.y.z 20) ScaLAPACK/2.1.0-gompi-2020a
9) libxml2/2.9.10-GCCcore-x.y.z 21) foss/2020a
10) libpciaccess/0.16-GCCcore-x.y.z 22) pybind11/2.4.3-GCCcore-x.y.z-Pytho...
11) hwloc/2.2.0-GCCcore-x.y.z 23) SciPy-bundle/2020.03-foss-2020a-Py...
12) libevent/2.1.11-GCCcore-x.y.z 24) networkx/2.4-foss-2020a-Python-3.x.y
```
{: .output}

So `module unload` "un-loads" a module; depending on how a site is configured,
it may also unload the module's dependencies (in our case it does not). If we
wanted to unload everything at once, we could run `module purge`.

```
{{ site.remote.prompt }} module purge
{{ site.remote.prompt }} module list
```
{: .language-bash}

```
No modules loaded
```
{: .output}

Note that `module purge` is informative: it will also let us know if a default
set of "sticky" modules cannot be unloaded (and how to actually unload them
if we truly so desired).
@@ -0,0 +1,16 @@
```
{{ site.remote.bash_shebang }}
{{ site.sched.comment }} {{ site.sched.flag.name }} parallel-job
{{ site.sched.comment }} {{ site.sched.flag.queue }} {{ site.sched.queue.testing }}
{{ site.sched.comment }} -N 1
{{ site.sched.comment }} -n 8
# Load the computing environment we need
# (mpi4py and numpy are in SciPy-bundle)
module load {{ site.remote.module_python3 }}
module load SciPy-bundle
# Execute the task
mpiexec amdahl
```
{: .language-bash}
@@ -1,12 +1,16 @@
```
{{ site.remote.bash_shebang }}
{{ site.sched.comment }} {{ site.sched.flag.name }} parallel-job
{{ site.sched.comment }} {{ site.sched.flag.queue }} {{ site.sched.queue.testing }}
{{ site.sched.comment }} -N 1
{{ site.sched.comment }} -n 4
{{ site.sched.comment }} --mem=3G
# Load the computing environment we need
# (mpi4py and numpy are in SciPy-bundle)
module load {{ site.remote.module_python3 }}
module load SciPy-bundle
# Execute the task
mpiexec amdahl
```
{: .language-bash}
@@ -0,0 +1,14 @@
```
{{ site.remote.bash_shebang }}
{{ site.sched.comment }} {{ site.sched.flag.name }} solo-job
{{ site.sched.comment }} {{ site.sched.flag.queue }} {{ site.sched.queue.testing }}
{{ site.sched.comment }} -N 1
{{ site.sched.comment }} -n 1
# Load the computing environment we need
module load {{ site.remote.module_python3 }}
# Execute the task
amdahl
```
{: .language-bash}
@@ -0,0 +1,69 @@
`{{ site.sched.interactive }}` runs a single command on the cluster and then
exits. Let's demonstrate this by running the `hostname` command with
`{{ site.sched.interactive }}`. (We can cancel an `{{ site.sched.interactive }}`
job with `Ctrl-c`.)

```
{{ site.remote.prompt }} {{ site.sched.interactive }} hostname
```
{: .language-bash}

```
{{ site.remote.node }}
```
{: .output}

`{{ site.sched.interactive }}` accepts all of the same options as
`{{ site.sched.submit.name }}`. However, instead of specifying these in a
script, the options are given on the command line when starting a job. For
instance, to submit a job that uses 2 CPUs, we could use the following command:

```
{{ site.remote.prompt }} {{ site.sched.interactive }} -n 2 echo "This job will use 2 CPUs."
```
{: .language-bash}

```
This job will use 2 CPUs.
This job will use 2 CPUs.
```
{: .output}

Typically, the resulting shell environment will be the same as that for
`{{ site.sched.submit.name }}`.

### Interactive jobs

Sometimes, you will need a lot of resources for interactive use. Perhaps it's
your first time running an analysis, or you are attempting to debug something
that went wrong with a previous job. Fortunately, {{ site.sched.name }} makes
it easy to start an interactive job with `{{ site.sched.interactive }}`:

```
{{ site.remote.prompt }} {{ site.sched.interactive }} --pty bash
```
{: .language-bash}

You should be presented with a bash prompt. Note that the prompt will likely
change to reflect your new location, in this case the compute node you are
logged into. You can also verify this with `hostname`.

> ## Creating remote graphics
>
> To see graphical output inside your jobs, you need to use X11 forwarding. To
> connect with this feature enabled, use the `-Y` option when you login with
> the `ssh` command, e.g., `ssh -Y {{ site.remote.user }}@{{ site.remote.login }}`.
>
> To demonstrate what happens when you create a graphics window on the remote
> node, use the `xeyes` command. A relatively adorable pair of eyes should pop
> up (press `Ctrl-C` to stop). If you are using a Mac, you must have installed
> XQuartz (and restarted your computer) for this to work.
>
> If your cluster has the
> [slurm-spank-x11](https://github.com/hautreux/slurm-spank-x11) plugin
> installed, you can ensure X11 forwarding within interactive jobs by using the
> `--x11` option for `{{ site.sched.interactive }}` with the command
> `{{ site.sched.interactive }} --x11 --pty bash`.
{: .callout}
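As an alternative to typing `-Y` on every login, X11 forwarding can be enabled
per-host in your `~/.ssh/config`. This is a sketch only: the `Host` alias and
`HostName` below are placeholders you would replace with your own cluster's
details.

```
Host mycluster
    HostName cluster.example.org
    User {{ site.remote.user }}
    ForwardX11 yes
```

With this entry in place, `ssh mycluster` connects with X11 forwarding enabled,
without any extra flags.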

When you are done with the interactive job, type `exit` to quit your session.
