as long as they use a local filesystem and not home or Scratch.

Our default setup uses `$XDG_RUNTIME_DIR` on the local disk of the login nodes, or `$TMPDIR` on a
compute node (local disk on the node, on clusters that are not diskless).

If you try to build a container on a parallel filesystem, it may fail with a number of
permissions errors.
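As a sketch, a job script can pick whichever of these local locations exists before doing any container work. The final fallback to `/tmp` here is purely an assumption for illustration and not part of our setup:

```shell
# Prefer the compute node's $TMPDIR, then the login node's $XDG_RUNTIME_DIR;
# the /tmp fallback is an assumption for illustration only
BUILD_TMP=${TMPDIR:-${XDG_RUNTIME_DIR:-/tmp}}
echo "temporary container build space: $BUILD_TMP"
```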

## Singularity

Run `singularity --version` to see which version we currently have installed.

!!! important "Singularity update to Apptainer"
    On Myriad, we are updating from Singularity to Apptainer. This update will occur on 14th
    November during a [planned outage](../Planned_Outages.md).

    This update may affect any containers that are currently downloaded, so users will have to
    test them to check that their workflows still function correctly after the update. We expect
    most to work as before, but cannot confirm this.

    A Singularity command that will no longer be available in Apptainer is
    `singularity build --remote`. If any of you have workflows that depend on this, please
    email [email protected]. We are currently looking into how we would provide equivalent
    functionality.

    Updates to the other clusters will follow; dates are to be confirmed.

### Set up cache locations and bind directories

The cache directories should be set to somewhere in your space so they don't fill up `/tmp` on
the login nodes.

The bindpath mentioned below specifies what directories are made available inside the container -
only your home is bound by default, so you need to add Scratch.

You can either use the `singularity-env` environment module for this, or run the commands manually.

```
module load singularity-env
```

or:

```
# Create a .singularity directory in your Scratch
mkdir -p $HOME/Scratch/.singularity

# Create the cache subdirectories we will use / export
mkdir -p $HOME/Scratch/.singularity/tmp
mkdir -p $HOME/Scratch/.singularity/localcache
mkdir -p $HOME/Scratch/.singularity/pull

# Set all the Singularity cache dirs to Scratch
export SINGULARITY_CACHEDIR=$HOME/Scratch/.singularity
export SINGULARITY_TMPDIR=$SINGULARITY_CACHEDIR/tmp
export SINGULARITY_LOCALCACHEDIR=$SINGULARITY_CACHEDIR/localcache
export SINGULARITY_PULLFOLDER=$SINGULARITY_CACHEDIR/pull

# Bind your Scratch directory so it is accessible from inside the container,
# along with the temporary storage that jobs are allocated
export SINGULARITY_BINDPATH=/scratch/scratch/$USER,/tmpdir
```

Different subdirectories are being set for each cache so you can tell which files came from where.

You probably want to add those `export` statements to your `.bashrc` under `# User specific aliases and functions` so those environment variables are always set when you log in.
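After running the commands above (or loading the module), a quick sanity check is to confirm that each variable is set and points at a directory that exists. This is only a sketch using standard shell constructs; `check_cache_dir` is a hypothetical helper, not part of our setup:

```shell
# check_cache_dir VAR: report whether the variable named VAR is set and
# points at an existing directory (hypothetical helper, for checking only)
check_cache_dir() {
    dir=$(eval echo "\$$1")
    if [ -n "$dir" ] && [ -d "$dir" ]; then
        echo "$1 -> $dir"
    else
        echo "$1 is not set up"
    fi
}

check_cache_dir SINGULARITY_CACHEDIR
check_cache_dir SINGULARITY_PULLFOLDER
```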

For more information on these options, have a look at the Singularity documentation:

* [Singularity user guide](https://sylabs.io/guides/3.5/user-guide/index.html)
* [Singularity Bind Paths and Mounts](https://sylabs.io/guides/3.5/user-guide/bind_paths_and_mounts.html)
* [Singularity Build Environment](https://sylabs.io/guides/3.5/user-guide/build_env.html)

## Downloading and running a container

Assuming you want to run an existing container, first you need to pull it from somewhere online that