
Commit 46a851c

Merge pull request #92 from UCL-ARC/removescratch
Remove mentions to /scratch/scratch; remove old Singularity section
2 parents 428726c + 9298812 commit 46a851c

5 files changed: +34 -97 lines changed


mkdocs-project-dir/docs/Background/Data_Storage.md

Lines changed: 7 additions & 6 deletions
@@ -28,13 +28,15 @@ Many programs will write hidden config files in here, with names beginning with
 
 ### Scratch
 
-Every user also has a Scratch directory. It is intended that this is a larger space to keep your working
-files as you do your research, but should not be relied on for secure long-term permanent storage.
+Every user also has a Scratch directory. On Myriad and Kathleen this exists for backwards
+compatibility and all of your home space should be considered scratch.
+
+It is intended that this is a larger space to keep your working files as you do your
+research. It should not be relied on for secure long-term permanent storage.
 
 Important data should be regularly backed up to another location.
 
-- Location: `/scratch/scratch/<username>`
-- Also at: `/home/<username>/Scratch` (a shortcut or symbolic link to the first location).
+- Location: `/home/<username>/Scratch` (either a directory or a shortcut or symbolic link to a different location).
 
 ### Temporary storage for jobs (TMPDIR)
 
@@ -62,8 +64,7 @@ The ACFS is available read-write on the login nodes but read-only on the compute
 that your jobs can read from it, but not write to it, and it is intended that you copy data onto
 it after deciding which outputs from your jobs are important to keep.
 
-Initially rolled out on Kathleen, you will later be able to access it from Myriad too and see
-the same files from both clusters.
+Available on Myriad and Kathleen, you will be able to see the same files from both clusters.
 
 - Location: `/acfs/users/<username>`
 - Also at: `/home/<username>/ACFS` (a shortcut or symbolic link to the first location).
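As a hedged illustration of the backup advice above (the directory name is an assumption, not taken from the documentation), important results could be copied from Scratch onto the ACFS from a login node with something like:

```
# Illustrative only: copy results worth keeping from Scratch to the ACFS,
# which is writable from the login nodes (directory name is an example)
rsync -av ~/Scratch/important_results/ ~/ACFS/important_results/
```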

mkdocs-project-dir/docs/Clusters/Myriad.md

Lines changed: 8 additions & 8 deletions
@@ -49,25 +49,25 @@ Please refer to the page on [How do I transfer data onto the system?](../howto.m
 
 ## Quotas
 
-The default quotas on Myriad are 150GB for home and 1TB for Scratch.
+The default quota on Myriad is 1TB for home (which is also considered scratch).
 
-These are hard quotas: once you reach them, you will no longer be able
-to write more data. Keep an eye on them, as this will cause jobs to fail
+The "hard limit" number means that once you reach it, you will no longer be able
+to write more data. Keep an eye on your quota, as this will cause jobs to fail
 if they cannot create their .o or .e files at the start, or their output
 files partway through.
 
-You can check both quotas on Myriad by running:
+You can check your quota on Myriad by running:
 
 ```
-lquota
+gquota
 ```
 
 which will give you output similar to this:
 
 ```
-Storage        Used         Quota  % Used  Path
-home     721.68 MiB   150.00 GiB      0%   /home/uccaxxx
-scratch   52.09 MiB     1.00 TiB      0%   /scratch/scratch/uccaxxx
+Current Usage: 108.7GiB
+Soft Limit: 1024GiB
+Hard Limit: 1024GiB
 ```
 
 You can apply for quota increases using the form at [Additional Resource Requests](../Additional_Resource_Requests.md).
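As a side note that is not part of the diff: if `gquota` shows you close to the hard limit, standard tools such as `du` can show what is taking up the space. A minimal sketch:

```
# List the largest top-level items in your Scratch space, biggest first
du -sh ~/Scratch/* 2>/dev/null | sort -rh | head -n 20
```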

mkdocs-project-dir/docs/Software_Guides/AlphaFold3.md

Lines changed: 10 additions & 10 deletions
@@ -25,10 +25,10 @@ Having added the model weights, you will also need to create input and output di
 You should now have three locations which are unique to you. For ease of reference in the job script I'm going to set environment variables to them so that the command-line is consistent and so you can change them without messing with the command line options.
 
 ```
-export AF3_INPUT=/scratch/scratch/uccaoke/af3_input # Replace with your input folder
+export AF3_INPUT=/home/${USER}/Scratch/af3_input # Replace with your input folder
 export AF3_INPUT_FILE=fold_input.json # Replace with a file in your input folder
-export AF3_OUTPUT=/scratch/scratch/uccaoke/af3_output # Replace with your output folder
-export AF3_WEIGHTS=/scratch/scratch/uccaoke/weights # Replace with the folder you put the weights in
+export AF3_OUTPUT=/home/${USER}/Scratch/af3_output # Replace with your output folder
+export AF3_WEIGHTS=/home/${USER}/Scratch/weights # Replace with the folder you put the weights in
 ```
 
 ## Running AlphaFold3
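The hunk header above notes that the input and output directories need creating first. A minimal sketch under the same path assumptions as the variables in the diff (adjust to your own layout):

```
# Create the input, output and weights directories once before submitting jobs
mkdir -p /home/${USER}/Scratch/af3_input /home/${USER}/Scratch/af3_output /home/${USER}/Scratch/weights
```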
@@ -59,10 +59,10 @@ Write a job script that requests GPU nodes:
 # Set the working directory to the current working directory.
 #$ -cwd
 
-export AF3_INPUT=/scratch/scratch/uccaoke/af3_input # Replace with your input folder
+export AF3_INPUT=/home/${USER}/Scratch/af3_input # Replace with your input folder
 export AF3_INPUT_FILE=fold_input.json # Replace with a file in your input folder
-export AF3_OUTPUT=/scratch/scratch/uccaoke/af3_output # Replace with your output folder
-export AF3_WEIGHTS=/scratch/scratch/uccaoke/weights # Replace with the folder you put the weights in
+export AF3_OUTPUT=/home/${USER}/Scratch/af3_output # Replace with your output folder
+export AF3_WEIGHTS=/home/${USER}/Scratch/weights # Replace with the folder you put the weights in
 
 apptainer exec --nv --bind ${AF3_INPUT}:/root/af_input --bind ${AF3_OUTPUT}:/root/af_output --bind ${AF3_WEIGHTS}:/root/models --bind /shared/ucl/apps/AlphaFold3_db:/root/public_databases --no-home --no-mount bind-paths /shared/ucl/apps/AlphaFold3/alphafold3.sif sh -c "XLA_FLAGS='--xla_disable_hlo_passes=custom-kernel-fusion-rewriter' python3 /app/alphafold/run_alphafold.py --json_path=/root/af_input/${AF3_INPUT_FILE} --model_dir=/root/models --db_dir=/root/public_databases --output_dir=/root/af_output --flash_attention_implementation=xla"
 ```
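As an optional, hypothetical pre-flight check that is not part of the documented script, you could confirm that the paths being bound into the container actually exist before queueing the job:

```
# Hypothetical check: warn if the input JSON or the weights directory is missing
test -f "${AF3_INPUT}/${AF3_INPUT_FILE}" || echo "missing input JSON: ${AF3_INPUT}/${AF3_INPUT_FILE}"
test -d "${AF3_WEIGHTS}" || echo "missing weights directory: ${AF3_WEIGHTS}"
```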
@@ -95,12 +95,12 @@ If you wish to queue for an A100, this job should work:
 # Set the working directory to the current working directory.
 #$ -cwd
 
-export AF3_INPUT=/scratch/scratch/uccaoke/af3_input # Replace with your input folder
+export AF3_INPUT=/home/${USER}/Scratch/af3_input # Replace with your input folder
 export AF3_INPUT_FILE=fold_input.json # Replace with a file in your input folder
-export AF3_OUTPUT=/scratch/scratch/uccaoke/af3_output # Replace with your output folder
-export AF3_WEIGHTS=/scratch/scratch/uccaoke/weights # Replace with the folder you put the weights in
+export AF3_OUTPUT=/home/${USER}/Scratch/af3_output # Replace with your output folder
+export AF3_WEIGHTS=/home/${USER}/Scratch/weights # Replace with the folder you put the weights in
 
 apptainer exec --nv --bind ${AF3_INPUT}:/root/af_input --bind ${AF3_OUTPUT}:/root/af_output --bind ${AF3_WEIGHTS}:/root/models --bind /shared/ucl/apps/AlphaFold3_db:/root/public_databases --no-home --no-mount bind-paths /shared/ucl/apps/AlphaFold3/alphafold3.sif sh -c "python3 /app/alphafold/run_alphafold.py --json_path=/root/af_input/${AF3_INPUT_FILE} --model_dir=/root/models --db_dir=/root/public_databases --output_dir=/root/af_output"
 ```
 
-For other options you can pass to AlphaFold 3, please consult [DeepMind's documentation](https://github.com/google-deepmind/alphafold3/tree/main/docs).
+For other options you can pass to AlphaFold 3, please consult [DeepMind's documentation](https://github.com/google-deepmind/alphafold3/tree/main/docs).
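As a usage note rather than part of the change: the `#$` lines in these scripts are Grid Engine directives, so a saved copy of the script would be submitted with `qsub`. The filename below is only an example:

```
# Submit the job script to the scheduler (script name is illustrative)
qsub alphafold3_a100.sh
```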

mkdocs-project-dir/docs/Software_Guides/Singularity.md

Lines changed: 1 addition & 68 deletions
@@ -78,77 +78,10 @@ as long as they use a local filesystem and not home or Scratch.
 Our default setup uses `$XDG_RUNTIME_DIR` on the local disk of the login nodes, or `$TMPDIR` on a
 compute node (local disk on the node, on clusters that are not diskless).
 
-If you try to build a container on a parallel filesystem, it will fail with a number of
+If you try to build a container on a parallel filesystem, it may fail with a number of
 permissions errors.
 
 
-## Singularity
-
-Run `singularity --version` to see which version we currently have installed.
-
-
-!!! important "Singularity update to Apptainer"
-    On Myriad, we are updating to Singularity to Apptainer. This update will occur on 14th
-    November during a [planned outage](../Planned_Outages.md)
-
-    This update may affect any containers that are currently downloaded, so users will have to test
-    them to check their workflow still functions correctly after the update. We expect most to work
-    as before, but cannot confirm this.
-
-    A Singularity command that will no longer be available in Apptainer is
-    `singularity build --remote`. If any of you have workflows that depend on this,
-    please email [email protected]. We are currently looking into how we would provide
-    equivalent functionality.
-
-    Updates to the other clusters will follow, dates tbc.
-
-
-### Set up cache locations and bind directories
-
-The cache directories should be set to somewhere in your space so they don't fill up `/tmp` on
-the login nodes.
-
-The bindpath mentioned below specifies what directories are made available inside the container -
-only your home is bound by default so you need to add Scratch.
-
-You can either use the `singularity-env` environment module for this, or run the commands manually.
-
-```
-module load singularity-env
-```
-
-or:
-
-```
-# Create a .singularity directory in your Scratch
-mkdir $HOME/Scratch/.singularity
-
-# Create cache subdirectories we will use / export
-mkdir $HOME/Scratch/.singularity/tmp
-mkdir $HOME/Scratch/.singularity/localcache
-mkdir $HOME/Scratch/.singularity/pull
-
-# Set all the Singularity cache dirs to Scratch
-export SINGULARITY_CACHEDIR=$HOME/Scratch/.singularity
-export SINGULARITY_TMPDIR=$SINGULARITY_CACHEDIR/tmp
-export SINGULARITY_LOCALCACHEDIR=$SINGULARITY_CACHEDIR/localcache
-export SINGULARITY_PULLFOLDER=$SINGULARITY_CACHEDIR/pull
-
-# Bind your Scratch directory so it is accessible from inside the container
-# and the temporary storage jobs are allocated
-export SINGULARITY_BINDPATH=/scratch/scratch/$USER,/tmpdir
-```
-
-Different subdirectories are being set for each cache so you can tell which files came from where.
-
-You probably want to add those `export` statements to your `.bashrc` under `# User specific aliases and functions` so those environment variables are always set when you log in.
-
-For more information on these options, have a look at the Singularity documentation:
-
-* [Singularity user guide](https://sylabs.io/guides/3.5/user-guide/index.html)
-* [Singularity Bind Paths and Mounts](https://sylabs.io/guides/3.5/user-guide/bind_paths_and_mounts.html)
-* [Singularity Build Environment](https://sylabs.io/guides/3.5/user-guide/build_env.html)
-
 ## Downloading and running a container
 
 Assuming you want to run an existing container, first you need to pull it from somewhere online that
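For context on the section this hunk runs into, pulling an existing container is done with `apptainer pull`; a minimal sketch, with the image name chosen purely as an example and run from a local filesystem as the text above advises:

```
# Pull a container image from Docker Hub into the current directory
# (example image only; this writes an ubuntu_22.04.sif file)
apptainer pull docker://ubuntu:22.04
```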

mkdocs-project-dir/docs/Supplementary/Troubleshooting.md

Lines changed: 8 additions & 5 deletions
@@ -49,23 +49,26 @@ see here it's trying to parse `^M` as an option.
 
 ### I think I deleted my Scratch space, how do I restore it?
 
-You may have accidentally deleted or replaced the link to your Scratch
-space. Do an `ls -al` in your home - if set up correctly, it should look
-like this:
+If you are on a cluster (not Myriad) where Scratch is a symbolic link (shortcut) you may have
+accidentally deleted or replaced the link to your Scratch space but not removed its contents.
+Do an `ls -al` in your home - if set up correctly, it should look like this:
 
 ```
 lrwxrwxrwx   1 username group       24 Apr 14  2022 Scratch -> /scratch/scratch/username
 ```
 
 where `username` is your UCL user ID and `group` is your primary group.
 
-If this link is not present, you can recreate it with
+If this link is not present, it will be automatically recreated when you log back in, or you
+can recreate it with
 
 ```
 ln -s /scratch/scratch/$(whoami) Scratch
 ```
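A quick way to verify the recreated link, using standard commands that are not part of this change:

```
# Show the link itself and the location it resolves to
ls -ld ~/Scratch
readlink -f ~/Scratch
```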
 
-If you have actually deleted the files stored in your Scratch space, there is unfortunately no way to restore them.
+If you are on a cluster like Myriad where Scratch is a normal directory in your home or you
+have actually deleted the files stored in your Scratch space, there is unfortunately no way
+to restore them.
 
 ### Which MKL library files should I use to build my application?
 