
Commit 7375ae6

Make new README (#71)
1 parent cc0cce5 commit 7375ae6

File tree

1 file changed (+11 −315 lines)


README.md

+11 −315
@@ -1,319 +1,15 @@
-# ACCESS-ESM with **payu**
+# historical+concentrations
+Standard configuration for a coupled CO~2~ concentration driven [ACCESS-ESM1.5](https://github.com/ACCESS-NRI/ACCESS-ESM1.5) under historical forcings between 1850-2014.

-## Quickstart Guide
+For usage instructions, see the [ACCESS-Hive docs](https://access-hive.org.au/models/run-a-model/run-access-esm/)

-Get payu:
+This configuration is based on the CMIP6 configuration developed at [CSIRO](https://www.csiro.au/en/research/environmental-impacts/climate-change/climate-science-centre). Minor differences in ancillary files mean that bitwise reproducibility with the original CMIP6 simulation is not maintained.

-    module use /g/data3/hh5/public/modules
-    module load conda/analysis3-unstable
+## Conditions of use

-Create a directory in which to keep the model configurations:
-
-    mkdir -p ~/access-esm
-    cd ~/access-esm
-    git clone https://github.com/coecms/access-esm
-    cd access-esm
-    git checkout historical
-
-Set up a warm start from a CSIRO run (see the script for details):
-
-    ./warm-start.sh
-
-Run the model:
-
-    payu run
-
-Check the output:
-
-    ls archive/
-
-The default configuration is 1 model year per run. To run the model for, say, 25 years:
-
-    payu run -n 25
-
-With default settings, 1 model year costs ~1100 SU, with a walltime of 1 hour 20 minutes.
-
-**Note:**
-We have noticed that some modules interfere with the git commands, for example `matlab/R2018a`.
-If you are running into issues during the installation, it might be a good idea to `module purge` first before starting again.
-
-## Warm Starts
-
-The model is normally 'warm started' from the restart files of another
-configuration. For instance the SSP experiments are started from the end of the
-historical experiment, and in turn the historical experiment is started from
-the pre-industrial control experiment (different ensemble members are created
-by starting from different piControl years). Starting the experiment from
-scratch requires a long period of spinup to ensure stability and should be
-avoided if possible.
-
-There are two options for restarting the model. It can be started from an
-experiment run by CSIRO (requires membership in the p66 group), or it can be
-started from another Payu experiment.
-
-To perform a warm start, edit the file `warm-start.sh` to set the experiment
-directory to start from and then run the script. For CSIRO jobs you must also
-specify the date of the run to start from, for Payu jobs each restart directory
-holds a different year.
-
-## Understanding **payu**
-
-**payu** was designed to help users of the NCI system run climate models.
-It was initially created for MOM, but has been adapted for other models,
-including coupled ones.
-
-The aim of **payu** is to make it easy and intuitive to configure and run the models.
-
-**payu** knows certain models and how to run them. Adding more models needs additions to the **payu** sources.
-This will not be part of this document.
-
-### Terms
-
-To understand **payu**, it helps to distinguish certain terms:
-
-- The **laboratory** is a directory where all parts of the model are kept.
-  It is typically in the user's short directory, usually at `/short/$PROJECT/$USER/<MODEL>`
-- The **Control Directory** is the directory where the model configuration is
-  kept and from where the model is run.
-- The **work** directory is where the model will actually be run.
-  It is typically a subdirectory of the Laboratory.
-  Submodels will have their own subdirectories in the work directory, named
-  after their name in the master configuration file.
-  It is ephemeral, meaning payu will clean it up after the run.
-- The **archive** directory is where **payu** puts all output files after each run.
-
-The **work** and **archive** directories will be automatically created by **payu**.
-
-### The master configuration file
-
-In the Control Directory, the file `config.yaml` is the master control file.
-Examples of what is configured in this file are:
-
-- The actual model to run.
-- Where to find the model binaries and configurations
-- What resources to request from the scheduling system (PBS)
-- Links to the laboratory
-- Start date and run length per submission of the model
-
-The model configuration files are typically in subdirectories of the Control Directory,
-the location of which is referenced in the master control file.
-Since the models themselves are set up in different ways, the contents of these subdirectories will differ between models.
-
-## Understanding ACCESS-ESM
-
-ACCESS (Australian Community Climate and Earth System Simulator) is a Coupled Climate Model.
-
-The ESM 1.5 subversion of ACCESS specifically contains these models:
-
-| Component  | Model     | Version |
-| ---------- | --------- | ------- |
-| Atmosphere | UM-HG3    | 7.3     |
-| Ocean      | MOM       | 5       |
-| Sea Ice    | CICE      | 4.1     |
-| Land       | CABLE     | 2.2.4   |
-| Coupler    | OASIS-MCT | 3.5     |
-
-~~Pre-compiled executables for these models are available on raijin at
-`/short/public/access-esm/payu/bin/csiro/`.~~
-
-## Setting up ACCESS-ESM with **payu**
-
-### The pre-conditions
-
-On `gadi`, first make sure that you have access to our modules.
-This can most easily be done by adding the line
-
-    module use /g/data3/hh5/public/modules
-
-to your `~/.profile`, then logging back in. Then all you have to do is
-
-    module load conda/analysis3-unstable
-
-to load the **payu** module.
-Please check again after 7/2019 to see whether it has been made part of the stable conda module.
-
-as **payu** will use git to keep track of all configuration changes automatically.
-
-### Setting up the control directory
-
-Create a directory in your home directory to keep all the Control Directories you might want.
-
-    mkdir ~/ACCESS-ESM
-    cd ~/ACCESS-ESM
-
-Then clone the most recent version of the ACCESS-ESM control directory:
-
-    git clone https://github.com/coecms/esm-historical
-    cd esm-historical
-
-(Note: Currently we only have the historical model set up, other versions will follow later.)
-
-### Setting up the Master Configuration file.
-
-Open the `config.yaml` file with your preferred text editor.
-
-Let's have a closer look at the parts:
-
-    jobname: historical
-    queue: normal
-    walltime: 20:00:00
-
-These are settings for the PBS system. Name, walltime and queue to use.
-
-    # note: if laboratory is relative path, it is relative to /short/$PROJECT/$USER
-    laboratory: access-esm
-
-The location of the laboratory. At this point, **payu** can not expand shell environment variables (it's in our TO-DO), so as a work-around, if you use relative paths, it will be relative to your default short directory.
-
-In this default configuration, it will be in `/short/$PROJECT/$USER/access-esm`.
-But you can also hard-code the full path, if you want it somewhere different.
-
-    model: access
-
-The main model. This mainly tells **payu** which driver to use. **payu** knows that **access** is a coupled model, so it will look for separate configurations of the submodels, which is the next item of the configuration file:
-
-    submodels:
-      - name: atmosphere
-        model: um
-        ncpus: 192
-        exe: /short/public/access-esm/payu/bin/csiro/um_hg3.exe-20190129_15
-        input:
-          - /short/public/access-esm/payu/input/historical/atmosphere
-
-      - name: ocean
-        model: mom
-        ncpus: 84
-        exe: /short/public/access-esm/payu/bin/coe/fms_ACCESS-CM.x
-        input:
-          - /short/public/access-esm/payu/input/common/ocean
-          - /short/public/access-esm/payu/input/historical/ocean
-
-      - name: ice
-        model: cice
-        ncpus: 12
-        exe: /short/public/access-esm/payu/bin/csiro/cice4.1_access-mct-12p-20180108
-        input:
-          - /short/public/access-esm/payu/input/common/ice
-
-      - name: coupler
-        model: oasis
-        ncpus: 0
-        input:
-          - /short/public/access-esm/payu/input/common/coupler
-
-This is probably the meatiest part of the configuration, so let's look at it in more detail.
-
-Each submodel has
-- a **name**
-- the **model** to know which driver to use
-- the number of CPUs that this model should receive (**ncpus**)
-- the location of the executable to use (**exe**)
-- one or more locations for the **input** files.
-
-The **name** is more than a useful reminder of what the model is.
-**payu** expects this submodel's configuration files in a subdirectory with that name.
-
-    collate:
-        exe: /short/public/access-esm/payu/bin/mppnccombine
-        restart: true
-        mem: 4GB
-
-Collation refers to the joining together of ocean diagnostics that are output at model runtime
-in separate, tiled, files. In a process using minimal resources the output files are
-joined back together. The restart files are typically also tiled in the same way. Here
-the `restart: true` option means the restart files from the **previous** run are also
-collated. This saves space and cuts down the number of files, which makes more efficient
-use of storage and is better for archiving in the future.
-
-    restart: /short/public/access-esm/payu/restart/historical
-
-This is the location of the warm restart files.
-**payu** will use the restart files in there for the initial run.
-
-    calendar:
-        start:
-            year: 1850
-            month: 1
-            days: 1
-
-        runtime:
-            years: 1
-            months: 0
-            days: 0
-
-Here is the start date, and the runtime **per run**.
-The total time you want to model is `runtime` * `number of runs`.
-
-    runspersub: 5
-
-This `runspersub` feature is a nifty tool to allow you to bundle several runs into a single submission for the PBS queue.
-
-Let's have an example: Say you told **payu** to make 7 runs with the above setting.
-Each run would have a runtime of 1 year. So in the first submission it would run the model 5 times, to model years 101 through 105 respectively.
-
-Then it would automatically resubmit another PBS job to model years 106 and 107, and then end.
-
-### Setting up the Atmosphere Submodel
-
-The **name** in `config.yaml` for the atmosphere submodel is "atmosphere", so the configuration of the UM will be in the `atmosphere` subdirectory.
-
-    ls atmosphere/
-    CNTLALL   SIZES      __pycache__  ftxx       ihist          prefix.CNTLATM
-    CONTCNTL  STASHC     cable.nml    ftxx.new   input_atm.nml  prefix.CNTLGEN
-    INITHIS   UAFILES_A  errflag      ftxx.vars  namelists      prefix.PRESM_A
-    PPCNTL    UAFLDS_A   exstat       hnlist     parexe         um_env.py
-
-There are many configuration files, but I want to note the `um_env.py`.
-This file is used to set environment variables for the UM.
-The UM driver of **payu** will look for this file and add these definitions to the environment when it runs the model.
-
-### Setting up the Ocean Submodel
-
-The **name** in `config.yaml` for the ocean submodel is "ocean", so the configuration
-of MOM will be in the `ocean` subdirectory.
-
-    ls ocean
-    data_table  diag_table  field_table  input.nml
-
-
-### Setting up the Ice Submodel
-
-The **name** in `config.yaml` for the ice submodel is "ice", so the configuration
-of CICE will be in the `ice` subdirectory.
-
-    ls ice/
-    cice_in.nml  input_ice.nml
-
-## Running the Model
-
-If you have set up the modules system to use the `/g/data3/hh5/public/modules` folder, a simple `module load conda/analysis3-unstable` should give you access to the **payu** system.
-
-From the control directory, type
-
-    payu setup
-
-This will prepare the model run based on the configuration of the experiment.
-It will set up the `work` and `archive` directories and link to them from within the
-configuration directory.
-You don't have to do that, as the run command also sets it up, but it helps to check for errors.
-
-    payu sweep
-
-This command removes the `work` directory again, but leaves the `archive`.
-
-Finally,
-
-    payu run
-
-will submit a single run to the queue.
-It will start from the beginning (as indicated by the `start` section in the `config.yaml`) if it has not run before.
-
-To automatically submit several runs (and to take advantage of the `runspersub` directive), use the `-n` option:
-
-    payu run -n 7
-
-## Finding the Output
-
-The output is automatically copied to the `archive/outputXXX` directories.
-
-**Warning**: This directory is a link to your laboratory (probably on scratch), so while it might *seem* that the output files are created twice, they are not. Deleting them from one location also removes them from the other. Do not do that if you want to keep the data.
+The developers of ACCESS-ESM1.5 request that users of this model configuration:
+1. Cite https://doi.org/10.1071/ES19035
+2. Include an acknowledgment such as the following:
+"The authors thank CSIRO for developing the ACCESS-ESM1.5 model configuration and making it freely available to researchers."
+ACCESS-NRI requests users follow the guidelines for acknowledging ACCESS-NRI and include a statement such as:
+"This research used the ACCESS-ESM1.5 model infrastructure provided by ACCESS-NRI, which is enabled by the Australian Government's National Collaborative Research Infrastructure Strategy (NCRIS)."
