diff --git a/1.1.10/.documenter-siteinfo.json b/1.1.10/.documenter-siteinfo.json index ab7cada..e742a8d 100644 --- a/1.1.10/.documenter-siteinfo.json +++ b/1.1.10/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2024-09-04T15:03:20","documenter_version":"1.1.2"}} \ No newline at end of file +{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2024-10-16T14:10:57","documenter_version":"1.1.2"}} \ No newline at end of file diff --git a/1.1.10/atos_bologna/index.html b/1.1.10/atos_bologna/index.html index 7e03242..c7acae9 100644 --- a/1.1.10/atos_bologna/index.html +++ b/1.1.10/atos_bologna/index.html @@ -1,7 +1,7 @@ -Atos · Davai

Complementary information about DAVAI setup on aa|ab|ac|ad HPC machine @ ECMWF/Bologna

Quick install

module use ~rm9/public/modulefiles
+Atos · Davai

Complementary information about DAVAI setup on aa|ab|ac|ad HPC machine @ ECMWF/Bologna

Quick install

module use ~acrd/public/modulefiles
 module load davai

I advise putting the first line in your .bash_profile, and executing the second only when needed.


Pre-requirements (if not already set up)

  1. Load the required environment for GMKPACK compilation and DAVAI execution. It is REQUIRED that you add the following to your .bash_profile:

    module purge
    -module use /home/rm9/public/modulefiles
    +module use /home/acrd/public/modulefiles
     module load intel/2021.4.0 prgenv/intel python3/3.10.10-01 ecmwf-toolbox/2021.08.3.0 davai/master
     
     # Gmkpack is installed at Ryad El Khatib's
    @@ -21,4 +21,4 @@
     mkdir -p $d
     chgrp -R accord $d
     chmod g+s $d
    -done
+done
diff --git a/1.1.10/belenos/index.html b/1.1.10/belenos/index.html index 8636254..37d28bc 100644 --- a/1.1.10/belenos/index.html +++ b/1.1.10/belenos/index.html @@ -2,4 +2,4 @@ Belenos · Davai

Complementary information about DAVAI setup on belenos HPC machine @ MF

Quick install

module use ~mary/public/modulefiles
 module load davai

I advise putting the first line in your .bash_profile, and executing the second only when needed.


Pre-requirements (if not already set up)

  1. Load modules (conveniently in your .bash_profile):
    module load python/3.7.6
     module load git
  2. Configure your ~/.netrc file for FTP communications with archive machine hendrix, if not already done:
    machine hendrix login <your_user> password <your_password>
    -machine hendrix.meteo.fr login <your_user> password <your_password>
    (don't forget to chmod 600 ~/.netrc if you are creating this file!)
    To be updated when you change your password
  3. Configure ftserv (information is stored encrypted in ~/.ftuas):
    ftmotpasse -h hendrix -u <your_user>
    (and give your actual password)
    AND
    ftmotpasse -h hendrix.meteo.fr -u <your_user>
    (same)
    To be updated when you change your password
  4. Configure Git proxy certificate info :
    git config --global http.sslVerify false
  5. Ensure SSH connectivity between compute and transfer nodes, if not already done:
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

And maybe

with a version of tests prior to DV48T1_op0.04-1, you may also need epygram:

  • ~mary/public/EPyGrAM/stable/_install/setup_epygram.py -v
  • then to avoid a matplotlib/display issue, set:
    backend : Agg in ~/.config/matplotlib/matplotlibrc
+machine hendrix.meteo.fr login <your_user> password <your_password>

    diff --git a/1.1.10/build/index.html b/1.1.10/build/index.html index 529b9ec..dd4fe98 100644 --- a/1.1.10/build/index.html +++ b/1.1.10/build/index.html @@ -1,2 +1,2 @@ -Build · Davai


    +Build · Davai

    (Re-)Build of executables

    Build with gmkpack

    The tasks in the build job are respectively in charge of:

    • gitref2pack : fetch/pull the sources from the requested Git reference and set up one or several incremental gmkpack packs, depending on compilation_flavours as set in config. The packs are then populated with the set of modifications, from the latest official tag to the contents of your branch (including non-committed modifications).

    • pack2bin : compile sources and link necessary executables (i.e. those used in the tests), for each pack flavour.

    In case the compilation fails, or if you need to (re-)modify the sources for any reason (e.g. fix an issue):

    1. implement corrections in the branch (committed or not)

    2. re-run the build:

      davai-build -e

      (option -e or --preexisting_pack assumes that the pack already exists; this is a protection against accidental overwrite of an existing pack. The option can also be passed to davai-run_xp)

    3. and then, if the build is successful, run davai-run_tests
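    As an illustration, a typical fix-and-rebuild cycle could look like the following shell sketch (<xp_directory> and the repository path are placeholders):

      cd ~/repositories/arpifs                    # your IAL repository
      git commit -am "fix the compilation issue"  # or: git commit --amend (if not pushed yet)
      cd <xp_directory>                           # back to the experiment directory
      davai-build -e                              # re-build in the pre-existing pack
      davai-run_tests                             # re-run the tests once the build succeeds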

    Build with [cmake/ecbuild...]

    Not implemented yet.

    diff --git a/1.1.10/buildoptions/index.html b/1.1.10/buildoptions/index.html index 9705013..b8c7f13 100644 --- a/1.1.10/buildoptions/index.html +++ b/1.1.10/buildoptions/index.html @@ -1,2 +1,2 @@ -Build options · Davai


    +Build options · Davai

    Build options

    The choice of a build system is a corollary of the versioning of the tests. However, at the time of writing, only gmkpack is available within DAVAÏ.

    Build with gmkpack

    In the [gmkpack] section of the config file conf/davai_<usecase>.ini:

    • to make a main pack, instead of an incremental pack
      $\hookrightarrow$ set packtype = main

    • to set the list of compilation flavours to build (a.k.a. compiler label/flag)
      $\hookrightarrow$ use compilation_flavours
      ! if you modify this, you may need to modify compilation_flavour accordingly in the "families" sections that define it, as well as programs_by_flavour, which defines the executables to be built for specific flavours

    In the [gitref2pack] section:

    • to use a different $ROOTPACK (i.e. a different source of ancestor packs, for incremental packs)
      $\hookrightarrow$ use rootpack
      (preferable to modifying the environment variable, so that the change remains specific to that experiment)

    • to avoid cleaning all .o and .a when (re-)populating the pack:
      $\hookrightarrow$ set cleanpack = False

    In the [pack2bin] section:

    • to make the pack2bin task crash more quickly after a compilation/link error, or not crash at all
      $\hookrightarrow$ set fatal_build_failure =

      • __finally__ $\Rightarrow$ crash after trying to compile and build all executables

      • __any__ $\Rightarrow$ crash if compilation fails, or right after the first executable that fails to link

      • __none__ $\Rightarrow$ never crash, i.e. ignore failed builds

    • to re-generate ics_ files before building
      $\hookrightarrow$ set regenerate_ics = True

    • to (re-)compile local sources with gmkpack’s option Ofrt=2 (i.e. -O0 -check bounds):
      $\hookrightarrow$ set Ofrt = 2

    • to use more/fewer threads for compiling (independent) source files in parallel:
      $\hookrightarrow$ use threads

    • to change the list of executables to be built, by default or depending on the compilation flavour:
      $\hookrightarrow$ use default_programs and programs_by_flavour

    Also, any gmkpack native variable can be set in the .bash_profile, e.g. ROOTPACK, HOMEPACK, etc. Some might be overwritten by the config, e.g. if you set rootpack in the config file.
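    For illustration, a few of the options above, gathered in conf/davai_<usecase>.ini (a sketch only: the values are arbitrary examples, not recommendations):

      [gmkpack]
      packtype = main

      [gitref2pack]
      cleanpack = False

      [pack2bin]
      fatal_build_failure = any
      regenerate_ics = True
      Ofrt = 2
      threads = 8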

    Build with [cmake/makeup/ecbuild...]

    Not implemented yet.

    diff --git a/1.1.10/ciboulai/index.html b/1.1.10/ciboulai/index.html index e3490dd..6665a2b 100644 --- a/1.1.10/ciboulai/index.html +++ b/1.1.10/ciboulai/index.html @@ -1,2 +1,2 @@ -Monitoring results · Davai


    +Monitoring results · Davai

    Monitor and inspect results

    1. Monitor the execution of the jobs with the scheduler (with SLURM: squeue -u <user>)

    2. Check the tests results summary on the Ciboulaï dashboard, whose URL is prompted at the end of the tests launch, or visible in the config file:

      • open Ciboulaï dashboard in a web browser:

        • To guide you in the navigation in Ciboulaï, cf. Ciboulai
        • To get the paths to a job output or abort directory: button [+] then Context.
      • if the dashboard is not accessible, a command-line version of the status is possible; in the XP directory, run:

        davai-xp_status

        to see the status summary of each job. The detailed status and expertise of tests are also available as json files on the Vortex cache: belenos:/scratch/mtool/<user>/cache/vortex/davai/<vconf>/<xpid>/summaries_stack/, or via:

        davai-xp_status -t <task>

        To get the paths to a job output or abort directory: run davai-xp_status -t <task>, then open the itself file and look in the Context section.

    3. If everything is OK (green) at the end of executions, your branch is validated!

    4. If not, cf. Section advanced topics to re-compile a code modification and re-run tests.

    diff --git a/1.1.10/ciboulai_navigation/index.html b/1.1.10/ciboulai_navigation/index.html index 04b9dc1..a7c0792 100644 --- a/1.1.10/ciboulai_navigation/index.html +++ b/1.1.10/ciboulai_navigation/index.html @@ -1,2 +1,2 @@ -Ciboulaï navigation · Davai


    +Ciboulaï navigation · Davai

    Navigation in Ciboulaï

    • On the main page, the numbers in the columns to the right indicate the number of jobs whose results are, respectively:

      • bit-reproducible or within acceptable numerical error;
      • numerically different;
      • jobs that have crashed before end;
      • the experts were not able to decide on the test results; to be checked manually;
      • these tests have no expected result to be checked: they are assumed OK since they did not crash.
    • When you get to an experiment page, you can find a few key features of the experiment in the header. The [+] close to the XPID (experiment ID) will provide more. The other [+]'s to the left of the uenvs provide inner details from each one. The summary of tests results is also visible on the top right.

    • Each task is summarized: its Pending/Crashed/Ended status and, in case of Ended, the comparison status. At first glance, a main metric is shown, assumed to be the most meaningful for this test.

    • The ‘drHook rel diff’ and ‘rss rel diff’ columns show the relative difference to the reference in, respectively, the elapsed time of the execution and the memory consumption (RSS).

      Warning

      So far, the drHook figures have proven too volatile from one execution to another to be meaningful. Don't pay too much attention to them for now. Similarly, the RSS figures remain to be investigated (relevance and availability).

    • A filter is available to show only a subset of tasks.

    • When you click on the [+] of the more column, the detailed expertise is displayed:

      • the itself tab will show info from each Expert about the task independently from reference

      • the continuity tab will show the compared results from each Expert against the same task from the reference experiment

      • the consistency tab will show the compared results from each Expert against a different reference task from the same experiment, when meaningful (very few cases, so far)

      Click on each Expert to unroll results.

    • At the experiment level as well as at the task level, a little pen symbol enables you to annotate it. That might be used for instance to justify numerical differences.

    diff --git a/1.1.10/continuousintegration/index.html b/1.1.10/continuousintegration/index.html index e6260fa..6d0b3a1 100644 --- a/1.1.10/continuousintegration/index.html +++ b/1.1.10/continuousintegration/index.html @@ -1,2 +1,2 @@ -Continuous integration · Davai


    +Continuous integration · Davai

    Steps and updates in the Continuous Integration process

    1. Integration of b1 :

      • Reference: x0 is the default reference xp in dev_DV49_toT1 config file

      • Tests: b1 did not require adapting the tests $\rightarrow$ we can test with branch dev_DV49_toT1 unchanged (and still equal to DV49)

      davai-new_xp dev_CY49_toT1 -v dev_DV49_toT1
      $~~~\hookrightarrow~~~$ xi1 == x1 == x0

    2. Integration of b2 :

      • Reference: xi1 should normally be the reference xp, but since its results are bit-identical to x0 as opposed to x2, it is more relevant to compare to x2, to check that the merge of b1 and b2 still gives the same results as b2

      • Tests: b2 did not require adapting the tests $\rightarrow$ tests branch DV49_toT1 unchanged

      davai-new_xp dev_CY49_toT1 -v DV49_toT1
      $~~~$and set ref_xpid = x2
      $~~~\hookrightarrow~~~$ xi2 == x2

      • then ref_xpid should be set to xi2 in branch DV49_toT1
    3. Integration of b3 :

      • Reference: b3 does not change the results, so the reference experiment is, as expected, the default xi2

      • Tests: b3 requires tests adaptations (DV49_b3) $\rightarrow$ update dev_DV49_toT1 by merging DV49_b3 in

      davai-new_xp dev_CY49_toT1 -v DV49_toT1
      $~~~\hookrightarrow~~~$ xi3 == xi2

    4. Integration of b4 : (where it becomes more or less tricky)

      • Reference: b4 changes the results, but the results of xi3 (current default reference for the integration branch) are also changed from x0 (since b2) $\rightarrow$ the reference experiment becomes less obvious!
        The choice of the reference should be made depending on the breadth of impact on both sides:

        1. if there are more differences in the results between dev_CY49_toT1 and CY49 than between b4 and CY49:
          $\rightarrow$ xi3 should be taken as reference, and the differences finely compared to those shown in x4

        2. if there are more differences in the results between b4 and CY49 than between dev_CY49_toT1 and CY49:
          $\rightarrow$ x4 should be taken as reference, and the differences finely compared to those shown in xi3’, where xi3’ is a "witness" experiment comparing the integration branch after integration of b3 (commit <c3>) to CY49 (experiment x0):
          davai-new_xp <c3> -v dev_DV49_toT1
          $~~~$and set ref_xpid = x0
          $~~~\hookrightarrow~~~$ xi3’

        This is still OK if the tests affected by dev_CY49_toT1 (via b2) and the tests affected by b4 are not the same subset, or at least if the affected fields are not the same. If they are (e.g. numerical differences that propagate prognostically through the model), the conclusion becomes much more difficult!
        In this case, we do not really have an explicit recommendation; the integrators should double-check the result of the merge with the author of the contribution b4. Any ideas to sort it out are welcome.

      • Tests: b4 requires tests adaptations (DV49_b4) $\rightarrow$ update dev_DV49_toT1 by merging DV49_b4 in

      davai-new_xp dev_CY49_toT1 -v dev_DV49_toT1
      $~~~$and set ref_xpid = xi3|xi4
      $~~~\hookrightarrow~~~$ xi4

    diff --git a/1.1.10/create_branch/index.html b/1.1.10/create_branch/index.html index bbc7352..8902141 100644 --- a/1.1.10/create_branch/index.html +++ b/1.1.10/create_branch/index.html @@ -1,2 +1,2 @@ -Creating a branch · Davai

    +Creating a branch · Davai

    Create your branch, containing your modifications

    To use DAVAÏ to test your contribution to the next development release, you need to have your code in a Git branch starting from the latest official release (e.g. CY48T1 tag for contributions to 48T2, or CY49 tag for contributions to 49T1).

    In the following the example is taken on a contribution to 48T2:

    1. In your repository (e.g. ~/repositories/arpifs – make sure it is clean with git status beforehand), create your branch:

      git checkout -b <my_branch> [<starting_reference>]
      Example

      git checkout -b mary_CY48T1_cleaning CY48T1

      Note

      It is strongly recommended to have explicit branch names with regard to their origin and their owner, hence the legacy branch naming syntax <user>_<CYCLE>_<purpose_of_the_branch>

    2. Implement your developments in the branch. It is recommended to find a compromise between a whole development in only one commit, and a large number of very small commits (e.g. one per changed file). If you then face compilation or runtime issues, and only if you haven't pushed yet, you can amend[1] the latest commit to avoid a whole series of commits just for debugging purposes.

      Note

      DAVAÏ is currently able to include non-committed changes in the compilation and testing. However, in the next version based on bundle, this might not be possible anymore.

    • 1 git commit --amend
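    For instance, a minimal sketch of the amending workflow, valid only as long as the latest commit has not been pushed:

      git add <modified_files>
      git commit --amend --no-edit   # fold the fix into the latest commit, keeping its message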
    diff --git a/1.1.10/exercise4developers/index.html b/1.1.10/exercise4developers/index.html index af765d4..c787f6f 100644 --- a/1.1.10/exercise4developers/index.html +++ b/1.1.10/exercise4developers/index.html @@ -22,4 +22,4 @@ 20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0009:00.fa 20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0000:00.fa 20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0006:00.fa -20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0012:00.fa


    +20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0012:00.fa

    To know how to name these files, look at similar data for other experiments, or just run your experiment and see where it crashes.

    Defining a new geometry

    Since the ALARO+SURFEX test runs on a new domain, this domain should also be registered. This is done in a file ~/.vortexrc/geometries.ini, following the examples from the file vortex/conf/geometries.ini.
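    For illustration only, such an entry could be sketched as below; the field names shown here are hypothetical and should be replaced by the ones used in actual entries of vortex/conf/geometries.ini for a similar (LAM) geometry:

      [chmh2325-02km33]
      # hypothetical fields, to be copied from a similar existing geometry:
      info = ALARO+SURFEX test domain
      kind = projected
      resolution = 2.33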

    diff --git a/1.1.10/expertthresholds/index.html b/1.1.10/expertthresholds/index.html index c443443..d6ae88d 100644 --- a/1.1.10/expertthresholds/index.html +++ b/1.1.10/expertthresholds/index.html @@ -1,2 +1,2 @@ -Expert thresholds · Davai


    +Expert thresholds · Davai

    Expert thresholds

    Experts are the tools developed to parse the outputs of the tasks and compare them to a reference. Each expert has its own expertise field: norms, Jo-tables, etc.

    See Information on experts in the left tab of Ciboulaï for information about the tunable thresholds of the various experts (e.g. the allowed error on Jo). Then set the corresponding attributes in the experts' definitions in the concerned tasks.

    Again, if you need to modify these, please ***explain and describe in the integration request***.
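    For illustration, in a task template, an expert with a tuned threshold could be declared as in the Python sketch below (the kind and attribute names here are assumptions; the authoritative list is the Information on experts page on Ciboulaï):

      # sketch of an experts definition within a task (the threshold attribute name is hypothetical)
      experts = [
          dict(kind='joTables', jo_validation_threshold=0.01),
          dict(kind='norms'),
      ]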

    diff --git a/1.1.10/fixingproblems/index.html b/1.1.10/fixingproblems/index.html index 27b1441..6475527 100644 --- a/1.1.10/fixingproblems/index.html +++ b/1.1.10/fixingproblems/index.html @@ -1,2 +1,2 @@ -- · Davai


    +- · Davai

    Investigating a problem

    The usecase parameter of an experiment (to be set in the davai-new_xp command) determines the span of tests to be generated and run. Several usecases have been (or will be) implemented with various purposes:

    • NRV (default): Non-Regression Validation, minimal set of tests that any contribution must pass.

    • ELP: Exploration and Localization of Problems, an extended set of isolated components, to help localize an issue

    • PC: [not implemented yet] a set of toy tests ported on workstation; the compilation with GNU (usually less permissive than vendor compilers) makes it possible to catch issues that might not have been seen with NRV/ELP tests.

    Smaller tests for smaller problems

    To investigate a non-reproducibility or crash issue, the ELP usecase of Davaï can help localize its context, with a set of more elementary tests that run smaller parts of the code.

    To switch to this mode:

    • create a new experiment with the same arguments but -u ELP, and go into it

    • for a faster build (no re-compilation), edit config file conf/davai_elp.ini and in section [gitref2pack], set cleanpack = False

    • davai-run_xp

    Instead of 50$^+$ tests, the ELP mode provides hundreds of more elementary and focused tests. For instance, if you had a problem in the 4DVar minimization, you can run the 3 observation operators tests, observation by observation, and/or a screening, and/or a 3DVar or 4DVar single-obs minimization, in order to understand whether the problem lies in a specific observation operator (which obs type?), in its direct, TL or AD version, in the Variational algorithm, or in the preceding screening, and so on...

    The user may want, at some point, to run only a subset of this very large set of tests. In this case, simply open conf/ELP.yaml and comment out (#) the launch of the various jobs. To reduce the number of tests that are looped over internally, e.g. the loop on observation types within the *__obstype jobs: open config file conf/davai_elp.ini, look for the section named after the job name, and keep only the required obstype(s) in the list.

    diff --git a/1.1.10/index.html b/1.1.10/index.html index 268f23d..f56dc38 100644 --- a/1.1.10/index.html +++ b/1.1.10/index.html @@ -1,2 +1,2 @@ -Home · Davai

    +Home · Davai

    DAVAÏ User Guide

    DAVAÏ embeds the whole workflow from the source code to the green/red light validation status: fetching sources from Git, building executables, running test cases, analysing the results and displaying them on a dashboard.

    For now, the only build system embedded is gmkpack, but we expect other systems to be plugged when required. The second limitation of this version is that the starting point is still an IAL[1] Git reference only. The next version of the DAVAÏ system will include multi-projects/repositories fetching, using the bundle concept as starting point.

    The dimensioning of tests (grid sizes, number of observations, parallelization...) is done in order to reconcile representativeness and execution speed. Therefore, in the general usecases, the tests are supposed to run on HPC. A dedicated usecase will target smaller configurations to run on a workstation (not available yet). An accessible source code forge has been set up within the ACCORD consortium to host the IAL central repository, on which updates and releases are published, and where integration requests will be posted, reviewed and monitored.

    By the way: DAVAI stands for "Device Aiming at the VAlidation of IAL"

    • 1IAL = IFS-Arpege-LAM
    diff --git a/1.1.10/inputdata/index.html b/1.1.10/inputdata/index.html index ab463fc..c04ee77 100644 --- a/1.1.10/inputdata/index.html +++ b/1.1.10/inputdata/index.html @@ -1,2 +1,2 @@ -Input data · Davai


    +Input data · Davai

    Input data

    DAVAÏ gets its input data through 2 providers:

    • "shelves" (pseudo Vortex experiments) for the data supposed to flow in real case (e.g. initial conditions file, observations files, etc...), where this data is statically stored, usually in a cache to fetch it faster

    • "uget" for the static data (namelists, climatologic files, parameter files...), catalogued in ***uenv*** files.

    These shelves and uenv catalogs (cf. the uget/uenv help documentation for the use of this tool) can be modified in the [DEFAULT] section of the config file.
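    For illustration, the corresponding keys could look as follows in the [DEFAULT] section (the key names and values below are hypothetical placeholders; use the ones actually present in your experiment's config file):

      [DEFAULT]
      # shelf holding the flow input data (hypothetical key/value)
      input_shelf = shelf@davai
      # uenv catalog of static data (hypothetical key/value)
      davaienv = uget:env.mycatalog@davai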

    In case your contribution needs a modification in these, ***don't forget to describe these changes in the integration request***.

    diff --git a/1.1.10/internalorganization/index.html b/1.1.10/internalorganization/index.html index 84c032f..7e9d7c4 100644 --- a/1.1.10/internalorganization/index.html +++ b/1.1.10/internalorganization/index.html @@ -1,2 +1,2 @@ -Internal organization · Davai
    +Internal organization · Davai
    diff --git a/1.1.10/investigatingproblems/index.html b/1.1.10/investigatingproblems/index.html index 84b0800..8d6346b 100644 --- a/1.1.10/investigatingproblems/index.html +++ b/1.1.10/investigatingproblems/index.html @@ -1,2 +1,2 @@ -Investigate Problems · Davai


    +Investigate Problems · Davai

    Investigating a problem

    The usecase parameter of an experiment (to be set in the davai-new_xp command) determines the span of tests to be generated and run. Several usecases have been (or will be) implemented with various purposes:

    • NRV (default): Non-Regression Validation, minimal set of tests that any contribution must pass.

    • ELP: Exploration and Localization of Problems, an extended set of isolated components, to help localize an issue

    • PC: [not implemented yet] a set of toy tests ported on workstation; the compilation with GNU (usually less permissive than vendor compilers) makes it possible to catch issues that might not have been seen with NRV/ELP tests.

    Smaller tests for smaller problems

    To investigate a non-reproducibility or crash issue, the ELP usecase of Davaï can help localize its context, with a set of more elementary tests that run smaller parts of the code.

    To switch to this mode:

    • create a new experiment with the same arguments but -u ELP, and go into it

    • for a faster build (no re-compilation), edit config file conf/davai_elp.ini and in section [gitref2pack], set cleanpack = False

    • davai-run_xp

    Instead of 50$^+$ tests, the ELP mode provides hundreds of more elementary and focused tests. For instance, if you had a problem in the 4DVar minimization, you can run the 3 observation operators tests, observation by observation, and/or a screening, and/or a 3DVar or 4DVar single-obs minimization, in order to understand whether the problem lies in a specific observation operator (which obs type?), in its direct, TL or AD version, in the Variational algorithm, or in the preceding screening, and so on...

    The user may want, at some point, to run only a subset of this very large set of tests. In this case, simply open conf/ELP.yaml and comment out (#) the launch of the various jobs. To reduce the number of tests that are looped over internally, e.g. the loop on observation types within the *__obstype jobs: open config file conf/davai_elp.ini, look for the section named after the job name, and keep only the required obstype(s) in the list.
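    For illustration, deactivating a job in conf/ELP.yaml could look like the sketch below (the job names and the exact layout of the file are hypothetical; refer to the actual file in your experiment):

      # conf/ELP.yaml (sketch)
      forecasts:
        - forecast
        # - lam_forecast        # commented out: this job will not be launched
      assim:
        - screening__obstype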

    diff --git a/1.1.10/jobs_tasks/index.html b/1.1.10/jobs_tasks/index.html index 73471fa..59d471e 100644 --- a/1.1.10/jobs_tasks/index.html +++ b/1.1.10/jobs_tasks/index.html @@ -1,2 +1,2 @@ -Jobs & Tasks · Davai


    +Jobs & Tasks · Davai

    Jobs & tasks

    A Task is generally understood as the triplet:

    1. fetch input resources,
    2. run an executable,
    3. dispatch the produced output.

    In a Vortex script, the tasks are written in Python, using classes and functionalities of the Vortex Python packages. In particular, running an executable is wrapped in what is called an AlgoComponent. In DAVAÏ, we add a second AlgoComponent right after the nominal one in (2) to "expertise" the outputs and compare to a reference.

    The task templates are stored in the tasks/ directory, and all inherit from the abstract class vortex.layout.nodes.Task. A Test is a Task that includes an expertise against a reference. A Job is understood as a series of one or several tasks, executed sequentially within one "job submission" to a job scheduler.

    The job templates are also stored in the tasks/ directory, and are defined as a function setup that returns a Driver object, which itself contains a series of Task(s) and Family(ies).
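    A schematic sketch of such a job template follows; it is illustrative only (the exact signatures of Driver and Family, and the hypothetical task class MyTest, are assumptions to be checked against vortex.layout.nodes and the actual templates in tasks/):

      from vortex.layout.nodes import Driver, Family
      from .mytask import MyTest   # hypothetical Task subclass defined in tasks/

      def setup(t, **kw):
          # return a Driver chaining Families and Tasks, run sequentially within one job
          return Driver(tag='my_driver', ticket=t, options=kw, nodes=[
              Family(tag='my_family', ticket=t, nodes=[
                  MyTest(tag='my_test', ticket=t, **kw),
              ], **kw),
          ])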

    In DAVAÏ, the idea is to have the tasks in independent jobs as far as possible, except for flow-dependent tasks, or for loops on clones of a task with a varying parameter.

    diff --git a/1.1.10/mtool/index.html b/1.1.10/mtool/index.html index a5e1916..0df1ac1 100644 --- a/1.1.10/mtool/index.html +++ b/1.1.10/mtool/index.html @@ -1,2 +1,2 @@ -MTOOL · Davai


    +MTOOL · Davai

    Running jobs on HPC : MTOOL

    On HPCs, the compute nodes are "expensive", so we try as much as possible to reserve the elapsed time spent on compute nodes for actual computations, i.e. the execution of the executable. Therefore in DAVAÏ, the generation of the scripts uses the MTOOL filter to replicate and cut a job script into several steps:

    1. on transfer nodes, fetch the resources, either locally on the file system(s) or using FTP connections to outer machines
    2. on compute nodes, execute the AlgoComponent(s)
    3. on transfer nodes, dispatch the produced output
    4. final step to clean the temporary environment created for the jobs

    In addition to separating and chaining these 4 steps, MTOOL initially sets up a clean environment with a temporary, unique execution directory. It also collects log files of the script's execution, and in the case of a failure (missing input resources, execution aborted), it takes a snapshot of the execution directory. Therefore, for each job, one will find:

    • a depot directory in which to find the actual 4 scripts and their log files

    • an abort directory, in which to find the exact copy of the execution directory when the execution failed

    These directories are registered by the DAVAÏ expertise and are displayed in the Context item of the expertise for each task in Ciboulaï.

    diff --git a/1.1.10/organization/index.html b/1.1.10/organization/index.html index d67419c..b235e69 100644 --- a/1.1.10/organization/index.html +++ b/1.1.10/organization/index.html @@ -1,2 +1,2 @@ -Organization of experiment · Davai


    +Organization of experiment · Davai

    Organisation of an experiment

    The davai-new_xp command prepares a "testing experiment" directory, named uniquely after an incremental number, the platform and the user.

    This testing experiment consists of:

    • conf/davai_nrv.ini : config file, containing parameters such as the git reference to test, davai options, versions of the input resources to use, tunings of tests (e.g. the input obs files to take into account) and profiles of jobs

    • conf/<USECASE>.yaml : contains an ordered and categorised list of jobs to be run in the requested usecase.

    • conf/sources.yaml : information about the sources to be tested, in terms of Git or bundle

    • tasks/ : templates of single tasks and jobs

    • links to the python packages that are used by the scripts (vortex, epygram, ial_build, ial_expertise)

    • a logs directory/link will appear after the first execution, containing log files of each job.

    • DAVAI-tests : a clone of the DAVAI-tests repository, checked out on the requested version of the tests, to which the tasks/ and conf/ point
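    Schematically, a freshly created experiment directory could thus look like (a sketch, assuming the NRV usecase):

      <xp_directory>/
          conf/davai_nrv.ini
          conf/NRV.yaml
          conf/sources.yaml
          tasks/
          DAVAI-tests/
          vortex/, epygram/, ial_build/, ial_expertise/   (links to the python packages)
          logs/                                           (appears after the first execution)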

    diff --git a/1.1.10/otheroptions/index.html b/1.1.10/otheroptions/index.html index bd26ba8..21879f8 100644 --- a/1.1.10/otheroptions/index.html +++ b/1.1.10/otheroptions/index.html @@ -1,2 +1,2 @@ -Other options · Davai


    +Other options · Davai

    Other options

    In the [DEFAULT] section, a few other general options can be set to tune the behaviour of the experiment:

    • expertise_fatal_exceptions to raise/ignore errors that could occur in the expertise subsequent to the tests

    • drhook_profiling to activate DrHook profiling or not

    • ignore_reference to force to ignore reference outputs (and so deactivate comparison)

    • archive_as_ref to archive the outputs (saving of a reference only)
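    For illustration, these options could be set as follows in the [DEFAULT] section (the values here are arbitrary examples):

      [DEFAULT]
      expertise_fatal_exceptions = False   # ignore errors occurring within the expertise
      drhook_profiling = False             # deactivate DrHook profiling
      ignore_reference = False             # keep the comparison to the reference active
      archive_as_ref = False               # do not archive the outputs as a reference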

    diff --git a/1.1.10/parallelprofiling/index.html b/1.1.10/parallelprofiling/index.html index 2524634..c791d52 100644 --- a/1.1.10/parallelprofiling/index.html +++ b/1.1.10/parallelprofiling/index.html @@ -1,2 +1,2 @@ -Parallel profiling · Davai


    +Parallel profiling · Davai

    Parallel profiling

    Each job has a section in the config file, in which one can tune the profile parameters requested from the job scheduler:

    • time : elapsed time

    • ntasks : number of MPI tasks per node

    • nnodes : number of nodes

    • openmp : number of OpenMP threads

    • partition : category of nodes

    • mem : memory (helps to prevent OOM)

    The total number of MPI tasks is therefore nnodes $\times$ ntasks, and is automatically substituted in the namelists.
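    For illustration, a job section could be tuned as below (the section name and values are arbitrary examples); with this sketch, the total number of MPI tasks would be 2 $\times$ 64 = 128:

      [forecast]            # hypothetical job section name
      time = 00:30:00       # elapsed time limit
      nnodes = 2            # number of nodes
      ntasks = 64           # MPI tasks per node
      openmp = 2            # OpenMP threads per task
      partition = compute   # hypothetical partition name
      mem = 200GB           # memory request (helps to prevent OOM)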

    diff --git a/1.1.10/rerun/index.html b/1.1.10/rerun/index.html index a593cbe..7d195d7 100644 --- a/1.1.10/rerun/index.html +++ b/1.1.10/rerun/index.html @@ -1,2 +1,2 @@ -Rerun tests · Davai

    Re-run a test

    The Davai command davai-run_tests launches all the jobs listed in conf/<USECASE>.yaml, sequentially and independently (i.e. without waiting for the jobs to finish). The command can also be used complementary:

    • to list the jobs that would be launched by the command, according to the conf/<USECASE>.yaml config file: davai-run_tests -l

    • to run a single job:

      davai-run_tests <job identifier as given by -l option>

    Some tests are gathered together within a single job. There are 2 reasons for that: if they are an instance of a loop (e.g. same test on different obstypes, or different geometries), or if they have a flow-dependency with an upstream/downstream test (e.g. bator > screening > minimization).

    When a test fails within a job and the user wants to re-run it without re-runnning the other tests from the same job, it is possible to do so by deactivating them[1] :

    • loops: to deactivate members of a loop: open config file conf/davai_.ini, and in the section corresponding to the job or family, the loops can be found as list(...), e.g. obstypes, rundates or geometries. Items in the list can be reduced to the only required ones (note that if only one item remains, one needs to keep a final "," within the parenthesis).

    • dependency: open the driver file corresponding to the job name in the tasks/ directory, and comment out (#) the unrequired tasks or families of nodes, leaving only the required task(s).

    [1] including upstream tasks that produce flow-resources for the targeted test, as long as these resources remain in cache
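
    As announced above, a hypothetical sketch of the loop-reduction edit (the section and obstype names are invented for the example):

      # conf/davai_<usecase>.ini -- before:
      [screenings__obstype]
      obstypes = list(conv,iasi,seviri)

      # after: keep only the loop member to re-run (mind the final comma for a single item)
      obstypes = list(iasi,)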

    Run tests

    1. Create your experiment, specifying which version of the tests you want to use:

      davai-new_xp <my_branch> -v <tests_version>
      Example
      davai-new_xp mary_CY48T1_cleaning -v DV48T1

      An experiment with a unique experiment ID is created; the ID and the experiment path are printed as output of the command.

      • To know which version is to be used for a given development: see the Versioning of tests section
      • See davai-new_xp -h for more options on this command
      • See Appendix for a more comprehensive approach to tests versioning.
      • If the version you are requesting is not known, you may need to specify the DAVAI-tests origin repository from which to clone/fetch it, using the argument --origin <URL of the remote DAVAI-tests.git>
    2. Go to the (prompted) experiment directory.

      If you want to set some options differently from the default, open the file conf/davai_nrv.ini and tune the parameters in the [DEFAULT] section. The usual tunable parameters are detailed in the Other options section.

    3. Launch the build and tests:

      davai-run_xp

      After initializing the Ciboulaï page for the experiment, the command first runs the build of the branch and waits for the executables (that step may take a while, depending on the scope of your modifications, especially with several compilation flavours). Once the build is completed, it launches the tests (through the job scheduler on HPC).

    To test a bundle, i.e. a combination of modifications in IAL and other repos

    Use command davai-new_xp_from_bundle. The rest is identical.
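
    In short, a complete session may look like the following sketch (the experiment name and tests version are taken from the example above):

      davai-new_xp mary_CY48T1_cleaning -v DV48T1
      cd <prompted experiment directory>
      # optionally tune the [DEFAULT] section of the config:
      vi conf/davai_nrv.ini
      davai-run_xp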



    Jobs and tasks

    A Task is generally understood as the triplet:

    1. fetch the input resources,

    2. run an executable,

    3. dispatch the produced output.

    In a Vortex script, the tasks are written in Python, using classes and functionalities of the Vortex Python packages. In particular, running an executable is wrapped in what is called an AlgoComponent. In DAVAÏ, we add a second AlgoComponent right after the nominal one in (2), to "expertise" the outputs and compare them to a reference.

    The task templates are stored in the tasks/ directory, and all inherit from the abstract class vortex.layout.nodes.Task. A Test is a Task that includes an expertise against a reference. A Job is understood as a series of one or several tasks, executed sequentially within one "job submission" to a job scheduler.

    The job templates are also stored in the tasks/ directory, and are defined as a function setup that returns a Driver object, which itself contains a series of Task(s) and Family(ies).

    In DAVAÏ, the idea is to have the tasks in independent jobs as far as possible, except for flow-dependent tasks, or for loops on clones of a task with a varying parameter.

    Steps and updates in the Continuous Integration process

    Integration of b1:

    • Reference: x0 is the default reference xp in the dev_DV49_toT1 config file

    • Tests: b1 did not require to adapt the tests → we can test with branch dev_DV49_toT1 unchanged (and still equal to DV49)

      davai-new_xp dev_CY49_toT1 -v dev_DV49_toT1
      ↪ xi1 == x1 == x0

    Integration of b2:

    • Reference: xi1 should normally be the reference xp, but since its results are bit-identical to x0 as opposed to x2, it is more relevant to compare to x2, to check that the merge of b1 and b2 still gives the same results as b2

    • Tests: b2 did not require to adapt the tests → tests branch DV49_toT1 unchanged

      davai-new_xp dev_CY49_toT1 -v DV49_toT1
      and set ref_xpid = x2
      ↪ xi2 == x2

      then ref_xpid should be set to xi2 in branch DV49_toT1

    Integration of b3:

    • Reference: b3 does not change the results, so the reference experiment is, by default as expected, xi2

    • Tests: b3 requires tests adaptations (DV49_b3) → update dev_DV49_toT1 by merging DV49_b3 in

      davai-new_xp dev_CY49_toT1 -v DV49_toT1
      ↪ xi3 == xi2

    Integration of b4 (where it becomes more or less tricky):

    • Reference: b4 changes the results, but the results of xi3 (the current default reference for the integration branch) are also changed from x0 (since b2) → the reference experiment becomes less obvious! The choice of the reference should be made depending on the width of impact on both sides:

      • if there are more differences in the results between dev_CY49_toT1 and CY49 than between b4 and CY49:
        → xi3 should be taken as reference, and the differences finely compared to those shown in x4

      • if there are more differences in the results between b4 and CY49 than between dev_CY49_toT1 and CY49:
        → x4 should be taken as reference, and the differences finely compared to those shown in xi3', where xi3' is a "witness" experiment comparing the integration branch after integration of b3 (commit ) to CY49 (experiment x0):

        davai-new_xp -v dev_DV49_toT1
        and set ref_xpid = x0
        ↪ xi3'

      This is still OK if the tests affected by dev_CY49_toT1 (via b2) and the tests affected by b4 are not the same subset, or at least if the affected fields are not the same. If they are (e.g. numerical differences that propagate prognostically through the model), the conclusion becomes much more difficult! In this case, we do not really have an explicit recommendation; the integrators should double-check the result of the merge with the author of the contribution b4. Any idea is welcome to sort it out.

    • Tests: b4 requires tests adaptations (DV49_b4) → update dev_DV49_toT1 by merging DV49_b4 in

      davai-new_xp dev_CY49_toT1 -v dev_DV49_toT1
      and set ref_xpid = xi3|xi4
      ↪ xi4
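
    The "set ref_xpid = ..." steps above refer to the experiment configuration; here is a minimal sketch, assuming ref_xpid belongs to the [DEFAULT] section of the experiment's config file (its exact placement is an assumption):

      # conf/davai_nrv.ini of the integration experiment xi2
      [DEFAULT]
      ref_xpid = x2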

    Investigating a problem

    The usecase parameter of an experiment (to be set in the davai-new_xp command) determines the span of tests to be generated and run. Several usecases have been (or will be) implemented, with various purposes:

    • NRV (default): Non-Regression Validation, the minimal set of tests that any contribution must pass.

    • ELP: Exploration and Localization of Problems, an extended set of isolated components, to help localizing an issue.

    • PC: [not implemented yet] a set of toy tests ported on workstation; the compilation with GNU (usually less permissive than vendor compilers) enables raising issues that might not have been seen with the NRV/ELP tests.

    Smaller tests for smaller problems

    To investigate a non-reproducibility or crash issue, the ELP usecase of Davaï can help localizing its context, with a set of more elementary tests that run smaller parts of the code.

    To switch to this mode:

    1. create a new experiment with the same arguments but -u ELP, and go into it

    2. for a faster build (no re-compilation), edit the config file conf/davai_elp.ini and, in section [gitref2pack], set cleanpack = False

    3. davai-run_xp

    Instead of 50+ tests, the ELP mode provides hundreds of more elementary and focused tests. For instance, if you had a problem in the 4DVar minimization, you can run the 3 observation operators tests, observation by observation, and/or a screening, and/or a 3DVar or 4DVar single-obs minimization, in order to understand whether the problem lies in a specific observation operator (and for which obs type?), in its direct, TL or AD version, in the variational algorithm, or in the preceding screening, and so on...

    The user may want, at some point, to run only a subset of this very large set of tests. In that case, simply open conf/ELP.yaml and comment out (#) the launch of the unwanted jobs. To reduce the number of tests looped over within a job, e.g. the loop on observation types within the *__obstype jobs: open the config file conf/davai_elp.ini, look for the section named after the job, and keep only the selected obstype(s) in the list.

    First tips

    • All Davai commands are prefixed with davai-* and can be listed with davai-help. All commands are auto-documented with option -h.

    • If the pack preparation or compilation fails, for whatever reason, the build step prints an error message and the davai-run_xp command stops before running the tests. You can find the output of the pack preparation or compilation in the logs/ directory, like any other test log file.

    • A very common error is when the pack already exists; if you actually want to overwrite the contents of the pack (e.g. because you just fixed a code issue in the branch), you may need option -e/--preexisting_pack:

      davai-run_xp -e
      or
      davai-build -e

      Otherwise, if the pack preexists independently, for valid reasons, you will need to move/delete the existing pack, or rename your branch.

    • The tests are organised as tasks and jobs:

      • a task consists in fetching input resources, running an executable, analyzing its outputs to the Ciboulai dashboard and dispatching (archiving) them: 1 test = 1 task

      • a job consists in a sequential driver of one or several task(s): either a flow sequence (i.e. the outputs of task N are an input of task N+1) or a family sequence (e.g. run an IFS and an Arpege forecast independently)

    • To fix a piece of code, the best is to modify the code in your Git repo, then re-run

      davai-run_xp -e

      (or davai-build -e and then davai-run_tests). You don't necessarily need to commit the change right away: the non-committed changes are exported from Git to the pack. Don't forget to commit eventually though, before issuing a pull request.

    • To re-run one job only after re-compilation, type

      davai-run_tests -l

      to list the jobs, and then

      davai-run_tests <job identifier as given by -l option>

      Example:

      davai-run_tests forecasts.standalone_forecasts

      The syntax category.job indicates that the job to be run is the Driver in ./tasks/category/job.py

    • To re-run a single test within a job, e.g. the IFS forecast in forecasts/standalone_forecasts.py: edit this file, comment out the other Family(s) or Task(s) (nodes) therein, and re-run the job as indicated above.

    • Eventually, after code modifications and fixing particular tests, you should re-run the whole set of tests, to make sure your fix does not break any other test.
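
    Putting these tips together, a typical fix-and-rerun sequence (the job name is taken from the example above):

      # fix the code in your Git repo (committing can wait), then:
      davai-build -e          # rebuild in the pre-existing pack
      davai-run_tests -l      # list the jobs
      davai-run_tests forecasts.standalone_forecasts   # re-run the affected job only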

    User configuration

    Some more general parameters are configurable, such as the default directory in which the experiments are stored, or the directory in which the logs of the jobs are put. This can be set in ~/.davairc/user_config.ini. If the user, for whatever reason, needs to modify the packages linked in the experiments on a regular basis, it is possible to specify that in the same user config file. An example of these variables is available in the DAVAI-env repository, under templates/user_config.ini.

    Experts thresholds

    Experts are the tools developed to parse the outputs of the tasks and compare them to a reference. Each expert has its expertise field: norms, Jo-tables, etc...

    See "Information on experts" in the left tab of Ciboulaï for information about the tunable thresholds of the various experts (e.g. the allowed error on Jo). Then, set the corresponding attributes in the experts definitions in the concerned tasks.

    Again, if you need to modify these, please ***explain and describe it in the integration request***.

    Davai ecosystem

    (Image: )

    Adding an ALARO+SURFEX test to DAVAÏ

    This section describes what was done to add an ALARO+SURFEX test to DAVAÏ. It may serve as a recipe for adding other tests.

    First, create a new DAVAÏ experiment with davai-new_xp. Also, run the following commands to set up the environment:

      source ~acrd/.vortexrc/profile
      cp ~rm9/.vortexrc/uget-client-defaults.ini .vortexrc/

    Next, initialize the hack directory for your user:

      uget.py bootstrap_hack ${USER}

    Note: directories in this document are usually relative to the experiment's base directory.

    Creating the test itself

    Modifications to the file conf/davai_nrv.ini:

    • add a section for the model:

      [alaro]
      model = alaro
      LAM = True
      input_shelf = &{input_shelf_lam}
      fcst_term = 12
      expertise_term = 12
      coupling_frequency = 3

    • add a section for the forecast itself:

      [forecast-alaro1_sfx-chmh2325]
      alaro_version = 1_sfx
      rundate = date(2021022000)

    • since we're using a new domain (chmh2325), add a section for this domain:

      [chmh2325]
      geometry = geometry(chmh2325)
      timestep = 90

    Modifications to the file tasks/forecasts/standalone_forecasts.py:

    The easiest is to copy and modify an existing forecast. In this case, we added the following to the alaro family:

      Family(tag='chmh2325', ticket=t, nodes=[
          StandaloneAlaroForecast(tag='forecast-alaro1_sfx-chmh2325', ticket=t, **kw),
          ], **kw),

    Modifications to the file tasks/forecasts/standalone/alaro.py:

    We need to add the fetching of the SURFEX initial file, the SURFEX namelist and the PGD file. This was done using the AROME forecast task as an example. The fetching of these files is put under a condition self.conf.alaro_version == '1_sfx', to make sure the files are only fetched when running ALARO with SURFEX.

    Setup a custom catalogue

    Find out which catalogue is used by your test. In the case of ALARO, the file alaro.py uses self.conf.davaienv, which is set in davai_nrv.ini to be cy49.davai_specials.02@davai. A local copy of this catalogue is created with

      uget.py hack env cy49.davai_specials.02@davai into cy49.davai_specials.02@${USER}

    This will create a local catalogue file under ~/.vortexrc/hack/uget/${USER}/env/. Make sure to modify the value in davai_nrv.ini to use your local copy.

    Adding constant files such as namelist files, PGD file, etc.

    Constant files go into the ~/.vortexrc/hack/ directory. To add or modify a namelist file, first find out which namelists are used by your test, in the local catalogue file you copied before (cy49.davai_specials.02@${USER}). In the case of the ALARO forecast, the namelists used are 49.arpifs@davai.02.nam.tgz@davai, so a local copy is taken of these with

      uget.py hack data 49.arpifs@davai.02.nam.tgz@davai into 49.arpifs@davai.02.nam.tgz@${USER}

    This creates a tgz file under ~/.vortexrc/hack/uget/${USER}/data/, which then needs to be unpacked. Make sure to modify the catalogue file to use your local copy of the namelists.

    You can then modify existing namelist files, or - as was the case for the ALARO+SURFEX test - add new namelist files. The location and name of the required namelists can be found in the forecast script (alaro.py). The namelists created were model/alaro/fcst.alaro1_sfx.nam and model/alaro/fcst.alaro1_sfx.nam_surfex. Make sure to use the following variables/values:

      CNMEXP=__CEXP__,
      NPROC=__NBPROC__,
      NSTRIN=__NBPROC__,
      NSTROUT=__NBPROC__,
      CSTOP=__FCSTOP__,
      TSTEP=__TIMESTEP__,

    since these are substituted by DAVAÏ.

    The name of the PGD file needs to be set in the catalogue cy49.davai_specials.02@${USER} by adding the line

      PGD_FA_CHMH2325=uget:pgd.chmh2325-02km33.fa.01@${USER}

    The PGD file itself should be put just under ~/.vortexrc/hack/uget/${USER}/data/.

    Setting non-constant files such as initial conditions, LBC files, etc.

    These files should go into the shelf (since in mixed tests they could be generated by an earlier task). The name of the shelf can be found in davai_nrv.ini, and turns out to be input_shelf_LAM = shelf_cy48t1_LAM.01@davai, so we create a directory /scratch/${USER}/mtool/cache/vortex/davai/shelves/shelf_cy48t1_LAM.01@davai/. The following files are put in this directory:

      20210220T0000A/surfan/analysis.surf-surfex.chmh2325-02km33.fa
      20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0000:00.fa
      20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0003:00.fa
      20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0006:00.fa
      20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0009:00.fa
      20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0012:00.fa

    To know how to name these files, look at similar data from other experiments, or just run your experiment and see where it crashes.

    Defining a new geometry

    Since the ALARO+SURFEX test runs on a new domain, this domain should also be registered. This is done in a file ~/.vortexrc/geometries.ini, following the examples from the file vortex/conf/geometries.ini.
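
    A minimal shell sketch for populating this shelf (the paths are from the listing above; the source files, marked <...>, are placeholders for whatever data you have at hand):

      S=/scratch/${USER}/mtool/cache/vortex/davai/shelves/shelf_cy48t1_LAM.01@davai
      mkdir -p $S/20210220T0000A/surfan $S/20210220T0000A/coupling
      cp <surfex_analysis>  $S/20210220T0000A/surfan/analysis.surf-surfex.chmh2325-02km33.fa
      cp <coupling_files>   $S/20210220T0000A/coupling/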

    User Documentation Uenv/Uget

    Alexandre Mary et al.

    The uenv/uget tool developed in Vortex is the counterpart of genv/gget (MF/GCO op team), but user-oriented (hence the u instead of the g) and shareable with other users. In Vortex experiments, it enables getting resources the same way as within an official genv, but from your own catalogs or your colleagues'.

    This tool hence enables working in research mode the same way as with official op resources, changing just the uenv in the Vortex experiment.

    How does it work? Quite simply, but a few explanations are necessary to use it properly.

    Tutorial

    The following example shows how to clone an Arome-France genv catalog and modify its components piece by piece.

    Before first use

    • load Genv/Gget (in your profile, if not already done):

      export PATH=/home/mf/dp/marp/gco/public/bin:$PATH

    • load Vortex (in your profile, if not already done):

      module load python
      VORTEX_INSTALL_DIR=/home/mf/dp/marp/verolive/vortex/vortex
      PYTHONPATH=$VORTEX_INSTALL_DIR/src:$PYTHONPATH
      PYTHONPATH=$VORTEX_INSTALL_DIR/site:$PYTHONPATH
      PYTHONPATH=$VORTEX_INSTALL_DIR/project:$PYTHONPATH
      export PYTHONPATH
      export PATH=$VORTEX_INSTALL_DIR/bin:$PATH

    • initialise the directories:

      uget.py bootstrap_hack [user]

      Example:
      uget.py bootstrap_hack mary

    Clone an existing env (catalog)

    Syntax:

      uget.py hack genv [source_cycle] into [target_cycle]@[user]

    Example:

      uget.py hack genv al42_arome-op2.30 into al42_arome-dble.02@mary

    This "hack" command creates a copy of the genv catalog (genv al42_arome-op2.30) under $HOME/.vortexrc/hack/uget/mary/env/al42_arome-dble.02.

    The initial env can be a GCO official one (genv), or a user one (uenv); in the latter case the syntax is slightly different, in order to specify whom we want to get the env from:

      uget.py hack env al42_arome-dble.01@faure into al42_arome-dble.02@mary

    This is a sort of convention within uget: genv blabla stands for a GCO env named blabla, whereas env blabla@someone points to a user-owned env named blabla hosted at someone.

    Modification of the cloned env

    For each element in the cloned catalog (obtained at the previous step), we can modify the resource (i.e. what lies to the right of the =), by pointing at an element in the "GCO official store", or at a colleague's, or at one of your own (under $HOME/.vortexrc/hack/uget/$USER/data/). Such elements can be mixed within a uenv.

    Example: as user mary, the element CLIM_FRANMG_01KM30=clim_franmg.01km30.03 (at GCO) can be replaced by CLIM_FRANMG_01KM30=uget:mes_clims@mary (uget: to identify it as an element managed by uget, and @mary because the element is in my store), or by CLIM_FRANMG_01KM30=uget:mes_clims.04@faure (@faure because it is an element stored at user faure).

    Beware of a little difference with genv for namelist packages: these packages being stored as tar/tgz, you need to specify the extension explicitly in the uenv.

    Example (note the extension .tgz):

      NAMELIST_AROME=uget:my_namelist_package.tgz@mary

    However, uget will be able to get either the directory $HOME/.vortexrc/hack/uget/mary/data/my_namelist_package or the tgz $HOME/.vortexrc/hack/uget/mary/data/my_namelist_package.tgz (actually, the most recently modified of the two).

    We can also add new resources to a uenv. The keys (left of the =) just need to follow a precise Vortex syntax; for instance, for a clim file: CLIM_[AREA]_[RESOLUTION].

    To modify an existing element (e.g. a namelist package), we get it via uget:

      uget.py hack gdata [element] into [cloned_element]@[user]

    Example:

      uget.py hack gdata al42_arome-op2.15.nam into al42_arome-op2.16.nam.tgz@mary

    or:

      uget.py hack data al42_arome-dble.01.nam.tgz@faure into al42_arome-op2.16.nam.tgz@mary

    The convention used here by uget is consistent with the one used before: gdata blabla stands for a GCO element named blabla, whereas data blabla@someone points to a data stored via uget/uenv, named blabla and hosted at someone.

    Historisation

    It is a good practice to first check that there are no inconsistencies within your uenv, i.e. that all elements listed there actually exist, either locally or on the archive, be it at your user, at someone else's or at GCO:

      uget.py check env al42_arome-dble.02@mary

    Then, to freeze a version and share it with other users, you need to push the uenv to the archive:

      uget.py push env al42_arome-dble.02@mary

    This command (which can take a little while) archives the uenv AND the elements locally present. It is then strongly recommended to clean them locally, to avoid modifying something that has been archived and ending up with inconsistencies between local and archived versions:

      uget.py clean_hack

    Caution: all uenv and elements having been pushed are then deleted locally from the env and data directories!

    We may also want to push just one element, to make it available before a whole uenv is ready. In this case:

      uget.py push data [element]@[user]

    Example:

      uget.py push data al42_arome-op2.16.nam.tgz@mary

    Explore

    (new in Vortex-1.2.3)

    It is possible to list all existing uenv from a user:

      uget.py list env from faure

    or the elements, potentially with a filter (based on a regular expression):

      uget.py list data from faure matching .nam

    From one uenv to another

    (new in Vortex-1.2.3)

    It is also possible to compare 2 uenv:

      uget.py diff env [cycle_to_compare] wrt env [reference_cycle]

    Example:

      uget.py diff env al42_arome-dble.02@mary wrt genv al42_arome-op2.30

    or:

      uget.py diff env al42_arome-dble.02@mary wrt env al42_arome-dble.01@faure

    If your uenv has been generated using uget.py hack, a comment has been left at the head of the file to trace its history, which enables using the alias parent, as in:

      uget.py diff env [my_uenv] wrt parent

    Export catalog

    (new in Vortex-1.2.3)

    The command uget.py export lists the elements updated with regard to a reference, giving their path on the archive.

    Example:

      uget.py export env al42_arome-dble.02@mary [wrt genv al42_arome-op2.30]

    Remarks and good habits

    • clim files (and other monthly resources) are expanded: the key CLIM_BLABLA=uget:my_clims@mary aims at all files named my_clims.m?? located in the data directory;

    • even if technically feasible, it is strongly advised to forbid yourself from modifying an element once it has been pushed: with the cache system, you may face weird fetches in experiments...

    • as a corollary, it is a good habit to number each uenv and each resource, and to increment them push after push;

    • on hendrix, the uenv and resources are archived under a spread tree of directories. This is both for performance matters and an incitation to systematically use uget.py to get these resources;

    • before an element (uenv or resource) is pushed, it is accessible via uget.py or a vortex experiment only for its owner, not for other users;

    • if large resources are to be pushed, one can advantageously log on a transfer node before the push;

    • comments are accepted in a uenv, starting with #.

    More advanced functionalities

    Default user

    It can become cumbersome to repeat the user (e.g. @mary) in command lines. Hence a default user can be defined:

      uget.py set location mary

    The default user can be retrieved with uget.py info. Once set, one can simply type:

      uget.py check env al42_arome-dble.02

    or:

      uget.py diff env al42_arome-dble.02 wrt env al42_arome-dble.01@faure

    (instead of uget.py check env al42_arome-dble.02@mary and uget.py diff env al42_arome-dble.02@mary wrt env al42_arome-dble.01@faure)

    However, the user remains required inside the uenv catalog file, and in the experiments.

    Using uget.py in console mode

    In the previous examples, we used uget.py via independent, successive shell commands. Another mode exists: the console mode. Just type uget.py (without arguments) to open the interactive mode (to quit, use Ctrl-D); you can then type commands as follows:

      $ uget.py
      Vortex 1.2.2 loaded ( Monday 05. March 2018, at 14:07:13 )
      (Cmd) list env from mary

      al42_test.02
      [...]
      cy43t2_clim-op1.05
      cy43t2_climARP.01

      (Cmd) pull env cy43t2_clim-op1.05@mary

      ARPREANALYSIS_SURFGEOPOTENTIAL=uget:Arp-reanalysis.surfgeopotential.bin@mary
      [...]
      UGAMP_OZONE=uget:UGAMP.ozone.ascii@mary
      USNAVY_SOIL_CLIM=uget:US-Navy.soil_clim.bin@mary

      (Cmd) check env cy43t2_clim-op1.05@mary

      Hack : MISSING (/home/meunierlf/.vortexrc/hack/uget/mary/env/cy43t2_clim-op1.05)
      Archive: Ok (meunierlf@hendrix.meteo.fr:~mary/uget/env/f/cy43t2_clim-op1.05)

      Digging into this particular Uenv:
        [...]
        ARPREANALYSIS_SURFGEOPOTENTIAL: Archive (uget:Arp-reanalysis.surfgeopotential.bin@mary)
        [...]
        UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m01@mary for month: 01)
        UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m02@mary for month: 02)
        [...]
        UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m12@mary for month: 12)
        USNAVY_SOIL_CLIM : Archive (uget:US-Navy.soil_clim.bin@mary)

      (Cmd) [Ctrl-D]
      Vortex 1.2.2 completed ( Monday 05. March 2018, at 14:09:06 )
      $

    This mode can be interesting because:

    • for systems on which loading Vortex is slow, you load it only once at the beginning instead of at each command;

    • there is auto-completion (Tab);

    • within one session, you can navigate through the commands history.

    Cheatsheet

    Environment

    • the recommended version of Vortex on belenos/taranis is: /home/mf/dp/marp/verolive/vortex/vortex-olive

    • uget.py is: /home/mf/dp/marp/verolive/vortex/vortex-olive/bin/uget.py

    • Genv/Gget are to be found in: /home/mf/dp/marp/gco/public/bin

    • the workdir of uget is: $HOME/.vortexrc/hack/uget/$USER/

      • env/ : uenv catalogs
      • data/ : resources

    Commands

    • clone a GCO env:

      uget.py hack genv al42_arome-op2.30 into al42_arome-dble.02@mary

    • clone a uenv:

      uget.py hack env al42_arome-dble.01@faure into al42_arome-dble.02@mary

    • display a uenv (equivalent of command genv):

      uget.py pull env cy43t2_clim-op1.05@mary

    • download a uget resource into the current working directory (equivalent of command gget):

      uget.py pull data al42_arome-op2.15.nam.tgz@mary

    • clone a GCO resource:

      uget.py hack gdata al42_arome-op2.15.nam into al42_arome-op2.16.nam.tgz@mary

    • clone a uget resource:

      uget.py hack data al42_arome-dble.01.nam.tgz@faure into al42_arome-op2.16.nam.tgz@mary

    • check that all elements exist, either locally or on the archive:

      uget.py check env al42_arome-dble.02@mary

    • archive a uenv (including the implied resources):

      uget.py push env al42_arome-dble.02@mary

    • archive a resource:

      uget.py push data al42_arome-op2.16.nam.tgz@mary

    • clean the workdir (hack) w.r.t. what has been archived:

      uget.py clean_hack

    • list uenv and resources from a user:

      uget.py list env from faure
      uget.py list data from faure

    • compare 2 uenv:

      uget.py diff env al42_arome-dble.02@mary wrt genv al42_arome-op2.30

    • list the modified resources and their path:

      uget.py export env al42_arome-dble.02@mary wrt genv al42_arome-op2.30

    • I am lost:

      uget.py help
      uget.py help [hack|pull|check|push|diff|list|...]
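
    To recap the tutorial, a hypothetical end-to-end session (all commands appear individually above):

      uget.py hack genv al42_arome-op2.30 into al42_arome-dble.02@mary          # clone a GCO env
      uget.py hack gdata al42_arome-op2.15.nam into al42_arome-op2.16.nam.tgz@mary
      # edit the cloned catalog and the unpacked namelists under ~/.vortexrc/hack/uget/mary/
      uget.py check env al42_arome-dble.02@mary    # verify consistency
      uget.py push env al42_arome-dble.02@mary     # archive the uenv and its local elements
      uget.py clean_hack                           # then clean the local copies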
This case is detailed in more details in section parallel-branches ","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"To follow more easily what version of the tests should be used in particular for contributions to the IAL codes, it is proposed to adopt a nomenclature that maps the IAL releases and integration/merge branches, but replacing \"CY\" by \"DV\" (for DAVAÏ), as illustrated ","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"(Image: )","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"With this principle, the version of the tests to be used by default would be, for example:","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"for a development based on CY49 rightarrow DV49\nfor an integration branch towards CY49T1, named dev_CY49_to_T1 rightarrow dev_DV49_to_T1","category":"page"},{"location":"versioningtest/#add-modify-tests","page":"Versioning tests","title":"Adding or updating tests independently from the code","text":"","category":"section"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"The tests modifications which are not intrinsically linked with a contribution (adding tests or modifying a test to modify its behaviour) can be done at any moment, in a development branch of the tests repository. However, in order not to disturb the users and integrators, they should be merged into the next official version of tests (i.e. the version used for contributions and integrations to IAL) [only between a declaration of an IAL release and a call for contribution]{.underline}.","category":"page"},{"location":"versioningtest/#parallel-branches","page":"Versioning tests","title":"Evolution of the tests w.r.t. Integration of an IAL release","text":"","category":"section"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"In the context of integration of an IAL release, it is suitable that the tests change as little as possible during the successive integration of contributions. Therefore we will set a version of the tests at the beginning of integration, and only adapt it for the contributions that require an update of the tests.\nLet's consider the process of integration of contribution branches on top of CY49 to build a CY49T1. For that purpose we would have set a reference experiment on CY49, hereafter named x0, generated with an identified version of the tests. That version of the tests would then be updated with x0 as reference experiment (ref_xpid), and tagged DV49. All contributions to CY49T1 would then be required to be tested with this version DV49 (hence against reference experiment x0). Cf. section set a ref tests version for more details about setting up a reference tests version and experiment.","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"Suppose then that we have 5 of these contribution branches based on CY49, and an integration branch named dev_CY49_toT1. These 4 contributions may have different levels of reproducibility: they may conserve the results or not; they may require resources/tests adaptations (e.g. namelist updates, ...) or not, in which case they come with tests adaptations in an associated tests branch. Cf. 
the table","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"branch results test XPID resources tested with integration XPID\nb1 = x1 = DV49 xi1\nb2 neq x2 = DV49 xi2\nb3 = x3 neq rightarrow DV49_b3 xi3\nb4 neq x4 neq rightarrow DV49_b4 xi4","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"In parallel to the integration branch dev_CY49_toT1, we start a tests branch from DV49 to collect the necessary adaptations of the tests, similarly named dev_DV49_toT1, which will be used to validate the integration branch, and updated as required along the integration.","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"In case some intermediate versions of the integration branch are tagged and some branches are based/rebased on these tagged versions, we could also tag accordingly the tests branch if necessary. The reference experiment for the integration branch is at any moment, by default, the experiment which tested the formerly integrated branch, e.g. the reference for xi2 is xi1. However, that may not be true in some cases, some of these being potentially more tricky to validate, as will be shown in the following example.","category":"page"},{"location":"buildoptions/#Build-options","page":"Build options","title":"Build options","text":"","category":"section"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"The choice of a build system is corollary to the versioning of the tests. However, at time of writing, only gmkpack is available within DAVAÏ.","category":"page"},{"location":"buildoptions/#Build-with-gmkpack","page":"Build options","title":"Build with gmkpack","text":"","category":"section"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"In the [gmkpack] section of config file conf/davai_.ini:","category":"page"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"to make a main pack, instead of an incremental pack\n hookrightarrow set packtype = main\nto set the list of compilation flavours to build (a.k.a. compiler label/flag)\n hookrightarrow use compilation_flavours\n ! if you modify this, you potentially need to modify the compilation_flavour accordingly in the \"families\" sections that define it, as well as the programs_by_flavour that define the executables to be built for specific flavours","category":"page"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"In the [gitref2pack] section:","category":"page"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"to use a different $ROOTPACK (i.e. 
a different source of ancestor packs, for incremental packs)\n hookrightarrow use rootpack\n (preferable to modifying the environment variable, so that it remains specific to that experiment only)\nto avoid cleaning all .o and .a when (re-)populating the pack:\n hookrightarrow set cleanpack = False\n","category":"page"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"In the [pack2bin] section:","category":"page"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"to make the pack2bin task crash more quickly after a compilation/link error, or not crash at all\n hookrightarrow set fatal_build_failure =\n__finally__ Rightarrow crash after trying to compile and build all executables\n__any__ Rightarrow crash if compilation fails, or right after the first executable fails to link\n__none__ Rightarrow never == ignore failed builds\nto re-generate ics_ files before building\n hookrightarrow set regenerate_ics = True\nto (re-)compile local sources with gmkpack’s option Ofrt=2 (i.e. -O0 -check bounds):\n hookrightarrow set Ofrt = 2\nto use more/fewer threads for compiling (independent) source files in parallel:\n hookrightarrow use threads\nto change the list of executables to be built, by default or depending on the compilation flavour:\n hookrightarrow use default_programs and programs_by_flavour","category":"page"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"Also, any gmkpack native variables can be set in the .bash_profile, e.g. ROOTPACK, HOMEPACK, etc... Some might be overwritten by the config, e.g. if you set rootpack in the config file.","category":"page"},{"location":"buildoptions/#Build-with-[cmake/makeup/ecbuild...]","page":"Build options","title":"Build with [cmake/makeup/ecbuild...]","text":"","category":"section"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"Not implemented yet.","category":"page"},{"location":"organization/#Organisation-of-an-experiment","page":"Organization of experiment","title":"Organisation of an experiment","text":"","category":"section"},{"location":"organization/","page":"Organization of experiment","title":"Organization of experiment","text":"The davai-new_xp command-line prepares a \"testing experiment\" directory, named uniquely after an incremental number, the platform and the user.","category":"page"},{"location":"organization/","page":"Organization of experiment","title":"Organization of experiment","text":"This testing experiment will consist of:","category":"page"},{"location":"organization/","page":"Organization of experiment","title":"Organization of experiment","text":"conf/davai_nrv.ini : config file, containing parameters such as the git reference to test, davai options, historisations of input resources to use, tunings of tests (e.g. 
the input obs files to take into account) and profiles of jobs\nconf/.yaml : contains an ordered and categorised list of jobs to be run in the requested usecase.\nconf/sources.yaml : information about the sources to be tested, in terms of Git or bundle\ntasks/ : templates of single tasks and jobs\nlinks to the python packages that are used by the scripts (vortex, epygram, ial_build, ial_expertise)\na logs directory/link will appear after the first execution, containing log files of each job.\nDAVAI-tests : a clone of the DAVAI-tests repository, checked out on the requested version of the tests, to which the tasks/ and conf/ point","category":"page"},{"location":"otheroptions/#Other-options","page":"Other options","title":"Other options","text":"","category":"section"},{"location":"otheroptions/","page":"Other options","title":"Other options","text":"In the [DEFAULT] section, a few other general options can be set to tune the behaviour of the experiment:","category":"page"},{"location":"otheroptions/","page":"Other options","title":"Other options","text":"expertise_fatal_exceptions to raise/ignore errors that could occur in the expertise subsequent to the tests\ndrhook_profiling to activate DrHook profiling or not\nignore_reference to force ignoring reference outputs (and so deactivate comparison)\narchive_as_ref to archive the outputs (only when saving a reference)","category":"page"},{"location":"inputdata/#Input-data","page":"Input data","title":"Input data","text":"","category":"section"},{"location":"inputdata/","page":"Input data","title":"Input data","text":"DAVAÏ gets its input data through 2 providers:","category":"page"},{"location":"inputdata/","page":"Input data","title":"Input data","text":"\"shelves\" (pseudo Vortex experiments) for the data supposed to flow in a real case (e.g. initial conditions files, observations files, etc...), where this data is statically stored, usually in a cache to fetch it faster\n\"uget\" for the static data (namelists, climatological files, parameter files...), catalogued in ***uenv*** files.","category":"page"},{"location":"inputdata/","page":"Input data","title":"Input data","text":"These shelves and uenv catalogs (cf. the uget/uenv help documentation for the use of this tool) can be modified in the [DEFAULT] section of the config file.","category":"page"},{"location":"inputdata/","page":"Input data","title":"Input data","text":"In case your contribution needs a modification in these, ***don't forget to describe these changes in the integration request***.","category":"page"},{"location":"rerun/#Re-run-a-test","page":"Rerun tests","title":"Re-run a test","text":"","category":"section"},{"location":"rerun/","page":"Rerun tests","title":"Rerun tests","text":"The Davai command davai-run_tests launches all the jobs listed in conf/.yaml, sequentially and independently (i.e. without waiting for the jobs to finish). The command can also be used in complementary ways:","category":"page"},{"location":"rerun/","page":"Rerun tests","title":"Rerun tests","text":"to list the jobs that would be launched by the command, according to the conf/.yaml config file: davai-run_tests -l\nto run a single job:\ndavai-run_tests ","category":"page"},{"location":"rerun/","page":"Rerun tests","title":"Rerun tests","text":"Some tests are gathered together within a single job. There are 2 reasons for that: if they are an instance of a loop (e.g. same test on different obstypes, or different geometries), or if they have a flow-dependency with an upstream/downstream test (e.g. 
bator > screening > minimization).","category":"page"},{"location":"rerun/","page":"Rerun tests","title":"Rerun tests","text":"When a test fails within a job and the user wants to re-run it without re-running the other tests from the same job, it is possible to do so by deactivating them[1]:","category":"page"},{"location":"rerun/","page":"Rerun tests","title":"Rerun tests","text":"loops: to deactivate members of a loop: open config file conf/davai_.ini, and in the section corresponding to the job or family, the loops can be found as list(...), e.g. obstypes, rundates or geometries. Items in the list can be reduced to only the required ones (note that if only one item remains, one needs to keep a final \",\" within the parentheses).\ndependency: open the driver file corresponding to the job name in the tasks/ directory, and comment out (#) the unrequired tasks or families of nodes, leaving only the required task.","category":"page"},{"location":"rerun/","page":"Rerun tests","title":"Rerun tests","text":"[1]: including upstream tasks that produce flow-resources for the targeted test, as long as the resources stay in cache","category":"page"},{"location":"ciboulai/#Monitor-and-inspect-results","page":"Monitoring results","title":"Monitor and inspect results","text":"","category":"section"},{"location":"ciboulai/","page":"Monitoring results","title":"Monitoring results","text":"Monitor the execution of the jobs with the scheduler (with SLURM: squeue -u )\nCheck the tests results summary on the Ciboulaï dashboard, whose URL is prompted at the end of the tests launch, or visible in the config file:\nopen the Ciboulaï dashboard in a web browser:\nTo guide you in the navigation in Ciboulai, cf. Ciboulai \nTo get the paths to a job output or abort directory: button [+] then Context.\nif the dashboard is not accessible, a command-line version of the status is possible; in the XP directory, run:\ndavai-xp_status\nto see the status summary of each job. The detailed status and expertise of tests are also available as json files on the Vortex cache: belenos:/scratch/mtool//cache/vortex/davai///summaries_stack/ or\ndavai-xp_status -t \nTo get the paths to a job output or abort directory: davai-xp_status -t then open the itself file and look in the Context section.\nIf everything is OK (green) at the end of executions, your branch is validated!\nIf not, cf. Section advanced topics to re-compile a code modification and re-run tests.","category":"page"},{"location":"build/#(Re-)Build-of-executables","page":"Build","title":"(Re-)Build of executables","text":"","category":"section"},{"location":"build/#Build-with-gmkpack","page":"Build","title":"Build with gmkpack","text":"","category":"section"},{"location":"build/","page":"Build","title":"Build","text":"The tasks in the build job are respectively in charge of:","category":"page"},{"location":"build/","page":"Build","title":"Build","text":"gitref2pack : fetch/pull the sources from the requested Git reference and set up one or several incremental gmkpack pack(s) – depending on compilation_flavours as set in config. The packs are then populated with the set of modifications, from the latest official tag to the contents of your branch (including non-committed modifications).\npack2bin : compile sources and link the necessary executables (i.e. those used in the tests), for each pack flavour.","category":"page"},{"location":"build/","page":"Build","title":"Build","text":"In case the compilation fails, or if you need to (re-)modify the sources for any reason (e.g. 
fix an issue):","category":"page"},{"location":"build/","page":"Build","title":"Build","text":"implement corrections in the branch (commited or not)\nre-run the build: \ndavai-build -e\n(option -e or –preexisting_pack assumes the pack already preexists; this is a protection against accidental overwrite of an existing pack. The option can also be passed to davai-run_xp)\nand then if build successful davai-run_tests","category":"page"},{"location":"build/#Build-with-[cmake/ecbuild...]","page":"Build","title":"Build with [cmake/ecbuild...]","text":"","category":"section"},{"location":"build/","page":"Build","title":"Build","text":"Not implemented yet.","category":"page"},{"location":"investigatingproblems/#Investigating-a-problem","page":"Investigate Problems","title":"Investigating a problem","text":"","category":"section"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"The usecase parameter of an experiment (to be set in the davai-new_xp command) determines the span of tests to be generated and run. Several usecases have been (or will be) implemented with various purposes:","category":"page"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"NRV (default): Non-Regression Validation, minimal set of tests that any contribution must pass.\nELP: Exploration and Localization of Problems, extended set of isolated components, to help localizing an issue\nPC: [not implemented yet] set of toy tests ported on workstation; the compilation with GNU (usually less permissive than vendor compilers) enables to raise issues that might not have been seen with NRV/ELP tests.","category":"page"},{"location":"investigatingproblems/#Smaller-tests-for-smaller-problems","page":"Investigate Problems","title":"Smaller tests for smaller problems","text":"","category":"section"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"To investigate a non-reproducibility or crash issue, the ELP usecase of Davaï can help localizing its context, with a set of more elementary tests, that run smaller parts of code.","category":"page"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"To switch to this mode:","category":"page"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"create a new experiment with the same arguments but -u ELP and go in it\nfor a faster build (no re-compilation), edit config file conf/davai_elp.ini and in section [gitref2pack], set cleanpack = False\ndavai-run_xp","category":"page"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"Instead of 50^+ tests, the ELP mode will provide hundreds of more elementary and focused tests. For instance, if you had a problem in the 4DVar minimization, you can run the 3 observation operators tests, observation by observation, and/or a screening, and/or a 3DVar or 4DVar single-obs minimization, in order to understand if the problem is in a specific observation operator (which obs type ?), in its direct, TL or AD version, or in the Variational algorithm, or in the preceding screening, and so on...","category":"page"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"The user may want, at some point, to run only a subset of this very large set of tests. 
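As a minimal sketch of this switch to ELP (reusing the example branch and tests version from this guide; adjust both to your own case):
davai-new_xp mary_CY48T1_cleaning -u ELP -v DV48T1
# in the prompted experiment directory: optionally set cleanpack = False
# in section [gitref2pack] of conf/davai_elp.ini, to avoid re-compilation
davai-run_xp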
To run only such a subset, simply open the conf/ELP.yaml and comment (#) the launch of the various jobs. To reduce the number of tests that are looped within a job, e.g. the loop on observation types within the *__obstype jobs: open config file conf/davai_elp.ini, look for the section named after the job name, and keep only the required obstype(s) in the list.","category":"page"},{"location":"create_branch/#Create-your-branch,-containing-your-modifications","page":"Creating a branch","title":"Create your branch, containing your modifications","text":"","category":"section"},{"location":"create_branch/","page":"Creating a branch","title":"Creating a branch","text":"To use DAVAÏ to test your contribution to the next development release, you need to have your code in a Git branch starting from the latest official release (e.g. CY48T1 tag for contributions to 48T2, or CY49 tag for contributions to 49T1).","category":"page"},{"location":"create_branch/","page":"Creating a branch","title":"Creating a branch","text":"In the following, the example is a contribution to 48T2:","category":"page"},{"location":"create_branch/","page":"Creating a branch","title":"Creating a branch","text":"In your repository (e.g. ~/repositories/arpifs – make sure it is clean with git status beforehand), create your branch:\ngit checkout -b []\ntip: Example\ngit checkout -b mary_CY48T1_cleaning CY48T1\nnote: Note\nIt is strongly recommended to have explicit branch names with regards to their origin and their owner, hence the legacy branch naming syntax __\nImplement your developments in the branch. It is recommended to find a compromise between a whole development in only one commit, and a large number of very small commits (e.g. one per changed file). If you then face compilation or runtime issues, and only if you haven't pushed the branch yet, you can amend[1] the latest commit to avoid a whole series of commits just for debugging purposes.\nnote: Note\nDAVAÏ is currently able to include non-committed changes in the compilation and testing. However, in the next version based on bundle, this might not be possible anymore. ","category":"page"},{"location":"create_branch/","page":"Creating a branch","title":"Creating a branch","text":"[1]: git commit --amend","category":"page"},{"location":"ciboulai_navigation/#ciboulai","page":"Ciboulaï navigation","title":"Navigation in Ciboulaï","text":"","category":"section"},{"location":"ciboulai_navigation/","page":"Ciboulaï navigation","title":"Ciboulaï navigation","text":"On the main page, the numbers in the columns to the right indicate the numbers of jobs whose results are, respectively:\nbit-reproducible or within acceptable numerical error;\nnumerically different;\njobs that have crashed before the end;\nthe experts were not able to conclude on the test results, to be checked manually;\nthese tests have no expected result to be checked: they are assumed OK since they did not crash.\nWhen you get to an experiment page, you can find a few key features of the experiment in the header. The [+] close to the XPID (experiment ID) will provide more. The other [+] to the left of the uenv's provide inner details of each one. The summary of test results is also visible on the top right.\nEach task is summarized: its Pending/Crashed/Ended status, and in case of Ended, the comparison status. 
As a first glance, a main metric is shown, assumed to be the most meaningful for this test.\nThe ‘drHook rel diff’ and ‘rss rel diff’ columns show the relative difference in respectively: the elapse time of the execution, and the memory consumption (RSS) compared to the reference. \nwarning: Warning\nSo far the drHook figures have proven to be too volatile from an execution to another, to be meaningful. Don't pay too much attention, for now. Similarly, the RSS figures remain to be investigated (relevance and availability).\nA filter is available to show only a subset of tasks.\nWhen you click on the [+] of the more column, the detailed expertise is displayed:\nthe itself tab will show info from each Expert about the task independently from reference\nthe continuity tab will show the compared results from each Expert against the same task from reference experiment\nthe consistency tab will show the compared results from each Expert against a different reference task from the same experiment, when meaningful (very few cases, so far)\nClick on each Expert to unroll results.\nAt the experiment level as well as at the task level, a little pen symbol enables you to annotate it. That might be used for instance to justify numerical differences.","category":"page"},{"location":"parallelprofiling/#Parallel-profiling","page":"Parallel profiling","title":"Parallel profiling","text":"","category":"section"},{"location":"parallelprofiling/","page":"Parallel profiling","title":"Parallel profiling","text":"Each job has a section in the config file, in which one can tune the requested profile parameters to the jobs scheduler:","category":"page"},{"location":"parallelprofiling/","page":"Parallel profiling","title":"Parallel profiling","text":"time : elapse time\nntasks : number of MPI tasks per node\nnnodes : number of nodes\nopenmp : number of OpenMP threads\npartition : category of nodes\nmem : memory (helps to prevent OOM)","category":"page"},{"location":"parallelprofiling/","page":"Parallel profiling","title":"Parallel profiling","text":"The total number of MPI tasks is therefore nnodes \\times ntasks, and is automatically replaced in namelist","category":"page"},{"location":"setting_reference/#set_ref_version","page":"Setting reference exp","title":"Setting up a reference version","text":"","category":"section"},{"location":"setting_reference/","page":"Setting reference exp","title":"Setting reference exp","text":"! WORK IN PROGRESS...","category":"page"},{"location":"setting_reference/","page":"Setting reference exp","title":"Setting reference exp","text":"We describe here how to set up a reference version of the tests and an associated reference experiment, typically for developments based on a given IAL release to be validated against this release.","category":"page"},{"location":"setting_reference/","page":"Setting reference exp","title":"Setting reference exp","text":"For the example, let's consider CY49 and setting up a DV49 version for it, including its reference experiment, to validate the contributions to CY49T1.","category":"page"},{"location":"setting_reference/","page":"Setting reference exp","title":"Setting reference exp","text":"Choose an initial version of the tests you want to be used. It may probably not be the previous reference one (e.g. 
DV48T2 or dev_CY48T1_toT2), as we may often want to modify or add tests in between cycles.\nIn your development DAVAI-tests repository, make a branch starting from this version and check it out, e.g.:\n git checkout -b on_49 [\n hookrightarrow dv-xxxx-machine@user\n Note:\nAs the ELP usecase encompasses NRV, reference experiments should use this usecase, so that they can serve as a reference for both usecases.\n--origin to clone that repo in which you created the branch\nin the config of the experiment, set archive_as_ref = True: the experiment will serve as a reference, so we want to archive its results\nin the config of the experiment, set ignore_reference = True: if you are confident enough with the tests version, it may not be useful/relevant to compare the experiment to any reference one.\nRun the experiment\nUpdate the DAVAI-tests repository:\ndefault config file for this machine (conf/.ini): set the name of this experiment as ref_xpid (and potentially the chosen usecase, in lower case, as ref_vconf)\nREADME.md: the table of correspondence of branches and tests\nThen commit, tag (DV49) and push:\n git commit -am \"Set reference experiment for as \"\n git tag DV49\n git push \n git push DV49","category":"page"},{"location":"setting_reference/","page":"Setting reference exp","title":"Setting reference exp","text":"This way the tests experiment generated using davai-new_xp -v DV49 will use this version and be compared to this reference experiment.","category":"page"},{"location":"#DAVAÏ-User-Guide","page":"Home","title":"DAVAÏ User Guide","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"DAVAÏ embeds the whole workflow from the source code to the green/red light validation status: fetching sources from Git, building executables, running test cases, analysing the results and displaying them on a dashboard.","category":"page"},{"location":"","page":"Home","title":"Home","text":"For now, the only build system embedded is gmkpack, but we expect other systems to be plugged in when required. The second limitation of this version is that the starting point is still an IAL[1] Git reference only. The next version of the DAVAÏ system will include multi-project/repository fetching, using the bundle concept as a starting point.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The dimensioning of tests (grid sizes, number of observations, parallelization...) is done in order to reconcile representativity and execution speed. Therefore, in the general usecases, the tests are supposed to run on HPC. A dedicated usecase will target smaller configurations to run on a workstation (not available yet). An accessible source code forge is set up within the ACCORD consortium to host the IAL central repository, on which updates and releases are published, and where integration requests will be posted, reviewed and monitored.","category":"page"},{"location":"","page":"Home","title":"Home","text":"By the way: DAVAI stands for \"Device Aiming at the VAlidation of IAL\"","category":"page"},{"location":"","page":"Home","title":"Home","text":"[1]: IAL = IFS-Arpege-LAM","category":"page"},{"location":"mtool/#Running-jobs-on-HPC-:-MTOOL","page":"MTOOL","title":"Running jobs on HPC : MTOOL","text":"","category":"section"},{"location":"mtool/","page":"MTOOL","title":"MTOOL","text":"On HPCs, the compute nodes are \"expensive\" and so we try as much as possible to save the elapsed time spent on compute nodes for actual computations, i.e. execution of the executable. 
Therefore in DAVAÏ, the generation of the scripts uses the MTOOL filter to replicate and cut a job script into several steps:","category":"page"},{"location":"mtool/","page":"MTOOL","title":"MTOOL","text":"on transfer nodes, fetch the resources, either locally on the file system(s) or using FTP connections to outer machines\non compute nodes, execute the AlgoComponent(s)\non transfer nodes, dispatch the produced output\nfinal step to clean the temporary environment created for the jobs","category":"page"},{"location":"mtool/","page":"MTOOL","title":"MTOOL","text":"In addition to this separation and chaining these 4 steps, MTOOL initially sets up a clean environment with a temporary unique execution directory. It also collects log files of the script's execution, and in the case of a failure (missing input resources, execution aborted), it takes a screenshot of the execution directory. Therefore for each job, one will find :","category":"page"},{"location":"mtool/","page":"MTOOL","title":"MTOOL","text":"a depot directory in which to find the actual 4 scripts and their log files\nan abort directory, in which to find the exact copy of the execution directory when the execution failed","category":"page"},{"location":"mtool/","page":"MTOOL","title":"MTOOL","text":"These directories are registered by the DAVAÏ expertise and are displayed in the Context item of the expertise for each task in Ciboulaï.","category":"page"},{"location":"runtests/#Run-tests","page":"Running tests","title":"Run tests","text":"","category":"section"},{"location":"runtests/","page":"Running tests","title":"Running tests","text":"Create your experiment, specifying which version of the tests you want to use:\ndavai-new_xp -v \ntip: Example\ndavai-new_xp mary_CY48T1_cleaning -v DV48T1\nAn experiment with a unique experiment ID is created and prompted as output of the command, together with its path.\nTo know what is the version to be used for a given development: See here\nSee davai-new_xp -h for more options on this command\nSee Appendix for a more comprehensive approach to tests versioning.\nIf the version you are requesting is not known, you may need to specify the DAVAI-tests origin repository from which to clone/fetch it, using argument –origin \nGo to the (prompted) experiment directory.\nIf you want to set some options differently from the default, open file conf/davai_nrv.ini and tune the parameters in the [DEFAULT] section. The usual tunable parameters are detailed in Section options \nLaunch the build and tests:\ndavai-run_xp\nAfter initializing the Ciboulaï page for the experiment, the command will first run the build of the branch and wait for the executables (that step may take a while, depending on the scope of your modifications, especially with several compilation flavours). Once build completed, it will then launch the tests (through scheduler on HPC).","category":"page"},{"location":"runtests/#To-test-a-bundle,-i.e.-a-combination-of-modifications-in-IAL-and-other-repos","page":"Running tests","title":"To test a bundle, i.e. a combination of modifications in IAL and other repos","text":"","category":"section"},{"location":"runtests/","page":"Running tests","title":"Running tests","text":"Use command davai-new_xp_from_bundle. 
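As a hedged end-to-end sketch of this workflow (the branch and tests version reuse this guide's example; the experiment directory placeholder stands for whatever path the command prompts):
davai-new_xp mary_CY48T1_cleaning -v DV48T1   # or davai-new_xp_from_bundle, auto-documented with -h
cd <experiment_directory>                     # path prompted by the command above
davai-run_xp                                  # build the executables, then launch the tests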
The rest is identical.","category":"page"}] +[{"location":"jobs_tasks/#Jobs-and-tasks","page":"Jobs & Tasks","title":"Jobs & tasks","text":"","category":"section"},{"location":"jobs_tasks/","page":"Jobs & Tasks","title":"Jobs & Tasks","text":"A Task is generally understood as the triplet: ","category":"page"},{"location":"jobs_tasks/","page":"Jobs & Tasks","title":"Jobs & Tasks","text":"fetch input resources, \nrun an executable, \ndispatch the produced output. ","category":"page"},{"location":"jobs_tasks/","page":"Jobs & Tasks","title":"Jobs & Tasks","text":"In a Vortex script, the tasks are written in Python, using classes and functionalities of the Vortex Python packages. In particular, running an executable is wrapped in what is called an AlgoComponent. In DAVAÏ, we add a second AlgoComponent right after the nominal one in (2) to \"expertise\" the outputs and compare to a reference.","category":"page"},{"location":"jobs_tasks/","page":"Jobs & Tasks","title":"Jobs & Tasks","text":"The tasks templates are stored in the tasks/ directory, and all inherit from the abstract class: vortex.layout.nodes.Task. A Test is a Task that includes an expertise to a reference. A Job is understood as a series of one or several tasks, executed sequentially within one \"job submission\" to a job scheduler.","category":"page"},{"location":"jobs_tasks/","page":"Jobs & Tasks","title":"Jobs & Tasks","text":"The jobs templates are stored in the tasks/ directory, and are defined as a function setup that return a Driver object, which itself contains a series of Task(s) and Family(ies).","category":"page"},{"location":"jobs_tasks/","page":"Jobs & Tasks","title":"Jobs & Tasks","text":"In DAVAÏ, the idea is to have the tasks in independent jobs as far as possible, except: for flow-dependent tasks, or for loops on clones of a task with a varying parameter.","category":"page"},{"location":"continuousintegration/#Steps-and-updates-in-the-Continuous-Integration-process","page":"Continuous integration","title":"Steps and updates in the Continuous Integration process","text":"","category":"section"},{"location":"continuousintegration/","page":"Continuous integration","title":"Continuous integration","text":"Integration of b1 :\nReference: x0 is the default reference xp in dev_DV49_toT1 config file\nTests: b1 did not require to adapt the tests rightarrow we can test with branch dev_DV49_toT1 unchanged (and still equal to DV49)\ndavai-new_xp dev_CY49_toT1 -v dev_DV49_toT1\n hookrightarrow xi1 == x1 == x0\nIntegration of b2 :\nReference: xi1 should normally be the reference xp, but since its results are bit-identical to x0 as opposed to x2, it is more relevant to compare to x2, to check that the merge of b1 and b2 still give the same results as b2\nTests: b2 did not require to adapt the tests rightarrow tests branch DV49_toT1 unchanged\ndavai-new_xp dev_CY49_toT1 -v DV49_toT1\n and set ref_xpid = x2\n hookrightarrow xi2 == x2\nthen ref_xpid should be set to xi2 in branch DV49_toT1\nIntegration of b3 :\nReference: b3 does not change the results, so reference experiment is as expected by default xi2\nTests: b3 requires tests adaptations (DV49_b3) rightarrow update dev_DV49_toT1 by merging DV49_b3 in\ndavai-new_xp dev_CY49_toT1 -v DV49_toT1\n hookrightarrow xi3 == xi2\nIntegration of b4 : (where it becomes more or less tricky)\nReference: b4 changes the results, but the results of xi3 (current default reference for integration branch) are also changed from x0 (since b2) rightarrow the reference experiment becomes less obvious 
!\n The choice of the reference should be made depending on the width of impact on both sides:\nif there is more differences in the results between dev_CY49_toT1 and CY49 than between b4 and CY49:\n rightarrow xi3 should be taken as reference, and the differences finely compared to those shown in x4\nif there is more differences in the results between b4 and CY49 than between dev_CY49_toT1 and CY49:\n rightarrow x4 should be taken as reference, and the differences finely compared to those shown in xi3’, where xi3’ is a \"witness\" experiment comparing the integration branch after integration of b3 (commit ) to CY49 (experiment x0):\n davai-new_xp -v dev_DV49_toT1\n and set ref_xpid = x0\n hookrightarrow xi3’\nThis is still OK if the tests affected by dev_CY49_toT1 (via b2) and the tests affected by b4 are not the same subset, or if at least if the affected fields are not the same. If they are (e.g. numerical differences that propagate prognostically through the model), the conclusion becomes much more difficult !!!\n In this case, we do not really have explicit recommendation; the integrators should double-check the result of the merge with the author of the contribution b4. Any idea welcome to sort it out.\nTests: b4 requires tests adaptations (DV49_b4) rightarrow update dev_DV49_toT1 by merging in DV49_b4 in\ndavai-new_xp dev_CY49_toT1 -v dev_DV49_toT1\n and set ref_xpid = xi3|xi4\n hookrightarrow xi4","category":"page"},{"location":"atos_bologna/#Complementary-information-about-DAVAI-setup-on-aaabacad-HPC-machine-@-ECMWF/Bologna","page":"Atos","title":"Complementary information about DAVAI setup on aa|ab|ac|ad HPC machine @ ECMWF/Bologna","text":"","category":"section"},{"location":"atos_bologna/#Quick-install","page":"Atos","title":"Quick install","text":"","category":"section"},{"location":"atos_bologna/","page":"Atos","title":"Atos","text":"module use ~acrd/public/modulefiles\nmodule load davai","category":"page"},{"location":"atos_bologna/","page":"Atos","title":"Atos","text":"I advise to put the first line in your .bash_profile, and execute the second only when needed.","category":"page"},{"location":"atos_bologna/","page":"Atos","title":"Atos","text":"","category":"page"},{"location":"atos_bologna/#Pre-requirements-(if-not-already-set-up)","page":"Atos","title":"Pre-requirements (if not already set up)","text":"","category":"section"},{"location":"atos_bologna/","page":"Atos","title":"Atos","text":"Load the required environment for GMKPACK compilation and DAVAI execution. It is REQUIRED that you add the following to your .bash_profile:\nmodule purge\nmodule use /home/acrd/public/modulefiles\nmodule load intel/2021.4.0 prgenv/intel python3/3.10.10-01 ecmwf-toolbox/2021.08.3.0 davai/master\n\n# Gmkpack is installed at Ryad El Khatib's\nHOMEREK=~rme\nexport GMKROOT=$HOMEREK/public/bin/gmkpack\n# use efficiently filesystems\nexport ROOTPACK=$PERM/rootpack\nexport HOMEPACK=$PERM/pack\nexport GMKTMP=$TMPDIR/gmktmp\n# default compilation options\nexport GMKFILE=OMPIIFC2104.AA\nexport GMK_OPT=x\n# update paths\nexport PATH=$GMKROOT/util:$PATH\nexport MANPATH=$MANPATH:$GMKROOT/mani\nEnsure permissions to accord group (e.g. 
with chgrp) for support, something like:\nfor d in $HOME/davai $HOME/pack $SCRATCH/mtool/depot\ndo\nmkdir -p $d\nchgrp -R accord $d\nchmod g+s $d\ndone","category":"page"},{"location":"tips/#First-tips","page":"First tips","title":"First tips","text":"","category":"section"},{"location":"tips/","page":"First tips","title":"First tips","text":"All Davai commands are prefixed davai-* and can be listed with davai-help. All commands are auto-documented with option -h.\nIf the pack preparation or compilation fails, for whatever reason, the build step prints an error message and the davai-run_xp command stops before running the tests. 
You can find the output of the pack preparation or compilation in logs/ directory, as any other test log file.\nA very common error is when the pack already exists; if you actually want to overwrite the contents of the pack (e.g. because you just fixed a code issue in the branch), you may need option -e/–preexisting_pack:\ndavai-run_xp -e\nor\ndavai-build -e\nOtherwise, if the pack preexists independently for valid reasons, you will need to move/delete the existing pack, or rename your branch.\nThe tests are organised as tasks and jobs:\na task consists in fetching input resources, running an executable, analyzing its outputs to the Ciboulai dashboard and dispatching (archiving) them: 1 test = 1 task\na job consists in a sequential driver of one or several task(s): either a flow sequence (i.e. outputs of task N is an input of task N+1) or family sequence (e.g. run independently an IFS and an Arpege forecast)\nTo fix a piece of code, the best is to modify the code in your Git repo, then re-run\ndavai-run_xp -e\n(or davai-build -e and then davai-run_tests).\nYou don't necessarily need to commit the change rightaway, the non-committed changes are exported from Git to the pack. Don't forget to commit eventually though, before issuing pull request.\nTo re-run one job only after re-compilation, type\ndavai-run_tests -l\nto list the jobs and then\ndavai-run_tests \nnote: Example\ndavai-run_tests forecasts.standalone_forecasts\nThe syntax category.job indicates that the job to be run is the Driver in ./tasks/category/job.py\nTo re-run a single test within a job, e.g. the IFS forecast in forecasts/standalone_forecasts.py: edit this file, comment the other Family(s) or Task(s) (nodes) therein, and re-run the job as indicated above.\nEventually, after code modifications and fixing particular tests, you should re-run the whole set of tests, to make sure your fix does not break any other test.","category":"page"},{"location":"belenos/#Complementary-information-about-DAVAI-setup-on-belenos-HPC-machine-@-MF","page":"Belenos","title":"Complementary information about DAVAI setup on belenos HPC machine @ MF","text":"","category":"section"},{"location":"belenos/#Quick-install","page":"Belenos","title":"Quick install","text":"","category":"section"},{"location":"belenos/","page":"Belenos","title":"Belenos","text":"module use ~mary/public/modulefiles\nmodule load davai","category":"page"},{"location":"belenos/","page":"Belenos","title":"Belenos","text":"I advise to put the first line in your .bash_profile, and execute the second only when needed.","category":"page"},{"location":"belenos/","page":"Belenos","title":"Belenos","text":"","category":"page"},{"location":"belenos/#Pre-requirements-(if-not-already-set-up)","page":"Belenos","title":"Pre-requirements (if not already set up)","text":"","category":"section"},{"location":"belenos/","page":"Belenos","title":"Belenos","text":"Load modules (conveniently in your .bash_profile):\nmodule load python/3.7.6\nmodule load git\nConfigure your ~/.netrc file for FTP communications with archive machine hendrix, if not already done:\nmachine hendrix login password \nmachine hendrix.meteo.fr login password \n(! 
don't forget to chmod 600 ~/.netrc if you are creating this file !)\nTo be updated when you change your password\nConfigure ftserv (information is stored encrypted in ~/.ftuas):\nftmotpasse -h hendrix -u \n(and give your actual password)\nAND\nftmotpasse -h hendrix.meteo.fr -u \n(same)\nTo be updated when you change your password\nConfigure Git proxy certificate info :\ngit config --global http.sslVerify false\nEnsure SSH connectivity between compute and transfer nodes, if not already done:\ncat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys","category":"page"},{"location":"belenos/","page":"Belenos","title":"Belenos","text":"","category":"page"},{"location":"belenos/#And-maybe","page":"Belenos","title":"And maybe","text":"","category":"section"},{"location":"belenos/","page":"Belenos","title":"Belenos","text":"with a version of tests prior to DV48T1_op0.04-1, you may also need epygram:","category":"page"},{"location":"belenos/","page":"Belenos","title":"Belenos","text":"~mary/public/EPyGrAM/stable/_install/setup_epygram.py -v\nthen to avoid a matplotlib/display issue, set:\nbackend : Agg in ~/.config/matplotlib/matplotlibrc","category":"page"},{"location":"userconfiguration/#User-configuration","page":"User configuration","title":"User configuration","text":"","category":"section"},{"location":"userconfiguration/","page":"User configuration","title":"User configuration","text":"Some more general parameters are configurable, such as the default directory in which the experiments are stored, or the directory in which the logs of jobs are put. This can be set in ~/.davairc/user_config.ini. If the user, for whatever reason, needs to modify the packages linked in the experiments on a regular basis, it is possible to specify that in the same user config file. An example of these variables is available in the DAVAI-env repository, under templates/user_config.ini.","category":"page"},{"location":"expertthresholds/#Experts-thresholds","page":"Expert thresholds","title":"Experts thresholds","text":"","category":"section"},{"location":"expertthresholds/","page":"Expert thresholds","title":"Expert thresholds","text":"Experts are the tools developed to parse outputs of the tasks and compare them to a reference. Each expert has its expertise field: norms, Jo-tables, etc...","category":"page"},{"location":"expertthresholds/","page":"Expert thresholds","title":"Expert thresholds","text":"See Information on experts in the left tab of Ciboulaï to get information about the tunable thresholds of the various experts (e.g. the allowed error on Jo). Then, set according attributes in the experts definitions in the concerned tasks.","category":"page"},{"location":"expertthresholds/","page":"Expert thresholds","title":"Expert thresholds","text":"Again, if you need to modify these, please ***explain and describe in the integration request***.","category":"page"},{"location":"internalorganization/#Davai-ecosystem","page":"Internal organization","title":"Davai ecosystem","text":"","category":"section"},{"location":"internalorganization/","page":"Internal organization","title":"Internal organization","text":"(Image: )","category":"page"},{"location":"exercise4developers/#Adding-an-ALAROSURFEX-test-to-DAVAÏ","page":"Exercises","title":"Adding an ALARO+SURFEX test to DAVAÏ","text":"","category":"section"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"This section describes what was done to add an ALARO+SURFEX test to DAVAÏ. 
It may serve as a recipe to add other tests.","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"First, create a new DAVAÏ experiment with davai-new_xp. Also, run the following commands to set the environment:","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"source ~acrd/.vortexrc/profile\ncp ~rm9/.vortexrc/uget-client-defaults.ini .vortexrc/","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"Next, initialize the hack directory for your user:","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"uget.py bootstrap_hack ${USER}","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"Note: directories in this document are usually relative to the experiment's base directory.","category":"page"},{"location":"exercise4developers/#Creating-the-test-itself","page":"Exercises","title":"Creating the test itself","text":"","category":"section"},{"location":"exercise4developers/#Modifications-to-the-file-conf/davai_nrv.ini:","page":"Exercises","title":"Modifications to the file conf/davai_nrv.ini:","text":"","category":"section"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"add a section for the model:\n[alaro]\nmodel = alaro\nLAM = True\ninput_shelf = &{input_shelf_lam}\nfcst_term = 12\nexpertise_term = 12\ncoupling_frequency = 3\nadd a section for the forecast itself:\n[forecast-alaro1_sfx-chmh2325]\nalaro_version = 1_sfx\nrundate = date(2021022000)\nsince we're using a new domain (chmh2325), add a section for this domain:\n[chmh2325]\ngeometry = geometry(chmh2325)\ntimestep = 90","category":"page"},{"location":"exercise4developers/#Modifications-to-the-file-tasks/forecasts/standalone_forecasts.py:","page":"Exercises","title":"Modifications to the file tasks/forecasts/standalone_forecasts.py:","text":"","category":"section"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"The easiest is to copy and modify an existing forecast. In this case, we added the following to the alaro family:","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":" Family(tag='chmh2325', ticket=t, nodes=[\n StandaloneAlaroForecast(tag='forecast-alaro1_sfx-chmh2325', ticket=t, **kw),\n ], **kw),","category":"page"},{"location":"exercise4developers/#Modifications-to-the-file-tasks/forecasts/standalone/alaro.py:","page":"Exercises","title":"Modifications to the file tasks/forecasts/standalone/alaro.py:","text":"","category":"section"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"We need to add the fetching of the SURFEX initial file, the SURFEX namelist and the PGD file. This was done using the AROME forecast task as an example. The fetching of these files is put under a condition self.conf.alaro_version == '1_sfx', to make sure the files are only fetched when running ALARO with SURFEX.","category":"page"},{"location":"exercise4developers/#Setup-a-custom-catalogue","page":"Exercises","title":"Setup a custom catalogue","text":"","category":"section"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"Find out which catalogue is used by your test. In the case of ALARO, the file alaro.py uses self.conf.davaienv, which is set in davai_nrv.ini to be cy49.davai_specials.02@davai. 
A local copy of this catalogue is created with","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"uget.py hack env cy49.davai_specials.02@davai into cy49.davai_specials.02@${USER}","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"This will create a local catalogue file under ~/.vortexrc/hack/uget/${USER}/env/. Make sure to modify the value in davai_nrv.ini to use your local copy.","category":"page"},{"location":"exercise4developers/#Adding-constant-files-such-as-namelist-files,-PGD-file,-etc.","page":"Exercises","title":"Adding constant files such as namelist files, PGD file, etc.","text":"","category":"section"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"Constant files go into the ~/.vortexrc/hack/ directory. To add/modify a namelist file, first find out which namelists are used by your test in the local catalogue file you copied before (cy49.davai_specials.02@${USER}). In the case of the ALARO forecast, the namelists that are used are 49.arpifs@davai.02.nam.tgz@davai, so a local copy is taken of these with","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"uget.py hack data 49.arpifs@davai.02.nam.tgz@davai into 49.arpifs@davai.02.nam.tgz@${USER}","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"This creates a tgz file under ~/.vortexrc/hack/uget/${USER}/data/, which then needs to be unpacked. Make sure to modify the catalogue file to use your local copy of the namelists.","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"You can then modify existing namelist files, or - as was the case for the ALARO+SURFEX test - add new namelist files. The location and name of the required namelists can be found in the forecast script (alaro.py). The namelists created were model/alaro/fcst.alaro1_sfx.nam and model/alaro/fcst.alaro1_sfx.nam_surfex. Make sure to use the following variables/values:","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"CNMEXP=__CEXP__,\nNPROC=__NBPROC__,\nNSTRIN=__NBPROC__,\nNSTROUT=__NBPROC__,\nCSTOP=__FCSTOP__,\nTSTEP=__TIMESTEP__,","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"since these are substituted by DAVAÏ.","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"The name of the PGD file needs to be set in the catalogue cy49.davai_specials.02@${USER} by adding the line","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"PGD_FA_CHMH2325=uget:pgd.chmh2325-02km33.fa.01@${USER}","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"The PGD file itself should be put just under ~/.vortexrc/hack/uget/${USER}/data/.","category":"page"},{"location":"exercise4developers/#Setting-non-constant-files-such-as-initial-conditions,-LBC-files,-etc.","page":"Exercises","title":"Setting non-constant files such as initial conditions, LBC files, etc.","text":"","category":"section"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"These files should go into the shelf (since in mixed tests they could be generated by an earlier task). 
The name of the shelf can be found in davai_nrv.ini, and turns out to be input_shelf_LAM = shelf_cy48t1_LAM.01@davai, so we'll create a directory /scratch/${USER}/mtool/cache/vortex/davai/shelves/shelf_cy48t1_LAM.01@davai/. Following files are put in this directory:","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"20210220T0000A/surfan/analysis.surf-surfex.chmh2325-02km33.fa\n20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0003:00.fa\n20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0009:00.fa\n20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0000:00.fa\n20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0006:00.fa\n20210220T0000A/coupling/cpl.arpege-4dvarfr-prod.chmh2325-02km33+0012:00.fa","category":"page"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"To know how to name these files, look at similar data for other experiments, or just your experiment and see where it crashes.","category":"page"},{"location":"exercise4developers/#Defining-a-new-geometry","page":"Exercises","title":"Defining a new geometry","text":"","category":"section"},{"location":"exercise4developers/","page":"Exercises","title":"Exercises","text":"Since the ALARO+SURFEX test runs on a new domain, this domain should also be registred. This is done in a file ~/.vortexrc/geometries.ini, following the examples from the file vortex/conf/geometries.ini.","category":"page"},{"location":"uget/uget/#User-Documentation-Uenv/Uget","page":"uget","title":"User Documentation Uenv/Uget","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"Alexandre Mary et al.","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"The uenv/uget tool developped in Vortex is the counterpart of genv/gget (MF/GCO op team), but user-oriented (hence the u instead of g) and shareable with other users. It enables, in Vortex experiments, to get resources the same way as within an official genv but from your own catalogs or your colleagues.","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"This tool hence enables to work in research mode the same way as with official op resources, changing just the uenv in the Vortex experiment.","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"How does it work ? 
Quite simple, but a few explanations are necessary to use it properly.","category":"page"},{"location":"uget/uget/#Tutorial","page":"uget","title":"Tutorial","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"The following example shows how to clone an Arome-France genv catalog and modify its components piece by piece.","category":"page"},{"location":"uget/uget/#Before-first-use","page":"uget","title":"Before first use","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"load Genv/Gget (in your profile, if not already done):\nexport PATH=/home/mf/dp/marp/gco/public/bin:$PATH\nload Vortex (in your profile, if not already done):\nmodule load python\nVORTEX_INSTALL_DIR=/home/mf/dp/marp/verolive/vortex/vortex\nPYTHONPATH=$VORTEX_INSTALL_DIR/src:$PYTHONPATH\nPYTHONPATH=$VORTEX_INSTALL_DIR/site:$PYTHONPATH\nPYTHONPATH=$VORTEX_INSTALL_DIR/project:$PYTHONPATH\nexport PYTHONPATH\nexport PATH=$VORTEX_INSTALL_DIR/bin:$PATH\ninitialisation of directories:\nuget.py bootstrap_hack [user]\ntip: Example\nuget.py bootstrap_hack mary","category":"page"},{"location":"uget/uget/#Clone-an-existing-env-(catalog)-{#uget-clone-existant-en}","page":"uget","title":"Clone an existing env (catalog) {#uget-clone-existant-en}","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"Syntax:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py hack genv [source_cycle] into [target_cycle]@[user]","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"tip: Example\nuget.py hack genv al42_arome-op2.30 into al42_arome-dble.02@mary","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"This \"hack\" command creates a copy of the genv catalog (genv al42_arome-op2.30), under: $HOME/.vortexrc/hack/uget/mary/env/al42_arome-dble.02.","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"The initial env can be a GCO official one (genv), or a user one (uenv); in which case the syntax is slightly different, in order to specify whom we want to get the env from:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py hack env al42_arome-dble.01@faure into al42_arome-dble.02@mary","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"It is a sort of convention within uget: genv blabla stands for a GCO env named blabla, whereas env blabla@someone points to a user-owned env named blabla hosted at someone.","category":"page"},{"location":"uget/uget/#Modification-of-the-cloned-env","page":"uget","title":"Modification of the cloned env","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"For each element in the cloned catalog (obtained at the step uget-clone-existant-en above), we can modify the resource (i.e. 
to the right of the =), by pointing at an element in the \"GCO official store\", or at a colleague's, or at one of your own (under $HOME/.vortexrc/hack/uget/$USER/data/).","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"We can mix such elements within a uenv.","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"tip: Example\nIf I am user mary, the element: CLIM_FRANMG_01KM30=clim_franmg.01km30.03 (at GCO) can be replaced by: CLIM_FRANMG_01KM30=uget:mes_clims@mary (uget: to identify that it is an element managed by uget, and @mary because the element is in my store) or: CLIM_FRANMG_01KM30=uget:mes_clims.04@faure (@faure because it is an element stored at user faure)","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"Beware of a little difference with genv for namelist packages: these packages being stored as tar/tgz, you need to specify the extension explicitly in the uenv.","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"tip: Example\nnote the extension .tgz: NAMELIST_AROME=uget:my_namelist_package.tgz@mary","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"However, uget will be able to get either the directory $HOME/.vortexrc/hack/uget/mary/data/my_namelist_package or the tgz $HOME/.vortexrc/hack/uget/mary/data/my_namelist_package.tgz (actually, the most recently modified of the two).","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"We can also add new resources in a uenv. The keys (left of the =) just need to follow a precise Vortex syntax; for instance for a clim file: CLIM_[AREA]_[RESOLUTION].","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"To modify an existing element (e.g. a namelist package), we get it via uget:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py hack gdata [element] into [clone_element]@[user]","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"tip: Example\nuget.py hack gdata al42_arome-op2.15.nam into al42_arome-op2.16.nam.tgz@mary or: uget.py hack data al42_arome-dble.01.nam.tgz@faure into al42_arome-op2.16.nam.tgz@mary","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"The convention used here by uget is consistent with the one used before: gdata blabla stands for a GCO element named blabla, whereas data blabla@someone points to an element stored via uget/uenv, named blabla and hosted at someone.","category":"page"},{"location":"uget/uget/#Historisation","page":"uget","title":"Historisation","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"It is good practice to first check that there are no inconsistencies within your uenv, i.e. check that all elements listed there actually exist, either locally or on the archive, and either under your user, someone else's or GCO's:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py check env al42_arome-dble.02@mary","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"Then, to freeze a version and share it with other users, you need to push the uenv to the archive:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py push env al42_arome-dble.02@mary","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"The command (which can take a little while) archives the uenv AND the locally present elements onto the archive. 
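Putting this freeze sequence together (a minimal sketch; the catalog name is this page's running example):
uget.py check env al42_arome-dble.02@mary   # first verify that every listed element exists
uget.py push env al42_arome-dble.02@mary    # then archive the uenv and its local elements
After the push, the local hack copies have served their purpose.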
It is then strongly recommended to clean them locally, to avoid modifying something that has been archived and ending up with inconsistencies between local and archived versions:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py clean_hack","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"Caution: all uenv and elements having been pushed are then deleted locally from the env and data directories!","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"We may also want to push just one element to make it available before a whole uenv is ready.","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"In this case:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py push data [element]@[user]","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"tip: Example\nuget.py push data al42_arome-op2.16.nam.tgz@mary","category":"page"},{"location":"uget/uget/#Explore","page":"uget","title":"Explore","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"(new in Vortex-1.2.3)","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"It is possible to list all existing uenv from a user:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py list env from faure","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"or the elements, potentially with a filter (based on a regular expression):","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py list data from faure matching .nam","category":"page"},{"location":"uget/uget/#From-one-uenv-to-another","page":"uget","title":"From one uenv to another","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"(new in Vortex-1.2.3)","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"It is also possible to compare 2 uenv:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py diff env [cycle_to_compare] wrt env [cycle_reference]","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"Ex:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py diff env al42_arome-dble.02@mary wrt genv al42_arome-op2.30","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"or:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py diff env al42_arome-dble.02@mary wrt env al42_arome-dble.01@faure","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"If your uenv has been generated using uget.py hack, a comment has been left at the head of the file to trace its history, which enables you to use the alias parent as:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py diff env [my_uenv] wrt parent","category":"page"},{"location":"uget/uget/#Export-catalog","page":"uget","title":"Export catalog","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"(new in Vortex-1.2.3)","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"The command uget.py export enables listing the elements updated with regard to a reference, giving their paths on the 
archive.","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"Ex:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py export env al42_arome-dble.02@mary [wrt genv al42_arome-op2.30]","category":"page"},{"location":"uget/uget/#Remarks-and-good-habits","page":"uget","title":"Remarks and good habits","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"clim files (and other monthly resources) are expanded: the key CLIM_BLABLA=uget:my_clims@mary points at all files matching my_clims.m?? located in the data directory;\neven if it is technically feasible, it is strongly advised never to modify an element once pushed: with the cache system, you may otherwise face weird fetches in experiments...\nas a corollary, it is a good habit to number each uenv and each resource, and increment them push after push\non hendrix, the uenv and resources are archived under a spread tree of directories. This is both for performance reasons and an incentive to systematically use uget.py to get these resources\nbefore an element (uenv or resource) is pushed, it is accessible via uget.py or a Vortex experiment only to its owner, not to other users.\nif large resources are to be pushed, one can advantageously log on to a transfer node before the push\ncomments are accepted in a uenv, starting with #.","category":"page"},{"location":"uget/uget/#More-advanced-functionalities","page":"uget","title":"More advanced functionalities","text":"","category":"section"},{"location":"uget/uget/#Default-user","page":"uget","title":"Default user","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"It can become cumbersome to repeat the user (e.g. @mary) in command lines. Hence a default user can be defined:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py set location mary","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"The default user can be retrieved with uget.py info. Once set, one can simply type:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py check env al42_arome-dble.02","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"or:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"uget.py diff env al42_arome-dble.02 wrt env al42_arome-dble.01@faure","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"(instead of uget.py check env al42_arome-dble.02@mary and uget.py diff env al42_arome-dble.02@mary wrt env al42_arome-dble.01@faure)","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"However, the user is required inside the uenv file catalog, and in the experiments.","category":"page"},{"location":"uget/uget/#Using-*uget.py*-in-console-mode","page":"uget","title":"Using uget.py in console mode","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"In previous examples, we used uget.py via independent successive shell commands. Another mode exists: the console mode. To use it, just type uget.py (without arguments) to open the interactive mode (to quit, use Ctrl-D); you can then type commands as follows:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"$ uget.py\nVortex 1.2.2 loaded ( Monday 05. 
March 2018, at 14:07:13 )\n(Cmd) list env from mary\n\nal42_test.02\n[...]\ncy43t2_clim-op1.05\ncy43t2_climARP.01\n\n(Cmd) pull env cy43t2_clim-op1.05@mary\n\nARPREANALYSIS_SURFGEOPOTENTIAL=uget:Arp-reanalysis.surfgeopotential.bin@mary\n[...]\nUGAMP_OZONE=uget:UGAMP.ozone.ascii@mary\nUSNAVY_SOIL_CLIM=uget:US-Navy.soil_clim.bin@mary\n\n(Cmd) check env cy43t2_clim-op1.05@mary\n\nHack : MISSING (/home/meunierlf/.vortexrc/hack/uget/mary/env/cy43t2_clim-op1.05)\nArchive: Ok (meunierlf@hendrix.meteo.fr:~mary/uget/env/f/cy43t2_clim-op1.05)\n\nDigging into this particular Uenv:\n [...]\n ARPREANALYSIS_SURFGEOPOTENTIAL: Archive (uget:Arp-reanalysis.surfgeopotential.bin@mary)\n [...]\n UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m01@mary for month: 01)\n UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m02@mary for month: 02)\n UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m03@mary for month: 03)\n UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m04@mary for month: 04)\n UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m05@mary for month: 05)\n UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m06@mary for month: 06)\n UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m07@mary for month: 07)\n UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m08@mary for month: 08)\n UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m09@mary for month: 09)\n UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m10@mary for month: 10)\n UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m11@mary for month: 11)\n UGAMP_OZONE : Archive (uget:UGAMP.ozone.ascii.m12@mary for month: 12)\n USNAVY_SOIL_CLIM : Archive (uget:US-Navy.soil_clim.bin@mary)\n\n(Cmd) [Ctrl-D]\nVortex 1.2.2 completed ( Monday 05. March 2018, at 14:09:06 )\n$","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"This mode can be useful:","category":"page"},{"location":"uget/uget/","page":"uget","title":"uget","text":"For systems on which loading Vortex is slow, you load it only once at the beginning instead of at each command.\nThere is auto-completion (Tab).\nWithin one session, you can navigate through the command history.","category":"page"},{"location":"uget/uget/#Cheatsheet","page":"uget","title":"Cheatsheet","text":"","category":"section"},{"location":"uget/uget/#Environnement","page":"uget","title":"Environment","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"Recommended version of Vortex on belenos/taranis is: /home/mf/dp/marp/verolive/vortex/vortex-olive\nuget.py is: /home/mf/dp/marp/verolive/vortex/vortex-olive/bin/uget.py\nGenv/Gget are to be found in: /home/mf/dp/marp/gco/public/bin\nThe workdir of uget is: $HOME/.vortexrc/hack/uget/$USER/\nenv/ : uenv catalogs\ndata/ : resources","category":"page"},{"location":"uget/uget/#Commands","page":"uget","title":"Commands","text":"","category":"section"},{"location":"uget/uget/","page":"uget","title":"uget","text":"clone a GCO env:\nuget.py hack genv al42_arome-op2.30 into al42_arome-dble.02@mary\nclone a uenv:\nuget.py hack env al42_arome-dble.01@faure into al42_arome-dble.02@mary\ndisplay a uenv (equiv. command genv):\nuget.py pull env cy43t2_clim-op1.05@mary\ndownload a uget resource in CWD (equiv. 
command gget):\nuget.py pull data al42_arome-op2.15.nam.tgz@mary\nclone a GCO resource:\nuget.py hack gdata al42_arome-op2.15.nam into al42_arome-op2.16.nam.tgz@mary\nclone a uget resource:\nuget.py hack data al42_arome-dble.01.nam.tgz@faure into al42_arome-op2.16.nam.tgz@mary\ncheck that all elements exist, either locally or on archive:\nuget.py check env al42_arome-dble.02@mary\narchive a uenv (incl. resources implied):\nuget.py push env al42_arome-dble.02@mary\narchive a resource:\nuget.py push data al42_arome-op2.16.nam.tgz@mary\nclean the workdir (hack) wrt what has been archived:\nuget.py clean_hack\nlist uenv and resources from a user:\nuget.py list env from faure\nuget.py list data from faure\ncompare 2 uenv:\nuget.py diff env al42_arome-dble.02@mary wrt genv al42_arome-op2.30\nlist the resources modified and their path:\nuget.py export env al42_arome-dble.02@mary wrt genv al42_arome-op2.30\nI am lost:\nuget.py help\nand:\nuget.py help [hack|pull|check|push|diff|list|...]","category":"page"},{"location":"versioningtest/#Versioning-of-tests","page":"Versioning tests","title":"Versioning of tests","text":"","category":"section"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"The following reasons may require updating the tests:","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"Update the input resources or a task template script, to change the purpose or context of a test (e.g. new observations or modified namelists, to bring the tests closer to operational configurations, ...). This usually comes with a change in the targeted test outputs.\nAdd new tests.\nUpdate the resources to adapt to a code change (e.g. a new radiative coefficient file format, or a mandatory namelist change), with or without change in the results.","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"Therefore it is necessary to track the evolutions of the tests properly, and version them clearly, so that it is clear which fixed or evolving version is to be used in any context. Hence the existence of the DAVAI-tests repository. The first two kinds of evolutions (1. and 2.) are not necessarily linked to a contribution of code to the IAL repository, and therefore can be implemented at any moment in a dedicated branch of the tests repository (DAVAI-tests). This is described in more detail in section add-modify-tests","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"The latter is on the other hand attached to a contribution, and will need to be provided together with the contribution for an integration, and be integrated itself in an evolving tests branch dedicated to testing successive steps of the IAL integration branch. 
This case is detailed in more detail in section parallel-branches ","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"To follow more easily which version of the tests should be used, in particular for contributions to the IAL codes, it is proposed to adopt a nomenclature that maps the IAL releases and integration/merge branches, but replacing \"CY\" by \"DV\" (for DAVAÏ), as illustrated below ","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"(Image: )","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"With this principle, the version of the tests to be used by default would be, for example:","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"for a development based on CY49 → DV49\nfor an integration branch towards CY49T1, named dev_CY49_to_T1 → dev_DV49_to_T1","category":"page"},{"location":"versioningtest/#add-modify-tests","page":"Versioning tests","title":"Adding or updating tests independently from the code","text":"","category":"section"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"The test modifications which are not intrinsically linked to a contribution (adding tests or modifying a test to change its behaviour) can be done at any moment, in a development branch of the tests repository. However, in order not to disturb the users and integrators, they should be merged into the next official version of tests (i.e. the version used for contributions and integrations to IAL) only between the declaration of an IAL release and a call for contributions.","category":"page"},{"location":"versioningtest/#parallel-branches","page":"Versioning tests","title":"Evolution of the tests w.r.t. Integration of an IAL release","text":"","category":"section"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"In the context of integration of an IAL release, it is desirable that the tests change as little as possible during the successive integration of contributions. Therefore we will set a version of the tests at the beginning of integration, and only adapt it for the contributions that require an update of the tests.\nLet's consider the process of integration of contribution branches on top of CY49 to build a CY49T1. For that purpose we would have set a reference experiment on CY49, hereafter named x0, generated with an identified version of the tests. That version of the tests would then be updated with x0 as reference experiment (ref_xpid), and tagged DV49. All contributions to CY49T1 would then be required to be tested with this version DV49 (hence against reference experiment x0). Cf. section set a ref tests version for more details about setting up a reference tests version and experiment.","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"Suppose then that we have 4 of these contribution branches based on CY49, and an integration branch named dev_CY49_to_T1. These 4 contributions may have different levels of reproducibility: they may conserve the results or not; they may require resource/test adaptations (e.g. namelist updates, ...) or not, in which case they come with test adaptations in an associated tests branch. Cf. 
the table","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"branch results test XPID resources tested with integration XPID\nb1 = x1 = DV49 xi1\nb2 ≠ x2 = DV49 xi2\nb3 = x3 ≠ → DV49_b3 xi3\nb4 ≠ x4 ≠ → DV49_b4 xi4","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"In parallel to the integration branch dev_CY49_to_T1, we start a tests branch from DV49 to collect the necessary adaptations of the tests, similarly named dev_DV49_to_T1, which will be used to validate the integration branch, and updated as required along the integration.","category":"page"},{"location":"versioningtest/","page":"Versioning tests","title":"Versioning tests","text":"In case some intermediate versions of the integration branch are tagged and some branches are based/rebased on these tagged versions, we could also tag the tests branch accordingly if necessary. The reference experiment for the integration branch is at any moment, by default, the experiment which tested the formerly integrated branch, e.g. the reference for xi2 is xi1. However, that may not be true in some cases, some of these being potentially more tricky to validate, as will be shown in the following example.","category":"page"},{"location":"buildoptions/#Build-options","page":"Build options","title":"Build options","text":"","category":"section"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"The choice of a build system is a corollary of the versioning of the tests. However, at the time of writing, only gmkpack is available within DAVAÏ.","category":"page"},{"location":"buildoptions/#Build-with-gmkpack","page":"Build options","title":"Build with gmkpack","text":"","category":"section"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"In the [gmkpack] section of config file conf/davai_.ini:","category":"page"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"to make a main pack, instead of an incremental pack\n ↪ set packtype = main\nto set the list of compilation flavours to build (a.k.a. compiler label/flag)\n ↪ use compilation_flavours\n ! if you modify this, you potentially need to modify the compilation_flavour accordingly in the \"families\" sections that define it, as well as the programs_by_flavour that define the executables to be built for specific flavours","category":"page"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"In the [gitref2pack] section:","category":"page"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"to use a different $ROOTPACK (i.e. 
a different source of ancestor packs, for incremental packs)\n ↪ use rootpack\n (preferable to modifying the environment variable, so that it will be specific to that experiment only)\nto avoid cleaning all .o and .a when (re-)populating the pack:\n ↪ set cleanpack = False\n","category":"page"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"In the [pack2bin] section:","category":"page"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"to make the pack2bin task crash more quickly after a compilation/link error, or not crash at all\n ↪ set fatal_build_failure =\n__finally__ ⇒ crash after trying to compile and build all executables\n__any__ ⇒ crash if compilation fails or right after the first executable link failure\n__none__ ⇒ never == ignore failed builds\nto re-generate ics_ files before building\n ↪ set regenerate_ics = True\nto (re-)compile local sources with gmkpack’s option Ofrt=2 (i.e. -O0 -check bounds):\n ↪ set Ofrt = 2\nto use more/less threads for compiling (independent) source files in parallel:\n ↪ use threads\nto change the list of executables to be built, by default or depending on the compilation flavour:\n ↪ use default_programs and programs_by_flavour","category":"page"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"Also, any gmkpack native variable can be set in the .bash_profile, e.g. ROOTPACK, HOMEPACK, etc... Some might be overwritten by the config, e.g. if you set rootpack in the config file.","category":"page"},{"location":"buildoptions/#Build-with-[cmake/makeup/ecbuild...]","page":"Build options","title":"Build with [cmake/makeup/ecbuild...]","text":"","category":"section"},{"location":"buildoptions/","page":"Build options","title":"Build options","text":"Not implemented yet.","category":"page"},{"location":"organization/#Organisation-of-an-experiment","page":"Organization of experiment","title":"Organisation of an experiment","text":"","category":"section"},{"location":"organization/","page":"Organization of experiment","title":"Organization of experiment","text":"The davai-new_xp command prepares a \"testing experiment\" directory, named uniquely after an incremental number, the platform and the user.","category":"page"},{"location":"organization/","page":"Organization of experiment","title":"Organization of experiment","text":"This testing experiment consists of:","category":"page"},{"location":"organization/","page":"Organization of experiment","title":"Organization of experiment","text":"conf/davai_nrv.ini : config file, containing parameters such as the git reference to test, davai options, historisations of input resources to use, tunings of tests (e.g. 
the input obs files to take into account) and profiles of jobs\nconf/.yaml : contains an ordered and categorised list of jobs to be run in the requested usecase.\nconf/sources.yaml : information about the sources to be tested, in terms of Git or bundle\ntasks/ : templates of single tasks and jobs\nlinks to the python packages that are used by the scripts (vortex, epygram, ial_build, ial_expertise)\na logs directory/link will appear after the first execution, containing the log files of each job.\nDAVAI-tests : a clone of the DAVAI-tests repository, checked out on the requested version of the tests, to which the tasks/ and conf/ point","category":"page"},{"location":"otheroptions/#Other-options","page":"Other options","title":"Other options","text":"","category":"section"},{"location":"otheroptions/","page":"Other options","title":"Other options","text":"In the [DEFAULT] section, a few other general options can be set to tune the behaviour of the experiment:","category":"page"},{"location":"otheroptions/","page":"Other options","title":"Other options","text":"expertise_fatal_exceptions to raise/ignore errors that could occur in the expertise subsequent to the tests\ndrhook_profiling to activate DrHook profiling or not\nignore_reference to force ignoring reference outputs (and so deactivate comparison)\narchive_as_ref to archive the outputs (for saving a reference only)","category":"page"},{"location":"inputdata/#Input-data","page":"Input data","title":"Input data","text":"","category":"section"},{"location":"inputdata/","page":"Input data","title":"Input data","text":"DAVAÏ gets its input data through 2 providers:","category":"page"},{"location":"inputdata/","page":"Input data","title":"Input data","text":"\"shelves\" (pseudo Vortex experiments) for the data supposed to flow in the real case (e.g. initial condition files, observation files, etc.), where this data is statically stored, usually in a cache for faster fetching\n\"uget\" for the static data (namelists, climatological files, parameter files...), catalogued in ***uenv*** files.","category":"page"},{"location":"inputdata/","page":"Input data","title":"Input data","text":"These shelves and uenv catalogs (cf. the uget/uenv help documentation for the use of this tool) can be modified in the [DEFAULT] section of the config file.","category":"page"},{"location":"inputdata/","page":"Input data","title":"Input data","text":"In case your contribution needs a modification in these, ***don't forget to describe these changes in the integration request***.","category":"page"},{"location":"rerun/#Re-run-a-test","page":"Rerun tests","title":"Re-run a test","text":"","category":"section"},{"location":"rerun/","page":"Rerun tests","title":"Rerun tests","text":"The Davai command davai-run_tests launches all the jobs listed in conf/.yaml, sequentially and independently (i.e. without waiting for the jobs to finish). The command can also be used in complementary ways:","category":"page"},{"location":"rerun/","page":"Rerun tests","title":"Rerun tests","text":"to list the jobs that would be launched by the command, according to the conf/.yaml config file: davai-run_tests -l\nto run a single job:\ndavai-run_tests ","category":"page"},{"location":"rerun/","page":"Rerun tests","title":"Rerun tests","text":"Some tests are gathered together within a single job. There are 2 reasons for that: if they are an instance of a loop (e.g. same test on different obstypes, or different geometries), or if they have a flow-dependency with an upstream/downstream test (e.g. 
bator > screening > minimization).","category":"page"},{"location":"rerun/","page":"Rerun tests","title":"Rerun tests","text":"When a test fails within a job and the user wants to re-run it without re-running the other tests from the same job, it is possible to do so by deactivating them[1]:","category":"page"},{"location":"rerun/","page":"Rerun tests","title":"Rerun tests","text":"loops: to deactivate members of a loop: open config file conf/davai_.ini, and in the section corresponding to the job or family, the loops can be found as list(...), e.g. obstypes, rundates or geometries. Items in the list can be reduced to only the required ones (note that if only one item remains, one needs to keep a final \",\" within the parentheses).\ndependency: open the driver file corresponding to the job name in the tasks/ directory, and comment out (#) the unrequired tasks or families of nodes, leaving only the required task.","category":"page"},{"location":"rerun/","page":"Rerun tests","title":"Rerun tests","text":"[1]: including upstream tasks that produce flow-resources for the targeted test, as long as the resources stay in cache","category":"page"},{"location":"ciboulai/#Monitor-and-inspect-results","page":"Monitoring results","title":"Monitor and inspect results","text":"","category":"section"},{"location":"ciboulai/","page":"Monitoring results","title":"Monitoring results","text":"Monitor the execution of the jobs with the scheduler (with SLURM: squeue -u )\nCheck the test results summary on the Ciboulaï dashboard, whose URL is prompted at the end of the tests launch, or visible in the config file:\nopen the Ciboulaï dashboard in a web browser:\nTo guide you in the navigation in Ciboulaï, cf. Ciboulai \nTo get the paths to a job output or abort directory: button [+] then Context.\nif the dashboard is not accessible, a command-line version of the status is possible; in the XP directory, run:\ndavai-xp_status\nto see the status summary of each job. The detailed status and expertise of tests are also available as json files on the Vortex cache: belenos:/scratch/mtool//cache/vortex/davai///summaries_stack/ or\ndavai-xp_status -t \nTo get the paths to a job output or abort directory: davai-xp_status -t then open the itself file and look in the Context section.\nIf everything is OK (green) at the end of executions, your branch is validated!\nIf not, cf. Section advanced topics to re-compile a code modification and re-run tests.","category":"page"},{"location":"build/#(Re-)Build-of-executables","page":"Build","title":"(Re-)Build of executables","text":"","category":"section"},{"location":"build/#Build-with-gmkpack","page":"Build","title":"Build with gmkpack","text":"","category":"section"},{"location":"build/","page":"Build","title":"Build","text":"The tasks in the build job are respectively in charge of:","category":"page"},{"location":"build/","page":"Build","title":"Build","text":"gitref2pack : fetch/pull the sources from the requested Git reference and set up one or several incremental gmkpack pack(s) – depending on compilation_flavours as set in config. The packs are then populated with the set of modifications, from the latest official tag to the contents of your branch (including non-committed modifications).\npack2bin : compile sources and link necessary executables (i.e. those used in the tests), for each pack flavour.","category":"page"},{"location":"build/","page":"Build","title":"Build","text":"In case the compilation fails, or if you need to (re-)modify the sources for any reason (e.g. 
fix an issue):","category":"page"},{"location":"build/","page":"Build","title":"Build","text":"implement corrections in the branch (committed or not)\nre-run the build: \ndavai-build -e\n(option -e or --preexisting_pack assumes the pack already preexists; this is a protection against accidental overwrite of an existing pack. The option can also be passed to davai-run_xp)\nand then, if the build is successful, davai-run_tests","category":"page"},{"location":"build/#Build-with-[cmake/ecbuild...]","page":"Build","title":"Build with [cmake/ecbuild...]","text":"","category":"section"},{"location":"build/","page":"Build","title":"Build","text":"Not implemented yet.","category":"page"},{"location":"investigatingproblems/#Investigating-a-problem","page":"Investigate Problems","title":"Investigating a problem","text":"","category":"section"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"The usecase parameter of an experiment (to be set in the davai-new_xp command) determines the span of tests to be generated and run. Several usecases have been (or will be) implemented with various purposes:","category":"page"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"NRV (default): Non-Regression Validation, minimal set of tests that any contribution must pass.\nELP: Exploration and Localization of Problems, extended set of isolated components, to help localize an issue\nPC: [not implemented yet] set of toy tests ported to workstation; the compilation with GNU (usually less permissive than vendor compilers) helps raise issues that might not have been seen with NRV/ELP tests.","category":"page"},{"location":"investigatingproblems/#Smaller-tests-for-smaller-problems","page":"Investigate Problems","title":"Smaller tests for smaller problems","text":"","category":"section"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"To investigate a non-reproducibility or crash issue, the ELP usecase of Davaï can help localize its context, with a set of more elementary tests that run smaller parts of the code.","category":"page"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"To switch to this mode:","category":"page"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"create a new experiment with the same arguments but -u ELP and go into it\nfor a faster build (no re-compilation), edit config file conf/davai_elp.ini and in section [gitref2pack], set cleanpack = False\ndavai-run_xp","category":"page"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"Instead of 50+ tests, the ELP mode will provide hundreds of more elementary and focused tests. For instance, if you had a problem in the 4DVar minimization, you can run the 3 observation operator tests, observation by observation, and/or a screening, and/or a 3DVar or 4DVar single-obs minimization, in order to understand if the problem is in a specific observation operator (which obs type?), in its direct, TL or AD version, or in the Variational algorithm, or in the preceding screening, and so on...","category":"page"},{"location":"investigatingproblems/","page":"Investigate Problems","title":"Investigate Problems","text":"The user may want, at some point, to run only a subset of this very large set of tests. 
In this case, simply open the conf/ELP.yaml and comment out (#) the launch of the various jobs. To reduce the number of tests looped over internally, e.g. the loop on observation types within the *__obstype jobs: open config file conf/davai_elp.ini, look for the section named after the job name and keep only the required obstype(s) in the list.","category":"page"},{"location":"create_branch/#Create-your-branch,-containing-your-modifications","page":"Creating a branch","title":"Create your branch, containing your modifications","text":"","category":"section"},{"location":"create_branch/","page":"Creating a branch","title":"Creating a branch","text":"To use DAVAÏ to test your contribution to the next development release, you need to have your code in a Git branch starting from the latest official release (e.g. CY48T1 tag for contributions to 48T2, or CY49 tag for contributions to 49T1).","category":"page"},{"location":"create_branch/","page":"Creating a branch","title":"Creating a branch","text":"In the following, the example is a contribution to 48T2:","category":"page"},{"location":"create_branch/","page":"Creating a branch","title":"Creating a branch","text":"In your repository (e.g. ~/repositories/arpifs – make sure it is clean with git status beforehand), create your branch:\ngit checkout -b []\ntip: Example\ngit checkout -b mary_CY48T1_cleaning CY48T1\nnote: Note\nIt is strongly recommended to have explicit branch names with regard to their origin and their owner, hence the legacy branch naming syntax __\nImplement your developments in the branch. It is recommended to find a compromise between a whole development in only one commit, and a large number of very small commits (e.g. one per changed file). In case you later face compilation or runtime issues, but only if you haven't pushed it yet, you can amend[1] the latest commit to avoid a whole series of commits just for debugging purposes.\nnote: Note\nDAVAÏ is currently able to include non-committed changes in the compilation and testing. However, in the next version based on bundle, this might not be possible anymore. ","category":"page"},{"location":"create_branch/","page":"Creating a branch","title":"Creating a branch","text":"[1]: git commit --amend","category":"page"},{"location":"ciboulai_navigation/#ciboulai","page":"Ciboulaï navigation","title":"Navigation in Ciboulaï","text":"","category":"section"},{"location":"ciboulai_navigation/","page":"Ciboulaï navigation","title":"Ciboulaï navigation","text":"On the main page, the numbers in the columns to the right indicate the numbers of jobs whose results are respectively:\nbit-reproducible or within acceptable numerical error;\nnumerically different;\njobs that have crashed before the end;\nthe experts were not able to conclude on the test results, to be checked manually;\nthese tests have no expected result to be checked: they are assumed OK since they did not crash.\nWhen you get to an experiment page, you can find a few key features of the experiment, in the header. The [+] close to the XPID (experiment ID) will provide more. The other [+] to the left of the uenv's provide inner details from each one. The summary of test results is also visible on the top right.\nEach task is summarized: its Pending/Crashed/Ended status, and in case of Ended, the comparison status. 
At first glance, a main metric is shown, assumed to be the most meaningful for this test.\nThe ‘drHook rel diff’ and ‘rss rel diff’ columns show the relative difference, compared to the reference, in respectively the elapsed time of the execution and the memory consumption (RSS). \nwarning: Warning\nSo far the drHook figures have proven to be too volatile from one execution to another, to be meaningful. Don't pay too much attention, for now. Similarly, the RSS figures remain to be investigated (relevance and availability).\nA filter is available to show only a subset of tasks.\nWhen you click on the [+] of the more column, the detailed expertise is displayed:\nthe itself tab will show info from each Expert about the task independently from the reference\nthe continuity tab will show the compared results from each Expert against the same task from the reference experiment\nthe consistency tab will show the compared results from each Expert against a different reference task from the same experiment, when meaningful (very few cases, so far)\nClick on each Expert to unroll results.\nAt the experiment level as well as at the task level, a little pen symbol enables you to annotate it. That might be used for instance to justify numerical differences.","category":"page"},{"location":"parallelprofiling/#Parallel-profiling","page":"Parallel profiling","title":"Parallel profiling","text":"","category":"section"},{"location":"parallelprofiling/","page":"Parallel profiling","title":"Parallel profiling","text":"Each job has a section in the config file, in which one can tune the profile parameters requested from the job scheduler:","category":"page"},{"location":"parallelprofiling/","page":"Parallel profiling","title":"Parallel profiling","text":"time : elapsed time\nntasks : number of MPI tasks per node\nnnodes : number of nodes\nopenmp : number of OpenMP threads\npartition : category of nodes\nmem : memory (helps to prevent OOM)","category":"page"},{"location":"parallelprofiling/","page":"Parallel profiling","title":"Parallel profiling","text":"The total number of MPI tasks is therefore nnodes × ntasks, and is automatically substituted in the namelists","category":"page"},{"location":"setting_reference/#set_ref_version","page":"Setting reference exp","title":"Setting up a reference version","text":"","category":"section"},{"location":"setting_reference/","page":"Setting reference exp","title":"Setting reference exp","text":"! WORK IN PROGRESS...","category":"page"},{"location":"setting_reference/","page":"Setting reference exp","title":"Setting reference exp","text":"We describe here how to set up a reference version of the tests and an associated reference experiment, typically for developments based on a given IAL release to be validated against this release.","category":"page"},{"location":"setting_reference/","page":"Setting reference exp","title":"Setting reference exp","text":"For the example, let's consider CY49 and setting up a DV49 version for it, including its reference experiment, to validate the contributions to CY49T1.","category":"page"},{"location":"setting_reference/","page":"Setting reference exp","title":"Setting reference exp","text":"Choose an initial version of the tests you want to be used. It will probably not be the previous reference one (e.g. 
DV48T2 or dev_CY48T1_toT2), as we often want to modify or add tests between cycles.\nIn your development DAVAI-tests repository, make a branch starting from this version and check it out, e.g.:\n git checkout -b on_49 [<chosen_initial_ref>]\nSet the reference experiment:\n davai-new_xp CY49 -v on_49 -u ELP --origin <URL of my DAVAI-tests repo>\n ↪ dv-xxxx-machine@user\n Note:\nAs the ELP usecase encompasses NRV, reference experiments should use this usecase, so that they can be used as a reference for both usecases.\n--origin <URL...> to clone the repo in which you created the branch\nin config of the experiment, set archive_as_ref = True: the experiment will serve as a reference, so we want to archive its results\nin config of the experiment, set ignore_reference = True: if you are confident enough with the test version, it may not be useful/relevant to compare the experiment to any reference one.\nRun the experiment\nUpdate the DAVAI-tests repository:\ndefault config file for this machine (conf/<machine>.ini), with the name of this experiment as ref_xpid (and potentially the usecase, in lower case, as ref_vconf)\nREADME.md: the table of correspondence between branches and tests\nThen commit, tag (DV49) and push:\n git commit -am \"Set reference experiment for <machine> as <dv-xxxx-machine@user>\"\n git tag DV49\n git push <remote>\n git push <remote> DV49","category":"page"},{"location":"setting_reference/","page":"Setting reference exp","title":"Setting reference exp","text":"This way the tests experiment generated using davai-new_xp -v DV49 will use this version and be compared to this reference experiment.","category":"page"},{"location":"#DAVAÏ-User-Guide","page":"Home","title":"DAVAÏ User Guide","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"DAVAÏ embeds the whole workflow from the source code to the green/red light validation status: fetching sources from Git, building executables, running test cases, analysing the results and displaying them on a dashboard.","category":"page"},{"location":"","page":"Home","title":"Home","text":"For now, the only build system embedded is gmkpack, but we expect other systems to be plugged in when required. The second limitation of this version is that the starting point is still an IAL[1] Git reference only. The next version of the DAVAÏ system will include multi-project/repository fetching, using the bundle concept as the starting point.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The dimensioning of tests (grid sizes, number of observations, parallelization...) is done in order to reconcile representativity and execution speed. Therefore, in the general usecases, the tests are supposed to run on HPC. A dedicated usecase will target smaller configurations to run on a workstation (not available yet). An accessible source code forge is set up within the ACCORD consortium to host the IAL central repository on which updates and releases are published, and where integration requests will be posted, reviewed and monitored.","category":"page"},{"location":"","page":"Home","title":"Home","text":"By the way: DAVAI stands for \"Device Aiming at the VAlidation of IAL\"","category":"page"},{"location":"","page":"Home","title":"Home","text":"[1]: IAL = IFS-Arpege-LAM","category":"page"},{"location":"mtool/#Running-jobs-on-HPC-:-MTOOL","page":"MTOOL","title":"Running jobs on HPC : MTOOL","text":"","category":"section"},{"location":"mtool/","page":"MTOOL","title":"MTOOL","text":"On HPCs, the compute nodes are \"expensive\" and so we try as much as possible to reserve the elapsed time spent on compute nodes for actual computations, i.e. execution of the executable. 
Therefore in DAVAÏ, the generation of the scripts uses the MTOOL filter to replicate and cut a job script into several steps:","category":"page"},{"location":"mtool/","page":"MTOOL","title":"MTOOL","text":"on transfer nodes, fetch the resources, either locally on the file system(s) or using FTP connections to remote machines\non compute nodes, execute the AlgoComponent(s)\non transfer nodes, dispatch the produced output\na final step to clean the temporary environment created for the jobs","category":"page"},{"location":"mtool/","page":"MTOOL","title":"MTOOL","text":"In addition to separating and chaining these 4 steps, MTOOL initially sets up a clean environment with a temporary unique execution directory. It also collects log files of the script's execution, and in the case of a failure (missing input resources, execution aborted), it takes a snapshot of the execution directory. Therefore for each job, one will find:","category":"page"},{"location":"mtool/","page":"MTOOL","title":"MTOOL","text":"a depot directory in which to find the actual 4 scripts and their log files\nan abort directory, in which to find the exact copy of the execution directory when the execution failed","category":"page"},{"location":"mtool/","page":"MTOOL","title":"MTOOL","text":"These directories are registered by the DAVAÏ expertise and are displayed in the Context item of the expertise for each task in Ciboulaï.","category":"page"},{"location":"runtests/#Run-tests","page":"Running tests","title":"Run tests","text":"","category":"section"},{"location":"runtests/","page":"Running tests","title":"Running tests","text":"Create your experiment, specifying which version of the tests you want to use:\ndavai-new_xp -v \ntip: Example\ndavai-new_xp mary_CY48T1_cleaning -v DV48T1\nAn experiment with a unique experiment ID is created and prompted as output of the command, together with its path.\nTo know which version is to be used for a given development: see here\nSee davai-new_xp -h for more options on this command\nSee Appendix for a more comprehensive approach to tests versioning.\nIf the version you are requesting is not known, you may need to specify the DAVAI-tests origin repository from which to clone/fetch it, using argument --origin \nGo to the (prompted) experiment directory.\nIf you want to set some options differently from the default, open file conf/davai_nrv.ini and tune the parameters in the [DEFAULT] section. The usual tunable parameters are detailed in Section options \nLaunch the build and tests:\ndavai-run_xp\nAfter initializing the Ciboulaï page for the experiment, the command will first run the build of the branch and wait for the executables (that step may take a while, depending on the scope of your modifications, especially with several compilation flavours). Once the build is completed, it will then launch the tests (through the scheduler on HPC).","category":"page"},{"location":"runtests/#To-test-a-bundle,-i.e.-a-combination-of-modifications-in-IAL-and-other-repos","page":"Running tests","title":"To test a bundle, i.e. a combination of modifications in IAL and other repos","text":"","category":"section"},{"location":"runtests/","page":"Running tests","title":"Running tests","text":"Use command davai-new_xp_from_bundle. 
The rest is identical.","category":"page"}] } diff --git a/1.1.10/setting_reference/index.html b/1.1.10/setting_reference/index.html index 5fa6966..65f0526 100644 --- a/1.1.10/setting_reference/index.html +++ b/1.1.10/setting_reference/index.html @@ -2,4 +2,4 @@ Setting reference exp · Davai

    Setting up a reference version

    ! WORK IN PROGRESS...

    We describe here how to set up a reference version of the tests and an associated reference experiment, typically for developments based on a given IAL release to be validated against this release.

    For the example, let's consider CY49 and setting up a DV49 version for it, including its reference experiment, to validate the contributions to CY49T1.

    1. Choose the initial version of the tests you want to use. It will probably not be the previous reference one (e.g. DV48T2 or dev_CY48T1_toT2), as we often want to modify or add tests between cycles.

    2. In your development DAVAI-tests repository, make a branch starting from this version and check it out, e.g.:
      git checkout -b on_49 [<chosen_initial_ref>]

    3. Set the reference experiment:
      davai-new_xp CY49 -v on_49 -u ELP --origin <URL of my DAVAI-tests repo>
      $\hookrightarrow$ dv-xxxx-machine@user
      Note:

      • As the ELP usecase encompasses NRV, reference experiments should use this usecase, so that they can be used as a reference for both usecases.

      • --origin <URL...> to clone the repo in which you created the branch

      • in config of the experiment, set archive_as_ref = True: the experiment will serve as a reference, so we want to archive its results

      • in config of the experiment, set ignore_reference = True: if you are confident enough with the test version, it may not be useful/relevant to compare the experiment to any reference one.

    4. Run the experiment

    5. Update the DAVAI-tests repository:

      • default config file for this machine (conf/<machine>.ini), with the name of this experiment as ref_xpid (and potentially the usecase, in lower case, as ref_vconf); see the sketch after this list

      • README.md: the table of correspondence between branches and tests

      Then commit, tag (DV49) and push:

         git commit -am "Set reference experiment for <machine> as <dv-xxxx-machine@user>"
          git tag DV49
          git push <remote>
          git push <remote> DV49

    This way the tests experiment generated using davai-new_xp -v DV49 will use this version and be compared to this reference experiment.
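    For illustration, the config edits mentioned in steps 3 and 5 might look as follows. The key names (archive_as_ref, ignore_reference, ref_xpid, ref_vconf) are those quoted above, but the section placement and the values are only an assumed sketch:

      # experiment config (conf/davai_elp.ini), step 3 - assumed layout:
      [DEFAULT]
      archive_as_ref = True
      ignore_reference = True

      # DAVAI-tests default config for the machine (conf/<machine>.ini), step 5 - assumed layout:
      [DEFAULT]
      ref_xpid = dv-xxxx-machine@user
      ref_vconf = elp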

    diff --git a/1.1.10/tips/index.html b/1.1.10/tips/index.html index a559d79..c6e3340 100644 --- a/1.1.10/tips/index.html +++ b/1.1.10/tips/index.html @@ -1,2 +1,2 @@ -First tips · Davai

    First tips

    • All Davai commands are prefixed davai-* and can be listed with davai-help. All commands are auto-documented with option -h.

    • If the pack preparation or compilation fails, for whatever reason, the build step prints an error message and the davai-run_xp command stops before running the tests. You can find the output of the pack preparation or compilation in the logs/ directory, like any other test log file.

      A very common error is when the pack already exists; if you actually want to overwrite the contents of the pack (e.g. because you just fixed a code issue in the branch), you may need option -e/--preexisting_pack:

      davai-run_xp -e

      or

      davai-build -e

      Otherwise, if the pack preexists independently for valid reasons, you will need to move/delete the existing pack, or rename your branch.

    • The tests are organised as tasks and jobs:

      • a task consists in fetching input resources, running an executable, analyzing its outputs (sending the expertise to the Ciboulai dashboard) and dispatching (archiving) them: 1 test = 1 task
      • a job consists in a sequential driver of one or several task(s): either a flow sequence (i.e. the outputs of task N are inputs of task N+1) or a family sequence (e.g. running an IFS and an Arpege forecast independently)
    • To fix a piece of code, the best approach is to modify the code in your Git repo, then re-run

      davai-run_xp -e

      (or davai-build -e and then davai-run_tests).

      You don't necessarily need to commit the change right away: non-committed changes are exported from Git to the pack. Don't forget to commit eventually though, before issuing a pull request. (A complete fix-and-rerun session is sketched at the end of this page.)

    • To re-run one job only after re-compilation, type

      davai-run_tests -l

      to list the jobs and then

      davai-run_tests <category.job>
      Example
      davai-run_tests forecasts.standalone_forecasts
    • The syntax category.job indicates that the job to be run is the Driver in ./tasks/category/job.py

    • To re-run a single test within a job, e.g. the IFS forecast in forecasts/standalone_forecasts.py: edit this file, comment out the other Family(s) or Task(s) (nodes) therein, and re-run the job as indicated above.

    • Eventually, after code modifications and fixing particular tests, you should re-run the whole set of tests, to make sure your fix does not break any other test.
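    Putting the previous points together, a typical fix-and-rerun session might look like this; the repository path, experiment directory and job name are purely illustrative:

      cd ~/repositories/arpifs                          # implement the fix in your branch
      git status                                        # check what is modified (commit optional at this stage)
      cd <experiment_directory>
      davai-build -e                                    # rebuild in the preexisting pack
      davai-run_tests -l                                # list the jobs
      davai-run_tests forecasts.standalone_forecasts    # re-run only the failed job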

    diff --git a/1.1.10/uget/uget/index.html b/1.1.10/uget/uget/index.html index 4110994..bb82423 100644 --- a/1.1.10/uget/uget/index.html +++ b/1.1.10/uget/uget/index.html @@ -46,4 +46,4 @@ (Cmd) [Ctrl-D] Vortex 1.2.2 completed ( Monday 05. March 2018, at 14:09:06 ) -$

    This mode can be useful:

    • For systems on which loading Vortex is slow, you load it only once at the beginning instead of at each command.
    • There is auto-completion (Tab).
    • Within one session, you can navigate through the command history.

    Cheatsheet

    Environment

    • Recommended version of Vortex on belenos/taranis is: /home/mf/dp/marp/verolive/vortex/vortex-olive

    • uget.py is: /home/mf/dp/marp/verolive/vortex/vortex-olive/bin/uget.py

    • Genv/Gget are to be found in: /home/mf/dp/marp/gco/public/bin

    • The workdir of uget is: $HOME/.vortexrc/hack/uget/$USER/

      • env/ : uenv catalogs
      • data/ : resources
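
    A uenv catalog under env/ is a plain text file of key=value lines, mixing GCO and uget elements as described earlier; the following content is a hypothetical example assembled from the commands and examples above:

      # hypothetical catalog: $HOME/.vortexrc/hack/uget/mary/env/al42_arome-dble.02
      # comments are accepted, starting with #
      # a GCO element keeps its genv value, e.g.: CLIM_FRANMG_01KM30=clim_franmg.01km30.03
      # a uget element stored at user mary (note the .tgz for a namelist package):
      NAMELIST_AROME=uget:al42_arome-op2.16.nam.tgz@mary
      # a uget element stored at another user:
      CLIM_FRANMG_01KM30=uget:mes_clims.04@faure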

    Commands

    • clone a GCO env:

      uget.py hack genv al42_arome-op2.30 into al42_arome-dble.02@mary

    • clone a uenv:

      uget.py hack env al42_arome-dble.01@faure into al42_arome-dble.02@mary

    • display a uenv (equiv. command genv):

      uget.py pull env cy43t2_clim-op1.05@mary

    • download a uget resource in CWD (equiv. command gget):

      uget.py pull data al42_arome-op2.15.nam.tgz@mary

    • clone a GCO resource:

      uget.py hack gdata al42_arome-op2.15.nam into al42_arome-op2.16.nam.tgz@mary

    • clone a uget resource:

      uget.py hack data al42_arome-dble.01.nam.tgz@faure into al42_arome-op2.16.nam.tgz@mary

    • check that all elements exist, either locally or on archive:

      uget.py check env al42_arome-dble.02@mary

    • archive a uenv (incl. resources implied):

      uget.py push env al42_arome-dble.02@mary

    • archive a resource:

      uget.py push data al42_arome-op2.16.nam.tgz@mary

    • clean the workdir (hack) wrt what has been archived:

      uget.py clean_hack

    • list uenv and resources from a user:

      uget.py list env from faure
      uget.py list data from faure

    • compare 2 uenv:

      uget.py diff env al42_arome-dble.02@mary wrt genv al42_arome-op2.30

    • list the resources modified and their path:

      uget.py export env al42_arome-dble.02@mary wrt genv al42_arome-op2.30

    • I am lost:

      uget.py help

      and:

      uget.py help [hack|pull|check|push|diff|list|...]


    User configuration

    Some more general parameters are configurable, such as the default directory in which the experiments are stored, or the directory in which the jobs' logs are put. These can be set in ~/.davairc/user_config.ini. If, for whatever reason, the user needs to modify on a regular basis the packages linked in the experiments, this can also be specified in the same user config file. An example of these variables is available in the DAVAI-env repository, under templates/user_config.ini.
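
    As a purely illustrative sketch, assuming hypothetical section and key names (refer to templates/user_config.ini for the actual ones):

      # write a minimal ~/.davairc/user_config.ini -- the section and key names below are illustrative only
      mkdir -p ~/.davairc
      cat > ~/.davairc/user_config.ini <<'EOF'
      [paths]
      # hypothetical keys: where experiments and job logs would be stored
      experiments = /scratch/mary/davai/experiments
      logs = /scratch/mary/davai/logs
      EOF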

    Versioning of tests

    The following reasons may require updating the tests:

    1. Updating the input resources or a task template script, to change the purpose or context of a test (e.g. new observations or modified namelists, to bring the tests closer to operational configurations). This usually comes with a change in the targeted tests' outputs.

    2. Adding new tests.

    3. Updating the resources to adapt to a code change (e.g. a new radiative coefficients file format, or a mandatory namelist change), with or without a change in the results.

    It is therefore necessary to track the evolutions of the tests properly and to version them clearly, so that it is clear which fixed or evolving version is to be used in any context: hence the existence of the DAVAI-tests repository. The first two kinds of evolutions (1. and 2.) are not necessarily linked to a contribution of code to the IAL repository, and can therefore be implemented at any moment in a dedicated branch of the tests repository (DAVAI-tests), as sketched below. This is described in more detail in section add-modify-tests.
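
    For instance, such a dedicated branch could be opened as follows (a sketch: the repository URL and branch name are assumptions):

      # hypothetical sketch: develop new tests in a dedicated DAVAI-tests branch
      git clone https://github.com/ACCORD-NWP/DAVAI-tests.git
      cd DAVAI-tests
      git checkout -b add_my_new_tests DV49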

    The third kind of evolution, on the other hand, is attached to a contribution: it must be provided together with the contribution for integration, and be itself integrated in an evolving tests branch dedicated to testing the successive steps of the IAL integration branch. This case is detailed in section parallel-branches.

    To follow more easily which version of the tests should be used, in particular for contributions to the IAL code, it is proposed to adopt a nomenclature that maps the IAL releases and integration/merge branches, replacing "CY" with "DV" (for DAVAÏ), as illustrated below.

    With this principle, the version of the tests to be used by default would be, for example:

    • for a development based on CY49 $\rightarrow$ DV49

    • for an integration branch towards CY49T1, named dev_CY49_to_T1 $\rightarrow$ dev_DV49_to_T1

    Adding or updating tests independently from the code

    Tests modifications that are not intrinsically linked to a contribution (adding tests, or modifying a test's behaviour) can be done at any moment, in a development branch of the tests repository. However, in order not to disturb users and integrators, they should be merged into the next official version of the tests (i.e. the version used for contributions and integrations to IAL) only between the declaration of an IAL release and the call for contributions.

    Evolution of the tests w.r.t. integration of an IAL release

    In the context of the integration of an IAL release, it is desirable that the tests change as little as possible during the successive integration of contributions. Therefore a version of the tests is set at the beginning of the integration, and only adapted for the contributions that require an update of the tests.
    Let's consider the process of integrating contribution branches on top of CY49 to build a CY49T1. For that purpose, a reference experiment on CY49, hereafter named x0, would have been generated with an identified version of the tests. That version of the tests would then be updated with x0 as its reference experiment (ref_xpid), and tagged DV49. All contributions to CY49T1 would then be required to be tested with this version DV49 (hence against the reference experiment x0). Cf. section set a ref tests version for more details about setting up a reference tests version and experiment.
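
    Schematically, pinning that reference version might look like this (a sketch: how ref_xpid is actually recorded in the tests configuration is not detailed here):

      # hypothetical sketch: freeze the tests version to be used for CY49T1 contributions
      cd DAVAI-tests
      # (edit the tests configuration so that ref_xpid points to x0 -- file location not detailed here)
      git commit -am "set ref_xpid = x0 (reference experiment on CY49)"
      git tag DV49
      git push origin DV49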

    Suppose then that we have 4 of these contribution branches based on CY49, and an integration branch named dev_CY49_toT1. These 4 contributions may have different levels of reproducibility: they may or may not conserve the results, and they may or may not require resources/tests adaptations (e.g. namelist updates), in which case they come with tests adaptations in an associated tests branch. Cf. the table below:

    branch | results | test XPID | resources | tested with           | integration XPID
    b1     | $=$     | x1        | $=$       | DV49                  | xi1
    b2     | $\neq$  | x2        | $=$       | DV49                  | xi2
    b3     | $=$     | x3        | $\neq$    | $\rightarrow$ DV49_b3 | xi3
    b4     | $\neq$  | x4        | $\neq$    | $\rightarrow$ DV49_b4 | xi4

    In parallel to the integration branch dev_CY49_toT1, we start a tests branch from DV49, similarly named dev_DV49_toT1, to collect the necessary adaptations of the tests; this branch will be used to validate the integration branch, and updated as required along the integration.
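
    Schematically (a sketch, assuming the DV49 tag from above):

      # hypothetical sketch: open the tests branch paralleling dev_CY49_toT1
      git checkout -b dev_DV49_toT1 DV49
      # adaptations required by contributions (e.g. from DV49_b3, DV49_b4) are then
      # merged into this branch as the integration proceeds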

    In case some intermediate versions of the integration branch are tagged and some branches are based/rebased on these tagged versions, the tests branch can be tagged accordingly if necessary. The reference experiment for the integration branch is, at any moment and by default, the experiment that tested the previously integrated branch: e.g. the reference for xi2 is xi1. However, that may not hold in some cases, some of which can be trickier to validate, as will be shown in the following example.
