There are currently three different ways available to run the workflow efficiently, as VASP and LOBSTER rely on different parallelization schemes (MPI vs. OpenMP).
One can use a single job script (with some restrictions), or [Jobflow-remote](https://matgenix.github.io/jobflow-remote/) / [FireWorks](https://github.com/materialsproject/fireworks) for high-throughput runs.
#### Running the LOBSTER workflow without a database and with only one job script
It is possible to run the VASP-LOBSTER workflow efficiently with a minimal setup.
In this case, you will run the VASP calculations on the same node as the LOBSTER calculations.
In between the different computations, you will switch from MPI to OpenMP parallelization.
For example, for a node with 48 cores, you could use an adapted version of the following SLURM script:

```bash
#!/bin/bash
# This needs to be adapted if you run with a different number of cores
#SBATCH --ntasks=48

# ensure you load the modules to run VASP, e.g., module load vasp
module load my_vasp_module
# please activate the required conda environment
conda activate my_environment
cd my_folder
# the following script needs to contain the workflow
python xyz.py
```
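The workflow script itself is not part of this example; a minimal sketch of what `xyz.py` could look like is shown below, assuming the structure is read from a local `POSCAR` file and the flow is executed directly with jobflow's `run_locally`:

```py
# hypothetical xyz.py: build the VASP-LOBSTER flow and run it on this node
from atomate2.vasp.flows.lobster import VaspLobsterMaker
from jobflow import run_locally
from pymatgen.core.structure import Structure

structure = Structure.from_file("POSCAR")  # assumed input structure
flow = VaspLobsterMaker().make(structure)

# run all VASP and LOBSTER jobs sequentially on the allocated node
run_locally(flow, create_folders=True)
```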
The `LOBSTER_CMD` in your atomate2 configuration (e.g., in `~/.atomate2.yaml`) now additionally needs to export the number of OpenMP threads:
```yaml
VASP_CMD: <<VASP_CMD>>
LOBSTER_CMD: OMP_NUM_THREADS=48 <<LOBSTER_CMD>>
```
#### Jobflow-remote

Please refer first to the general documentation of jobflow-remote: [https://matgenix.github.io/jobflow-remote/](https://matgenix.github.io/jobflow-remote/).
```py
from atomate2.vasp.flows.lobster import VaspLobsterMaker
from pymatgen.core.structure import Structure
from jobflow_remote import submit_flow, set_run_config
from atomate2.vasp.powerups import update_user_incar_settings

# a minimal sketch of the remainder of the example; the structure source,
# INCAR settings, and worker names are illustrative
structure = Structure.from_file("POSCAR")
flow = VaspLobsterMaker().make(structure)
flow = update_user_incar_settings(flow, {"NPAR": 4})

# run the LOBSTER jobs on a dedicated worker suited for OpenMP
flow = set_run_config(flow, name_filter="lobster", worker="lobster_worker")
submit_flow(flow, worker="vasp_worker")
```
The `LOBSTER_CMD` also needs to export the number of OpenMP threads:
```yaml
VASP_CMD: <<VASP_CMD>>
LOBSTER_CMD: OMP_NUM_THREADS=48 <<LOBSTER_CMD>>
```
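Since jobflow-remote submits every job with its own submission script, the scheduler resources can also be set per job. A minimal sketch under assumed settings (the resource keys and values are illustrative and depend on your scheduler):

```py
# illustrative per-job resources: MPI-parallel VASP vs. OpenMP-parallel LOBSTER
vasp_resources = {"nodes": 2, "ntasks": 96, "time": "04:00:00"}
lobster_resources = {"nodes": 1, "ntasks": 1, "cpus_per_task": 48, "time": "02:00:00"}

flow = set_run_config(flow, name_filter="lobster", resources=lobster_resources)
submit_flow(flow, worker="my_worker", resources=vasp_resources)
```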
#### FireWorks

Please first refer to the general documentation on running atomate2 workflows with FireWorks: [https://materialsproject.github.io/atomate2/user/fireworks.html](https://materialsproject.github.io/atomate2/user/fireworks.html).

Specifically, you might want to change the `_fworker` for the LOBSTER runs and define a separate `lobster` worker within FireWorks:
```py
from fireworks import LaunchPad

# a minimal sketch of the beginning of this example: build the flow, route
# its LOBSTER fireworks to the separate `lobster` worker (e.g., by setting
# "_fworker": "lobster" in their spec), and convert it to a FireWorks
# workflow `wf`
lpad = LaunchPad.auto_load()
lpad.add_wf(wf)
```
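The separate worker can be defined like any other FireWorks worker. A minimal sketch, assuming the worker is identified by its name and written to a hypothetical file:

```py
from fireworks import FWorker

# fireworks whose spec sets _fworker: "lobster" will only be run by this worker
lobster_worker = FWorker(name="lobster")
lobster_worker.to_file("my_fworker_lobster.yaml")  # hypothetical file name
```

The resulting file can then be used by the launcher (e.g., via the `-w` option of `rlaunch`) on the nodes reserved for the LOBSTER runs.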
The `LOBSTER_CMD` can now be adapted to not include the number of threads, as `OMP_NUM_THREADS` can instead be set in the job script of the separate `lobster` worker:
```yaml
VASP_CMD: <<VASP_CMD>>
LOBSTER_CMD: <<LOBSTER_CMD>>
```
#### Analyzing outputs

Outputs from the automatic analysis with LobsterPy can easily be extracted from the database and also plotted:
```py
from jobflow import SETTINGS
from pymatgen.electronic_structure.cohp import Cohp
from pymatgen.electronic_structure.plotter import CohpPlotter

# a minimal sketch; the job name and the field names for the LobsterPy plot
# data are assumptions
store = SETTINGS.JOB_STORE
store.connect()

result = store.query_one(
    {"name": "lobster_run_0"},
    properties=["output.lobsterpy_data.cohp_plot_data"],
    load=True,
)

for number, (key, cohp) in enumerate(
    result["output"]["lobsterpy_data"]["cohp_plot_data"].items()
):
    plotter = CohpPlotter()
    plotter.add_cohp(key, Cohp.from_dict(cohp))
    plotter.save_plot(f"plot_{number}.pdf")
```