Commit aa792b1

Bhargav Siddani (siddanib) authored and committed
MPMD cases - first commit
1 parent ccbff96 commit aa792b1

11 files changed: +437 -0 lines changed


Docs/source/MPMD_Tutorials.rst

Lines changed: 117 additions & 0 deletions
AMReX-MPMD
==========

AMReX-MPMD utilizes the Multiple Program Multiple Data (MPMD) feature of MPI to provide cross-functionality for AMReX-based codes.
The framework enables data transfer across two different applications through the **MPMD::Copier** class, which takes the **BoxArray** of its application as an argument.
The **Copier** instances created in both applications together identify the overlapping cells for which the data transfer must occur.
The **Copier::send** and **Copier::recv** functions, which take a **MultiFab** as an argument, are used to transfer the desired data of the overlapping regions.

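A minimal sketch of this workflow is shown below; it is assembled from the Case-1 sources included in this commit and assumes that the application has already defined its ``BoxArray`` ``ba`` and ``DistributionMapping`` ``dm``.

.. code-block:: cpp

   // Establish the MPMD communicator before initializing AMReX
   MPI_Comm comm = amrex::MPMD::Initialize(argc, argv);
   amrex::Initialize(argc, argv, true, comm);
   {
       // Each application builds a Copier from its own BoxArray & DistributionMapping
       auto copr = amrex::MPMD::Copier(ba, dm, false);

       // A two-component MultiFab on the local grids
       amrex::MultiFab mf(ba, dm, 2, 0);

       // Transfer data of the overlapping cells:
       // send component 0, receive into component 1
       copr.send(mf, 0, 1);
       copr.recv(mf, 1, 1);
   }
   amrex::Finalize();
   amrex::MPMD::Finalize();
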
Case-1
------

The current case demonstrates the MPMD capability across two C++ applications.

Contents
^^^^^^^^

The **Source_1** subfolder contains ``main_1.cpp``, which will be treated as the first application.
Similarly, the **Source_2** subfolder contains ``main_2.cpp``, which will be treated as the second application.

Overview
^^^^^^^^

The domain in ``main_1.cpp`` is set to ``lo = {0, 0, 0}`` and ``hi = {31, 31, 31}``, while the domain in ``main_2.cpp`` is set to ``lo = {16, 16, 16}`` and ``hi = {31, 31, 31}``.
Hence, the data transfer will occur for the overlapping region bounded by ``lo = {16, 16, 16}`` and ``hi = {31, 31, 31}``.
The data transfer demonstration is performed using a two-component *MultiFab*.
The first component is populated in ``main_1.cpp`` before it is transferred to ``main_2.cpp``.
The second component is populated in ``main_2.cpp`` based on the received first component.
Finally, the second component is transferred from ``main_2.cpp`` to ``main_1.cpp``.
It can be seen from the plotfile generated by ``main_1.cpp`` that the second component is non-zero only in the overlapping region.

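The resulting send/recv pairing, taken from the two sources listed later in this commit, is summarized below; the second and third arguments are the starting component and the number of components being transferred.

.. code-block:: cpp

   // main_1.cpp: send component 0, then receive component 1
   copr.send(mf, 0, 1);
   copr.recv(mf, 1, 1);

   // main_2.cpp: receive component 0, populate component 1 from it, then send it back
   copr.recv(mf, 0, 1);
   // ... fill mf component 1 using the received component 0 ...
   copr.send(mf, 1, 1);
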
Compile
^^^^^^^

The compile process here assumes that the current working directory is ``ExampleCodes/MPMD/Case-1/``.

.. code-block:: bash

   # cd into Source_1 to compile the first application
   cd Source_1/
   # Include USE_CUDA=TRUE for CUDA GPUs
   make USE_MPI=TRUE

   # cd into Source_2 to compile the second application
   cd ../Source_2/
   # Include USE_CUDA=TRUE for CUDA GPUs
   make USE_MPI=TRUE

Run
^^^

Here, the current case is run using a total of 12 MPI ranks, with 8 allocated to ``main_1.cpp`` and the remaining 4 to ``main_2.cpp``.
Please note that the MPI ranks attributed to each application/code need to be contiguous, i.e., MPI ranks 0-7 are for ``main_1.cpp`` and 8-11 are for ``main_2.cpp``.
This may be the default behaviour on several systems.
Furthermore, the run process here assumes that the current working directory is ``ExampleCodes/MPMD/Case-1/``.

.. code-block:: bash

   # Running the MPMD process with 12 ranks
   mpirun -np 8 Source_1/main3d.gnu.DEBUG.MPI.ex : -np 4 Source_2/main3d.gnu.DEBUG.MPI.ex

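On clusters managed by Slurm, the same 8/4 split can typically be expressed with ``srun --multi-prog`` and a small configuration file instead of the colon syntax; the sketch below is only illustrative and the file name ``mpmd.conf`` is arbitrary.

.. code-block:: bash

   # Per-rank program mapping (ranks for each application must stay contiguous)
   cat > mpmd.conf << 'EOF'
   0-7  Source_1/main3d.gnu.DEBUG.MPI.ex
   8-11 Source_2/main3d.gnu.DEBUG.MPI.ex
   EOF

   # Launch 12 ranks using the mapping above
   srun -n 12 --multi-prog ./mpmd.conf
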
Case-2
------

The current case demonstrates the MPMD capability across C++ and Python applications.
This language interoperability is achieved through the Python bindings of AMReX, `pyAMReX <https://github.com/AMReX-Codes/pyamrex>`_.

Contents
^^^^^^^^

``main.cpp`` will be the C++ application and ``main.py`` will be the Python application.

Overview
^^^^^^^^

In the previous case (Case-1), each application has its own domain and, therefore, a different **BoxArray**.
However, there exist scenarios where both applications deal with the same **BoxArray**.
The current case presents such a scenario, where the **BoxArray** is defined only in the ``main.cpp`` application and this information is relayed to the ``main.py`` application through the **MPMD::Copier**.

**Please ensure that the same AMReX source code is used to compile both applications.**

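On the C++ side, the key difference from Case-1 is the boolean flag passed to the **MPMD::Copier** constructor; both calls below are taken from the sources in this commit.

.. code-block:: cpp

   // Case-1: each application defines its own BoxArray, so nothing is broadcast
   auto copr_case1 = amrex::MPMD::Copier(ba, dm, false);

   // Case-2 main.cpp: additionally sends its BoxArray information
   // to the other (Python) application
   auto copr_case2 = amrex::MPMD::Copier(ba, dm, true);
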
pyAMReX compile
^^^^^^^^^^^^^^^

The compile process for pyAMReX is only briefly described here.
Please refer to the `pyAMReX documentation <https://pyamrex.readthedocs.io/en/latest/install/cmake.html#>`_ for more details.
It must be mentioned that **mpi4py** is an important dependency.

.. code-block:: bash

   # find dependencies & configure
   # Include -DAMReX_GPU_BACKEND=CUDA for the GPU version
   cmake -S . -B build -DAMReX_SPACEDIM="1;2;3" -DAMReX_MPI=ON -DpyAMReX_amrex_src=/path/to/amrex

   # compile & install, here we use four threads
   cmake --build build -j 4 --target pip_install

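As a quick sanity check that the installed bindings and **mpi4py** can be imported (this case uses the 3D module):

.. code-block:: bash

   python -c "import amrex.space3d as amr; from mpi4py import MPI; print(amr.__version__)"
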
main.cpp compile
^^^^^^^^^^^^^^^^

The compile process here assumes that the current working directory is ``ExampleCodes/MPMD/Case-2/``.

.. code-block:: bash

   # Include USE_CUDA=TRUE for CUDA GPUs
   make USE_MPI=TRUE

Run
^^^

Here, the current case is run using a total of 12 MPI ranks, with 8 allocated to ``main.cpp`` and the remaining 4 to ``main.py``.
As mentioned earlier, the MPI ranks attributed to each application/code need to be contiguous, i.e., MPI ranks 0-7 are for ``main.cpp`` and 8-11 are for ``main.py``.
This may be the default behaviour on several systems.
Furthermore, the run process here assumes that the current working directory is ``ExampleCodes/MPMD/Case-2/``.

.. code-block:: bash

   # Running the MPMD process with 12 ranks
   mpirun -np 8 ./main3d.gnu.DEBUG.MPI.ex : -np 4 python main.py

ExampleCodes/MPMD/Case-1/Source_1/GNUmakefile

Lines changed: 20 additions & 0 deletions

AMREX_HOME ?= ../../../../../amrex

DEBUG = TRUE

DIM = 3

COMP = gcc

USE_MPI = TRUE

USE_OMP = FALSE
USE_CUDA = FALSE
USE_HIP = FALSE

include $(AMREX_HOME)/Tools/GNUMake/Make.defs

include ./Make.package
include $(AMREX_HOME)/Src/Base/Make.package

include $(AMREX_HOME)/Tools/GNUMake/Make.rules

ExampleCodes/MPMD/Case-1/Source_1/Make.package

Lines changed: 1 addition & 0 deletions

CEXE_sources += main_1.cpp

ExampleCodes/MPMD/Case-1/Source_1/main_1.cpp

Lines changed: 69 additions & 0 deletions

#include <AMReX.H>
#include <AMReX_Print.H>
#include <AMReX_MultiFab.H>
#include <AMReX_PlotFileUtil.H>
#include <mpi.h>
#include <AMReX_MPMD.H>
#include <cmath>

int main(int argc, char* argv[])
{
    // Initialize amrex::MPMD to establish communication across the two apps
    MPI_Comm comm = amrex::MPMD::Initialize(argc, argv);
    amrex::Initialize(argc,argv,true,comm);
    {
        amrex::Print() << "Hello world from AMReX version " << amrex::Version() << "\n";
        // Number of data components at each grid point in the MultiFab
        int ncomp = 2;
        // How many grid cells in each direction over the problem domain
        int n_cell = 32;
        // How many grid cells are allowed in each direction over each box
        int max_grid_size = 16;
        // BoxArray -- Abstract Domain Setup
        // integer vector indicating the lower coordinate bounds
        amrex::IntVect dom_lo(0,0,0);
        // integer vector indicating the upper coordinate bounds
        amrex::IntVect dom_hi(n_cell-1, n_cell-1, n_cell-1);
        // box containing the coordinates of this domain
        amrex::Box domain(dom_lo, dom_hi);
        // will contain a list of boxes describing the problem domain
        amrex::BoxArray ba(domain);
        // chop the single grid into many small boxes
        ba.maxSize(max_grid_size);
        // Distribution Mapping
        amrex::DistributionMapping dm(ba);
        // Create an MPMD Copier based on current ba & dm
        auto copr = amrex::MPMD::Copier(ba,dm,false);
        // Define MultiFab
        amrex::MultiFab mf(ba, dm, ncomp, 0);
        // Geometry -- Physical Properties for data on our domain
        amrex::RealBox real_box({0., 0., 0.}, {1., 1., 1.});
        amrex::Geometry geom(domain, &real_box);
        // Calculate Cell Sizes
        amrex::GpuArray<amrex::Real,3> dx = geom.CellSizeArray(); // dx[0] = dx, dx[1] = dy, dx[2] = dz
        // Fill only the first component of the MultiFab
        for (amrex::MFIter mfi(mf); mfi.isValid(); ++mfi) {
            const amrex::Box& bx = mfi.validbox();
            const amrex::Array4<amrex::Real>& mf_array = mf.array(mfi);

            amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE(int i, int j, int k){
                amrex::Real x = (i+0.5) * dx[0];
                amrex::Real y = (j+0.5) * dx[1];
                amrex::Real z = (k+0.5) * dx[2];
                amrex::Real r_squared = ((x-0.5)*(x-0.5)+(y-0.5)*(y-0.5)+(z-0.5)*(z-0.5))/0.01;

                mf_array(i,j,k,0) = 1.0 + std::exp(-r_squared);
            });
        }
        // Send ONLY the first (populated) MultiFab component to main_2.cpp
        copr.send(mf,0,1);
        // Receive ONLY the second MultiFab component from main_2.cpp
        copr.recv(mf,1,1);
        // Plot MultiFab Data
        WriteSingleLevelPlotfile("plt_cpp_1", mf, {"comp0","comp1"}, geom, 0., 0);
    }
    amrex::Finalize();
    amrex::MPMD::Finalize();
}

ExampleCodes/MPMD/Case-1/Source_2/GNUmakefile

Lines changed: 20 additions & 0 deletions

AMREX_HOME ?= ../../../../../amrex

DEBUG = TRUE

DIM = 3

COMP = gcc

USE_MPI = TRUE

USE_OMP = FALSE
USE_CUDA = FALSE
USE_HIP = FALSE

include $(AMREX_HOME)/Tools/GNUMake/Make.defs

include ./Make.package
include $(AMREX_HOME)/Src/Base/Make.package

include $(AMREX_HOME)/Tools/GNUMake/Make.rules

ExampleCodes/MPMD/Case-1/Source_2/Make.package

Lines changed: 1 addition & 0 deletions

CEXE_sources += main_2.cpp

ExampleCodes/MPMD/Case-1/Source_2/main_2.cpp

Lines changed: 62 additions & 0 deletions

#include <AMReX.H>
#include <AMReX_Print.H>
#include <AMReX_MultiFab.H>
#include <AMReX_PlotFileUtil.H>
#include <mpi.h>
#include <AMReX_MPMD.H>

int main(int argc, char* argv[])
{
    // Initialize amrex::MPMD to establish communication across the two apps
    MPI_Comm comm = amrex::MPMD::Initialize(argc, argv);
    amrex::Initialize(argc,argv,true,comm);
    {
        amrex::Print() << "Hello world from AMReX version " << amrex::Version() << "\n";
        // Number of data components at each grid point in the MultiFab
        int ncomp = 2;
        // How many grid cells in each direction over the problem domain
        int n_cell = 32;
        // How many grid cells are allowed in each direction over each box
        int max_grid_size = 8;
        // BoxArray -- Abstract Domain Setup
        // integer vector indicating the lower coordinate bounds
        amrex::IntVect dom_lo(n_cell/2, n_cell/2, n_cell/2);
        // integer vector indicating the upper coordinate bounds
        amrex::IntVect dom_hi(n_cell-1, n_cell-1, n_cell-1);
        // box containing the coordinates of this domain
        amrex::Box domain(dom_lo, dom_hi);
        // will contain a list of boxes describing the problem domain
        amrex::BoxArray ba(domain);
        // chop the single grid into many small boxes
        ba.maxSize(max_grid_size);
        // Distribution Mapping
        amrex::DistributionMapping dm(ba);
        // Create an MPMD Copier based on current ba & dm
        auto copr = amrex::MPMD::Copier(ba,dm,false);
        // Define MultiFab
        amrex::MultiFab mf(ba, dm, ncomp, 0);
        // Geometry -- Physical Properties for data on our domain
        amrex::RealBox real_box({0.5, 0.5, 0.5}, {1., 1., 1.});
        amrex::Geometry geom(domain, &real_box);
        // Receive ONLY the first (populated) MultiFab component from main_1.cpp
        copr.recv(mf,0,1);
        // Fill the second component of the MultiFab based on the received first component
        for (amrex::MFIter mfi(mf); mfi.isValid(); ++mfi) {
            const amrex::Box& bx = mfi.validbox();
            const amrex::Array4<amrex::Real>& mf_array = mf.array(mfi);

            amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE(int i, int j, int k){
                mf_array(i,j,k,1) = amrex::Real(10.)*mf_array(i,j,k,0);
            });
        }
        // Send ONLY the second MultiFab component (populated here) to main_1.cpp
        copr.send(mf,1,1);
        // Plot MultiFab Data
        WriteSingleLevelPlotfile("plt_cpp_2", mf, {"comp0","comp1"}, geom, 0., 0);
    }
    amrex::Finalize();
    amrex::MPMD::Finalize();
}

ExampleCodes/MPMD/Case-2/GNUmakefile

Lines changed: 20 additions & 0 deletions
AMREX_HOME ?= ../../../../amrex

DEBUG = TRUE

DIM = 3

COMP = gcc

USE_MPI = TRUE

USE_OMP = FALSE
USE_CUDA = FALSE
USE_HIP = FALSE

include $(AMREX_HOME)/Tools/GNUMake/Make.defs

include ./Make.package
include $(AMREX_HOME)/Src/Base/Make.package

include $(AMREX_HOME)/Tools/GNUMake/Make.rules

ExampleCodes/MPMD/Case-2/Make.package

Lines changed: 1 addition & 0 deletions
CEXE_sources += main.cpp

ExampleCodes/MPMD/Case-2/main.cpp

Lines changed: 70 additions & 0 deletions
#include <AMReX.H>
#include <AMReX_Print.H>
#include <AMReX_MultiFab.H>
#include <AMReX_PlotFileUtil.H>
#include <mpi.h>
#include <AMReX_MPMD.H>
#include <cmath>

int main(int argc, char* argv[])
{
    // Initialize amrex::MPMD to establish communication across the two apps
    MPI_Comm comm = amrex::MPMD::Initialize(argc, argv);
    amrex::Initialize(argc,argv,true,comm);
    {
        amrex::Print() << "Hello world from AMReX version " << amrex::Version() << "\n";
        // Number of data components at each grid point in the MultiFab
        int ncomp = 2;
        // How many grid cells in each direction over the problem domain
        int n_cell = 32;
        // How many grid cells are allowed in each direction over each box
        int max_grid_size = 16;
        // BoxArray -- Abstract Domain Setup
        // integer vector indicating the lower coordinate bounds
        amrex::IntVect dom_lo(0,0,0);
        // integer vector indicating the upper coordinate bounds
        amrex::IntVect dom_hi(n_cell-1, n_cell-1, n_cell-1);
        // box containing the coordinates of this domain
        amrex::Box domain(dom_lo, dom_hi);
        // will contain a list of boxes describing the problem domain
        amrex::BoxArray ba(domain);
        // chop the single grid into many small boxes
        ba.maxSize(max_grid_size);
        // Distribution Mapping
        amrex::DistributionMapping dm(ba);
        // Create an MPMD Copier that
        // sends the BoxArray information to the other (Python) application
        auto copr = amrex::MPMD::Copier(ba,dm,true);
        // Define MultiFab
        amrex::MultiFab mf(ba, dm, ncomp, 0);
        // Geometry -- Physical Properties for data on our domain
        amrex::RealBox real_box({0., 0., 0.}, {1., 1., 1.});
        amrex::Geometry geom(domain, &real_box);
        // Calculate Cell Sizes
        amrex::GpuArray<amrex::Real,3> dx = geom.CellSizeArray(); // dx[0] = dx, dx[1] = dy, dx[2] = dz
        // Fill only the first component of the MultiFab
        for (amrex::MFIter mfi(mf); mfi.isValid(); ++mfi) {
            const amrex::Box& bx = mfi.validbox();
            const amrex::Array4<amrex::Real>& mf_array = mf.array(mfi);

            amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE(int i, int j, int k){
                amrex::Real x = (i+0.5) * dx[0];
                amrex::Real y = (j+0.5) * dx[1];
                amrex::Real z = (k+0.5) * dx[2];
                amrex::Real r_squared = ((x-0.5)*(x-0.5)+(y-0.5)*(y-0.5)+(z-0.5)*(z-0.5))/0.01;

                mf_array(i,j,k,0) = 1.0 + std::exp(-r_squared);
            });
        }
        // Send ONLY the first (populated) MultiFab component to the other app
        copr.send(mf,0,1);
        // Receive ONLY the second MultiFab component from the other app
        copr.recv(mf,1,1);
        // Plot MultiFab Data
        WriteSingleLevelPlotfile("plt_cpp_001", mf, {"comp0","comp1"}, geom, 0., 0);
    }
    amrex::Finalize();
    amrex::MPMD::Finalize();
}
