Update OpenFOAM demo #32

Merged (11 commits, May 8, 2024)
OpenFOAM/README.md (1 addition, 1 deletion)

@@ -2,6 +2,6 @@ OpenFOAM example

 motorBike tutorial case
 
-can be scaled up/down to N cores by changing X, Y, Z in run.sh
+can be scaled up/down to N cores by changing X, Y, Z in the bike scripts.
 
 Runtime (2M cells): ~5min on 8 cores (Intel Skylake)
OpenFOAM/bike_OpenFOAM8.sh (new file, 103 additions)

@@ -0,0 +1,103 @@
#!/bin/bash

if [[ $EESSI_CVMFS_REPO == "/cvmfs/software.eessi.io" ]] && [[ $EESSI_VERSION == "2023.06" ]]; then
    module load OpenFOAM/10-foss-2023a
elif [[ $EESSI_CVMFS_REPO == "/cvmfs/pilot.eessi-hpc.org" ]] && [[ $EESSI_PILOT_VERSION == "2021.12" ]]; then
    module load OpenFOAM/8-foss-2020a
elif [[ $EESSI_CVMFS_REPO == "/cvmfs/pilot.eessi-hpc.org" ]] && [[ $EESSI_PILOT_VERSION == "2023.06" ]]; then
    echo "There is no demo for OpenFOAM in /cvmfs/pilot.eessi-hpc.org/versions/2023.06. Please use the EESSI production repo /cvmfs/software.eessi.io." >&2
    exit 1
else
    echo "Don't know which OpenFOAM module to load for ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}${EESSI_PILOT_VERSION}" >&2
    exit 1
fi
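
# For reference, illustrative values for the variables checked above
# (assuming an initialized EESSI session; they are not set by this script):
#   EESSI_CVMFS_REPO=/cvmfs/software.eessi.io
#   EESSI_VERSION=2023.06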


if ! command -v ssh &> /dev/null; then
    # if ssh is not available, set plm_rsh_agent to an empty value so that
    # Open MPI does not fail trying to use it; that's OK, because this is a single-node run
    export OMPI_MCA_plm_rsh_agent=''
fi

source "$FOAM_BASH"

if [ -z "$EBROOTOPENFOAM" ]; then
    echo "ERROR: OpenFOAM module not loaded?" >&2
    exit 1
fi

# Allow users to define the WORKDIR externally (for example a shared FS for multinode runs)
export WORKDIR="${WORKDIR:-/tmp/$USER/$$}"
echo "WORKDIR: $WORKDIR"
mkdir -p "$WORKDIR"
cd "$WORKDIR" || exit 1
pwd

# motorBike, 2M cells
BLOCKMESH_DIMENSIONS="100 40 40"
# motorBike, 150M cells
#BLOCKMESH_DIMENSIONS="200 80 80"
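# Note: these are the background blockMesh divisions; 100*40*40 = 160,000 base
# cells, which snappyHexMesh refinement then grows to the quoted ~2M final cells.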

# X*Y*Z should be equal to total number of available cores (across all nodes)
X=${X:-4}
Y=${Y:-2}
Z=${Z:-1}
# number of nodes
NODES=${NODES:-1}
# total number of cores
NP=$((X * Y * Z))
# cores per node (ceiling division, so that NODES*PPN >= NP)
PPN=$(( (NP + NODES - 1) / NODES ))
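# Worked example (sketch): with the defaults X=4, Y=2, Z=1, NODES=1 we get
# NP=8 and PPN=(8+1-1)/1=8; with NODES=3 the ceiling division gives
# PPN=(8+3-1)/3=3, i.e. 3 ranks per node are enough to host all 8 ranks.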

CASE_NAME=motorBike

if [ -d "$CASE_NAME" ]; then
    echo "$CASE_NAME already exists in $PWD!" >&2
    exit 1
fi

cp -r $WM_PROJECT_DIR/tutorials/incompressible/simpleFoam/motorBike $CASE_NAME
cd $CASE_NAME
pwd

# generate mesh
echo "generating mesh..."
foamDictionary -entry castellatedMeshControls.maxGlobalCells -set 200000000 system/snappyHexMeshDict
foamDictionary -entry blocks -set "( hex ( 0 1 2 3 4 5 6 7 ) ( $BLOCKMESH_DIMENSIONS ) simpleGrading ( 1 1 1 ) )" system/blockMeshDict
foamDictionary -entry numberOfSubdomains -set $NP system/decomposeParDict
foamDictionary -entry hierarchicalCoeffs.n -set "($X $Y $Z)" system/decomposeParDict
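
# The entries set above end up in decomposeParDict as, roughly (illustrative
# sketch for the defaults X=4, Y=2, Z=1):
#   numberOfSubdomains 8;
#   hierarchicalCoeffs { n (4 2 1); }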

cp $WM_PROJECT_DIR/tutorials/resources/geometry/motorBike.obj.gz constant/triSurface/
surfaceFeatures 2>&1 | tee log.surfaceFeatures
blockMesh 2>&1 | tee log.blockMesh
decomposePar -copyZero 2>&1 | tee log.decomposePar
mpirun -np $NP -ppn $PPN -hostfile hostlist snappyHexMesh -parallel -overwrite 2>&1 | tee log.snappyHexMesh

Comment: A hostfile is a must here; without it the mpirun command would fail.

Member: Why is this true here but not below?

Member: Rather than try to support multi-node in the demos, I think it would be better to just add a disclaimer that they are prepared for single-node use cases but could be extended to multi-node ones. If you submit this job via SLURM, you wouldn't need the complication of a hostfile.

Member: As this stands, I don't think it would work, would it, since there is no hostlist file?

Member: Suggested change:
-mpirun -np $NP -ppn $PPN -hostfile hostlist snappyHexMesh -parallel -overwrite 2>&1 | tee log.snappyHexMesh
+mpirun -np $NP --oversubscribe snappyHexMesh -parallel -overwrite 2>&1 | tee log.snappyHexMesh

Comment: Yes, I have removed this in my latest script, which @laraPPr is still testing. Her next commit should make things clear.

Contributor Author: In Satish's script for version 11 this was changed to mpirun -np $NP -npernode $PPN snappyHexMesh -parallel -overwrite 2>&1 | tee log.snappyHexMesh, and that one now works correctly. I'm also less inclined to change this one because it still does something and is only there for the OpenFOAM in the pilot repo.

Member: I don't really have a problem with this (since it will soon be lost to history anyway), but the file hostlist does not exist as far as I can see, so it really does nothing (or is it just a workaround for the use of -ppn?).

@satishskamath (May 2, 2024): Actually, -ppn is deprecated and -npernode is to be used instead with mpirun. Indeed, the hostlist thing didn't make any sense to me either, so I also removed it, as @ocaisa did. We could also get rid of the --oversubscribe option and just declare that one needs at least 8 cores (I would say we even need more) to run this script.

reconstructParMesh -constant
rm -rf ./processor*
renumberMesh -constant -overwrite 2>&1 | tee log.renumberMesh

# decompose mesh
echo "decomposing..."
foamDictionary -entry numberOfSubdomains -set $NP system/decomposeParDict
foamDictionary -entry method -set multiLevel system/decomposeParDict
foamDictionary -entry multiLevelCoeffs -set "{}" system/decomposeParDict
foamDictionary -entry scotchCoeffs -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level0 -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level0.numberOfSubdomains -set $NODES system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level0.method -set scotch system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level1 -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level1.numberOfSubdomains -set $PPN system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level1.method -set scotch system/decomposeParDict
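
# Illustrative sketch of the resulting multi-level decomposition entries
# (for the defaults NODES=1, PPN=8):
#   numberOfSubdomains 8;
#   method multiLevel;
#   multiLevelCoeffs
#   {
#       level0 { numberOfSubdomains 1; method scotch; }
#       level1 { numberOfSubdomains 8; method scotch; }
#   }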

decomposePar -copyZero 2>&1 | tee log.decomposeParMultiLevel

# run simulation
echo "running..."
# limit run to first 200 time steps
foamDictionary -entry endTime -set 200 system/controlDict
foamDictionary -entry writeInterval -set 1000 system/controlDict
foamDictionary -entry runTimeModifiable -set "false" system/controlDict
foamDictionary -entry functions -set "{}" system/controlDict
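
# Sanity check (a sketch): foamDictionary without -set prints an entry, e.g.
#   foamDictionary -entry endTime system/controlDict    # expected: endTime 200;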

mpirun --oversubscribe -np $NP potentialFoam -parallel 2>&1 | tee log.potentialFoam
time mpirun --oversubscribe -np $NP simpleFoam -parallel 2>&1 | tee log.simpleFoam

echo "cleanup..."
rm -rf $WORKDIR
OpenFOAM/bike_OpenFOAM_11.sh (new file, 99 additions)

@@ -0,0 +1,99 @@
#!/bin/bash

if ! command -v ssh &> /dev/null; then
    # if ssh is not available, set plm_rsh_agent to an empty value so that
    # Open MPI does not fail trying to use it; that's OK, because this is a single-node run
    export OMPI_MCA_plm_rsh_agent=''
fi

source "$FOAM_BASH"

if [ -z "$EBROOTOPENFOAM" ]; then
    echo "ERROR: OpenFOAM module not loaded?" >&2
    exit 1
fi

# Allow users to define the WORKDIR externally (for example a shared FS for multinode runs)
export WORKDIR="${WORKDIR:-/tmp/$USER/$$}"
echo "WORKDIR: $WORKDIR"
mkdir -p "$WORKDIR"
cd "$WORKDIR" || exit 1
pwd

# motorBike, 2M cells
BLOCKMESH_DIMENSIONS="100 40 40"
# motorBike, 150M cells
#BLOCKMESH_DIMENSIONS="200 80 80"

# X*Y*Z should be equal to total number of available cores (across all nodes)
X=${X:-4}
Y=${Y:-2}
Z=${Z:-1}
# number of nodes
NODES=${NODES:-1}
# total number of cores
NP=$((X * Y * Z))
# cores per node (ceiling division, so that NODES*PPN >= NP)
PPN=$(( (NP + NODES - 1) / NODES ))

CASE_NAME=motorBike

if [ -d "$CASE_NAME" ]; then
    echo "$CASE_NAME already exists in $PWD!" >&2
    exit 1
fi

cp -r $WM_PROJECT_DIR/tutorials/incompressibleFluid/motorBike $CASE_NAME
chmod -R u+w $CASE_NAME
cd $CASE_NAME/$CASE_NAME
pwd

# generate mesh
# All Foam dictionary sub-entries are accessed using / (<main entry>/<sub entry>) rather than . (<main entry>.<sub entry>)
echo "generating mesh..."
# 200 million maxGlobalCells is too big for 8 processes, so it is reduced to 8 million here.
foamDictionary -entry castellatedMeshControls/maxGlobalCells -set 8000000 system/snappyHexMeshDict
foamDictionary -entry blocks -set "( hex ( 0 1 2 3 4 5 6 7 ) ( $BLOCKMESH_DIMENSIONS ) simpleGrading ( 1 1 1 ) )" system/blockMeshDict
foamDictionary -entry numberOfSubdomains -set $NP system/decomposeParDict
foamDictionary -entry hierarchicalCoeffs/n -set "($X $Y $Z)" system/decomposeParDict
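# e.g. reading an entry back uses the same slash syntax (illustrative):
#   foamDictionary -entry castellatedMeshControls/maxGlobalCells system/snappyHexMeshDict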

# TODO: this should be placed in constant/geometry rather than constant/triSurface/
cp $WM_PROJECT_DIR/tutorials/resources/geometry/motorBike.obj.gz constant/triSurface/
# surfaceFeaturesDict is not available, so the surfaceFeatures step is skipped.
# surfaceFeatures 2>&1 | tee log.surfaceFeatures
blockMesh 2>&1 | tee log.blockMesh
decomposePar -copyZero 2>&1 | tee log.decomposePar
mpirun -np $NP -ppn $PPN -hostfile hostlist snappyHexMesh -parallel -overwrite 2>&1 | tee log.snappyHexMesh
reconstructPar -constant
rm -rf ./processor*
renumberMesh -constant -overwrite 2>&1 | tee log.renumberMesh

# decompose mesh
echo "decomposing..."
foamDictionary -entry numberOfSubdomains -set $NP system/decomposeParDict
foamDictionary -entry decomposer -set multiLevel system/decomposeParDict # keyword method changed to decomposer
foamDictionary -entry multiLevelCoeffs -set "{}" system/decomposeParDict
foamDictionary -entry scotchCoeffs -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs/level0 -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs/level0/numberOfSubdomains -set $NODES system/decomposeParDict
foamDictionary -entry multiLevelCoeffs/level0/method -set scotch system/decomposeParDict
foamDictionary -entry multiLevelCoeffs/level1 -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs/level1/numberOfSubdomains -set $PPN system/decomposeParDict
foamDictionary -entry multiLevelCoeffs/level1/method -set scotch system/decomposeParDict

decomposePar -copyZero 2>&1 | tee log.decomposeParMultiLevel

# run simulation
echo "running..."
# limit run to first 200 time steps
foamDictionary -entry endTime -set 200 system/controlDict
foamDictionary -entry writeInterval -set 1000 system/controlDict
foamDictionary -entry runTimeModifiable -set "false" system/controlDict
foamDictionary -entry functions -set "{}" system/controlDict

mpirun --oversubscribe -np $NP potentialFoam -parallel 2>&1 | tee log.potentialFoam
time mpirun --oversubscribe -np $NP simpleFoam -parallel 2>&1 | tee log.simpleFoam

echo "cleanup..."
rm -rf $WORKDIR
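
For a standalone run of this script the module has to be loaded first (run.sh,
below, otherwise takes care of that). A hypothetical invocation with the
tunables described above; the /scratch path is only an example:

module load OpenFOAM/11-foss-2023a
WORKDIR=/scratch/$USER/openfoam X=4 Y=2 Z=1 NODES=1 ./bike_OpenFOAM_11.sh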
OpenFOAM/run.sh (9 additions, 94 deletions)

@@ -1,96 +1,11 @@
 #!/bin/bash
 
-module load OpenFOAM/8-foss-2020a
-
-which ssh &> /dev/null
-if [ $? -ne 0 ]; then
-# if ssh is not available, set plm_rsh_agent to empty value to avoid OpenMPI failing over it
-# that's OK, because this is a single-node run
-export OMPI_MCA_plm_rsh_agent=''
-fi
-
-source $FOAM_BASH
-
-if [ -z $EBROOTOPENFOAM ]; then
-echo "ERROR: OpenFOAM module not loaded?" >&2
-exit 1
-fi
-
-# Allow users to define the WORKDIR externally (for example a shared FS for multinode runs)
-export WORKDIR="${WORKDIR:-/tmp/$USER/$$}"
-echo "WORKDIR: $WORKDIR"
-mkdir -p $WORKDIR
-cd $WORKDIR
-pwd
-
-# motorBike, 2M cells
-BLOCKMESH_DIMENSIONS="100 40 40"
-# motorBike, 150M cells
-#BLOCKMESH_DIMENSIONS="200 80 80"
-
-# X*Y*Z should be equal to total number of available cores (across all nodes)
-X=${X:-4}
-Y=${Y:-2}
-Z=${Z:-1}
-# number of nodes
-NODES=${NODES:-1}
-# total number of cores
-NP=$((X * Y * Z))
-# cores per node
-PPN=$(((NP + NODES -1)/NODES))
-
-CASE_NAME=motorBike
-
-if [ -d $CASE_NAME ]; then
-echo "$CASE_NAME already exists in $PWD!" >&2
-exit 1
-fi
-
-cp -r $WM_PROJECT_DIR/tutorials/incompressible/simpleFoam/motorBike $CASE_NAME
-cd $CASE_NAME
-pwd
-
-# generate mesh
-echo "generating mesh..."
-foamDictionary -entry castellatedMeshControls.maxGlobalCells -set 200000000 system/snappyHexMeshDict
-foamDictionary -entry blocks -set "( hex ( 0 1 2 3 4 5 6 7 ) ( $BLOCKMESH_DIMENSIONS ) simpleGrading ( 1 1 1 ) )" system/blockMeshDict
-foamDictionary -entry numberOfSubdomains -set $NP system/decomposeParDict
-foamDictionary -entry hierarchicalCoeffs.n -set "($X $Y $Z)" system/decomposeParDict
-
-cp $WM_PROJECT_DIR/tutorials/resources/geometry/motorBike.obj.gz constant/triSurface/
-surfaceFeatures 2>&1 | tee log.surfaceFeatures
-blockMesh 2>&1 | tee log.blockMesh
-decomposePar -copyZero 2>&1 | tee log.decomposePar
-mpirun -np $NP -ppn $PPN -hostfile hostlist snappyHexMesh -parallel -overwrite 2>&1 | tee log.snappyHexMesh
-reconstructParMesh -constant
-rm -rf ./processor*
-renumberMesh -constant -overwrite 2>&1 | tee log.renumberMesh
-
-# decompose mesh
-echo "decomposing..."
-foamDictionary -entry numberOfSubdomains -set $NP system/decomposeParDict
-foamDictionary -entry method -set multiLevel system/decomposeParDict
-foamDictionary -entry multiLevelCoeffs -set "{}" system/decomposeParDict
-foamDictionary -entry scotchCoeffs -set "{}" system/decomposeParDict
-foamDictionary -entry multiLevelCoeffs.level0 -set "{}" system/decomposeParDict
-foamDictionary -entry multiLevelCoeffs.level0.numberOfSubdomains -set $NODES system/decomposeParDict
-foamDictionary -entry multiLevelCoeffs.level0.method -set scotch system/decomposeParDict
-foamDictionary -entry multiLevelCoeffs.level1 -set "{}" system/decomposeParDict
-foamDictionary -entry multiLevelCoeffs.level1.numberOfSubdomains -set $PPN system/decomposeParDict
-foamDictionary -entry multiLevelCoeffs.level1.method -set scotch system/decomposeParDict
-
-decomposePar -copyZero 2>&1 | tee log.decomposeParMultiLevel
-
-# run simulation
-echo "running..."
-# limit run to first 200 time steps
-foamDictionary -entry endTime -set 200 system/controlDict
-foamDictionary -entry writeInterval -set 1000 system/controlDict
-foamDictionary -entry runTimeModifiable -set "false" system/controlDict
-foamDictionary -entry functions -set "{}" system/controlDict
-
-mpirun --oversubscribe -np $NP potentialFoam -parallel 2>&1 | tee log.potentialFoam
-time mpirun --oversubscribe -np $NP simpleFoam -parallel 2>&1 | tee log.simpleFoam
-
-echo "cleanup..."
-rm -rf $WORKDIR
+if [[ $EESSI_CVMFS_REPO == "/cvmfs/software.eessi.io" ]] && [[ $EESSI_VERSION == "2023.06" ]]; then module load OpenFOAM/11-foss-2023a
+./bike_OpenFOAM_11.sh
+elif [[ $EESSI_CVMFS_REPO == "/cvmfs/pilot.eessi-hpc.org" ]] && [[ $EESSI_PILOT_VERSION == "2021.12" ]]; then module load OpenFOAM/8-foss-2020a
+./bike_OpenFOAM8.sh
+elif [[ $EESSI_CVMFS_REPO == "/cvmfs/pilot.eessi-hpc.org" ]] && [[ $EESSI_PILOT_VERSION == "2023.06" ]]
+then echo "There is no demo for OpenFOAM in /cvmfs/pilot.eessi-hpc.org/versions/2023.06. Please use the EESSI production repo /cvmfs/software.eessi.io." >&2
+exit 1
+else echo "Don't know which OpenFOAM module to load for ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}${EESSI_PILOT_VERSION}" >&2; exit 1
+fi
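
With this change run.sh is only a dispatcher around the two bike scripts. A
hypothetical session on the production repo (the init line is the standard
EESSI setup, assumed here rather than part of this PR):

source /cvmfs/software.eessi.io/versions/2023.06/init/bash
./run.sh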