Update OpenFOAM demo #32
Changes from 2 commits
@@ -0,0 +1,103 @@
#!/bin/bash

if [[ $EESSI_CVMFS_REPO == "/cvmfs/software.eessi.io" ]] && [[ $EESSI_VERSION == "2023.06" ]]; then
    module load OpenFOAM/10-foss-2023a
elif [[ $EESSI_CVMFS_REPO == "/cvmfs/pilot.eessi-hpc.org" ]] && [[ $EESSI_PILOT_VERSION == "2021.12" ]]; then
    module load OpenFOAM/8-foss-2020a
elif [[ $EESSI_CVMFS_REPO == "/cvmfs/pilot.eessi-hpc.org" ]] && [[ $EESSI_PILOT_VERSION == "2023.06" ]]; then
    echo "There is no demo for OpenFOAM in /cvmfs/pilot.eessi-hpc.org/versions/2023.06. Please use the EESSI production repo /cvmfs/software.eessi.io." >&2
    exit 1
else
    echo "Don't know which OpenFOAM module to load for ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}${EESSI_PILOT_VERSION}" >&2
    exit 1
fi
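
# The EESSI variables tested above are set by the EESSI init script; for the
# production repo that is (assuming the standard documented path):
#   source /cvmfs/software.eessi.io/versions/2023.06/init/bash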

which ssh &> /dev/null
if [ $? -ne 0 ]; then
    # if ssh is not available, set plm_rsh_agent to an empty value to avoid OpenMPI failing over it
    # that's OK, because this is a single-node run
    export OMPI_MCA_plm_rsh_agent=''
fi

source $FOAM_BASH

if [ -z "$EBROOTOPENFOAM" ]; then
    echo "ERROR: OpenFOAM module not loaded?" >&2
    exit 1
fi

# Allow users to define WORKDIR externally (for example, a shared filesystem for multi-node runs)
export WORKDIR="${WORKDIR:-/tmp/$USER/$$}"
echo "WORKDIR: $WORKDIR"
mkdir -p "$WORKDIR"
cd "$WORKDIR"
pwd
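
# Example usage (hypothetical path): point the demo at shared scratch space:
#   WORKDIR=/scratch/$USER/openfoam-demo ./<this-script>.sh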

# motorBike, 2M cells
BLOCKMESH_DIMENSIONS="100 40 40"
# motorBike, 150M cells
#BLOCKMESH_DIMENSIONS="200 80 80"

# X*Y*Z should be equal to the total number of available cores (across all nodes)
X=${X:-4}
Y=${Y:-2}
Z=${Z:-1}
# number of nodes
NODES=${NODES:-1}
# total number of cores
NP=$((X * Y * Z))
# cores per node
PPN=$(((NP + NODES - 1) / NODES))
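
# Worked example with the defaults above: NP = 4*2*1 = 8 ranks; with NODES=1,
# PPN = (8 + 1 - 1) / 1 = 8 ranks per node. The expression is a ceiling
# division: with NODES=3 it gives PPN = 3 (two nodes run 3 ranks, one runs 2).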

CASE_NAME=motorBike

if [ -d "$CASE_NAME" ]; then
    echo "$CASE_NAME already exists in $PWD!" >&2
    exit 1
fi

cp -r $WM_PROJECT_DIR/tutorials/incompressible/simpleFoam/motorBike $CASE_NAME
cd $CASE_NAME
pwd

# generate mesh
echo "generating mesh..."
foamDictionary -entry castellatedMeshControls.maxGlobalCells -set 200000000 system/snappyHexMeshDict
foamDictionary -entry blocks -set "( hex ( 0 1 2 3 4 5 6 7 ) ( $BLOCKMESH_DIMENSIONS ) simpleGrading ( 1 1 1 ) )" system/blockMeshDict
foamDictionary -entry numberOfSubdomains -set $NP system/decomposeParDict
foamDictionary -entry hierarchicalCoeffs.n -set "($X $Y $Z)" system/decomposeParDict

cp $WM_PROJECT_DIR/tutorials/resources/geometry/motorBike.obj.gz constant/triSurface/
surfaceFeatures 2>&1 | tee log.surfaceFeatures
blockMesh 2>&1 | tee log.blockMesh
decomposePar -copyZero 2>&1 | tee log.decomposePar
mpirun -np $NP -ppn $PPN -hostfile hostlist snappyHexMesh -parallel -overwrite 2>&1 | tee log.snappyHexMesh
Review thread on the mpirun snappyHexMesh line:

- Yes, I have removed this in my latest script, which @laraPPr is still testing. Her next commit should make things clear.
- In Satish's script for version 11 it is changed to this
- I don't really have a problem with this (since it will soon be lost to history anyway), but the file
reconstructParMesh -constant
rm -rf ./processor*
renumberMesh -constant -overwrite 2>&1 | tee log.renumberMesh

# decompose mesh
echo "decomposing..."
foamDictionary -entry numberOfSubdomains -set $NP system/decomposeParDict
foamDictionary -entry method -set multiLevel system/decomposeParDict
foamDictionary -entry multiLevelCoeffs -set "{}" system/decomposeParDict
foamDictionary -entry scotchCoeffs -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level0 -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level0.numberOfSubdomains -set $NODES system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level0.method -set scotch system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level1 -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level1.numberOfSubdomains -set $PPN system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level1.method -set scotch system/decomposeParDict
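
# With the defaults above, these edits yield a decomposeParDict roughly like
# the following sketch (not the verbatim file):
#   numberOfSubdomains 8;
#   method             multiLevel;
#   multiLevelCoeffs
#   {
#       level0 { numberOfSubdomains 1; method scotch; }  // across nodes
#       level1 { numberOfSubdomains 8; method scotch; }  // within each node
#   }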

decomposePar -copyZero 2>&1 | tee log.decomposeParMultiLevel

# run simulation
echo "running..."
# limit run to first 200 time steps
foamDictionary -entry endTime -set 200 system/controlDict
foamDictionary -entry writeInterval -set 1000 system/controlDict
foamDictionary -entry runTimeModifiable -set "false" system/controlDict
foamDictionary -entry functions -set "{}" system/controlDict
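# Note: writeInterval (1000) is larger than endTime (200), so no intermediate
# time directories are written during this short benchmark run.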

mpirun --oversubscribe -np $NP potentialFoam -parallel 2>&1 | tee log.potentialFoam
time mpirun --oversubscribe -np $NP simpleFoam -parallel 2>&1 | tee log.simpleFoam

echo "cleanup..."
rm -rf $WORKDIR
@@ -0,0 +1,99 @@
#!/bin/bash

which ssh &> /dev/null
if [ $? -ne 0 ]; then
    # if ssh is not available, set plm_rsh_agent to an empty value to avoid OpenMPI failing over it
    # that's OK, because this is a single-node run
    export OMPI_MCA_plm_rsh_agent=''
fi

source $FOAM_BASH

if [ -z "$EBROOTOPENFOAM" ]; then
    echo "ERROR: OpenFOAM module not loaded?" >&2
    exit 1
fi

# Allow users to define WORKDIR externally (for example, a shared filesystem for multi-node runs)
export WORKDIR="${WORKDIR:-/tmp/$USER/$$}"
echo "WORKDIR: $WORKDIR"
mkdir -p "$WORKDIR"
cd "$WORKDIR"
pwd

# motorBike, 2M cells
BLOCKMESH_DIMENSIONS="100 40 40"
# motorBike, 150M cells
#BLOCKMESH_DIMENSIONS="200 80 80"

# X*Y*Z should be equal to the total number of available cores (across all nodes)
X=${X:-4}
Y=${Y:-2}
Z=${Z:-1}
# number of nodes
NODES=${NODES:-1}
# total number of cores
NP=$((X * Y * Z))
# cores per node
PPN=$(((NP + NODES - 1) / NODES))

CASE_NAME=motorBike

if [ -d "$CASE_NAME" ]; then
    echo "$CASE_NAME already exists in $PWD!" >&2
    exit 1
fi

cp -r $WM_PROJECT_DIR/tutorials/incompressibleFluid/motorBike $CASE_NAME
chmod -R u+w $CASE_NAME
cd $CASE_NAME/$CASE_NAME
pwd
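
# Note: the OpenFOAM 11 tutorial layout apparently nests a motorBike case
# inside the motorBike directory, hence the doubled path segment above.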

# generate mesh
# foamDictionary sub-entries are accessed here with / (<main entry>/<sub entry>) rather than . (<main entry>.<sub entry>)
echo "generating mesh..."
# 200 million maxGlobalCells is too big for 8 processes, so reduce it to 8 million
foamDictionary -entry castellatedMeshControls/maxGlobalCells -set 8000000 system/snappyHexMeshDict
foamDictionary -entry blocks -set "( hex ( 0 1 2 3 4 5 6 7 ) ( $BLOCKMESH_DIMENSIONS ) simpleGrading ( 1 1 1 ) )" system/blockMeshDict
foamDictionary -entry numberOfSubdomains -set $NP system/decomposeParDict
foamDictionary -entry hierarchicalCoeffs/n -set "($X $Y $Z)" system/decomposeParDict
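
# To double-check an edit, foamDictionary can also print an entry back, e.g.:
#   foamDictionary -entry castellatedMeshControls/maxGlobalCells -value system/snappyHexMeshDict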

# TODO: this should be copied to constant/geometry rather than constant/triSurface/
cp $WM_PROJECT_DIR/tutorials/resources/geometry/motorBike.obj.gz constant/triSurface/
# surfaceFeaturesDict is not available here, so surfaceFeatures is skipped
# surfaceFeatures 2>&1 | tee log.surfaceFeatures
blockMesh 2>&1 | tee log.blockMesh
decomposePar -copyZero 2>&1 | tee log.decomposePar
mpirun -np $NP -ppn $PPN -hostfile hostlist snappyHexMesh -parallel -overwrite 2>&1 | tee log.snappyHexMesh
reconstructPar -constant
rm -rf ./processor*
renumberMesh -constant -overwrite 2>&1 | tee log.renumberMesh

# decompose mesh
echo "decomposing..."
foamDictionary -entry numberOfSubdomains -set $NP system/decomposeParDict
# the keyword 'method' has been renamed to 'decomposer' in this version
foamDictionary -entry decomposer -set multiLevel system/decomposeParDict
foamDictionary -entry multiLevelCoeffs -set "{}" system/decomposeParDict
foamDictionary -entry scotchCoeffs -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs/level0 -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs/level0/numberOfSubdomains -set $NODES system/decomposeParDict
foamDictionary -entry multiLevelCoeffs/level0/method -set scotch system/decomposeParDict
foamDictionary -entry multiLevelCoeffs/level1 -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs/level1/numberOfSubdomains -set $PPN system/decomposeParDict
foamDictionary -entry multiLevelCoeffs/level1/method -set scotch system/decomposeParDict
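
# level0 splits the mesh across nodes ($NODES pieces) and level1 splits each
# node's piece across its cores ($PPN pieces); both levels use scotch.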

decomposePar -copyZero 2>&1 | tee log.decomposeParMultiLevel

# run simulation
echo "running..."
# limit run to first 200 time steps
foamDictionary -entry endTime -set 200 system/controlDict
foamDictionary -entry writeInterval -set 1000 system/controlDict
foamDictionary -entry runTimeModifiable -set "false" system/controlDict
foamDictionary -entry functions -set "{}" system/controlDict
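
# --oversubscribe lets Open MPI start more ranks than the slots it detects,
# so the demo still runs when NP exceeds the local core count.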

mpirun --oversubscribe -np $NP potentialFoam -parallel 2>&1 | tee log.potentialFoam
time mpirun --oversubscribe -np $NP simpleFoam -parallel 2>&1 | tee log.simpleFoam

echo "cleanup..."
rm -rf $WORKDIR
@@ -1,96 +1,11 @@
#!/bin/bash

module load OpenFOAM/8-foss-2020a

which ssh &> /dev/null
if [ $? -ne 0 ]; then
    # if ssh is not available, set plm_rsh_agent to an empty value to avoid OpenMPI failing over it
    # that's OK, because this is a single-node run
    export OMPI_MCA_plm_rsh_agent=''
fi

source $FOAM_BASH

if [ -z "$EBROOTOPENFOAM" ]; then
    echo "ERROR: OpenFOAM module not loaded?" >&2
    exit 1
fi

# Allow users to define WORKDIR externally (for example, a shared filesystem for multi-node runs)
export WORKDIR="${WORKDIR:-/tmp/$USER/$$}"
echo "WORKDIR: $WORKDIR"
mkdir -p "$WORKDIR"
cd "$WORKDIR"
pwd

# motorBike, 2M cells
BLOCKMESH_DIMENSIONS="100 40 40"
# motorBike, 150M cells
#BLOCKMESH_DIMENSIONS="200 80 80"

# X*Y*Z should be equal to the total number of available cores (across all nodes)
X=${X:-4}
Y=${Y:-2}
Z=${Z:-1}
# number of nodes
NODES=${NODES:-1}
# total number of cores
NP=$((X * Y * Z))
# cores per node
PPN=$(((NP + NODES - 1) / NODES))

CASE_NAME=motorBike

if [ -d "$CASE_NAME" ]; then
    echo "$CASE_NAME already exists in $PWD!" >&2
    exit 1
fi

cp -r $WM_PROJECT_DIR/tutorials/incompressible/simpleFoam/motorBike $CASE_NAME
cd $CASE_NAME
pwd

# generate mesh
echo "generating mesh..."
foamDictionary -entry castellatedMeshControls.maxGlobalCells -set 200000000 system/snappyHexMeshDict
foamDictionary -entry blocks -set "( hex ( 0 1 2 3 4 5 6 7 ) ( $BLOCKMESH_DIMENSIONS ) simpleGrading ( 1 1 1 ) )" system/blockMeshDict
foamDictionary -entry numberOfSubdomains -set $NP system/decomposeParDict
foamDictionary -entry hierarchicalCoeffs.n -set "($X $Y $Z)" system/decomposeParDict

cp $WM_PROJECT_DIR/tutorials/resources/geometry/motorBike.obj.gz constant/triSurface/
surfaceFeatures 2>&1 | tee log.surfaceFeatures
blockMesh 2>&1 | tee log.blockMesh
decomposePar -copyZero 2>&1 | tee log.decomposePar
mpirun -np $NP -ppn $PPN -hostfile hostlist snappyHexMesh -parallel -overwrite 2>&1 | tee log.snappyHexMesh
reconstructParMesh -constant
rm -rf ./processor*
renumberMesh -constant -overwrite 2>&1 | tee log.renumberMesh

# decompose mesh
echo "decomposing..."
foamDictionary -entry numberOfSubdomains -set $NP system/decomposeParDict
foamDictionary -entry method -set multiLevel system/decomposeParDict
foamDictionary -entry multiLevelCoeffs -set "{}" system/decomposeParDict
foamDictionary -entry scotchCoeffs -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level0 -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level0.numberOfSubdomains -set $NODES system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level0.method -set scotch system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level1 -set "{}" system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level1.numberOfSubdomains -set $PPN system/decomposeParDict
foamDictionary -entry multiLevelCoeffs.level1.method -set scotch system/decomposeParDict

decomposePar -copyZero 2>&1 | tee log.decomposeParMultiLevel

# run simulation
echo "running..."
# limit run to first 200 time steps
foamDictionary -entry endTime -set 200 system/controlDict
foamDictionary -entry writeInterval -set 1000 system/controlDict
foamDictionary -entry runTimeModifiable -set "false" system/controlDict
foamDictionary -entry functions -set "{}" system/controlDict

mpirun --oversubscribe -np $NP potentialFoam -parallel 2>&1 | tee log.potentialFoam
time mpirun --oversubscribe -np $NP simpleFoam -parallel 2>&1 | tee log.simpleFoam

echo "cleanup..."
rm -rf $WORKDIR
if [[ $EESSI_CVMFS_REPO == "/cvmfs/software.eessi.io" ]] && [[ $EESSI_VERSION == "2023.06" ]]; then
    module load OpenFOAM/11-foss-2023a
    ./bike_OpenFOAM_11.sh
elif [[ $EESSI_CVMFS_REPO == "/cvmfs/pilot.eessi-hpc.org" ]] && [[ $EESSI_PILOT_VERSION == "2021.12" ]]; then
    module load OpenFOAM/8-foss-2020a
    ./bike_OpenFOAM8.sh
elif [[ $EESSI_CVMFS_REPO == "/cvmfs/pilot.eessi-hpc.org" ]] && [[ $EESSI_PILOT_VERSION == "2023.06" ]]; then
    echo "There is no demo for OpenFOAM in /cvmfs/pilot.eessi-hpc.org/versions/2023.06. Please use the EESSI production repo /cvmfs/software.eessi.io." >&2
    exit 1
else
    echo "Don't know which OpenFOAM module to load for ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}${EESSI_PILOT_VERSION}" >&2
    exit 1
fi
Review thread on the hostfile passed to mpirun:

- A hostfile is a must here; without it the mpirun command would fail.
- Why is this true here but not below?
- Rather than try to support multi-node in the demos, I think it would be better to just add a disclaimer that they are prepared for single-node use cases but could be extended to multi-node. If you submit this job via SLURM, you wouldn't need the complication of a hostfile.
- As this stands, I don't think it would work, would it, since there is no hostlist file?
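
One way to satisfy the hostfile requirement under SLURM would be to generate it from the allocation before calling mpirun (a sketch, assuming scontrol is available in the job environment; "hostlist" is the filename the script already passes to -hostfile):

    # write one allocated hostname per line, as mpirun's -hostfile expects
    scontrol show hostnames "$SLURM_JOB_NODELIST" > hostlist
    # outside SLURM, fall back to the local host for a single-node run
    [ -s hostlist ] || hostname > hostlist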