High-Performance Finite Element Method Solvers with CPU/GPU Parallelism
ArcaneFEM provides Finite Element Method (FEM) solvers built on the Arcane Framework. Designed for modern HPC environments, these solvers deliver optimized performance across diverse parallel computing architectures: multi-CPU, multi-GPU, and hybrid CPU-GPU configurations.
- ArcaneFEM Documentation - User guide (still in progress, but usable)
- GitHub Repository - Source code and issue tracking
- Arcane Framework Docs - Core framework reference
- Flexible Parallelism: Seamlessly run on CPUs, GPUs, or heterogeneous CPU-GPU systems
- Multiple Physics Modules: Includes solvers for elasticity, heat transfer, and more
- Modern Visualization: Native support for ParaView via VTKHDF5 and EnSight formats
Detailed Installation procedure
- Arcane Framework - Core parallel computational framework
- Linear Solver Library (at least one):
  - HYPRE (recommended for CPU and GPU parallelism)
  - PETSc (recommended for CPU and GPU parallelism)
  - Trilinos
Tip: Refer to the Arcane Installation Guide for detailed compilation instructions. Configure Arcane with HYPRE, PETSc, or Trilinos support to unlock ArcaneFEM's full capabilities.
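As a rough sketch (the paths are placeholders and the exact steps depend on your Arcane version; see the Arcane Installation Guide for the authoritative procedure), making the solver libraries discoverable when configuring Arcane can be as simple as extending `CMAKE_PREFIX_PATH`:

```bash
# Hypothetical Arcane configure step: all paths below are placeholders.
# CMAKE_PREFIX_PATH is the standard CMake mechanism for locating
# dependencies such as HYPRE, PETSc, or Trilinos.
cmake -S /path/to/arcane/sources -B /path/to/arcane/build \
      -DCMAKE_PREFIX_PATH="/path/to/hypre;/path/to/petsc"
cmake --build /path/to/arcane/build
```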
Assuming the Arcane Framework is already installed:

```bash
# Configure paths
export ARCANE_INSTALL_DIR=/path/to/arcane/installation
export ARCANEFEM_INSTALL_DIR=${HOME}/ArcaneFEM/install
export SOURCE_PATH=/path/to/ArcaneFEM/sources
# Configure build with CMake
cmake -S ${SOURCE_PATH} \
      -B ${ARCANEFEM_INSTALL_DIR} \
      -DCMAKE_PREFIX_PATH=${ARCANE_INSTALL_DIR}
# Compile and install
cmake --build ${ARCANEFEM_INSTALL_DIR}
```
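Compilation can be sped up with CMake's standard `--parallel` flag (a generic CMake option, nothing ArcaneFEM-specific), for example:

```bash
# Use one build job per available core (nproc is GNU coreutils)
cmake --build ${ARCANEFEM_INSTALL_DIR} --parallel $(nproc)
```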
After compilation, navigate to a solver module. For example, the elasticity solver:

```bash
cd ${ARCANEFEM_INSTALL_DIR}/modules/elasticity
```

Tip: Explore other physics modules in ${ARCANEFEM_INSTALL_DIR}/modules/.
Each module includes example input files in its inputs/ directory.
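For instance, to see which example cases ship with the elasticity module used below:

```bash
# List the bundled example .arc case files for the elasticity solver
ls ${ARCANEFEM_INSTALL_DIR}/modules/elasticity/inputs/
```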
- Sequential (Single Core)

  ```bash
  ./Elasticity ./inputs/Test.Elasticity.arc
  ```

- Parallel CPU (Domain Decomposition)

  ```bash
  # Run on 4 CPU cores
  mpirun -n 4 ./Elasticity ./inputs/Test.Elasticity.arc
  ```

- GPU Accelerated
  ```bash
  # Single NVIDIA GPU
  mpirun -n 1 ./Elasticity -A,AcceleratorRuntime=cuda ./inputs/Test.Elasticity.arc

  # Single AMD GPU
  mpirun -n 1 ./Elasticity -A,AcceleratorRuntime=hip ./inputs/Test.Elasticity.arc
  ```

  Note: The `-A,AcceleratorRuntime=...` option is an Arcane program argument, so it goes after the executable. Replace `1` with the number of available GPUs (see the multi-GPU sketch after this list).
- Hybrid CPU-GPU

  ```bash
  # 8 CPU cores + 1 GPU
  mpirun -n 8 ./Elasticity -A,AcceleratorRuntime=cuda ./inputs/Test.Elasticity.arc
  ```

For advanced runtime options, consult the Arcane Launcher Documentation.
ArcaneFEM outputs results in modern visualization formats:
- VTKHDF5 (`*.hdf`) - Recommended for large datasets
- EnSight (`*.case`) - Legacy format support
Results are written to the output/ directory within each module.
```bash
# Open VTKHDF5 output
paraview ${ARCANEFEM_INSTALL_DIR}/modules/elasticity/output/depouillement/vtkhdfv2/Mesh0.hdf
```

Requirements:
- ParaView ≥ 5.12
- Arcane compiled with MPI support for HDF5
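If the HDF5 command-line tools are installed (the `h5ls` utility ships with HDF5), you can also take a quick look at the file layout without opening ParaView; the path below assumes the elasticity example above:

```bash
# Recursively list the datasets inside a VTKHDF5 output file
h5ls -r ${ARCANEFEM_INSTALL_DIR}/modules/elasticity/output/depouillement/vtkhdfv2/Mesh0.hdf
```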
Coming soon: Simulation examples, benchmark results, and application showcases
- Issues: Report bugs or request features on GitHub Issues
- Documentation: Detailed solver guides at arcaneframework.github.io/arcanefem