ArcaneFEM

High-Performance Finite Element Method Solvers with CPU/GPU Parallelism

ArcaneFEM provides Finite Element Method (FEM) solvers built on the Arcane Framework. Designed for modern HPC environments, these solvers deliver optimized performance across diverse parallel computing architectures: multi-CPU, multi-GPU, and hybrid CPU-GPU configurations.

Key Features
  • Flexible Parallelism: Seamlessly run on CPUs, GPUs, or heterogeneous CPU-GPU systems
  • Multiple Physics Modules: Includes solvers for elasticity, heat transfer, and more
  • Modern Visualization: Native support for ParaView via VTKHDF5 and Ensight formats

Installation Notes

A detailed installation procedure is available separately; the notes below cover the essentials.

Required Dependencies

  • Arcane Framework - Core parallel computational framework
  • Linear Solver Library (at least one):
    • HYPRE (recommended for CPU and GPU parallelism)
    • PETSc (recommended for CPU and GPU parallelism)
    • Trilinos

Tip: Refer to the Arcane Installation Guide for detailed compilation instructions. Configure Arcane with HYPRE, PETSc, or Trilinos support to unlock ArcaneFEM's full capabilities.
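For reference, a minimal sketch of such a configuration, assuming Arcane discovers the solver libraries through standard CMake package lookup (all paths below are placeholders, not actual install locations):

# Hypothetical Arcane configure step exposing HYPRE and PETSc to CMake;
# adjust source, build, and library paths to your system.
cmake -S /path/to/arcane/sources \
      -B /path/to/arcane/build \
      -DCMAKE_PREFIX_PATH="/path/to/hypre;/path/to/petsc"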

Building ArcaneFEM

Assuming Arcane Framework is already installed:

# Configure paths
export ARCANE_INSTALL_DIR=/path/to/arcane/installation
export ARCANEFEM_INSTALL_DIR=${HOME}/ArcaneFEM/install
export SOURCE_PATH=/path/to/ArcaneFEM/sources

# Configure build with CMake
cmake -S ${SOURCE_PATH} \
      -B ${ARCANEFEM_INSTALL_DIR} \
      -DCMAKE_PREFIX_PATH=${ARCANE_INSTALL_DIR}

# Compile (the solver executables are used directly from this build tree)
cmake --build ${ARCANEFEM_INSTALL_DIR}
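
The build step accepts the usual CMake options; for example, compilation can be parallelized (this is generic CMake behavior, not an ArcaneFEM-specific flag):

# Optional: compile with 8 parallel jobs
cmake --build ${ARCANEFEM_INSTALL_DIR} -j 8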

Quick Start Guide

Running Your First Simulation

After compilation, navigate to a solver module. For example, the elasticity solver:

cd ${ARCANEFEM_INSTALL_DIR}/modules/elasticity

Tip: Explore other physics modules in ${ARCANEFEM_INSTALL_DIR}/modules/

Each module includes example input files in its inputs/ directory.
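
To see which modules and example cases were built, a plain directory listing is enough (the paths assume the layout described above):

# List the available solver modules, then the example cases of one module
ls ${ARCANEFEM_INSTALL_DIR}/modules/
ls ${ARCANEFEM_INSTALL_DIR}/modules/elasticity/inputs/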

Execution Modes

  • Sequential (Single Core)
./Elasticity ./inputs/Test.Elasticity.arc
  • Parallel CPU (Domain Decomposition)
# Run on 4 CPU cores
mpirun -n 4 ./Elasticity ./inputs/Test.Elasticity.arc
  • GPU Accelerated
# Single NVIDIA GPU
mpirun -n 1 ./Elasticity ./inputs/Test.Elasticity.arc -A,AcceleratorRuntime=cuda

# Single AMD GPU
mpirun -n 1 ./Elasticity ./inputs/Test.Elasticity.arc -A,AcceleratorRuntime=hip

Note: Replace 1 with the number of available GPUs

  • Hybrid CPU-GPU
# 8 CPU cores + 1 GPU
mpirun -n 8 ./Elasticity ./inputs/Test.Elasticity.arc -A,AcceleratorRuntime=cuda

For advanced runtime options, consult the Arcane Launcher Documentation.
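
As a convenience, the modes above can be combined into a small launcher script. This is only a sketch built from the flags shown in this section; NP and RUNTIME are hypothetical environment variables, not options recognized by ArcaneFEM itself:

#!/bin/sh
# Hypothetical wrapper: NP = number of MPI ranks (default 4),
# RUNTIME = accelerator runtime ("cuda", "hip", or empty for CPU-only runs).
NP=${NP:-4}
RUNTIME=${RUNTIME:-}
ACCEL_OPT=""
if [ -n "${RUNTIME}" ]; then
  ACCEL_OPT="-A,AcceleratorRuntime=${RUNTIME}"
fi
mpirun -n "${NP}" ./Elasticity ./inputs/Test.Elasticity.arc ${ACCEL_OPT}

For example, saving this as run.sh (any file name works) and invoking RUNTIME=hip NP=2 ./run.sh launches two MPI ranks with the HIP accelerator runtime.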

Visualization

ArcaneFEM outputs results in modern visualization formats:

  • VTKHDF5 (*.hdf) - Recommended for large datasets
  • Ensight (*.case) - Legacy format support

Results are written to the output/ directory within each module.

Viewing Results with ParaView
# Open VTKHDF5 output
paraview ${ARCANEFEM_INSTALL_DIR}/modules/elasticity/output/depouillement/vtkhdfv2/Mesh0.hdf

Requirements:

  • ParaView ≥ 5.12
  • Arcane compiled with parallel (MPI-enabled) HDF5 support
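
For scripted or batch post-processing, the same file can be opened through ParaView's Python shell (pvpython). The snippet below is a sketch that assumes OpenDataFile selects the VTKHDF reader from the file extension; the script name and output image are placeholders:

# Hypothetical batch screenshot via pvpython, run from the module directory
cat > snapshot.py <<'EOF'
from paraview.simple import OpenDataFile, Show, Render, SaveScreenshot

reader = OpenDataFile("output/depouillement/vtkhdfv2/Mesh0.hdf")  # path from the example above
Show(reader)      # add the dataset to the active render view
Render()          # build the scene
SaveScreenshot("elasticity_result.png")  # write an image to disk
EOF
pvpython snapshot.py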

Gallery

Coming soon: Simulation examples, benchmark results, and application showcases


