Wp6 update #47

Merged · 10 commits · Oct 15, 2024
34 changes: 34 additions & 0 deletions references.bib
@@ -1,5 +1,39 @@


@inproceedings{jamond_manta_2022,
address = {Giens, France},
title = {{MANTA} : un code {HPC} généraliste pour la simulation de problèmes complexes en mécanique},
shorttitle = {{MANTA}},
url = {https://hal.science/hal-03688160},
abstract = {Le code MANTA a l’ambition de permettre la réalisation de simulations complexes en mécanique sur des supercalculateurs actuels et futurs tout en préservant les fondamentaux des codes développés au CEA : adaptabilité au problème posé, robustesse des algorithmes, pérennité des modèles et du code. On expose les principes de développement de ce code de nouvelle génération, et quelques exemples représentatifs de ses capacités actuelles sont également décrits.},
urldate = {2024-10-14},
booktitle = {{CSMA} 2022 15ème {Colloque} {National} en {Calcul} des {Structures}},
author = {Jamond, Olivier and Lelong, Nicolas and Fourmont, Axel and Bluthé, Joffrey and Breuze, Matthieu and Bouda, Pascal and Brooking, Guillaume and Drui, Florence and Epalle, Alexandre and Fandeur, Olivier and Folzan, Gauthier and Helfer, Thomas and Kloss, Francis and Latu, Guillaume and Motte, Antoine and Nahed, Christopher and Picard, Alexis and Prat, Raphael and Ramière, Isabelle and Steins, Morgane and Prabel, Benoit},
month = may,
year = {2022},
keywords = {Code de calcul, Eléments finis, HPC, Implicite - explicite, Mécanique des fluides, Mécanique des structures, Toolbox, Volumes finis},
}

@inproceedings{jamond_manta_2024,
address = {Giens, France},
title = {{MANTA}: an industrial-strength open-source high performance explicit and implicit multi-physics solver},
shorttitle = {{MANTA}},
url = {https://hal.science/hal-04610968},
urldate = {2024-10-14},
booktitle = {16ème {Colloque} {National} en {Calcul} de {Structures}},
publisher = {CNRS, CSMA, ENS Paris-Saclay, CentraleSupélec},
author = {Jamond, Olivier and Lelong, Nicolas and Brooking, Guillaume and Helfer, Thomas and Prabel, Benoit and Prat, Raphael and Jaccon, Adrien},
month = may,
year = {2024},
keywords = {HPC, Industrial applications, PDEs solving, fluid mechanics, multiphysics coupling, structural mechanics},
}

@misc{noauthor_16eme_nodate,
title = {16ème {Colloque} {National} en {Calcul} de {Structures} - {Sciencesconf}.org},
url = {https://csma2024.sciencesconf.org/517460},
urldate = {2024-10-14},
}

@phdthesis{daver2016,
type = {phd},
title = {Reduced basis method applied to large non-linear multi-physics problems : application to high field magnets design},
72 changes: 1 addition & 71 deletions software/uranie/WP2/WP2.tex
@@ -23,7 +23,7 @@ \section{Software: Uranie}
\rowcolor{numpexlightergray}\textbf{Supported Architectures} & \begin{tabular}{l}
CPU Only\\
\end{tabular} \\
-\rowcolor{white}\textbf{Repository} & \href{https://sourceforge.net/projects/uranie/}{https://sourceforge.net/projects/uranie/} \\
+\rowcolor{white}\textbf{Repository} & \href{https://uranie.cea.fr}{https://uranie.cea.fr} \\
\rowcolor{numpexlightergray}\textbf{License} & \begin{tabular}{l}
OSS:: LGPL v*\\
\end{tabular} \\
@@ -88,73 +88,3 @@ \subsection{Parallel Capabilities}
\item URANIE allows performing simulations in parallel for uncertainty quantification.
\item \textbf{Scalability:} Scaling is near-ideal because the software distributes independent simulations: any added resources are devoted to running additional simulations.
\end{itemize}
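This embarrassingly parallel pattern — every sample evaluated independently, with added resources simply running more simulations — can be sketched with a generic worker pool. The model function and sample set below are hypothetical placeholders, not Uranie's API:

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def simulation(params):
    """Stand-in for one expensive model evaluation (hypothetical model)."""
    x, y = params
    return math.sin(x) + 0.5 * y

def propagate(samples, max_workers=4):
    """Evaluate all samples independently; each worker simply picks up
    the next pending simulation, so throughput scales with resources."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(simulation, samples))

rng = random.Random(42)
samples = [(rng.uniform(0, math.pi), rng.gauss(0, 1)) for _ in range(100)]
results = propagate(samples)
print(len(results))  # → 100
```

A real UQ campaign would replace `simulation` with a call that launches the target code, and replace the thread pool with a process pool or a batch scheduler.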


\subsection{Initial Performance Metrics}
\label{sec:WP2:Uranie:metrics}

This section provides a summary of initial performance benchmarks performed in the context of WP2. It ensures reproducibility by detailing input/output datasets, benchmarking tools, and the results. All data should be publicly available, ideally with a DOI for future reference.

\begin{itemize}
\item \textbf{Overall Performance:} Summarize the software's computational performance, energy efficiency, and scalability results across different architectures (e.g., CPU, GPU, hybrid systems).
\item \textbf{Input/Output Dataset:} Provide a detailed description of the dataset used for the benchmark, including:
\begin{itemize}
\item Input dataset size, structure, and format (e.g., CSV, HDF5, NetCDF).
\item Output dataset format and key results.
\item Location of the dataset (e.g., GitHub repository, institutional repository, or open access platform).
\item DOI or permanent link for accessing the dataset.
\end{itemize}
\item \textbf{Open-Data Access:} Indicate whether the datasets used for the benchmark are open access, and provide a DOI or a direct link for download. Where applicable, highlight any licensing constraints.
\item \textbf{Challenges:} Identify any significant bottlenecks or challenges observed during the benchmarking process, including data handling and computational performance.
\item \textbf{Future Improvements:} Outline areas for optimization, including dataset handling, memory usage, or algorithmic efficiency, to address identified challenges.
\end{itemize}

\subsubsection{Benchmark \#1}
\begin{itemize}
\item \textbf{Description:} Briefly describe the benchmark case, including the problem size, target architecture (e.g., CPU, GPU), and the input data. Mention the specific goals of the benchmark (e.g., testing scalability, energy efficiency).
\item \textbf{Benchmarking Tools Used:} List the tools used for performance analysis, such as Extrae, Score-P, TAU, Vampir, or Nsight, and specify what metrics were measured (e.g., execution time, FLOPS, energy consumption).
\item \textbf{Input/Output Dataset Description:}
\begin{itemize}
\item \textbf{Input Data:} Describe the input dataset (size, format, data type) and provide a DOI or link to access it.
\item \textbf{Output Data:} Specify the structure of the results (e.g., memory usage, runtime logs) and how they can be accessed or replicated.
\item \textbf{Data Repository:} Indicate where the data is stored (e.g., Zenodo, institutional repository) and provide a DOI or URL for accessing the data.
\end{itemize}
\item \textbf{Results Summary:} Include a summary of key metrics (execution time, memory usage, FLOPS) and their comparison across architectures (e.g., CPU, GPU).
\item \textbf{Challenges Identified:} Describe any bottlenecks encountered (e.g., memory usage, parallelization inefficiencies) and how they impacted the benchmark.
\end{itemize}

\subsection{12-Month Roadmap}
\label{sec:WP2:Uranie:roadmap}

In this section, describe the roadmap for improving benchmarks and addressing the challenges identified. This should include:
\begin{itemize}
\item \textbf{Data Improvements:} Plans for improving input/output data management, including making datasets more accessible and ensuring reproducibility through open-data initiatives.
\item \textbf{Methodology Application:} Implementation of the benchmarking methodology proposed in this deliverable to streamline reproducibility and dataset management.
\item \textbf{Results Retention:} Plans to maintain benchmark results in a publicly accessible repository with appropriate metadata and documentation, ensuring long-term usability.
\end{itemize}

In~\cref{tab:WP2:Uranie:bottlenecks}, we briefly discuss the bottleneck roadmap associated with the software and relevant to the work package.

\begin{table}[h!]
\centering
{
\setlength{\parindent}{0pt}
\def\arraystretch{1.25}
\arrayrulecolor{numpexgray}
{
\fontsize{9}{11}\selectfont
\begin{tabular}{!{\color{numpexgray}\vrule}p{.25\linewidth}!{\color{numpexgray}\vrule}p{.6885\linewidth}!{\color{numpexgray}\vrule}}

\rowcolor{numpexgray}{\rule{0pt}{2.5ex}\color{white}\bf Bottlenecks} & {\rule{0pt}{2.5ex}\color{white}\bf Short Description }\\

\rowcolor{white} None & provide short description here \\
\end{tabular}
}
}
\caption{WP2: Uranie plan with Respect to Relevant Bottlenecks}
\label{tab:WP2:Uranie:bottlenecks}
\end{table}
62 changes: 1 addition & 61 deletions software/uranie/WP5/WP5.tex
@@ -23,7 +23,7 @@ \section{Software: Uranie}
\rowcolor{numpexlightergray}\textbf{Supported Architectures} & \begin{tabular}{l}
CPU Only\\
\end{tabular} \\
-\rowcolor{white}\textbf{Repository} & \href{https://sourceforge.net/projects/uranie/}{https://sourceforge.net/projects/uranie/} \\
+\rowcolor{white}\textbf{Repository} & \href{https://uranie.cea.fr}{https://uranie.cea.fr} \\
\rowcolor{numpexlightergray}\textbf{License} & \begin{tabular}{l}
OSS:: LGPL v*\\
\end{tabular} \\
@@ -91,63 +91,3 @@ \subsection{Parallel Capabilities}
\end{itemize}


\subsection{Initial Performance Metrics}
\label{sec:WP5:Uranie:metrics}

This section provides a summary of initial performance benchmarks performed in the context of WP5. It ensures reproducibility by detailing input/output datasets, benchmarking tools, and the results. All data should be publicly available, ideally with a DOI for future reference.

\begin{itemize}
\item \textbf{Overall Performance:} Summarize the software's computational performance, energy efficiency, and scalability results across different architectures (e.g., CPU, GPU, hybrid systems).
\item \textbf{Input/Output Dataset:} Provide a detailed description of the dataset used for the benchmark, including:
\begin{itemize}
\item Input dataset size, structure, and format (e.g., CSV, HDF5, NetCDF).
\item Output dataset format and key results.
\item Location of the dataset (e.g., GitHub repository, institutional repository, or open access platform).
\item DOI or permanent link for accessing the dataset.
\end{itemize}
\item \textbf{Open-Data Access:} Indicate whether the datasets used for the benchmark are open access, and provide a DOI or a direct link for download. Where applicable, highlight any licensing constraints.
\item \textbf{Challenges:} Identify any significant bottlenecks or challenges observed during the benchmarking process, including data handling and computational performance.
\item \textbf{Future Improvements:} Outline areas for optimization, including dataset handling, memory usage, or algorithmic efficiency, to address identified challenges.
\end{itemize}

\subsubsection{Benchmark \#1}
\begin{itemize}
\item \textbf{Description:} Briefly describe the benchmark case, including the problem size, target architecture (e.g., CPU, GPU), and the input data. Mention the specific goals of the benchmark (e.g., testing scalability, energy efficiency).
\item \textbf{Benchmarking Tools Used:} List the tools used for performance analysis, such as Extrae, Score-P, TAU, Vampir, or Nsight, and specify what metrics were measured (e.g., execution time, FLOPS, energy consumption).
\item \textbf{Input/Output Dataset Description:}
\begin{itemize}
\item \textbf{Input Data:} Describe the input dataset (size, format, data type) and provide a DOI or link to access it.
\item \textbf{Output Data:} Specify the structure of the results (e.g., memory usage, runtime logs) and how they can be accessed or replicated.
\item \textbf{Data Repository:} Indicate where the data is stored (e.g., Zenodo, institutional repository) and provide a DOI or URL for accessing the data.
\end{itemize}
\item \textbf{Results Summary:} Include a summary of key metrics (execution time, memory usage, FLOPS) and their comparison across architectures (e.g., CPU, GPU).
\item \textbf{Challenges Identified:} Describe any bottlenecks encountered (e.g., memory usage, parallelization inefficiencies) and how they impacted the benchmark.
\end{itemize}

\subsection{12-Month Roadmap}
\label{sec:WP5:Uranie:roadmap}

Developments are under way to improve the Metropolis-Hastings Uranie class and to add stochastic inversion algorithms based on Stochastic Expectation Maximization (SEM) or Stochastic Approximation of Expectation Maximization (SAEM).
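For context, the core of a random-walk Metropolis-Hastings sampler fits in a few lines. This is a minimal illustrative sketch of the algorithm itself, not the interface of the Uranie class; the standard-normal target is an arbitrary example:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose a Gaussian step, accept
    with probability min(1, p(x')/p(x)), otherwise keep the current point."""
    rng = random.Random(seed)
    x, log_p = x0, log_target(x0)
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        log_p_new = log_target(x_new)
        if math.log(rng.random()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
        chain.append(x)
    return chain

# Sample a standard normal from its log-density (up to a constant).
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(chain) / len(chain)
```

The SEM/SAEM algorithms mentioned above address a different problem (stochastic inversion); they are not sketched here.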

In~\cref{tab:WP5:Uranie:bottlenecks}, we briefly discuss the bottleneck roadmap associated with the software and relevant to the work package.

\begin{table}[h!]
\centering
{
\setlength{\parindent}{0pt}
\def\arraystretch{1.25}
\arrayrulecolor{numpexgray}
{
\fontsize{9}{11}\selectfont
\begin{tabular}{!{\color{numpexgray}\vrule}p{.25\linewidth}!{\color{numpexgray}\vrule}p{.6885\linewidth}!{\color{numpexgray}\vrule}}

\rowcolor{numpexgray}{\rule{0pt}{2.5ex}\color{white}\bf Bottlenecks} & {\rule{0pt}{2.5ex}\color{white}\bf Short Description }\\

\rowcolor{white} None & provide short description here \\
\end{tabular}
}
}
\caption{WP5: Uranie plan with Respect to Relevant Bottlenecks}
\label{tab:WP5:Uranie:bottlenecks}
\end{table}
48 changes: 7 additions & 41 deletions software/uranie/WP6/WP6.tex
@@ -86,45 +86,10 @@ \subsection{Parallel Capabilities}
URANIE also uses LibSSH to launch codes on different clusters (in the TLauncher module).

\item The parallel computation environment of our platform is built on a HPC architecture designed to maximize computational power and efficiency
-using both distributed and shared memory parallelism. URANIE is used on CEA/TGCC supercomputers such as IRESNE.
+using both distributed and shared memory parallelism. URANIE is used on CEA/TGCC supercomputers.

\item URANIE allows performing simulations in parallel for uncertainty quantification.
\item \textbf{Scalability:} Scaling is near-ideal because the software distributes independent simulations: any added resources are devoted to running additional simulations.
% \item \textbf{Integration with Other Systems:} Describe how the software integrates with other numerical libraries in the Exa-MA framework.
\end{itemize}
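The launching pattern described above — one external code run per sample, results collected afterwards — can be sketched with the standard library. This is a generic illustration of what a launcher module automates, not TLauncher's actual interface; the one-liner "code" stands in for a real solver:

```python
import subprocess
import sys

def run_case(value):
    """Launch one external 'simulation code' as a separate OS process
    and parse its standard output (here, a trivial squaring program)."""
    cmd = [sys.executable, "-c", f"print({value} ** 2)"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return float(out.stdout)

# One independent process per sample, as in a UQ campaign.
results = [run_case(v) for v in (1, 2, 3)]
print(results)  # → [1.0, 4.0, 9.0]
```

On a cluster, `cmd` would instead be a remote or batch submission (e.g. over SSH), which is the part LibSSH handles for URANIE.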


\subsection{Initial Performance Metrics}
\label{sec:WP6:Uranie:metrics}

This section provides a summary of initial performance benchmarks performed in the context of WP6. It ensures reproducibility by detailing input/output datasets, benchmarking tools, and the results. All data should be publicly available, ideally with a DOI for future reference.

\begin{itemize}
\item \textbf{Overall Performance:} Summarize the software's computational performance, energy efficiency, and scalability results across different architectures (e.g., CPU, GPU, hybrid systems).
\item \textbf{Input/Output Dataset:} Provide a detailed description of the dataset used for the benchmark, including:
\begin{itemize}
\item Input dataset size, structure, and format (e.g., CSV, HDF5, NetCDF).
\item Output dataset format and key results.
\item Location of the dataset (e.g., GitHub repository, institutional repository, or open access platform).
\item DOI or permanent link for accessing the dataset.
\end{itemize}
\item \textbf{Open-Data Access:} Indicate whether the datasets used for the benchmark are open access, and provide a DOI or a direct link for download. Where applicable, highlight any licensing constraints.
\item \textbf{Challenges:} Identify any significant bottlenecks or challenges observed during the benchmarking process, including data handling and computational performance.
\item \textbf{Future Improvements:} Outline areas for optimization, including dataset handling, memory usage, or algorithmic efficiency, to address identified challenges.
\end{itemize}

\subsubsection{Benchmark \#1}
\begin{itemize}
\item \textbf{Description:} Briefly describe the benchmark case, including the problem size, target architecture (e.g., CPU, GPU), and the input data. Mention the specific goals of the benchmark (e.g., testing scalability, energy efficiency).
\item \textbf{Benchmarking Tools Used:} List the tools used for performance analysis, such as Extrae, Score-P, TAU, Vampir, or Nsight, and specify what metrics were measured (e.g., execution time, FLOPS, energy consumption).
\item \textbf{Input/Output Dataset Description:}
\begin{itemize}
\item \textbf{Input Data:} Describe the input dataset (size, format, data type) and provide a DOI or link to access it.
\item \textbf{Output Data:} Specify the structure of the results (e.g., memory usage, runtime logs) and how they can be accessed or replicated.
\item \textbf{Data Repository:} Indicate where the data is stored (e.g., Zenodo, institutional repository) and provide a DOI or URL for accessing the data.
\end{itemize}
\item \textbf{Results Summary:} Include a summary of key metrics (execution time, memory usage, FLOPS) and their comparison across architectures (e.g., CPU, GPU).
\item \textbf{Challenges Identified:} Describe any bottlenecks encountered (e.g., memory usage, parallelization inefficiencies) and how they impacted the benchmark.
\end{itemize}

\subsection{12-Month Roadmap}
@@ -134,8 +99,8 @@ \subsection{12-Month Roadmap}

\begin{table}[h!]
\centering
{
@@ -145,13 +110,14 @@ \subsection{12-Month Roadmap}
{
\fontsize{9}{11}\selectfont
\begin{tabular}{!{\color{numpexgray}\vrule}p{.25\linewidth}!{\color{numpexgray}\vrule}p{.6885\linewidth}!{\color{numpexgray}\vrule}}

\rowcolor{numpexgray}{\rule{0pt}{2.5ex}\color{white}\bf Bottlenecks} & {\rule{0pt}{2.5ex}\color{white}\bf Short Description }\\

\rowcolor{white} Coupling with parallelized software & More and more simulation software is parallelized, sometimes with shared memory. Moreover, multiphysics simulations imply chained or coupled simulation codes. Uranie must be capable of performing UQ studies on such software. \\
\end{tabular}
}
}
\caption{WP6: Uranie plan with Respect to Relevant Bottlenecks}
\label{tab:WP6:Uranie:bottlenecks}
\end{table}

9 changes: 2 additions & 7 deletions software/uranie/uranie.tex
@@ -52,13 +52,8 @@ \subsection{Software summary}
\subsection{Purpose}
\label{sec:Uranie:purpose}

-Uranie (the version under discussion here being v4.9.0) is a software dedicated to perform studies on uncertainty propagation, sensitivity analysis and surrogate model generation and calibration, based on ROOT (the corresponding version being v6.32.00). The motivation for the development of Uranie is the VVUQ (Verification, Validation and Uncertainty Quantification) approach for conceiving a numerical model of real physical phenomena of interests.
+Uranie (the version under discussion here being v4.9.0) is software dedicated to studies of uncertainty propagation, sensitivity analysis, and surrogate model generation and calibration, built on ROOT (v6.32.00). The motivation for developing Uranie is the VVUQ (Verification, Validation and Uncertainty Quantification) approach to constructing numerical models of real physical phenomena of interest. Uranie is developed so that it interfaces well with CEA internal numerical simulation software.
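As a concrete anchor for the surrogate-modelling purpose above: generating a surrogate amounts to fitting a cheap approximation to a handful of expensive model evaluations. The sketch below fits an ordinary least-squares line to hypothetical samples; Uranie's actual surrogates (e.g. polynomial chaos, kriging, neural networks) are far richer:

```python
def fit_linear_surrogate(xs, ys):
    """Closed-form ordinary least squares for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical "expensive model" evaluations: exactly y = 3x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [3 * x + 1 for x in xs]
a, b = fit_linear_surrogate(xs, ys)
print(a, b)  # → 3.0 1.0
```

Once fitted, the surrogate replaces the expensive code in downstream propagation or calibration loops, which is what makes large UQ studies tractable.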

\subsection{Programming and Computational Environment}
\label{sec::Uranie:environment_capabilities}


The following table summarizes these aspects for Uranie, providing a view of its programming and computational capabilities.

\begin{table}[h!]
\centering
@@ -126,7 +121,7 @@ \subsection{Mathematics}
\subsection{Relevant Publications}
\label{sec:Uranie:publications}

-Here is a list of relevant publications related to the software:
+Here is the relevant publication to cite for Uranie:

\begin{itemize}
\item \fullcite{blanchard_uranie_2019}
Expand Down