Wp6 update #47

Merged: 10 commits, merged Oct 15, 2024
72 changes: 1 addition & 71 deletions software/uranie/WP2/WP2.tex
@@ -23,7 +23,7 @@ \section{Software: Uranie}
\rowcolor{numpexlightergray}\textbf{Supported Architectures} & \begin{tabular}{l}
CPU Only\\
\end{tabular} \\
\rowcolor{white}\textbf{Repository} & \href{https://sourceforge.net/projects/uranie/}{https://sourceforge.net/projects/uranie/} \\
\rowcolor{white}\textbf{Repository} & \href{https://uranie.cea.fr}{https://uranie.cea.fr} \\
\rowcolor{numpexlightergray}\textbf{License} & \begin{tabular}{l}
OSS:: LGPL v*\\
\end{tabular} \\
@@ -88,73 +88,3 @@ \subsection{Parallel Capabilities}
\item URANIE allows running simulations in parallel for uncertainty quantification.
\item \textbf{Scalability:} Scaling efficiency is essentially constant because the software distributes independent simulations: any added resources are devoted to running additional simulations (see the sketch after this list).
\end{itemize}
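
To make the scaling claim concrete, below is a minimal sketch of this embarrassingly parallel pattern in plain Python. It is illustrative only and does not use the URANIE API: \texttt{run\_simulation} is a hypothetical stand-in for one code run.

\begin{verbatim}
# Minimal sketch of embarrassingly parallel UQ sampling (illustrative
# plain Python, *not* the URANIE API).
import random
from concurrent.futures import ProcessPoolExecutor

def run_simulation(params):
    # Hypothetical stand-in for one expensive code run on sampled inputs.
    x, y = params
    return x**2 + y**2

if __name__ == "__main__":
    random.seed(42)
    samples = [(random.uniform(0, 1), random.uniform(0, 1))
               for _ in range(100)]
    # Samples are independent: adding workers just runs more of them
    # at once, which is why scaling efficiency stays flat.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_simulation, samples))
    print("mean response over", len(results), "runs:",
          sum(results) / len(results))
\end{verbatim}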


\subsection{Initial Performance Metrics}
\label{sec:WP2:Uranie:metrics}

This section provides a summary of initial performance benchmarks performed in the context of WP2. It ensures reproducibility by detailing input/output datasets, benchmarking tools, and the results. All data should be publicly available, ideally with a DOI for future reference.

\begin{itemize}
\item \textbf{Overall Performance:} Summarize the software's computational performance, energy efficiency, and scalability results across different architectures (e.g., CPU, GPU, hybrid systems).
\item \textbf{Input/Output Dataset:} Provide a detailed description of the dataset used for the benchmark, including:
\begin{itemize}
\item Input dataset size, structure, and format (e.g., CSV, HDF5, NetCDF).
\item Output dataset format and key results.
\item Location of the dataset (e.g., GitHub repository, institutional repository, or open access platform).
\item DOI or permanent link for accessing the dataset.
\end{itemize}
\item \textbf{Open-Data Access:} Indicate whether the datasets used for the benchmark are open access, and provide a DOI or a direct link for download. Where applicable, highlight any licensing constraints.
\item \textbf{Challenges:} Identify any significant bottlenecks or challenges observed during the benchmarking process, including data handling and computational performance.
\item \textbf{Future Improvements:} Outline areas for optimization, including dataset handling, memory usage, or algorithmic efficiency, to address identified challenges.
\end{itemize}

\subsubsection{Benchmark \#1}
\begin{itemize}
\item \textbf{Description:} Briefly describe the benchmark case, including the problem size, target architecture (e.g., CPU, GPU), and the input data. Mention the specific goals of the benchmark (e.g., testing scalability, energy efficiency).
\item \textbf{Benchmarking Tools Used:} List the tools used for performance analysis, such as Extrae, Score-P, TAU, Vampir, or Nsight, and specify what metrics were measured (e.g., execution time, FLOPS, energy consumption).
\item \textbf{Input/Output Dataset Description:}
\begin{itemize}
\item \textbf{Input Data:} Describe the input dataset (size, format, data type) and provide a DOI or link to access it.
\item \textbf{Output Data:} Specify the structure of the results (e.g., memory usage, runtime logs) and how they can be accessed or replicated.
\item \textbf{Data Repository:} Indicate where the data is stored (e.g., Zenodo, institutional repository) and provide a DOI or URL for accessing the data.
\end{itemize}
\item \textbf{Results Summary:} Include a summary of key metrics (execution time, memory usage, FLOPS) and their comparison across architectures (e.g., CPU, GPU).
\item \textbf{Challenges Identified:} Describe any bottlenecks encountered (e.g., memory usage, parallelization inefficiencies) and how they impacted the benchmark.
\end{itemize}

\subsection{12-Month Roadmap}
\label{sec:WP2:Uranie:roadmap}

In this section, describe the roadmap for improving benchmarks and addressing the challenges identified. This should include:
\begin{itemize}
\item \textbf{Data Improvements:} Plans for improving input/output data management, including making datasets more accessible and ensuring reproducibility through open-data initiatives.
\item \textbf{Methodology Application:} Implementation of the benchmarking methodology proposed in this deliverable to streamline reproducibility and dataset management.
\item \textbf{Results Retention:} Plans to maintain benchmark results in a publicly accessible repository with appropriate metadata and documentation, ensuring long-term usability.
\end{itemize}

In~\cref{tab:WP2:Uranie:bottlenecks}, we briefly discuss the bottleneck roadmap associated with the software and relevant to the work package.

\begin{table}[h!]
\centering
{
\setlength{\parindent}{0pt}
\def\arraystretch{1.25}
\arrayrulecolor{numpexgray}
{
\fontsize{9}{11}\selectfont
\begin{tabular}{!{\color{numpexgray}\vrule}p{.25\linewidth}!{\color{numpexgray}\vrule}p{.6885\linewidth}!{\color{numpexgray}\vrule}}

\rowcolor{numpexgray}{\rule{0pt}{2.5ex}\color{white}\bf Bottlenecks} & {\rule{0pt}{2.5ex}\color{white}\bf Short Description }\\

\rowcolor{white} None & No bottleneck identified for this work package at this stage. \\
\end{tabular}
}
}
\caption{WP2: Uranie Plan with Respect to Relevant Bottlenecks}
\label{tab:WP2:Uranie:bottlenecks}
\end{table}
62 changes: 1 addition & 61 deletions software/uranie/WP5/WP5.tex
@@ -23,7 +23,7 @@ \section{Software: Uranie}
\rowcolor{numpexlightergray}\textbf{Supported Architectures} & \begin{tabular}{l}
CPU Only\\
\end{tabular} \\
\rowcolor{white}\textbf{Repository} & \href{https://sourceforge.net/projects/uranie/}{https://sourceforge.net/projects/uranie/} \\
\rowcolor{white}\textbf{Repository} & \href{https://uranie.cea.fr}{https://uranie.cea.fr} \\
\rowcolor{numpexlightergray}\textbf{License} & \begin{tabular}{l}
OSS:: LGPL v*\\
\end{tabular} \\
@@ -91,63 +91,3 @@ \subsection{Parallel Capabilities}
\end{itemize}


\subsection{Initial Performance Metrics}
\label{sec:WP5:Uranie:metrics}

This section provides a summary of initial performance benchmarks performed in the context of WP5. It ensures reproducibility by detailing input/output datasets, benchmarking tools, and the results. All data should be publicly available, ideally with a DOI for future reference.

\begin{itemize}
\item \textbf{Overall Performance:} Summarize the software's computational performance, energy efficiency, and scalability results across different architectures (e.g., CPU, GPU, hybrid systems).
\item \textbf{Input/Output Dataset:} Provide a detailed description of the dataset used for the benchmark, including:
\begin{itemize}
\item Input dataset size, structure, and format (e.g., CSV, HDF5, NetCDF).
\item Output dataset format and key results.
\item Location of the dataset (e.g., GitHub repository, institutional repository, or open access platform).
\item DOI or permanent link for accessing the dataset.
\end{itemize}
\item \textbf{Open-Data Access:} Indicate whether the datasets used for the benchmark are open access, and provide a DOI or a direct link for download. Where applicable, highlight any licensing constraints.
\item \textbf{Challenges:} Identify any significant bottlenecks or challenges observed during the benchmarking process, including data handling and computational performance.
\item \textbf{Future Improvements:} Outline areas for optimization, including dataset handling, memory usage, or algorithmic efficiency, to address identified challenges.
\end{itemize}

\subsubsection{Benchmark \#1}
\begin{itemize}
\item \textbf{Description:} Briefly describe the benchmark case, including the problem size, target architecture (e.g., CPU, GPU), and the input data. Mention the specific goals of the benchmark (e.g., testing scalability, energy efficiency).
\item \textbf{Benchmarking Tools Used:} List the tools used for performance analysis, such as Extrae, Score-P, TAU, Vampir, or Nsight, and specify what metrics were measured (e.g., execution time, FLOPS, energy consumption).
\item \textbf{Input/Output Dataset Description:}
\begin{itemize}
\item \textbf{Input Data:} Describe the input dataset (size, format, data type) and provide a DOI or link to access it.
\item \textbf{Output Data:} Specify the structure of the results (e.g., memory usage, runtime logs) and how they can be accessed or replicated.
\item \textbf{Data Repository:} Indicate where the data is stored (e.g., Zenodo, institutional repository) and provide a DOI or URL for accessing the data.
\end{itemize}
\item \textbf{Results Summary:} Include a summary of key metrics (execution time, memory usage, FLOPS) and their comparison across architectures (e.g., CPU, GPU).
\item \textbf{Challenges Identified:} Describe any bottlenecks encountered (e.g., memory usage, parallelization inefficiencies) and how they impacted the benchmark.
\end{itemize}

\subsection{12-Month Roadmap}
\label{sec:WP5:Uranie:roadmap}

Developments are currently underway to improve the Metropolis-Hastings Uranie class and to add stochastic inversion algorithms based on Stochastic Expectation Maximization (SEM) or Stochastic Approximation of Expectation Maximization (SAEM).
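
For reference, the accept/reject rule at the heart of Metropolis-Hastings is the textbook one: given the current parameter value $\theta$ and a proposal $\theta'$ drawn from $q(\cdot\mid\theta)$, the chain moves to $\theta'$ with probability
\[
\alpha(\theta,\theta') = \min\!\left(1,\ \frac{\pi(\theta'\mid d)\, q(\theta\mid\theta')}{\pi(\theta\mid d)\, q(\theta'\mid\theta)}\right),
\]
where $\pi(\cdot\mid d)$ denotes the posterior given the data $d$; otherwise it stays at $\theta$. This recalls the standard algorithm only to fix notation and says nothing about the Uranie-specific implementation being improved.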

In~\cref{tab:WP5:Uranie:bottlenecks}, we briefly discuss the bottleneck roadmap associated with the software and relevant to the work package.

\begin{table}[h!]
\centering
{
\setlength{\parindent}{0pt}
\def\arraystretch{1.25}
\arrayrulecolor{numpexgray}
{
\fontsize{9}{11}\selectfont
\begin{tabular}{!{\color{numpexgray}\vrule}p{.25\linewidth}!{\color{numpexgray}\vrule}p{.6885\linewidth}!{\color{numpexgray}\vrule}}

\rowcolor{numpexgray}{\rule{0pt}{2.5ex}\color{white}\bf Bottlenecks} & {\rule{0pt}{2.5ex}\color{white}\bf Short Description }\\

\rowcolor{white} None & No bottleneck identified for this work package at this stage. \\
\end{tabular}
}
}
\caption{WP5: Uranie Plan with Respect to Relevant Bottlenecks}
\label{tab:WP5:Uranie:bottlenecks}
\end{table}
63 changes: 1 addition & 62 deletions software/uranie/WP6/WP6.tex
@@ -86,72 +86,11 @@ \subsection{Parallel Capabilities}
URANIE also uses LibSSH for launching codes on remote clusters (in the TLauncher module).

\item The parallel computation environment of our platform is built on an HPC architecture designed to maximize computational power and efficiency
using both distributed and shared memory parallelism. URANIE is used on CEA/TGCC supercomputers such as IRESNE.
using both distributed and shared memory parallelism. URANIE is used on CEA/TGCC supercomputers.

\item URANIE allows running simulations in parallel for uncertainty quantification.
\item \textbf{Scalability:} Scaling efficiency is essentially constant because the software distributes independent simulations: any added resources are devoted to running additional simulations (see the sketch after this list).
% \item \textbf{Integration with Other Systems:} Describe how the software integrates with other numerical libraries in the Exa-MA framework.
\end{itemize}
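
As an illustration of the SSH-based dispatch pattern mentioned above, the following sketch uses the \texttt{paramiko} Python library; the host, user, and command are hypothetical placeholders, and this is not the TLauncher implementation:

\begin{verbatim}
# Hedged sketch of SSH-based remote job dispatch, in the spirit of
# TLauncher's use of LibSSH. Host, user, and command are hypothetical.
import paramiko

def launch_remote(host, user, command):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user)  # assumes key-based auth
    try:
        _, stdout, stderr = client.exec_command(command)
        return stdout.read().decode(), stderr.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    out, err = launch_remote("cluster.example.org", "uq_user",
                             "./my_simulation --input case_01.dat")
    print(out or err)
\end{verbatim}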


\subsection{Initial Performance Metrics}
\label{sec:WP6:Uranie:metrics}

This section provides a summary of initial performance benchmarks performed in the context of WP6. It ensures reproducibility by detailing input/output datasets, benchmarking tools, and the results. All data should be publicly available, ideally with a DOI for future reference.

\begin{itemize}
\item \textbf{Overall Performance:} Summarize the software's computational performance, energy efficiency, and scalability results across different architectures (e.g., CPU, GPU, hybrid systems).
\item \textbf{Input/Output Dataset:} Provide a detailed description of the dataset used for the benchmark, including:
\begin{itemize}
\item Input dataset size, structure, and format (e.g., CSV, HDF5, NetCDF).
\item Output dataset format and key results.
\item Location of the dataset (e.g., GitHub repository, institutional repository, or open access platform).
\item DOI or permanent link for accessing the dataset.
\end{itemize}
\item \textbf{Open-Data Access:} Indicate whether the datasets used for the benchmark are open access, and provide a DOI or a direct link for download. Where applicable, highlight any licensing constraints.
\item \textbf{Challenges:} Identify any significant bottlenecks or challenges observed during the benchmarking process, including data handling and computational performance.
\item \textbf{Future Improvements:} Outline areas for optimization, including dataset handling, memory usage, or algorithmic efficiency, to address identified challenges.
\end{itemize}

\subsubsection{Benchmark \#1}
\begin{itemize}
\item \textbf{Description:} Briefly describe the benchmark case, including the problem size, target architecture (e.g., CPU, GPU), and the input data. Mention the specific goals of the benchmark (e.g., testing scalability, energy efficiency).
\item \textbf{Benchmarking Tools Used:} List the tools used for performance analysis, such as Extrae, Score-P, TAU, Vampir, or Nsight, and specify what metrics were measured (e.g., execution time, FLOPS, energy consumption).
\item \textbf{Input/Output Dataset Description:}
\begin{itemize}
\item \textbf{Input Data:} Describe the input dataset (size, format, data type) and provide a DOI or link to access it.
\item \textbf{Output Data:} Specify the structure of the results (e.g., memory usage, runtime logs) and how they can be accessed or replicated.
\item \textbf{Data Repository:} Indicate where the data is stored (e.g., Zenodo, institutional repository) and provide a DOI or URL for accessing the data.
\end{itemize}
\item \textbf{Results Summary:} Include a summary of key metrics (execution time, memory usage, FLOPS) and their comparison across architectures (e.g., CPU, GPU).
\item \textbf{Challenges Identified:} Describe any bottlenecks encountered (e.g., memory usage, parallelization inefficiencies) and how they impacted the benchmark.
\end{itemize}

\subsection{12-Month Roadmap}
\label{sec:WP6:Uranie:roadmap}

In the context of exascale computation, Uranie will have to perform uncertainty quantification on more complex simulation software that is heavily parallelized, using for instance MPI or OpenMP, and run on supercomputers. For the specific case of uncertainty propagation, ensemble runs of a simulation software have to be performed, and this can be tricky when the software itself is parallelized. Memory storage will also be challenging in the exascale era, and ``on the fly'' handling of the output data generated by the simulation software has to be performed by Uranie. In the next deliverable, an adaptation of the Relauncher module using the ICoCo API (\url{https://github.com/cea-trust-platform/icoco-coupling}) will be proposed and illustrated on an uncertainty propagation task using the TRUST software, with on-the-fly processing of the data generated using the TRUST Python API. A hedged sketch of such an on-the-fly loop is given below.
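
The sketch below illustrates the intended pattern. The time-stepping calls follow the ICoCo-style lifecycle (\texttt{initialize}, \texttt{computeTimeStep}, \texttt{initTimeStep}, \texttt{solveTimeStep}, \texttt{validateTimeStep}, \texttt{terminate}); the \texttt{MockProblem} class and the probe name are hypothetical stand-ins, not the TRUST coupling itself.

\begin{verbatim}
# Hedged sketch: on-the-fly reduction of simulation output through an
# ICoCo-style interface. MockProblem is a hypothetical stand-in so the
# sketch runs without a real coupled code.
class MockProblem:
    def __init__(self): self.t, self.dt = 0.0, 0.0
    def initialize(self): pass
    def computeTimeStep(self): return 0.1, False   # (dt, stop)
    def initTimeStep(self, dt): self.dt = dt
    def solveTimeStep(self): self.t += self.dt
    def validateTimeStep(self): pass
    def getOutputDoubleValue(self, name): return 20.0 + self.t
    def terminate(self): pass

def run_with_online_stats(problem, t_end, probe="temperature"):
    problem.initialize()
    t, n, mean = 0.0, 0, 0.0
    while t < t_end:
        dt, stop = problem.computeTimeStep()
        if stop:
            break
        problem.initTimeStep(dt)
        problem.solveTimeStep()
        problem.validateTimeStep()
        t += dt
        # Keep only a running mean instead of the full time history:
        # this is the exascale-friendly "on the fly" handling.
        value = problem.getOutputDoubleValue(probe)
        n += 1
        mean += (value - mean) / n
    problem.terminate()
    return mean

if __name__ == "__main__":
    print("running mean of probe:",
          run_with_online_stats(MockProblem(), 1.0))
\end{verbatim}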

\begin{table}[h!]
\centering
{
\setlength{\parindent}{0pt}
\def\arraystretch{1.25}
\arrayrulecolor{numpexgray}
{
\fontsize{9}{11}\selectfont
\begin{tabular}{!{\color{numpexgray}\vrule}p{.25\linewidth}!{\color{numpexgray}\vrule}p{.6885\linewidth}!{\color{numpexgray}\vrule}}

\rowcolor{numpexgray}{\rule{0pt}{2.5ex}\color{white}\bf Bottlenecks} & {\rule{0pt}{2.5ex}\color{white}\bf Short Description }\\

\rowcolor{white} Coupling with parallelized software & Nowadays more and more simulation software are parallelized, sometimes with shared memory. Moreover, multiphysics simulations imply chained or coupled simulation software. Uranie has to be able to perform UQ studies on such simulation software. \\
\end{tabular}
}
}
\caption{WP6: Uranie Plan with Respect to Relevant Bottlenecks}
\label{tab:WP6:Uranie:bottlenecks}
\end{table}
9 changes: 2 additions & 7 deletions software/uranie/uranie.tex
@@ -52,13 +52,8 @@ \subsection{Software summary}
\subsection{Purpose}
\label{sec:Uranie:purpose}

Uranie (the version under discussion here being v4.9.0) is software dedicated to performing studies on uncertainty propagation, sensitivity analysis, and surrogate model generation and calibration, based on ROOT (the corresponding version being v6.32.00). The motivation for the development of Uranie is the VVUQ (Verification, Validation and Uncertainty Quantification) approach for conceiving a numerical model of real physical phenomena of interest.
Uranie (the version under discussion here being v4.9.0) is software dedicated to performing studies on uncertainty propagation, sensitivity analysis, and surrogate model generation and calibration, based on ROOT (the corresponding version being v6.32.00). The motivation for the development of Uranie is the VVUQ (Verification, Validation and Uncertainty Quantification) approach for conceiving a numerical model of real physical phenomena of interest. Uranie is developed such that it interfaces well with CEA internal numerical simulation software.

\subsection{Programming and Computational Environment}
\label{sec::Uranie:environment_capabilities}


The following table summarizes these aspects for Uranie, providing a view of its programming and computational capabilities.

\begin{table}[h!]
\centering
@@ -126,7 +121,7 @@ \subsection{Mathematics}
\subsection{Relevant Publications}
\label{sec:Uranie:publications}

Here is a list of relevant publications related to the software:
Here is the relevant publication for citing Uranie:

\begin{itemize}
\item \fullcite{blanchard_uranie_2019}