Commit 40e991b

Merge pull request #34 from boegel/abstract
rewrite abstract to include CernVM-FS
2 parents d95608a + 522e2f9 commit 40e991b

File tree

1 file changed: +22 −11 lines changed


isc25/EESSI/abstract.tex

Lines changed: 22 additions & 11 deletions
@@ -1,21 +1,32 @@
-What if there was a way to avoid having to install a broad range of scientific software from scratch on every HPC
-cluster or cloud instance you use or maintain, without compromising on performance?
+What if there was a way to avoid having to install a broad range of scientific software from scratch on every
+supercomputer, cloud instance, or laptop you use or maintain, without compromising on performance?
 
 Installing scientific software for supercomputers is known to be a tedious and time-consuming task. The application
 software stack continues to deepen as the
-HPC user community becomes more diverse, computational science expands rapidly, and the diversity of system architectures
+High-Performance Computing (HPC) user community becomes more diverse, computational science expands rapidly, and the diversity of system architectures
 increases. Simultaneously, we see a surge in interest in public cloud
 infrastructures for scientific computing. Delivering optimised software installations and providing access to these
 installations in a reliable, user-friendly, and reproducible way is a highly non-trivial task that affects application
 developers, HPC user support teams, and the users themselves.
 
-This tutorial aims to address these challenges by providing the attendees with the tools to \emph{stream} the optimised
-scientific software they need. The tutorial introduces European Environment for Scientific Software Installations
-(\emph{EESSI}), a collaboration between various European HPC sites \& industry partners, with the common goal of
-creating a shared repository of scientific software installations (\emph{not} recipes) that can be used on a variety of
-systems, regardless
-of which flavor/version of Linux distribution or processor architecture is used, or whether it's a full size HPC
+Although scientific research on supercomputers is fundamentally software-driven,
+setting up and managing a software stack remains challenging and time-consuming.
+In addition, parallel filesystems like GPFS and Lustre are known to be ill-suited for hosting software installations
+that typically consist of a large number of small files. This can lead to surprisingly slow startup performance of
+software, and may even negatively impact the overall performance of the system.
+While workarounds for these issues, such as using container images, are prevalent, they come with caveats,
+such as the significant size of these images, the required compatibility with the system MPI for distributed computing,
+and complications with accessing specialized hardware resources like GPUs.
+
+This tutorial aims to address these challenges by introducing the attendees to a way to \emph{stream}
+software installations via \emph{CernVM-FS}, a distributed read-only filesystem specifically designed
+to efficiently distribute software across large-scale computing infrastructures.
+The tutorial introduces the \emph{European Environment for Scientific Software Installations (EESSI)},
+a collaboration between various European HPC sites \& industry partners, with the common goal of
+creating a shared repository of optimised scientific software installations (\emph{not} recipes) that can be used on a variety of
+systems, regardless of which flavor/version of Linux distribution or processor architecture is used, or whether it's a full size HPC
 cluster, a cloud environment or a personal workstation.
 
-We cover the usage of EESSI, different ways to accessing EESSI, how to add software to EESSI, and highlight some more
-advanced features. We will also show attendees how to engage with the community and contribute to the project.
+We cover the installation and configuration of CernVM-FS to access EESSI, the usage of EESSI, how to add software
+installations to EESSI, how to install software on top of EESSI, and advanced topics like GPU support and performance
+tuning.
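The rewritten abstract refers to streaming software installations via CernVM-FS. As a hedged illustration (not part of this commit), the client-side setup it alludes to might look roughly like the sketch below on a Debian/Ubuntu machine. The repository name (`software.eessi.io`), stack version (`2023.06`), and the `cvmfs-config-eessi` package file name are assumptions drawn from EESSI's public documentation, not from this diff.

```shell
# Hedged sketch (not part of this commit): mounting EESSI via CernVM-FS on a
# standalone Debian/Ubuntu client. Package names, repository name, and stack
# version are assumptions based on EESSI's public documentation.

# Install the CernVM-FS client from the CernVM package repository.
wget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb
sudo dpkg -i cvmfs-release-latest_all.deb
sudo apt-get update
sudo apt-get install -y cvmfs

# Install the EESSI CernVM-FS configuration package (hypothetical file name).
sudo dpkg -i cvmfs-config-eessi_latest_all.deb

# Minimal client configuration for a single machine (no local squid proxy).
echo 'CVMFS_CLIENT_PROFILE="single"' | sudo tee /etc/cvmfs/default.local
sudo cvmfs_config setup

# Initialise the EESSI environment and browse the streamed software stack.
source /cvmfs/software.eessi.io/versions/2023.06/init/bash
module avail
```

Since this is a system-configuration fragment (it needs root privileges and network access), it is meant as orientation for readers of the abstract rather than a verified installation procedure.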
