Commit 2d4c138

Update DocStrings (#662)
Especially for the classes the users are interacting with
1 parent 9b9f51e commit 2d4c138

File tree

4 files changed: 55 additions & 60 deletions


docs/installation.md

Lines changed: 2 additions & 2 deletions
@@ -120,8 +120,8 @@ For the version 5 of openmpi the backend changed to `pmix`, this requires the ad
 ```
 conda install -c conda-forge flux-core flux-sched flux-pmix openmpi>=5 executorlib
 ```
-In addition, the `flux_executor_pmi_mode="pmix"` parameter has to be set for the `executorlib.Executor` to switch to
-`pmix` as backend.
+In addition, the `flux_executor_pmi_mode="pmix"` parameter has to be set for the `FluxJobExecutor` or the
+`FluxClusterExecutor` to switch to `pmix` as backend.

 ### Test Flux Framework
 To validate the installation of flux and confirm the GPUs are correctly recognized, you can start a flux session on the
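To illustrate the updated instruction above, here is a minimal sketch of selecting the `pmix` backend on one of the renamed classes; the worker count and the submitted function are illustrative placeholders, not part of this commit:

```python
# Hedged sketch: select the pmix backend for openmpi >= 5, as described in
# the updated installation docs. add() and max_workers=1 are placeholders.
from executorlib import FluxJobExecutor

def add(i, j):
    return i + j

with FluxJobExecutor(max_workers=1, flux_executor_pmi_mode="pmix") as exe:
    future = exe.submit(add, 1, 2)
    print(future.result())  # prints 3
```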

executorlib/executor/flux.py

Lines changed: 21 additions & 23 deletions
@@ -19,11 +19,11 @@
 
 class FluxJobExecutor(BaseExecutor):
 """
-The executorlib.Executor leverages either the message passing interface (MPI), the SLURM workload manager or
+The executorlib.FluxJobExecutor leverages either the message passing interface (MPI), the SLURM workload manager or
 preferable the flux framework for distributing python functions within a given resource allocation. In contrast to
-the mpi4py.futures.MPIPoolExecutor the executorlib.Executor can be executed in a serial python process and does not
-require the python script to be executed with MPI. It is even possible to execute the executorlib.Executor directly
-in an interactive Jupyter notebook.
+the mpi4py.futures.MPIPoolExecutor the executorlib.FluxJobExecutor can be executed in a serial python process and
+does not require the python script to be executed with MPI. It is even possible to execute the
+executorlib.FluxJobExecutor directly in an interactive Jupyter notebook.
 
 Args:
 max_workers (int): for backwards compatibility with the standard library, max_workers also defines the number of

@@ -65,7 +65,7 @@ class FluxJobExecutor(BaseExecutor):
 Examples:
 ```
 >>> import numpy as np
->>> from executorlib.executor.flux import FluxJobExecutor
+>>> from executorlib import FluxJobExecutor
 >>>
 >>> def calc(i, j, k):
 >>> from mpi4py import MPI

@@ -102,12 +102,11 @@ def __init__(
 plot_dependency_graph_filename: Optional[str] = None,
 ):
 """
-Instead of returning a executorlib.Executor object this function returns either a executorlib.mpi.PyMPIExecutor,
-executorlib.slurm.PySlurmExecutor or executorlib.flux.PyFluxExecutor depending on which backend is available. The
-executorlib.flux.PyFluxExecutor is the preferred choice while the executorlib.mpi.PyMPIExecutor is primarily used
-for development and testing. The executorlib.flux.PyFluxExecutor requires flux-core from the flux-framework to be
-installed and in addition flux-sched to enable GPU scheduling. Finally, the executorlib.slurm.PySlurmExecutor
-requires the SLURM workload manager to be installed on the system.
+The executorlib.FluxJobExecutor leverages either the message passing interface (MPI), the SLURM workload manager
+or preferable the flux framework for distributing python functions within a given resource allocation. In
+contrast to the mpi4py.futures.MPIPoolExecutor the executorlib.FluxJobExecutor can be executed in a serial
+python process and does not require the python script to be executed with MPI. It is even possible to execute
+the executorlib.FluxJobExecutor directly in an interactive Jupyter notebook.
 
 Args:
 max_workers (int): for backwards compatibility with the standard library, max_workers also defines the

@@ -204,11 +203,11 @@
 
 class FluxClusterExecutor(BaseExecutor):
 """
-The executorlib.Executor leverages either the message passing interface (MPI), the SLURM workload manager or
-preferable the flux framework for distributing python functions within a given resource allocation. In contrast to
-the mpi4py.futures.MPIPoolExecutor the executorlib.Executor can be executed in a serial python process and does not
-require the python script to be executed with MPI. It is even possible to execute the executorlib.Executor directly
-in an interactive Jupyter notebook.
+The executorlib.FluxClusterExecutor leverages either the message passing interface (MPI), the SLURM workload manager
+or preferable the flux framework for distributing python functions within a given resource allocation. In contrast
+to the mpi4py.futures.MPIPoolExecutor the executorlib.FluxClusterExecutor can be executed in a serial python process
+and does not require the python script to be executed with MPI. It is even possible to execute the
+executorlib.FluxClusterExecutor directly in an interactive Jupyter notebook.
 
 Args:
 max_workers (int): for backwards compatibility with the standard library, max_workers also defines the number of

@@ -246,7 +245,7 @@ class FluxClusterExecutor(BaseExecutor):
 Examples:
 ```
 >>> import numpy as np
->>> from executorlib.executor.flux import FluxClusterExecutor
+>>> from executorlib import FluxClusterExecutor
 >>>
 >>> def calc(i, j, k):
 >>> from mpi4py import MPI

@@ -280,12 +279,11 @@ def __init__(
 plot_dependency_graph_filename: Optional[str] = None,
 ):
 """
-Instead of returning a executorlib.Executor object this function returns either a executorlib.mpi.PyMPIExecutor,
-executorlib.slurm.PySlurmExecutor or executorlib.flux.PyFluxExecutor depending on which backend is available. The
-executorlib.flux.PyFluxExecutor is the preferred choice while the executorlib.mpi.PyMPIExecutor is primarily used
-for development and testing. The executorlib.flux.PyFluxExecutor requires flux-core from the flux-framework to be
-installed and in addition flux-sched to enable GPU scheduling. Finally, the executorlib.slurm.PySlurmExecutor
-requires the SLURM workload manager to be installed on the system.
+The executorlib.FluxClusterExecutor leverages either the message passing interface (MPI), the SLURM workload
+manager or preferable the flux framework for distributing python functions within a given resource allocation.
+In contrast to the mpi4py.futures.MPIPoolExecutor the executorlib.FluxClusterExecutor can be executed in a
+serial python process and does not require the python script to be executed with MPI. It is even possible to
+execute the executorlib.FluxClusterExecutor directly in an interactive Jupyter notebook.
 
 Args:
 max_workers (int): for backwards compatibility with the standard library, max_workers also defines the
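For context on the import change in the Examples hunks above, here is a minimal usage sketch with the new top-level import; the function and worker count are illustrative assumptions rather than the docstring's MPI-parallel example:

```python
# Hedged sketch of the new top-level import for FluxJobExecutor.
# calc() here is a plain Python placeholder, not the MPI-parallel calc()
# used in the actual docstring example.
from executorlib import FluxJobExecutor

def calc(i, j, k):
    return i + j + k

with FluxJobExecutor(max_workers=2) as exe:
    fs = exe.submit(calc, 1, 2, 3)
    print(fs.result())  # prints 6
```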

executorlib/executor/single.py

Lines changed: 11 additions & 12 deletions
@@ -19,11 +19,11 @@
 
 class SingleNodeExecutor(BaseExecutor):
 """
-The executorlib.Executor leverages either the message passing interface (MPI), the SLURM workload manager or
-preferable the flux framework for distributing python functions within a given resource allocation. In contrast to
-the mpi4py.futures.MPIPoolExecutor the executorlib.Executor can be executed in a serial python process and does not
-require the python script to be executed with MPI. It is even possible to execute the executorlib.Executor directly
-in an interactive Jupyter notebook.
+The executorlib.SingleNodeExecutor leverages either the message passing interface (MPI), the SLURM workload manager
+or preferable the flux framework for distributing python functions within a given resource allocation. In contrast
+to the mpi4py.futures.MPIPoolExecutor the executorlib.SingleNodeExecutor can be executed in a serial python process
+and does not require the python script to be executed with MPI. It is even possible to execute the
+executorlib.SingleNodeExecutor directly in an interactive Jupyter notebook.
 
 Args:
 max_workers (int): for backwards compatibility with the standard library, max_workers also defines the number of

@@ -60,7 +60,7 @@ class SingleNodeExecutor(BaseExecutor):
 Examples:
 ```
 >>> import numpy as np
->>> from executorlib.executor.single import SingleNodeExecutor
+>>> from executorlib import SingleNodeExecutor
 >>>
 >>> def calc(i, j, k):
 >>> from mpi4py import MPI

@@ -93,12 +93,11 @@ def __init__(
 plot_dependency_graph_filename: Optional[str] = None,
 ):
 """
-Instead of returning a executorlib.Executor object this function returns either a executorlib.mpi.PyMPIExecutor,
-executorlib.slurm.PySlurmExecutor or executorlib.flux.PyFluxExecutor depending on which backend is available. The
-executorlib.flux.PyFluxExecutor is the preferred choice while the executorlib.mpi.PyMPIExecutor is primarily used
-for development and testing. The executorlib.flux.PyFluxExecutor requires flux-core from the flux-framework to be
-installed and in addition flux-sched to enable GPU scheduling. Finally, the executorlib.slurm.PySlurmExecutor
-requires the SLURM workload manager to be installed on the system.
+The executorlib.SingleNodeExecutor leverages either the message passing interface (MPI), the SLURM workload
+manager or preferable the flux framework for distributing python functions within a given resource allocation.
+In contrast to the mpi4py.futures.MPIPoolExecutor the executorlib.SingleNodeExecutor can be executed in a serial
+python process and does not require the python script to be executed with MPI. It is even possible to execute
+the executorlib.SingleNodeExecutor directly in an interactive Jupyter notebook.
 
 Args:
 max_workers (int): for backwards compatibility with the standard library, max_workers also defines the
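The same import simplification applies to SingleNodeExecutor; a minimal sketch assuming a plain local run (the squared() helper and the worker count are illustrative):

```python
# Hedged sketch of the new top-level SingleNodeExecutor import; the
# submitted squared() function is a placeholder for illustration only.
from executorlib import SingleNodeExecutor

def squared(x):
    return x ** 2

with SingleNodeExecutor(max_workers=2) as exe:
    futures = [exe.submit(squared, i) for i in range(4)]
    print([f.result() for f in futures])  # prints [0, 1, 4, 9]
```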

executorlib/executor/slurm.py

Lines changed: 21 additions & 23 deletions
@@ -20,11 +20,11 @@
 
 class SlurmClusterExecutor(BaseExecutor):
 """
-The executorlib.Executor leverages either the message passing interface (MPI), the SLURM workload manager or
-preferable the flux framework for distributing python functions within a given resource allocation. In contrast to
-the mpi4py.futures.MPIPoolExecutor the executorlib.Executor can be executed in a serial python process and does not
-require the python script to be executed with MPI. It is even possible to execute the executorlib.Executor directly
-in an interactive Jupyter notebook.
+The executorlib.SlurmClusterExecutor leverages either the message passing interface (MPI), the SLURM workload
+manager or preferable the flux framework for distributing python functions within a given resource allocation. In
+contrast to the mpi4py.futures.MPIPoolExecutor the executorlib.SlurmClusterExecutor can be executed in a serial
+python process and does not require the python script to be executed with MPI. It is even possible to execute the
+executorlib.SlurmClusterExecutor directly in an interactive Jupyter notebook.
 
 Args:
 max_workers (int): for backwards compatibility with the standard library, max_workers also defines the number of

@@ -62,7 +62,7 @@ class SlurmClusterExecutor(BaseExecutor):
 Examples:
 ```
 >>> import numpy as np
->>> from executorlib.executor.slurm import SlurmClusterExecutor
+>>> from executorlib import SlurmClusterExecutor
 >>>
 >>> def calc(i, j, k):
 >>> from mpi4py import MPI

@@ -96,12 +96,11 @@ def __init__(
 plot_dependency_graph_filename: Optional[str] = None,
 ):
 """
-Instead of returning a executorlib.Executor object this function returns either a executorlib.mpi.PyMPIExecutor,
-executorlib.slurm.PySlurmExecutor or executorlib.flux.PyFluxExecutor depending on which backend is available. The
-executorlib.flux.PyFluxExecutor is the preferred choice while the executorlib.mpi.PyMPIExecutor is primarily used
-for development and testing. The executorlib.flux.PyFluxExecutor requires flux-core from the flux-framework to be
-installed and in addition flux-sched to enable GPU scheduling. Finally, the executorlib.slurm.PySlurmExecutor
-requires the SLURM workload manager to be installed on the system.
+The executorlib.SlurmClusterExecutor leverages either the message passing interface (MPI), the SLURM workload
+manager or preferable the flux framework for distributing python functions within a given resource allocation.
+In contrast to the mpi4py.futures.MPIPoolExecutor the executorlib.SlurmClusterExecutor can be executed in a
+serial python process and does not require the python script to be executed with MPI. It is even possible to
+execute the executorlib.SlurmClusterExecutor directly in an interactive Jupyter notebook.
 
 Args:
 max_workers (int): for backwards compatibility with the standard library, max_workers also defines the

@@ -196,11 +195,11 @@
 
 class SlurmJobExecutor(BaseExecutor):
 """
-The executorlib.Executor leverages either the message passing interface (MPI), the SLURM workload manager or
+The executorlib.SlurmJobExecutor leverages either the message passing interface (MPI), the SLURM workload manager or
 preferable the flux framework for distributing python functions within a given resource allocation. In contrast to
-the mpi4py.futures.MPIPoolExecutor the executorlib.Executor can be executed in a serial python process and does not
-require the python script to be executed with MPI. It is even possible to execute the executorlib.Executor directly
-in an interactive Jupyter notebook.
+the mpi4py.futures.MPIPoolExecutor the executorlib.SlurmJobExecutor can be executed in a serial python process and
+does not require the python script to be executed with MPI. It is even possible to execute the
+executorlib.SlurmJobExecutor directly in an interactive Jupyter notebook.
 
 Args:
 max_workers (int): for backwards compatibility with the standard library, max_workers also defines the number of

@@ -241,7 +240,7 @@ class SlurmJobExecutor(BaseExecutor):
 Examples:
 ```
 >>> import numpy as np
->>> from executorlib.executor.slurm import SlurmJobExecutor
+>>> from executorlib import SlurmJobExecutor
 >>>
 >>> def calc(i, j, k):
 >>> from mpi4py import MPI

@@ -274,12 +273,11 @@ def __init__(
 plot_dependency_graph_filename: Optional[str] = None,
 ):
 """
-Instead of returning a executorlib.Executor object this function returns either a executorlib.mpi.PyMPIExecutor,
-executorlib.slurm.PySlurmExecutor or executorlib.flux.PyFluxExecutor depending on which backend is available. The
-executorlib.flux.PyFluxExecutor is the preferred choice while the executorlib.mpi.PyMPIExecutor is primarily used
-for development and testing. The executorlib.flux.PyFluxExecutor requires flux-core from the flux-framework to be
-installed and in addition flux-sched to enable GPU scheduling. Finally, the executorlib.slurm.PySlurmExecutor
-requires the SLURM workload manager to be installed on the system.
+The executorlib.SlurmJobExecutor leverages either the message passing interface (MPI), the SLURM workload
+manager or preferable the flux framework for distributing python functions within a given resource allocation.
+In contrast to the mpi4py.futures.MPIPoolExecutor the executorlib.SlurmJobExecutor can be executed in a serial
+python process and does not require the python script to be executed with MPI. It is even possible to execute
+the executorlib.SlurmJobExecutor directly in an interactive Jupyter notebook.
 
 Args:
 max_workers (int): for backwards compatibility with the standard library, max_workers also defines the
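And the same pattern for the SLURM executors; a minimal sketch using SlurmJobExecutor with the new top-level import, assuming the script already runs inside a SLURM allocation (the function and worker count are illustrative):

```python
# Hedged sketch of the new top-level SlurmJobExecutor import; assumes an
# existing SLURM allocation. calc() and max_workers=1 are placeholders.
from executorlib import SlurmJobExecutor

def calc(i, j):
    return i + j

with SlurmJobExecutor(max_workers=1) as exe:
    fs = exe.submit(calc, 20, 22)
    print(fs.result())  # prints 42
```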
