@@ -42,6 +42,8 @@ In this example, we are using 1 node, which contains 2 sockets and 64 cores per
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=TRUE
+ # Optional: print the dynamically linked libraries of the NEST process
+ python -c "import nest, subprocess as s, os; s.check_call(['/usr/bin/pldd', str(os.getpid())])" 2>&1 | tee -a "pldd-nest.out"
# On some systems, MPI is run by SLURM
srun --exclusive python3 my_nest_simulation.py
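
The job script launches ``my_nest_simulation.py`` through ``srun``; the script itself is not shown in this change.
The following is only a minimal sketch of what such a script could look like, assuming the standard PyNEST interface
(``nest.SetKernelStatus``); the point is that the number of threads used inside NEST should match the
``SLURM_CPUS_PER_TASK`` / ``OMP_NUM_THREADS`` value exported above.

::

   # my_nest_simulation.py -- illustrative sketch only, not part of this change
   import os
   import nest

   # Use one NEST thread per core that SLURM assigned to this task, so the
   # kernel's threading matches the OMP_NUM_THREADS export in the job script.
   nest.SetKernelStatus({"local_num_threads": int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))})

   # A tiny example network, just to make the sketch complete.
   neurons = nest.Create("iaf_psc_alpha", 1000)
   nest.Simulate(100.0)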
@@ -174,6 +176,21 @@ will prevent the threads from moving around.
|
+ ::
+
+ python -c "import nest, subprocess as s, os; s.check_call(['/usr/bin/pldd', str(os.getpid())])" 2>&1 | tee -a "pldd-nest.out"
+
+ Prints the linked libraries into a file named ``pldd-nest.out``.
+ In this way, you can check whether the dynamically linked libraries for
+ the execution of ``nest`` are indeed used. For example, you can check whether ``jemalloc`` is used for the network
+ construction in highly parallel simulations; a quick check of the output is sketched below the note.
+
+ .. note::
+
+ The above command uses ``pldd``, which is commonly available in Linux distributions. However, you might need to change
+ the path, which you can find with the command ``which pldd``.
+
+ |
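
Once the job has run, a quick scan of ``pldd-nest.out`` confirms whether the expected library was linked. Below is a
minimal sketch of such a check (plain Python, nothing NEST-specific assumed; the file name ``check_pldd.py`` is only
illustrative):

::

   # check_pldd.py -- illustrative sketch: look for jemalloc in the pldd output
   with open("pldd-nest.out") as f:
       linked = f.read()

   if "jemalloc" in linked:
       print("jemalloc is linked into the NEST process")
   else:
       print("jemalloc was NOT found among the linked libraries")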
You can then tell the job script to schedule your simulation.
Setting the ``exclusive`` option prevents other processes or jobs from doing work on the same node.
@@ -222,11 +239,3 @@ It should match the number of ``cpus-per-task``.
.. seealso::
:ref:`parallel_computing`
-
-
-
-
-
-
-
-