# Copy this file into your home folder and remove this first line
#!/bin/bash -l
## Below we ask for:
# - only 1 machine (machines have at least 20 CPUs)
# - only one task (you rarely need more than one task unless you span several machines)
# - we indicate we plan on using at most a single CPU and 1G of RAM.
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1G
#
#SBATCH --partition fast.q
# (You can list multiple partitions, comma separated)
#SBATCH --time=0-00:15:00 # 0 days 00 hours, 15 minutes, 00 seconds
#
#SBATCH --output=myjob_%j.stdout
#
#SBATCH --job-name=test
#SBATCH --export=ALL
## Constraints and Features (advanced users)
# Many of Merced's nodes have features that other nodes do not have; for example, GPUs
# or an InfiniBand interconnect for MPI jobs.
#
# You may want to request nodes with only these features using the --constraint
# option and specifying a list of what you wish to request. Use `sinfo -o "%20N %10c %10m %25f"`
# to see which features are available. Currently:
#   ib: InfiniBand (for fast, low-latency IO or MPI jobs)
#   gpu: Graphics Processing Units, separated into K20m and P100 if you need to be more specific.
# Examples of requesting features:
# #SBATCH --constraint=ib
# #SBATCH --constraint=K20m,ib (InfiniBand and K20m GPU)
#
# This submission file will run a simple set of commands. All stdout will be
# captured in myjob_XXXX.stdout (as specified by the --output directive above).
# This job uses a shared-memory parallel environment and requests 1 core
# on a single node. The script loads the PGI compiler module named "pgi".
whoami
module load pgi
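# --- Illustrative additions (not part of the original sample) ---
# A minimal sketch of what could follow the module load: a hypothetical
# compile-and-run step with the PGI C compiler (pgcc). The source file
# hello.c is an assumed example and is not provided by this repository.
# pgcc -o hello hello.c
# ./hello
#
# To use this script, submit it from a login node with sbatch and monitor it
# with squeue, for example:
#   sbatch sample.sub
#   squeue -u $USER    # list your queued and running jobs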