Partitions & Time Limits¶
Slurm schedules jobs into partitions (queues). Each partition defines policy such as default/maximum wall time, access control, hardware class, and more. Time format is D-HH:MM:SS. The * marks the default partition.
Example of live sinfo output (snapshot)¶
```
PARTITION     AVAIL  TIMELIMIT   NODES  STATE  NODELIST
general*      up     10-00:00:0      9  idle   N[01-09]
xeon128       up     infinite        2  idle   N[01-02]
epyc256       up     30-00:00:0      1  idle   N03
epyc512       up     30-00:00:0      6  idle   N[04-09]
interactive   up     10-00:00:0      3  idle   N[01-03]
DB            up     infinite        1  idle   N10      # Reserved
```
(By default sinfo truncates the TIMELIMIT column to 10 characters, so a limit of 10-00:00:00 displays as 10-00:00:0.)
Snapshot table explanation¶
| Partition | Avail | Time limit | Nodes | State | Nodelist | Notes |
|---|---|---|---|---|---|---|
| general* | up | 10-00:00:00 | 9 | idle | N[01-09] | Default; used if `-p` is not specified |
| xeon128 | up | infinite | 2 | idle | N[01-02] | Nodes with 2 × Xeon CPUs, 48 cores and 128 GB RAM |
| epyc256 | up | 30-00:00:00 | 1 | idle | N03 | Node with 2 × EPYC CPUs, 96 cores and 256 GB RAM |
| epyc512 | up | 30-00:00:00 | 6 | idle | N[04-09] | Nodes with 2 × EPYC CPUs, 96 cores and 512 GB RAM |
| interactive | up | 10-00:00:00 | 3 | idle | N[01-03] | Nodes for interactive work (`srun`) |
| DB | up | infinite | 1 | idle | N10 | Reserved |
How time limits work¶
- If you omit `--time`, the job gets the partition's DefaultTime (which can be shorter than MaxTime).
- Request time explicitly with `--time=D-HH:MM:SS`, e.g. `sbatch -p general --time=2-00:00:00 myjob.sh` or `srun -p interactive --time=02:00:00 --pty bash`.
- Requests exceeding a partition's MaxTime are rejected.
- A partition can show `infinite`, but site or account QoS may still cap runtime.
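As a local sanity check before submitting, a `D-HH:MM:SS` (or `HH:MM:SS`) time string can be converted to minutes with a small shell helper. `to_minutes` is a hypothetical name, not part of Slurm; this is just a sketch of the arithmetic behind the format:

```shell
# Hypothetical helper (not a Slurm command): convert a D-HH:MM:SS
# or HH:MM:SS time string to whole minutes, ignoring seconds.
to_minutes() {
  local spec="$1" days=0 hms
  case "$spec" in
    *-*) days="${spec%%-*}"; hms="${spec#*-}" ;;  # split off the day count
    *)   hms="$spec" ;;
  esac
  local IFS=:
  set -- $hms                                     # split HH:MM:SS on ':'
  local h="${1:-0}" m="${2:-0}"
  echo $(( days * 1440 + 10#$h * 60 + 10#$m ))    # 10# forces base 10
}

to_minutes 2-00:00:00    # → 2880 (2 days)
to_minutes 02:30:00      # → 150
```

This makes it easy to compare, say, a 10-day MaxTime (14400 minutes) against your own request.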
Examples¶
1) Submit a batch job (default partition)¶
```
# Uses 'general' (default) with its default time:
sbatch myjob.sh

# Request 2 days explicitly:
sbatch -p general --time=2-00:00:00 myjob.sh
```
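The batch examples assume a script named `myjob.sh`. A minimal sketch of what it might contain (the job name, time, and resource values are placeholders, not site policy; `#SBATCH` lines are directives for sbatch and ordinary comments to the shell):

```shell
#!/bin/bash
#SBATCH --job-name=demo          # name shown in squeue (placeholder)
#SBATCH --time=02:00:00          # wall-time request, HH:MM:SS
#SBATCH --ntasks=1               # a single task
#SBATCH --output=%x-%j.out       # stdout file: <jobname>-<jobid>.out

# The actual work goes here:
msg="Running on $(hostname)"
echo "$msg"
```

Because the `#SBATCH` lines are comments, the script also runs unchanged outside Slurm, which is handy for quick local testing.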
2) Submit to a specific hardware partition¶
```
# EPYC 512 GB nodes for 12 hours:
sbatch -p epyc512 --time=12:00:00 myjob.sh

# EPYC 256 GB node for 6 hours:
sbatch -p epyc256 --time=06:00:00 myjob.sh
```
3) Interactive shell for debugging¶
```
# 2-hour interactive session:
srun -p interactive --time=02:00:00 --pty bash
```
4) Pin to specific node(s) (diagnostics only)¶
```
# Force placement on node N03 for 1 hour:
sbatch -p general -w N03 --time=01:00:00 myjob.sh
```
5) See partitions and nodes¶
```
# Full listing (the format shown in the snapshot above):
sinfo

# Compact per-partition summary:
sinfo -s

# List nodes in a partition, one per line (name, state, CPUs, memory in MiB):
sinfo -p epyc512 -N -h -o "%N %t %c %m"
# Columns: NodeName State CPUs Memory(MiB)
```
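The one-node-per-line format is convenient for scripting. A sketch that counts idle nodes from such output; the sample data is inlined here so the snippet is self-contained, but on a cluster you would pipe the sinfo command itself:

```shell
# Sample output in the 'sinfo -p epyc512 -N -h -o "%N %t %c %m"' format
# (inlined for illustration; values are assumptions, not real node data).
sample='N04 idle 96 515000
N05 idle 96 515000
N06 alloc 96 515000'

# Field 2 is the node state; count the idle ones.
idle_count=$(printf '%s\n' "$sample" | awk '$2 == "idle" { n++ } END { print n + 0 }')
echo "idle nodes: $idle_count"
```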
6) Inspect a partition’s policy¶
```
# Shows DefaultTime, MaxTime, access rules, etc.:
scontrol show partition general
```
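The scontrol output is a series of Key=Value pairs, which makes individual limits easy to extract. A sketch using an inlined sample line (the values are assumptions; on a cluster you would pipe `scontrol show partition general` instead):

```shell
# One line of 'scontrol show partition' style Key=Value output,
# inlined for illustration (values are assumptions, not site policy).
sample='   DefaultTime=1-00:00:00 DisableRootJobs=NO MaxTime=10-00:00:00'

# Split on whitespace, then match the key before '='.
max_time=$(printf '%s\n' "$sample" | tr -s ' ' '\n' | awk -F= '$1 == "MaxTime" { print $2 }')
echo "MaxTime is $max_time"
```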
7) Check your jobs with partition shown¶
```
squeue -o "%10i %8u %10T %10P %N %10M %l" | column -t
# Columns: JOBID USER STATE PARTITION NODELIST TIME TIME_LIMIT
```
(`%M` is the time the job has been running; `%l` is its time limit.)