If you have several thousand similar jobs to submit to a SLURM cluster such as Compute Canada, one of your goals, aside from designing each job to run as quickly as possible, will be to reduce queuing delays: time spent waiting for an idle node to accept your job. Scheduling delays can quickly overshadow the runtime of your job if you do not take care when requesting resources; you may wait hours, or even days. One way to minimize waiting, of course, is to schedule fewer jobs overall (e.g. by running more tasks within a given allocation if its time limit has not expired). If you can fit everything in a single job, perfect. But what if you could divide the experiment into two halves that run concurrently, obtaining close to a 200% speedup while also spending less wallclock time waiting in the job queue?
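As a sketch of that idea, here is what a single submission running two halves concurrently inside one allocation could look like. The `./experiment` program, its flags, and the input names are placeholders for your own workload, and the resource numbers are illustrative:

```shell
#!/bin/bash
# Hypothetical job script: both halves of the experiment run as
# background processes inside one allocation, sharing its CPUs.
#SBATCH --cpus-per-task=32
#SBATCH --time=03:00:00

./experiment --input first_half  --threads 16 &   # placeholder program
./experiment --input second_half --threads 16 &
wait   # the job ends only when both halves have finished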
When a compute task's execution parameters can be traded off along several resource axes (time, number of threads, memory), picking the right parameters can become difficult. Should I throw in more threads to save time? Should I use fewer threads to save RAM? Should I divide the inputs into smaller chunks to stay within a time limit? Wait times are also sometimes multiplied when jobs are resubmitted due to unforeseen errors (or a change of parameters).
It helps to understand how the SLURM scheduler makes its decisions when picking the next job to run on a node. The process is not exactly transparent. The good news is that it is not fully opaque either: there are hints available.
Once a job has been submitted via commands such as `srun`, it enters the job queue along with thousands of other jobs submitted by fellow researchers. You can use `squeue` to see how far back you are in line, but that doesn't tell you much; for instance, it will not always provide estimated start times.
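You can at least ask SLURM for its own guess. A minimal example, assuming a standard SLURM installation; the estimate is often rough, and may be absent until the backfill scheduler has considered your job:

```shell
# Show SLURM's estimated start times for your own pending jobs.
# The START_TIME column may read N/A while no estimate exists yet.
squeue -u "$USER" --start
```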
You most likely already know that the amount of resources you request for your job (number of nodes, number of CPUs, RAM, wall-clock time limit) influences how soon your job becomes eligible to run. However, varying these amounts can have surprising effects on time spent in the scheduling state (aka state `PENDING`). The kicker:

In some cases, increasing requested job resources can lower overall queuing delays.
Compute nodes in the cluster have been statically partitioned by the admins. Varying the resource constraints of your job changes the set of nodes available to run it, somewhat like a step function. Each compute node on Compute Canada (cedar and graham) is placed in one or more partitions:
- cpubase_bycore (for jobs wanting a number of cores, but not all cores on a node)
- cpubase_bynode (for jobs requesting all the cores on a node)
- cpubase_interac (for interactive jobs e.g., `salloc`)
- gpubase_… (for jobs requesting GPUs)
If you request fewer cores than the number available on the node's hardware, then your job waits for an allocation in the `_bycore` partitions. If you request all of the available cores, then your job waits for an allocation in the `_bynode` partitions. So, depending on availability, it might help to ask for more threads and configure your jobs to process more work at once. The SLURM settings vary by cluster; the partitions above are for cedar and graham. Niagara, the new cluster, will for instance only do by-node allocations.
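For example, assuming the partition names above, you can list the nodes of a given partition along with their CPU and memory counts (`%n`, `%c`, and `%m` are standard `sinfo` format fields):

```shell
# List hostname, CPU count, and memory (MB) for nodes
# in the by-node partition.
sinfo -p cpubase_bynode -o "%n %c %m"
```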
On Compute Canada, you can view the raw information about all available nodes and their partitions with the `sinfo` command. You can also view current node availability by partition using `partition-stats`. The following command outputs a table showing the current number of queued jobs in each partition. If you look at the table closely, you learn that there is only one job queued for the `_bynode` partition of regular nodes (for jobs needing less than 3 hours). The `_bycore` partition, on the other hand, has a lot of jobs sitting in it patiently. If you tweak your job to make it eligible for the partition with the most availability (often the one with the strictest requirements), then you minimize your queuing time.
```
$ partition-stats -q

      Number of Queued Jobs by partition Type (by node:by core)

 Node type |                        Max walltime
           |  3 hr   |  12 hr  |  24 hr  |  72 hr  | 168 hr | 672 hr  |
 ----------|---------|---------|---------|---------|--------|---------|
 Regular   |  1:263  | 429:889 | 138:444 | 91:3030 | 14:127 | 122:138 |
 Large Mem |   0:0   |   0:0   |   0:0   |   0:6   |  0:125 |   0:0   |
 GPU       |   6:63  |  48:87  |   3:3   |   8:22  |  2:30  |   8:1   |
 GPU Large |   1:-   |   0:-   |   0:-   |   0:-   |   1:-  |   0:-   |
```
“How do I request a `_bynode` allocation instead of a `_bycore` one?”

That is not obvious, and quasi-undocumented. The answer is that you do so by asking for all the CPUs available on the node. This is done with `sbatch --cpus-per-task N`. To pick the best number of CPUs N, you have to dig a bit deeper and look at the inventory (this is where the `sinfo` command comes in handy); the next section covers this. Note that the right N may change over time as the cluster gets upgrades and reconfigurations.
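One way to take that inventory, assuming a standard `sinfo`: group nodes by their exact hardware configuration and read off the CPU counts.

```shell
# --exact groups nodes sharing the same configuration, so the output
# reads as an inventory: node count, CPUs per node, memory per node,
# and the partition each group belongs to.
sinfo --exact -o "%D %c %m %P"
```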
Also, if you request more than one node in your job, each with N CPUs (e.g. `sbatch --nodes=3 --cpus-per-task=32 ...`), then all of them will be allocated `_bynode`.
Rules of Thumb
Here are some quick rules of thumb which work well for the state of the cedar cluster as of April 2018. The information in this section was gathered with `sinfo`, through conversations with Compute Canada support, and through a few weeks of experience. In other words, I haven't systematically studied job wait times over the course of months, but I will claim that these settings have worked best so far for my use cases (backed by recommendations from the support team).
- Most compute nodes have 32 CPUs installed. If you `sbatch` for N=32 CPUs (`--cpus-per-task=32`), you are likely to get your job running faster than if you ask for, say, N=16. If your job requires a low number of CPUs, it might be worth exploring options where 32 such jobs run in parallel on one node. It's okay to ask for more, but try to use all the resources you ask for, since your account will be debited for them.
- Most compute nodes have 128G of RAM. If you keep your job's memory ceiling under that, you're hitting a sweet spot, and you'll skip a lot of the queue. The next brackets up are (48 cores, 192G), with half as many nodes as the 128G variety, and (32 cores, 256G), with even fewer.
- Watch out for the `--exclusive=user` flag. It tells the scheduler that you wish your job to be colocated only with jobs run by your own user. Perhaps counter-intuitively, it does not impose a `_bynode` restriction. In the case where your job is already `_bynode` (i.e., you request enough CPUs to take a whole node), this flag is redundant. If you don't ask for all the available cores (meaning your job needs a `_bycore` allocation), then this flag will prevent jobs from other users from running on the node's remaining CPUs. In that case, the flag will likely hurt your progress (unless you have many such jobs that can fill the node).
- The time limit you pick matters. Try to batch your work in <= 3h chunks. A considerable number of nodes will only execute (or will favor) tasks that can complete in less than 3 hours of wallclock time, so a considerably larger set of nodes becomes eligible to you if you stay under that limit. The next 'brackets' are 12h, then 24h, 72h, 168h (1 wk), and 672h (28 d). This suggests there is no benefit to asking for 1h vs 3h, or 22h vs 18h, although only an intimate conversation with the scheduler's code would confirm that.
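To make the bracket idea concrete, here is a small helper; the bracket boundaries are the ones listed above, but the function itself is mine, not part of SLURM. It returns the smallest bracket, in hours, that fits an estimated runtime:

```shell
# Print the smallest walltime bracket (in hours) that fits the given
# estimated runtime; jobs anywhere inside one bracket queue alike.
bracket() {
    local hours=$1
    local b
    for b in 3 12 24 72 168 672; do
        if [ "$hours" -le "$b" ]; then
            echo "$b"
            return 0
        fi
    done
    echo "none"   # longer than the largest bracket
}

bracket 1    # same bracket as a 3-hour request: prints 3
bracket 22   # same bracket as an 18-hour request: prints 24
```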
But, I want it now.
It should be mentioned that often, if only a single “last-minute” job needs to be run, `salloc` (which accepts mostly the same arguments as `sbatch`) can provide the quickest turnaround time to start executing a job. It will get you an interactive shell on a node of the requested size within a few minutes of asking. A separate partition, `cpubase_interac`, answers those requests. Again, it is worth looking at the available configurations. Keep `salloc` in your back pocket.
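For instance, a modest interactive request might look like this; the sizes are illustrative, not a recommendation:

```shell
# Ask for an interactive shell: 4 CPUs, 8G of RAM, for one hour.
# Blocks until the allocation is granted, then drops you into a
# shell inside the allocation.
salloc --cpus-per-task=4 --mem=8G --time=1:00:00
```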