We have:

* a high-priority QOS for short jobs. The QOS is set at submission time by the user or in the Lua script.
* partitions for jobs of certain lengths or with other requirements. Sometimes several partitions overlap.
* a script that adjusts priorities according to our policies every 5 minutes.
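As a rough illustration of what the periodic adjustment script might do (this is a hypothetical sketch, not our actual script; the thresholds, bonus values, and job records are invented), the policy can be written as a pure function over pending jobs that emits `scontrol update` commands:

```python
# Hypothetical sketch of a periodic priority-adjustment pass.
# A real version would read pending jobs from `squeue` and apply the result
# with `scontrol update JobId=<id> Priority=<p>`; here the policy is a pure
# function so the logic itself is easy to see and test.

BASE_PRIORITY = 1000
SHORT_LIMIT_MIN = 240   # assumption: jobs under 4 hours count as "short"
SHORT_BONUS = 500       # assumption: flat boost applied to short jobs

def new_priority(time_limit_min, cores):
    """Boost short jobs; mildly penalize larger core requests."""
    prio = BASE_PRIORITY
    if time_limit_min <= SHORT_LIMIT_MIN:
        prio += SHORT_BONUS
    prio -= cores               # per-core penalty favors smaller jobs
    return max(prio, 1)         # keep the priority positive

# Invented example jobs: (jobid, time limit in minutes, requested cores)
jobs = [
    (101, 60, 4),
    (102, 2880, 256),
]
for jobid, tlim, cores in jobs:
    print(f"scontrol update JobId={jobid} Priority={new_priority(tlim, cores)}")
```

Run from cron every 5 minutes, something along these lines keeps short jobs near the front of the queue without touching the scheduler configuration itself.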
By combining these three methods, we've managed to get a pretty good balance for our needs.

Best regards,
Jessica Nettelblad, UPPMAX, Sweden

On Wed, Nov 22, 2017 at 5:53 PM, Satrajit Ghosh <sa...@mit.edu> wrote:
> slurm has a way of giving larger jobs more priority. is it possible to do
> the reverse?
>
> i.e., is there a way to configure priority to give smaller jobs (use less
> resources) higher priority than bigger ones?
>
> cheers,
>
> satra
>
> resources: can be a weighted combination depending on system resources
> available:
>
> w1*core + w2*memory + w3*time + w4*gpu
>
> where core, memory, time, gpu are those requested by the job, and w1-4 are
> determined by system resources/group allocations.
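For what it's worth, the weighted combination in the quoted question can be inverted directly to favor small jobs: compute a "size" score from the requested resources and subtract it from a cap. A minimal sketch (the weights w1-w4, the cap, and the example requests are all placeholder assumptions):

```python
# Sketch of the inverse of the quoted formula: priority falls as the
# weighted resource request w1*core + w2*memory + w3*time + w4*gpu grows,
# so smaller jobs rank higher. All weights below are illustrative.

W_CORE, W_MEM, W_TIME, W_GPU = 10.0, 0.5, 1.0, 200.0  # assumed weights
MAX_PRIORITY = 10_000                                  # assumed cap

def small_job_priority(cores, mem_gb, time_h, gpus):
    """Higher priority for smaller weighted resource requests."""
    size = W_CORE * cores + W_MEM * mem_gb + W_TIME * time_h + W_GPU * gpus
    return max(int(MAX_PRIORITY - size), 1)

print(small_job_priority(2, 8, 1, 0))      # small job: near the cap
print(small_job_priority(128, 512, 48, 4)) # large job: well below it
```

In practice the weights would be tuned per cluster (e.g., scaled by each resource's share of the total), which is what the quoted "determined by system resources/group allocations" suggests.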