[slurm-users] sbatch and --nodes

2024-05-31 Thread Michael DiDomenico via slurm-users
It's Friday and I'm either doing something silly or have a misconfig somewhere; I can't figure out which. When I run sbatch --nodes=1 --cpus-per-task=1 --array=1-100 --output test_%A_%a.txt --wrap 'uname -n', sbatch doesn't seem to be adhering to the --nodes param. When I look at my output files I
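For context: each array task is scheduled as its own independent job, so --nodes=1 constrains each task individually rather than pinning all 100 tasks to one node, which is why the output files can show several different node names. A minimal sketch to confirm where the tasks actually ran (assumes the output files land in the current directory):

    # Each array task is an independent job; --nodes=1 applies per task,
    # so the 100 tasks are free to land on different nodes.
    sbatch --nodes=1 --cpus-per-task=1 --array=1-100 \
           --output test_%A_%a.txt --wrap 'uname -n'

    # After the array finishes, count the distinct nodes used:
    cat test_*_*.txt | sort | uniq -c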

[slurm-users] Re: Job not starting

2024-12-10 Thread Michael DiDomenico via slurm-users
You don't need to be a subscriber to search bugs.schedmd.com

On Tue, Dec 10, 2024 at 9:44 AM Davide DelVento via slurm-users wrote:
> Good sleuthing.
>
> It would be nice if Slurm would say something like
> Reason=Priority_Lower_Than_Job_ so people will immediately find the
> culprit in s
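With the tools that exist today, a quick sketch of how to track down which higher-priority job is holding yours back (<jobid> is a placeholder):

    # Show the state and pending reason for the stuck job:
    squeue -j <jobid> -o "%.10i %.9T %.20r"

    # List the priority components of pending jobs; the job
    # outranking yours should stand out:
    sprio -l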

[slurm-users] Re: Job running slower when using Slurm

2025-04-23 Thread Michael DiDomenico via slurm-users
Without knowing anything about your environment, it's reasonable to suspect that your OpenMP program is multi-threaded but Slurm is constraining your job to a single core. Evidence of this should show up when running top on the node and watching the CPU% used by the program.

On Wed, Apr 23, 20
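A quick way to check that confinement, assuming cgroup-based affinity is in play and with <pid> standing in for the program's process ID on the node:

    # CPU% near 100 (instead of ~3200 for 32 threads) suggests the
    # job is pinned to a single core:
    top -b -n 1 -p <pid>

    # Show the CPU affinity actually granted to the process:
    taskset -cp <pid>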

[slurm-users] Re: Job running slower when using Slurm

2025-04-23 Thread Michael DiDomenico via slurm-users
The program probably says 32 threads because it's just looking at the box, not at what the Slurm cgroups allow for CPU (assuming you're using them). I think for an OpenMP program (not OpenMPI) you definitely want the first command with --cpus-per-task=32. Are you measuring the runtime inside the program or
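The usual pattern for an OpenMP (single-task, multi-threaded) job is sketched below; my_openmp_program is a placeholder binary:

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=32

    # Size the thread pool to what Slurm actually allocated, instead of
    # letting the OpenMP runtime count every core on the node:
    export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

    # Time the run from outside as well, so wall-clock time can be
    # compared against the program's internal timer:
    time ./my_openmp_program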