>
>
> Your bf_window may be too small. From 'man slurm.conf':
>
> bf_window=#
>
> The number of minutes into the future to look when considering
> jobs to schedule. Higher values result in more overhead and
> less responsiveness. A value at least as long as the highest
> allowed time limit is generally advisable to prevent job starvation.
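For illustration, bf_window is one of the comma-separated SchedulerParameters
options in slurm.conf. The values below are only an example (7200 minutes
would cover a five-day maximum time limit; if memory serves, the man page
also advises raising bf_resolution when bf_window is increased):

    SchedulerParameters=bf_window=7200,bf_resolution=600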
Hi,
my GPU testing system (named “gpu-node”) is a simple computer with one socket
and an "Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz" processor. Executing "lscpu",
I can see there are 4 cores per socket, 2 threads per core and 8 CPUs:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
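For reference, a slurm.conf node definition matching that topology might look
like the line below; the hostname is the one given above, but the single GPU
in Gres is only an assumption for the example:

    NodeName=gpu-node Sockets=1 CoresPerSocket=4 ThreadsPerCore=2 Gres=gpu:1 State=UNKNOWN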
I have built Slurm 23.11.7 on two machines. Both are running Ubuntu 22.04.
While Slurm runs fine on one machine, it does not on the second. The first
machine is both a controller and a node, while the second machine is just a
node. On both machines, I built the Slurm Debian package as per the Slurm
documentation.
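For context, the Debian package build described in the Slurm quick-start
admin guide is roughly the following sequence (the exact tarball name here is
assumed):

    tar -xaf slurm-23.11.7.tar.bz2
    cd slurm-23.11.7
    mk-build-deps -i debian/control
    debuild -b -uc -us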
Ryan Novosielski via slurm-users writes:
> We do have bf_continue set, and also bf_max_job_user=50, because we
> discovered that one user can submit so many jobs that it will hit the limit
> on the number of jobs it’s going to consider and not run some jobs that it
> could otherwise run.
>
> On Jun 4
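Putting those together with a larger backfill window gives something like the
following in slurm.conf (the bf_window value here is illustrative, not a
recommendation):

    SchedulerParameters=bf_continue,bf_max_job_user=50,bf_window=7200

The settings actually in effect can be verified with:

    scontrol show config | grep SchedulerParameters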