On Tue, Jun 30, 2020 at 10:52:00AM -0400, Lawrence Stewart wrote:
> How does one configure the runtime priority of a job? That is, how do you
> set the CPU scheduling “nice” value?
>
> We’re using Slurm to share a large (16 core 768 GB) server among FPGA
> compilation jobs. Slurm handles core
As far as I can tell, sbatch --nice only affects scheduling priority, not CPU
priority.
I’ve made a workaround by putting “nice -n 19 xxx” as the job to run in my
sbatch scripts.
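For reference, here is roughly what that looks like; a minimal sketch only, in
which the resource requests and the compile command are placeholders rather
than our real job:

#!/bin/bash
#SBATCH --job-name=fpga_build        # placeholder job name
#SBATCH --cpus-per-task=4            # placeholder resource request
#SBATCH --mem=64G                    # placeholder resource request

# --nice only changes Slurm's queue priority, so the kernel-level CPU
# priority is lowered here on the command itself instead.
nice -n 19 ./run_fpga_compile.sh     # run_fpga_compile.sh is a placeholder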
On Jun 30, 2020, at 11:07 AM, Renfro, Michael wrote:
There’s a --nice flag to sbatch and srun, at least. Documentation indicates it
decreases priority by 100 by default.
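For example (the adjustment values and the script name below are arbitrary,
just to show the flag):

# Submit with the default adjustment (documented as 100):
sbatch --nice batch_job.sh

# Or give an explicit adjustment:
sbatch --nice=500 batch_job.sh
srun --nice=500 --pty bash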
And untested, but it may be possible to use a job_submit.lua [1] to adjust nice
values automatically. At least I can see a nice property in [2], which I assume
means it'd be accessible.
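If that pans out, a plugin along these lines might do it; completely untested,
and only a sketch (the value 100 is arbitrary, and it would override any --nice
a user passes at submission):

-- job_submit.lua: untested sketch that applies a blanket nice adjustment
-- to every submitted job; enabled with JobSubmitPlugins=lua in slurm.conf.
function slurm_job_submit(job_desc, part_list, submit_uid)
    -- Apply an arbitrary nice adjustment; this overrides any --nice
    -- the user supplied at submission time.
    job_desc.nice = 100
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
    -- No changes on job modification in this sketch.
    return slurm.SUCCESS
end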
How does one configure the runtime priority of a job? That is, how do you set
the CPU scheduling “nice” value?
We’re using Slurm to share a large (16 core 768 GB) server among FPGA
compilation jobs. Slurm handles core and memory reservations just fine, but
runs everything nice -19, which make
Can you also post the slurmctld log file from the server (controller)?
Hi,
Can you post the output of the following commands on your master node?
sacctmgr show cluster
scontrol show nodes
Best,
Durai Arasan
Zentrum für Datenverarbeitung
Tübingen
On Tue, Jun 30, 2020 at 10:33 AM Alberto Morillas, Angelines <
angelines.albe...@ciemat.es> wrote:
Hi,
We have Slurm version 18.08.6.
One of my nodes is in the drain state with Reason=Kill task failed
[root@2020-06-27T02:25:29].
On the node I can see the following in slurmd.log:
[2020-06-27T01:24:26.242] task_p_slurmd_batch_request: 963771
[2020-06-27T01:24:26.242] task/affinity: job 963771 CPU input mask for node
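For reference, the drain reason can be inspected and, once the underlying
cause has been dealt with, cleared with scontrol; a generic sketch only, in
which the node name is a placeholder:

scontrol show node node001        # shows State= and Reason= for the node
sinfo -R                          # lists drained/down nodes with their reasons

# After the underlying problem is resolved, return the node to service:
scontrol update NodeName=node001 State=RESUME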
Hi Team,
I have separated the CPU nodes and the GPU nodes into two different queues.
I now have 20 nodes with CPUs only (20 cores each) and no GPUs.
Another set of nodes has both GPUs and CPUs: some have 2 GPUs and 20 CPUs, and
some have 8 GPUs and 48 CPUs; these are assigned to the GPU queue.
Users are facing issues when
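For reference, with a layout like the one described above, jobs would normally
target the two queues along these lines; the partition names, counts and script
names here are placeholders, not the actual configuration:

# CPU-only work goes to the CPU partition:
sbatch -p cpu -n 20 cpu_job.sh

# GPU work requests its GPUs explicitly in the GPU partition:
sbatch -p gpu --gres=gpu:2 -n 20 gpu_job.sh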