Hi Team,
I have separated the CPU nodes and GPU nodes into two different queues.
Now I have 20 nodes with CPUs only (20 cores each) and no GPUs.
Another set of nodes has both GPUs and CPUs: some nodes have 2 GPUs and 20 CPUs,
and some have 8 GPUs and 48 CPUs. These are assigned to the GPU queue.
Users are facing issues when ...
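The queue split described above would look roughly like this in slurm.conf (node
names, counts, and groupings are placeholders for illustration, not the actual
configuration):

  # CPU-only queue: 20 nodes, 20 cores each, no GPUs
  NodeName=cpu[01-20] CPUs=20
  # GPU queue: nodes with 2 GPUs + 20 CPUs and nodes with 8 GPUs + 48 CPUs
  GresTypes=gpu
  NodeName=gpu[01-10]    CPUs=20 Gres=gpu:2
  NodeName=biggpu[01-02] CPUs=48 Gres=gpu:8
  PartitionName=cpu Nodes=cpu[01-20] Default=YES MaxTime=INFINITE State=UP
  PartitionName=gpu Nodes=gpu[01-10],biggpu[01-02] MaxTime=INFINITE State=UP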
On 16/06/20 16:23, Loris Bennett wrote:
> Thanks for pointing this out - I hadn't been aware of this. Is there
> anywhere in the documentation where this is explicitly stated?
I don't remember. It seems Michael's experience is different; possibly some
other setting influences that behaviour.
Diego Zuccato writes:
> On 16/06/20 09:39, Loris Bennett wrote:
>
>>> Maybe it's already known and obvious, but... Remember that a node can be
>>> allocated to only one partition.
>> Maybe I am misunderstanding you, but I think that this is not the case.
>> A node can be in multiple partitions.
Not trying to argue unnecessarily, but what you describe is not a universal
rule, regardless of QOS.
Our GPU nodes are members of 3 GPU-related partitions, 2 more resource-limited
non-GPU partitions, and one of two larger-memory partitions. It’s set up this
way to minimize idle resources (due to ...
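As a sketch of that kind of overlap (node name, partition names, and limits below
are invented for illustration, not the poster's actual setup), the same node is
simply listed in several PartitionName lines:

  NodeName=gpunode01 CPUs=48 RealMemory=384000 Gres=gpu:8
  # GPU-related partitions
  PartitionName=gpu       Nodes=gpunode01 MaxTime=1-00:00:00
  PartitionName=gpu-long  Nodes=gpunode01 MaxTime=7-00:00:00
  # Resource-limited non-GPU partition: CPU jobs may use only part of the node
  PartitionName=cpu-spill Nodes=gpunode01 MaxCPUsPerNode=24
  # Larger-memory partition
  PartitionName=bigmem    Nodes=gpunode01 MaxMemPerNode=256000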
On 16/06/20 09:39, Loris Bennett wrote:
>> Maybe it's already known and obvious, but... Remember that a node can be
>> allocated to only one partition.
> Maybe I am misunderstanding you, but I think that this is not the case.
> A node can be in multiple partitions.
*Assigned* to multiple partitions ...
Diego Zuccato writes:
> On 13/06/20 17:47, navin srivastava wrote:
>
>> Yes we have separate partitions. Some are specific to gpu having 2 nodes
>> with 8 gpu and another partitions are mix of both,nodes with 2 gpu and
>> very few nodes are without any gpu.
> Maybe it's already known and obvious, but... Remember that a node can be
> allocated to only one partition.
Maybe I am misunderstanding you, but I think that this is not the case.
A node can be in multiple partitions.
On 13/06/20 17:47, navin srivastava wrote:
> Yes we have separate partitions. Some are specific to gpu having 2 nodes
> with 8 gpu and another partitions are mix of both,nodes with 2 gpu and
> very few nodes are without any gpu.
Maybe it's already known and obvious, but... Remember that a node can be
allocated to only one partition.
From: slurm-users on behalf of navin srivastava
Sent: Saturday, June 13, 2020 10:47 AM
To: Slurm User Community List
Subject: Re: [slurm-users] ignore gpu resources to scheduled the cpu based jobs
Yes, we have separate partitions. Some are GPU-specific, with 2 nodes that have
8 GPUs each; the other partitions are a mix of both, with nodes that have 2 GPUs
and very few nodes without any GPU.
Regards
Navin
Thanks Renfro.
Yes, we have both types of nodes, with and without GPUs.
Some users' jobs require GPUs, while some applications use only CPUs.
The issue happens when a high-priority job is waiting for GPU resources that are
not available, and a lower-priority job keeps waiting as well, even though enough
CPU resources are free.
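One common approach to this situation (a sketch, not a statement of what this
site actually runs) is to make sure backfill scheduling is enabled and that jobs
carry realistic time limits, so lower-priority CPU jobs can start on idle cores
while the high-priority GPU job waits:

  # slurm.conf scheduling settings (values are illustrative)
  SchedulerType=sched/backfill
  SelectType=select/cons_tres
  SelectTypeParameters=CR_Core_Memory
  # Look further ahead and keep scanning the queue past blocked jobs
  SchedulerParameters=bf_window=4320,bf_continue,bf_max_job_test=1000

Backfill can only start a lower-priority job if doing so does not delay the
expected start of the higher-priority one, which is why accurate per-job time
limits matter.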
We will probably need more information to find a solution.
To start, do you have separate partitions for GPU and non-GPU jobs? Do you have
nodes without GPUs?
On Jun 13, 2020, at 12:28 AM, navin srivastava wrote:
Hi All,
In our environment we have GPUs. What I have found is that if a user with high
priority has a job in the queue waiting for GPU resources, which are almost fully
used and not available, then jobs submitted by other users that do not require
GPU resources also stay in the queue, even though lots of CPU resources are free.
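For reference, the two kinds of jobs would typically be submitted along these
lines (partition names and script names are placeholders, assuming separate CPU
and GPU partitions as discussed above):

  # GPU job: request GPUs explicitly in the GPU partition
  sbatch --partition=gpu --gres=gpu:1 --cpus-per-task=4 --time=04:00:00 gpu_job.sh
  # CPU-only job: no GPU request, submitted to the CPU partition
  sbatch --partition=cpu --cpus-per-task=20 --time=02:00:00 cpu_job.sh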