On 06/11/20 13:43, Valerio Bellizzomi wrote:
> Usually hyperthreading will halve the memory-bandwidth available to one
> thread running in one core, the other half being used by the second
> thread.
True. That's one of the "many factors" to consider.
We tested with MPI jobs that are mostly CP
On 04/11/20 19:12, Brian Andrus wrote:
> One thing you will start finding in HPC is that, by its goal,
> hyperthreading is usually a poor fit.
Depends on many factors, but our tests confirm it can do much good!
> If you are properly utilizing your cores, your jobs will actually be
> slowed
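For anyone who wants to check this on their own codes, a minimal sketch of the
kind of comparison we mean (assuming Slurm's --hint option is available; the
core/thread counts and ./my_mpi_app are placeholders for your own node layout
and binary):

  # one MPI rank per physical core, hyperthreads left idle
  srun -N1 --ntasks-per-node=16 --hint=nomultithread ./my_mpi_app

  # same node, one MPI rank per hardware thread (hyperthreading used)
  srun -N1 --ntasks-per-node=32 --hint=multithread ./my_mpi_app

Comparing the wall times of the two runs tells you whether hyperthreading
helps or hurts for that particular workload.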
Hello,
yesterday we upgraded our cluster from Slurm 20.02.2 to 20.02.5 and noticed
some problems when using GPUs together with more than one CPU per task.
I could reproduce the problem in a small Docker container; a description
can be found at the following link.
https://github.com/bikerd
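For the archives, a minimal job script of the kind described above (one GPU
plus more than one CPU per task; names and counts are only examples, the
actual reproducer is in the linked repository):

  #!/bin/bash
  #SBATCH --job-name=gpu-test    # example name
  #SBATCH --gres=gpu:1           # request one GPU
  #SBATCH --ntasks=1
  #SBATCH --cpus-per-task=2      # more than one CPU per task
  srun some_gpu_program          # placeholder for the real application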
Hi Ciaron,
on our Omnipath network, we encountered a similar problem:
MPI needs exclusive access to the interconnect.
Cray once provided a workaround, but it was not worth implementing (a
terrible effort/gain ratio for us).
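One blunt workaround, sketched here but not something we have actually
validated, is to stop node sharing altogether so that only one MPI job
touches the interconnect at a time, e.g.:

  # request whole nodes, not shared with other jobs (job.sh is a placeholder)
  sbatch --exclusive job.sh

but of course that trades the limitation for lower cluster utilisation.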
Conclusion: you might have to live with this limitation.
Kind regards,