On Wednesday, 26 February 2020 12:48:26 PM PST Joshua Baker-LePain wrote:
> We're planning the migration of our moderately sized cluster (~400 nodes,
> 40K jobs/day) from SGE to slurm. We'd very much like to have a backup
> slurmctld, and it'd be even better if our backup slurmctld could be in a
> separate data center from the primary (though they'd still be on the same
> pri
I would say so.
Certainly, with many nodes and/or many jobs being submitted you will see some
impact, but in my experience comparing Slurm to SGE, Slurm carries much less
overhead and so causes much less of it.
Brian Andrus
On 2/26/2020 1:05 PM, Joshua Baker-LePain wrote:
On Wed, 26 Feb 2020 at 12:56pm, Brian Andrus wrote:
> Any shared filesystem that both systems can get to will work.
> I have done it with NFS, Gluster, appliances (NetApp), etc.
> Being in a separate datacenter is fine, but you will see some latency,
> which you likely already addressed if you are physically splitting a
> network like that.
Also, very e
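
For reference, putting the controller state on a filesystem both machines mount
maps to just a few slurm.conf settings. A minimal sketch (hostnames and the
mount path are placeholders, not anything from this thread):

```
# slurm.conf excerpt -- primary/backup controller pair
SlurmctldHost=ctld-primary     # first entry is the primary slurmctld
SlurmctldHost=ctld-backup      # second entry takes over if the primary fails
# Both controllers must mount this path read-write (e.g. over NFS)
StateSaveLocation=/shared/slurm/state
# Seconds the backup waits for the primary before assuming control;
# worth raising if the controllers sit in separate data centers
SlurmctldTimeout=120
```

Failover and fallback can then be exercised manually with `scontrol takeover`.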
We're planning the migration of our moderately sized cluster (~400 nodes,
40K jobs/day) from SGE to slurm. We'd very much like to have a backup
slurmctld, and it'd be even better if our backup slurmctld could be in a
separate data center from the primary (though they'd still be on the same
pri