The times for the two runs suggest that the version run through slurm is using
only one core.
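If that is what's happening, the fix is usually to request the cores explicitly in the submission script. A minimal sketch, assuming a multithreaded program (here called `./my_analysis`, a placeholder) and a core count of 8 chosen only for illustration:

```shell
#!/bin/bash
#SBATCH --job-name=multicore-test   # placeholder job name
#SBATCH --ntasks=1                  # one task (one process)
#SBATCH --cpus-per-task=8           # request 8 cores for that task

# Slurm exports SLURM_CPUS_PER_TASK with the value requested above;
# pass it on so the program actually spawns that many threads.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./my_analysis
```

Without `--cpus-per-task`, Slurm defaults to one CPU per task, which would produce exactly the single-core timing described above.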
Best – Don
Don Krieger, PhD
Research Scientist
Department of Neurological Surgery
University of Pittsburgh
From: slurm-users On Behalf Of Williams, Gareth (IM&T, Black Mountain)
Sent: Tuesday, Decemb
to 16 GB.
On Wed, 7 Feb 2018 at 16:20, Krieger, Donald N.
<mailto:krieg...@upmc.edu> wrote:
Sorry for jumping in without full knowledge of the thread,
but it sounds like the key issue is that each job requires 3 GB.
Even if that's true, won't jobs start on nodes with less available memory and
then just page?
Of course, as the previous post states, you must tailor your slurm request to
the physical memory available.
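For what it's worth, that memory requirement can be stated explicitly in the submission script. A minimal sketch (the 3 GB figure comes from the discussion above; the job name and program name are placeholders):

```shell
#!/bin/bash
#SBATCH --job-name=mem-test   # placeholder job name
#SBATCH --ntasks=1
#SBATCH --mem=3G              # request 3 GB of real memory on the node
# Alternatively, --mem-per-cpu=3G requests 3 GB per allocated core.

srun ./my_job                 # ./my_job is a placeholder
```

Note that whether an over-budget job pages or gets killed depends on how the site enforces memory limits (e.g. cgroup enforcement), so the paging behaviour asked about above is not guaranteed on every cluster.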