So this issue is occurring only with job arrays.
--
Christopher Coffey
High-Performance Computing
Northern Arizona University
928-523-1167
On 12/21/18, 12:15 PM, "slurm-users on behalf of Chance Bryce Carl Nelson"
wrote:
Hi folks,
Calling sacct with the usercpu flag enabled
Hi Chance,
Can you check your slurm.conf's TaskPlugin and TaskPluginParam, or your cgroup
settings? The tasks may not even be constrained to a set of cores.
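For reference, core confinement is usually enabled with options like the following (the parameter names are standard slurm.conf/cgroup.conf options; the values shown are only an example, and whether they match Chance's site is exactly what needs checking):

```
# slurm.conf
TaskPlugin=task/cgroup

# cgroup.conf
ConstrainCores=yes    # bind each task to its allocated cores
```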
The 00:02:16 core-walltime seems odd, though, as you've set each job for 40 CPU-minutes
(20 minutes * 2 cores). Are you using a debug partition?
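A quick sanity check of the numbers quoted above (the sacct format fields named in the comment are standard; the job ID is a placeholder):

```shell
# A 20-minute job on 2 CPUs should accumulate at most 20 * 2 = 40
# CPU-minutes of core-walltime.
limit_min=20
cpus=2
max_core_min=$((limit_min * cpus))
echo "expected ceiling: ${max_core_min} CPU-minutes"   # 40

# To compare against what accounting actually recorded:
#   sacct -j <jobid> --format=JobID,Elapsed,AllocCPUS,UserCPU,TotalCPU
```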
Hi folks,
Calling sacct with the usercpu flag enabled seems to report CPU times far
above the expected values for job array indices. This is also reported by seff.
For example, executing the following job script:
#!/bin/bash
#SBATCH --job-name
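(The script above is cut off in the archive.) For context, a minimal job-array script of the shape it suggests might look like the sketch below; the resource numbers (2 cores, 20 minutes) come from Christopher's reply, while the job name and index range are assumptions:

```shell
#!/bin/bash
#SBATCH --job-name=array-test     # assumed name; the original is truncated
#SBATCH --array=0-3               # assumed index range
#SBATCH --cpus-per-task=2         # 2 cores per index, per the reply above
#SBATCH --time=00:20:00           # 20-minute limit, per the reply above

# SLURM_ARRAY_TASK_ID is set by Slurm inside an array job; default to 0
# so the script also runs outside Slurm.
idx=${SLURM_ARRAY_TASK_ID:-0}
echo "array index ${idx} running on ${HOSTNAME:-unknown}"
```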
Hi Bill and Douglas,
Thanks for your tips - I narrowed the issue down to the fact that the partition
string is available for only one partition (a default partition); in all other
cases it is not there, so the condition I'm targeting (job_desc['partition'] == "parallel")
does not happen, and I don't know the re
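Since job_desc.partition is not set when the submitter does not request a partition explicitly, a nil-safe check in job_submit.lua avoids the comparison failing silently. A minimal sketch (the function name, job_desc field, and return code are the standard Slurm Lua plugin interface; "parallel" is the partition name from the message above):

```lua
-- job_submit.lua sketch: guard against a missing partition string before
-- comparing it, since the field is nil unless explicitly requested.
function slurm_job_submit(job_desc, part_list, submit_uid)
    if job_desc.partition ~= nil and job_desc.partition == "parallel" then
        -- enforce the rule for the "parallel" partition here
    end
    return slurm.SUCCESS
end
```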
Hi Ulf,
On 12/20/18 11:45 PM, Ulf wrote:
Hello,
we are thinking about switching to Slurm. Currently we grant access to the
cluster using an Active Directory group; everyone in this group is allowed to
run jobs.
So the users are not known to the Slurm accounting database.
Is it possible to automatically
Hello all,
Thank you very much for your answers. So there is no out-of-the-box solution; I thought I had missed something.
From the answers I get the impression that some of you are facing similar challenges.
So if we want to start using Slurm, we will have to either change our process or build a
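One way to script the kind of sync the thread discusses is a periodic job that reads the AD group (via getent) and registers each member with sacctmgr. Both commands are real; the group name "hpcusers" and the account "general" are placeholders, and the dry-run switch is an addition for safe testing:

```shell
# Sketch: sync members of an AD/LDAP group into the Slurm accounting DB.
# Run with DRY_RUN=1 first to see the sacctmgr commands it would issue.

group_members() {
    # getent prints "name:passwd:gid:member1,member2,..."; keep field 4
    # and split the comma-separated member list onto separate lines.
    getent group "$1" | cut -d: -f4 | tr ',' '\n'
}

sync_group() {
    group="$1"
    account="$2"
    group_members "$group" | while read -r user; do
        [ -n "$user" ] || continue
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "sacctmgr -i add user name=${user} account=${account}"
        else
            sacctmgr -i add user name="${user}" account="${account}"
        fi
    done
}

# Example (dry run):
#   DRY_RUN=1 sync_group hpcusers general
```

sacctmgr ignores users that already exist in the given account, so rerunning the sync from cron is reasonably safe, though auditing its output is still advisable.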