Actually we double checked and are seeing it in normal jobs too.
—
Christopher Coffey
High-Performance Computing
Northern Arizona University
928-523-1167
On 1/4/19, 9:24 AM, "slurm-users on behalf of Paddy Doyle" wrote:

Hi Chris,

We're seeing it on 18.08.3, so I was hoping that it was fixed in 18.08.4 (recently upgraded from 17.02 to 18.08.3). Note that we're seeing it in regular jobs (haven't tested job arrays).
Hi all,
It seems that "raw usage", i.e. what is shown by sshare, reports TRES minutes as a whole. With sacctmgr, I can configure GrpTRESMins to set limits at the individual TRES level. However, is it possible to set limits irrespective of the individual TRES? I.e., I'd like to do someth
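For reference, the per-TRES form of the limit described above is set on an association with sacctmgr; a minimal sketch (the account name and values here are purely illustrative):

```shell
# GrpTRESMins takes a comma-separated list of <tres>=<minutes> pairs,
# so each TRES is limited individually (account name and numbers are
# hypothetical):
sacctmgr modify account physics set GrpTRESMins=cpu=1000000,gres/gpu=50000

# Show the limit that was applied:
sacctmgr show assoc where account=physics format=Account,GrpTRESMins
```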
Hello,
Thank you for your comments on installing and using TurboVNC. I'm working on
the installation at the moment, and may get back with other questions relating
to the use of Slurm with VNC.
Best regards,
David
From: slurm-users on behalf of Daniel Letai
Hi Chris,
We're seeing it on 18.08.3, so I was hoping that it was fixed in 18.08.4
(recently upgraded from 17.02 to 18.08.3). Note that we're seeing it in
regular jobs (haven't tested job arrays).
I think it's cgroups-related; there's a similar bug here:
https://bugs.schedmd.com/show_bug.cgi?id=
I'm surprised no one else is seeing this issue. If you're on 18.08, could you take a moment and run jobeff on a job in one of your users' job arrays? I'm guessing jobeff will show the same issue we're seeing: usercpu is incorrect, off by many orders of magnitude.
B
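One way to spot the inflation independently of jobeff is to compare a job's reported CPU time against the physical ceiling of elapsed time × allocated CPUs; a ratio far above 1.0 points at the bad usercpu accounting described above. A minimal sketch (the duration format is sacct's [DD-]HH:MM:SS; the sample values are made up):

```python
# Sanity-check reported CPU time against elapsed * ncpus. A healthy
# job's ratio is at most ~1.0; the inflated accounting discussed in
# this thread produces ratios that are orders of magnitude higher.

def hms_to_seconds(t):
    """Parse a sacct-style [DD-]HH:MM:SS duration into seconds."""
    days = 0
    if "-" in t:
        d, t = t.split("-")
        days = int(d)
    h, m, s = (int(x) for x in t.split(":"))
    return days * 86400 + h * 3600 + m * 60 + s

def cpu_ratio(totalcpu, elapsed, ncpus):
    """Return TotalCPU / (Elapsed * NCPUS)."""
    return hms_to_seconds(totalcpu) / (hms_to_seconds(elapsed) * ncpus)

# A plausible healthy job: 4 CPUs, 1h elapsed, ~3h50m of CPU time.
print(round(cpu_ratio("03:50:00", "01:00:00", 4), 3))  # 0.958
# An obviously broken record, far above the physical ceiling:
print(round(cpu_ratio("9-13:20:00", "01:00:00", 4), 1))  # 57.3
```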
I think the main reason is the lack of access to some /dev "files" in your Docker container. For Singularity the nvidia plugin is required; maybe there is something similar for Docker...
Cheers,
Marcin
-
https://funinit.wordpress.com
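Following up on the /dev access point above: Docker can be given the NVIDIA devices either via the nvidia-docker2 runtime or by exposing the /dev entries explicitly. A hedged sketch (the image name is illustrative, and the device paths vary per host):

```shell
# Option 1: with nvidia-docker2 installed, use the NVIDIA runtime:
docker run --rm --runtime=nvidia my-cuda-image nvidia-smi

# Option 2: pass the /dev entries through explicitly (paths and the
# number of /dev/nvidiaN devices depend on the host):
docker run --rm \
  --device /dev/nvidia0 \
  --device /dev/nvidiactl \
  --device /dev/nvidia-uvm \
  my-cuda-image nvidia-smi
```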
On Wed, 2 Jan 2019, 05:53 허웅 wrote:

Hi Chris.