Thank you for sharing; it's indeed of interest to others...
On 23/01/2018 01:20, Kilian Cavalotti wrote:
> Hi all,
> We (Stanford Research Computing Center) developed a SPANK plugin which
> allows users to choose the GPU compute mode [1] for their jobs.
> [1] http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-modes
Mike,
Thanks for replying.
I thought "afterany" meant "after any one of the specified jobs". I guess I
wasn't reading the explanation correctly.
I'll give that a try.
Thanks
George
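For reference, a dependency of this kind can be sketched as follows (the job IDs are placeholders):

```shell
# Build the dependency spec; job IDs 1001 and 1002 are placeholders.
# Per the sbatch documentation, "afterany" releases the new job only
# after ALL listed jobs have terminated (in any state), not after just
# one of them; "afterok" additionally requires each listed job to have
# exited successfully.
dep="afterany:1001:1002"
echo "sbatch --dependency=$dep job.sh"
```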
Hi all,
We (Stanford Research Computing Center) developed a SPANK plugin which
allows users to choose the GPU compute mode [1] for their jobs.
[1]
http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-modes
This came from the need to give our users some control on the way GPUs
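For context, the compute modes in [1] are the ones nvidia-smi exposes, so a plugin like this would presumably end up issuing something equivalent to the following on the allocated node (GPU index and mode below are example values only):

```
# Example only: set GPU 0 to EXCLUSIVE_PROCESS before the job's tasks
# start. Valid modes include DEFAULT, EXCLUSIVE_PROCESS and PROHIBITED.
nvidia-smi -i 0 -c EXCLUSIVE_PROCESS
```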
Hi John,
just an update...
we do not have a solution for the SSSD issue yet, but we changed the ACL
on the 2 partitions from AllowGroups=g2 to AllowAccounts=g2 and the
slowdown is gone.
Thanks for the help
ale
- Original Message -
> From: "Alessandro Federico"
> To: "John DeSantis"
> Cc:
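For anyone hitting the same slowdown, the change amounts to the following in slurm.conf (the partition name is a placeholder); a plausible explanation is that AllowGroups is resolved through the system's group database (NSS, and hence SSSD here) while AllowAccounts is checked against Slurm's own accounting database:

```
# Before: group-based ACL, resolved via NSS/SSSD
PartitionName=p1 AllowGroups=g2
# After: account-based ACL, resolved from Slurm's accounting database
PartitionName=p1 AllowAccounts=g2
```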
Loris Bennett writes:
> Hi,
>
> Some while ago I defined several QOS thus:
>
>       Name   Priority    MaxWall   MaxJobs   MaxSubmit
> ---------- ---------- ----------   -------   ---------
>     normal          0
>      short         10   03:00:00        10          20
>
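For reference, QOS entries like the ones quoted above are normally created with sacctmgr; a sketch using the values from the table (the option spellings are from memory of the sacctmgr man page, so worth double-checking against your version):

```
sacctmgr add qos short
sacctmgr modify qos short set Priority=10 MaxWall=03:00:00 \
    MaxJobsPerUser=10 MaxSubmitJobsPerUser=20
sacctmgr show qos format=Name,Priority,MaxWall,MaxJobs,MaxSubmit
```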
Hi Matthew,
this is exactly the question I asked a few days ago ;) It seems
hardly anyone is using the native X11 forwarding, since no answers came.
The only way I got this working is to use salloc:
salloc srun --x11 xterm
See also my post [slurm-users] slurm 17.11.2 and X11 f
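If it helps others: as far as I understand, the native X11 forwarding in 17.11 also has to be enabled server-side before the salloc workaround above does anything, roughly like this in slurm.conf (hedged from the 17.11 documentation):

```
# slurm.conf: enable the built-in X11 forwarding code in Slurm 17.11+
PrologFlags=x11
```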