Re: [slurm-users] Slurm SPANK GPU Compute Mode plugin

2018-01-22 Thread Nadav Toledo
Thank you for sharing, it's indeed of interest to others... On 23/01/2018 01:20, Kilian Cavalotti wrote: Hi all, We (Stanford Research Computing Center) developed a SPANK plugin which allows users to choose the GPU compute mode [1] for their jobs. [1] h

Re: [slurm-users] [EXTERNAL]: Re: execute job regardless the exit status of dependent jobs

2018-01-22 Thread Hwa, George
Mike, Thanks for replying. I thought "afterany" means after any one of the specified jobs. I guess I wasn't reading the explanation correctly. I'll give that a try. Thanks George From: slurm-users [mailto:slurm-users-boun...@lists.schedmd
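Note on semantics: in Slurm, "afterany" releases the dependent job only after all of the listed jobs have terminated, regardless of their exit status; it does not mean "after any one of them". A minimal sketch of the pattern discussed here, with hypothetical script names:

    # Submit the first job and capture its job ID (--parsable prints just the ID)
    jobid=$(sbatch --parsable preprocess.sh)
    # Run the follow-up job once the first one terminates, whatever its exit status
    sbatch --dependency=afterany:${jobid} cleanup.sh
    # For comparison, "afterok" only releases the dependent job on successful completion
    sbatch --dependency=afterok:${jobid} analysis.sh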

[slurm-users] Slurm SPANK GPU Compute Mode plugin

2018-01-22 Thread Kilian Cavalotti
Hi all, We (Stanford Research Computing Center) developed a SPANK plugin which allows users to choose the GPU compute mode [1] for their jobs. [1] http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-modes This came from the need to give our users some control on the way GPUs
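As a hedged sketch of how such a SPANK plugin is typically used (the actual option name and accepted values are defined by the plugin itself, so treat the flag below as an assumption):

    # Hypothetical user-facing option added by the SPANK plugin; the plugin would then
    # set the requested compute mode (e.g. via nvidia-smi -c) on the allocated GPUs
    srun --gres=gpu:1 --gpu_cmode=exclusive ./my_gpu_app

SPANK plugins are enabled site-wide by listing the shared object in plugstack.conf, e.g. "optional /usr/lib64/slurm/gpu_cmode.so" (path and file name are illustrative).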

Re: [slurm-users] slurm 17.11.2: Socket timed out on send/recv operation

2018-01-22 Thread Alessandro Federico
Hi John, just an update... we do not have a solution for the SSSD issue yet, but we changed the ACL on the 2 partitions from AllowGroups=g2 to AllowAccounts=g2 and the slowdown is gone. Thanks for the help ale - Original Message - > From: "Alessandro Federico" > To: "John DeSantis" > Cc:
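For reference, the change described above lives in the partition definition in slurm.conf; a minimal sketch with placeholder partition and node names:

    # Before: group-based ACL, resolved through the system group database (here backed by SSSD)
    PartitionName=p1 Nodes=node[001-010] AllowGroups=g2 State=UP
    # After: account-based ACL, resolved through Slurm's own accounting database
    PartitionName=p1 Nodes=node[001-010] AllowAccounts=g2 State=UP

Using AllowAccounts keeps the check inside Slurm's accounting data instead of triggering group lookups against the directory service, which is presumably why the slowdown disappeared.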

Re: [slurm-users] Requirement to use QOS?

2018-01-22 Thread Loris Bennett
Loris Bennett writes: > Hi, > > Some while ago I defined several QOS thus: > > Name Priority MaxWall MaxJobs MaxSubmit > -- -- --- --- - > normal 0 > short 100 3:00:00 10 20 >
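For reference, a minimal sketch of how QOS definitions like these are created with sacctmgr (limit values follow the "short" row above; the exact field names, e.g. MaxJobsPerUser vs. MaxJobsPU, may vary slightly between Slurm versions):

    sacctmgr add qos short
    sacctmgr modify qos short set Priority=100 MaxWall=03:00:00 MaxJobsPerUser=10 MaxSubmitJobsPerUser=20
    # Users would then request it with, e.g., sbatch --qos=short job.sh
    sacctmgr show qos format=Name,Priority,MaxWall,MaxJobsPU,MaxSubmitPU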

Re: [slurm-users] Slurm 17.11 X11 support questions

2018-01-22 Thread Marcus Wagner
Hi Matthew, this is exactly the question I asked a few days ago ;) It seems nearly no one is using the native X11 forwarding, since no answers came. The only way I got this working is to use salloc: salloc srun --x11 xterm See also my post [slurm-users] slurm 17.11.2 and X11 f
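For the archive, a minimal sketch of that workaround, assuming native X11 support is enabled on the cluster (PrologFlags=X11 in slurm.conf for 17.11's built-in forwarding):

    # Get an allocation first, then run an X11-forwarded client inside it
    salloc -N1
    srun --x11 xterm
    # or as a single line, as in the post above:
    salloc srun --x11 xterm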