Re: [slurm-users] pam_slurm_adopt with pbis-open pam modules

2019-02-21 Thread Chris Samuel
On Thursday, 21 February 2019 8:20:36 AM PST נדב טולדו wrote: > Yeah, I have. Before I installed pbis and introduced lsass.so, the Slurm module > worked well. Is there any way to debug? > > I am seeing in syslog that the Slurm module is adopting into the job context, > but then I am getting out of cont…
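For debugging adoption failures like this, pam_slurm_adopt accepts a log_level argument in the PAM stack, which makes it log each step of the adoption to syslog. A minimal sketch of the account stack in /etc/pam.d/sshd, assuming pbis-open's PAM module is pam_lsass.so (the thread refers to lsass.so) and that it needs to run before the Slurm module so the user resolves first:

    # Let pbis-open resolve the user before Slurm tries to adopt the session.
    account    sufficient   pam_lsass.so
    # debug5 is the most verbose level; watch syslog on the compute node
    # to see exactly where the adoption into the job's context fails.
    account    required     pam_slurm_adopt.so log_level=debug5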

Re: [slurm-users] Only one socket for SLURM

2019-02-21 Thread Chris Samuel
On Thursday, 21 February 2019 1:00:52 PM PST Sam Hawarden wrote: > Linux assigns numbers to your CPUs. 0-15 will be socket 1, thread 1; 16-31 > are socket 2, thread 1; 32-47 are socket 1, thread 2; 48-63 are socket 2, > thread 2. This isn't strictly true: on x86, for instance, the kernel will read…
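Rather than assuming a numbering scheme, the actual enumeration can be read from the machine. A quick sketch using standard tools (lscpu from util-linux, plus slurmd's own hardware probe):

    # Show how the kernel mapped logical CPUs onto sockets and cores.
    lscpu --extended=CPU,SOCKET,CORE

    # Print the topology slurmd detects; compare it with the node's
    # Sockets/CoresPerSocket/ThreadsPerCore entry in slurm.conf.
    slurmd -C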

[slurm-users] Question on billing tres information from sacct, sshare, and scontrol

2019-02-21 Thread David Rhey
Hello, I have a small Vagrant setup I use for prototyping/testing various things. Right now it's running Slurm 18.08.4. I am noticing some differences in the billing TRES reported by various commands (notably sacct, sshare, and scontrol show assoc). On a freshly built cluster, ther…
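To put the three views side by side, something along these lines works (a sketch; scontrol exposes the live association table via its assoc_mgr subcommand, and billing appears inside the AllocTRES field):

    # Per-job billing as stored in accounting; billing=N appears in AllocTRES.
    sacct -X --format=JobID,Account,AllocTRES%60

    # Fair-share usage per association, accumulated from the billing TRES.
    sshare --long

    # The association state currently held by slurmctld, including TRES usage.
    scontrol show assoc_mgr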

Re: [slurm-users] Only one socket for SLURM

2019-02-21 Thread Sam Hawarden
Hi Gestió, To reliably load down 32 cores and view it:

[user@headnode] srun -c 32 -t 10 --pty $SHELL
[user@worknode] stress -c 32 -vm 32 &
[user@worknode] htop; fg
^C

You can view a task's CPU affinity by pressing 'a' in htop if stress isn't consuming all the cores. You will need to make…
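If the affinity view in htop is hard to read on a busy node, the same information is available from the command line; a small sketch using util-linux's taskset (pgrep -n picks the most recently started stress process):

    # Print the CPU affinity of the newest stress process as a CPU list.
    taskset -cp $(pgrep -n stress)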

Re: [slurm-users] pam_slurm_adopt with pbis-open pam modules

2019-02-21 Thread נדב טולדו
Hey Chris, Yeah, I have. Before I installed pbis and introduced lsass.so, the Slurm module worked well. Is there any way to debug? I am seeing in syslog that the Slurm module is adopting into the job context, but then I am getting out of cont…