On Thursday, 21 February 2019 8:20:36 AM PST נדב טולדו wrote:
> Yeah I have; before I installed pbis and introduced lsass.so, the slurm
> module worked well. Is there any way to debug?
>
> I am seeing in syslog that the slurm module is adopting into the job
> context but then I am getting out of cont
On Thursday, 21 February 2019 1:00:52 PM PST Sam Hawarden wrote:
> Linux assigns numbers to your CPUs. 0-15 will be socket 1, thread 1. 16-31
> are socket 2, thread 1, 32-47 are socket 1, thread 2. 48-63 are socket 2,
> thread 2.
This isn't strictly true; on x86, for instance, the kernel will read
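Rather than assuming a fixed socket/thread numbering scheme, you can ask the kernel directly what mapping it actually chose. A minimal sketch, assuming the standard Linux sysfs topology files are present (they are on any recent x86 kernel):

```shell
# Print the kernel's actual CPU-to-socket/core mapping from sysfs,
# instead of assuming a particular enumeration order.
show_topology() {
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        id=${cpu##*/cpu}
        pkg=$(cat "$cpu/topology/physical_package_id")    # socket
        core=$(cat "$cpu/topology/core_id")               # core within socket
        sibs=$(cat "$cpu/topology/thread_siblings_list")  # hyperthread siblings
        printf 'cpu%s socket=%s core=%s siblings=%s\n' "$id" "$pkg" "$core" "$sibs"
    done
}
show_topology
```

On a hyperthreaded box the siblings column shows which logical CPU ids share a core, which is exactly the information the numbering scheme above tries (and may fail) to encode. `lscpu -e` presents the same data in one table.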
Hello,
I have a small vagrant setup I use for prototyping/testing various things.
Right now, it's running Slurm 18.08.4. I am noticing some differences in
the billing TRES reported by various commands (notably sacct, sshare, and
scontrol show assoc).
On a freshly built cluster, ther
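For context on where the billing number comes from: as I understand it, Slurm derives the billing TRES from the partition's TRESBillingWeights, by default as a weighted sum over the job's allocated TRES. A hand-check with hypothetical weights (the allocation sizes and `CPU=1.0,Mem=0.25G` weights below are made up for illustration):

```shell
# Hypothetical job and weights: TRESBillingWeights="CPU=1.0,Mem=0.25G"
cpus=32
mem_gb=64
# Default behaviour (no MAX_TRES priority flag): billing is the sum of
# each allocated TRES multiplied by its configured weight.
billing=$(awk -v c="$cpus" -v m="$mem_gb" 'BEGIN { printf "%g", c * 1.0 + m * 0.25 }')
echo "billing=$billing"   # prints billing=48
```

If the numbers you see from sacct, sshare, and scontrol disagree with a hand calculation like this, that narrows down which daemon's view is stale.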
Hi Gestió,
To reliably load down 32 cores and view it:
[user@headnode] srun -c 32 -t 10 --pty $SHELL
[user@worknode] stress -c 32 -vm 32 &
[user@worknode] htop; fg
^C
You can view a task's CPU affinity by pressing 'a' in htop if stress isn't
consuming all the cores.
You will need to make
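If htop isn't available on the node, the same affinity information can be read straight from procfs. A small sketch, using the current shell as the example task (substitute the PID of a stress worker to inspect the job's tasks):

```shell
# Print the CPUs this task is allowed to run on, as the kernel reports
# them in /proc/<pid>/status (here: the current process).
affinity=$(awk '/^Cpus_allowed_list/ {print $2}' /proc/self/status)
echo "allowed CPUs: $affinity"
```

`taskset -cp <pid>` (from util-linux) reports the same list, which makes it easy to confirm that cgroup/affinity confinement from Slurm is actually being applied to adopted tasks.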
Hey Chris,
Yeah I have; before I installed pbis and introduced lsass.so,
the slurm module worked well.
Is there any way to debug?
I am seeing in syslog that the slurm module is adopting into
the job context but then I am getting o
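If the module in question is pam_slurm_adopt (an assumption about your setup), one way to get more detail is to raise its log level in the PAM stack and watch syslog while a connection is adopted:

```
# In the service's PAM config (path varies by distro; /etc/pam.d/sshd is typical):
account    required    pam_slurm_adopt.so log_level=debug5
```

That should make the adoption path, and the point where it goes wrong after lsass.so entered the stack, visible in the log.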