A while ago, I thought a patch was made to sshare to show raw TRES usage.
Something like
sshare -o account,user,GrpTRESRaw
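A fuller invocation, if memory serves, would be something along these
lines (assuming your Slurm version accepts GrpTRESRaw as a format field;
-a lists the users under each account):

# show raw TRES usage for every account and user (sketch only)
sshare -a -o Account,User,GrpTRESRaw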
At the time I used this, I was only concerned with account usage, so I didn't
check whether sshare would work at the QOS level.
I'm not sure that "feature" was in the m
Hi,
we observed a strange behavior of pam_slurm_adopt regarding the
involved cgroups:
When we start a shell as a new Slurm job using "srun", the process has
freezer, cpuset and memory cgroups set up as e.g.
"/slurm/uid_5001/job_410318/step_0". That's good!
However, another shell started by
Hi Christian,
On Wed, Aug 22, 2018 at 7:27 AM, Christian Peter wrote:
> we observed a strange behavior of pam_slurm_adopt regarding the involved
> cgroups:
>
> when we start a shell as a new Slurm job using "srun", the process has
> freezer, cpuset and memory cgroups setup as e.g.
> "/slurm/uid_5
On 08/22/2018 10:58 AM, Kilian Cavalotti wrote:
> My guess is that you're experiencing first-hand the awesomeness of systemd.
Yes, systemd uses cgroups. I'm trying to understand whether Slurm's use
of cgroups is incompatible with systemd, or whether there is another way
to resolve this issue.
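For what it's worth, this is how I compare the two cases on a node (just
a sketch; the node name is made up and the controller list differs per
site):

# shell obtained via ssh + pam_slurm_adopt (node name is an example)
ssh node01
grep -E 'freezer|cpuset|memory' /proc/$$/cgroup
# compare with the srun-started shell: if systemd has moved the session,
# these paths will not point at /slurm/uid_.../job_.../step_...
systemd-cgls   # optional: recursively shows the node's cgroup hierarchy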
Look
All,
Registration is open for the 2018 Partly Cloudy conference:
http://partly-cloudy.fredhutch.org
While this conference is not Slurm-specific, many smaller Slurm shops have
already confirmed their attendance at the "Partly Cloudy" conference in Seattle,
so this shameless plug feels relevant
Hi,
My test script is like this:
==========================================
#!/bin/bash
#SBATCH -J LOOP
#SBATCH -p low
#SBATCH --comment test
#SBATCH -N 1
#SBATCH -n 5
#SBATCH -o log/%j.loop
#SBATCH -e log/%j.loop
date
echo "SLURM_JOB_NODELIST=${SLURM_JOB_NODELIST}"
echo "SLURM_NODELIST=${SLURM_NODELIST}"
sleep 2
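For completeness, this is roughly how I submit it (the script name is
just an example; note that log/ has to exist beforehand, otherwise Slurm
cannot create the output file):

mkdir -p log            # the -o/-e paths above point into log/
sbatch loop.sh          # prints "Submitted batch job <jobid>"
cat log/<jobid>.loop    # stdout and stderr both end up in this one file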