[slurm-users] Slurm on Debian Stretch

2020-03-03 Thread Steffen Grunewald
Good morning, is there anyone out there running Slurm on a Debian Stretch platform? I've been maintaining an HTCondor pool for quite some time, and recently started an attempt to convert some of the compute nodes to form a Slurm cluster instead. I ran into some issues I could only partially resolve …
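
Not part of the original question, but as a hedged starting point: Stretch's own packages (which ship an older Slurm, around 16.05; many sites build a newer release from source instead) can be pulled in with:

    # Debian splits Slurm across several packages
    apt-get install slurm-wlm slurm-client   # slurmctld/slurmd plus user commands
    apt-get install slurmdbd                 # only if accounting storage is wanted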

[slurm-users] How to make sreport cluster reports for individual partitions?

2020-03-03 Thread Ole Holm Nielsen
We make cluster utilization reports with sreport using the command:

    $ sreport -t hourper --tres=cpu,gpu cluster AccountUtilizationByUser Start=030120 End=030320 tree

However, there is strong interest here in making these reports for the individual partitions as well, so that we can analyze …
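
sreport's cluster reports take no partition filter (hence the question), so one hedged workaround is to aggregate per-partition CPU time from the accounting records with sacct. A sketch, assuming slurmdbd accounting is enabled; the partition name "gpu" and the dates mirroring the example above are illustrative:

    # Sum raw CPU-seconds for one partition over the window, convert to hours
    sacct -a -X -S 030120 -E 030320 --partition=gpu \
          --format=CPUTimeRAW -n -P | awk '{s+=$1} END {printf "%.1f CPU-hours\n", s/3600}'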

[slurm-users] salloc not working in configless setup on login machine

2020-03-03 Thread nanava
Hi, if I execute salloc on a login machine and then run any user command, I get the following error:

    [testuser@login03] $ salloc -p gpu --ntasks=8 --mem=7000mb
    [testuser@login03] $ sinfo
    sinfo: error: Parse error in file /proc/4050/fd/5 line 1: ""
    sinfo: error: Parse error in file /proc/4050/…
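
The /proc/<pid>/fd path in the error suggests that SLURM_CONF inside the allocation shell points at a file descriptor that reads back empty. A hedged diagnostic sketch (the last line assumes the configless DNS SRV record, or --conf-server, is in place so clients can fetch the config themselves):

    echo "$SLURM_CONF"        # in configless mode this can point at /proc/<pid>/fd/<n>
    cat "$SLURM_CONF"         # empty output would match the 'Parse error ... line 1: ""' above
    unset SLURM_CONF; sinfo   # hypothetical workaround: let sinfo fetch the config itself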

[slurm-users] Slurmctld caching extended gid?

2020-03-03 Thread Luis Huang
We recently encountered an odd issue where some users were getting sporadic "permission denied" errors on certain directories holding their stderr/stdout. We realized that this was caused by a change to their nested group permissions in AD several days earlier. At first we thought it was the compute nodes themselves …
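
If the stale memberships come from Slurm's own group cache rather than from nscd/sssd caching on the nodes, slurm.conf has knobs for this; a sketch with illustrative values (see the slurm.conf man page):

    # slurm.conf excerpt
    GroupUpdateTime=600     # re-resolve group membership at most every 600 seconds
    GroupUpdateForce=1      # refresh even when /etc/group's mtime has not changed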

Re: [slurm-users] Problem with configuration CPU/GPU partitions

2020-03-03 Thread Pavel Vashchenkov
I've found that this question arose two years ago: https://bugs.schedmd.com/show_bug.cgi?id=4717 And it's still unsolved :( -- Pavel Vashchenkov

02.03.2020 17:28, Pavel Vashchenkov wrote:
> 28.02.2020 20:53, Renfro, Michael wrote:
>> When I made similar queues, and only wanted my GPU jobs to …
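
For the overlapping-partitions case, the slurm.conf man page documents MaxCPUsPerNode as a way to keep CPU-only jobs from consuming the cores GPU jobs need. A hedged sketch with hypothetical node names and core counts:

    # slurm.conf excerpt: the same nodes sit in both partitions, but the "cpu"
    # partition may use at most 24 of the 32 cores, leaving 8 for "gpu" jobs
    NodeName=node[01-04] CPUs=32 Gres=gpu:2 State=UNKNOWN
    PartitionName=gpu Nodes=node[01-04] State=UP
    PartitionName=cpu Nodes=node[01-04] State=UP MaxCPUsPerNode=24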