Slurm versions 24.11.5, 24.05.8, and 23.11.11 are now available and
include a fix for a recently discovered security issue.
SchedMD customers were informed on April 23rd and provided a patch on
request; this process is documented in our security policy. [1]
A mistake with permission handling for Coordinators within the accounting system
could allow unauthorized privilege escalation.
Getting back to the original question - I just noticed that there is a special
option, AuditRPCs, in DebugFlags for the controller, so perhaps you can determine
the source of RPC calls without breaking things.
https://slurm.schedmd.com/slurm.conf.html#OPT_AuditRPCs
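A minimal sketch of how that could look, assuming your slurmctld is new enough to
have the flag (check the page above for your version):

    # slurm.conf
    DebugFlags=AuditRPCs

or, to toggle it at runtime without a restart:

    scontrol setdebugflags +AuditRPCs
    scontrol setdebugflags -AuditRPCs

If I remember the docs right, slurmctld then logs the originating address,
authenticated user, and RPC type for each inbound RPC, which should be enough to
see who is generating the traffic.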
Regards
Patryk.
On 25/05/07 10:47AM, Patryk wrote:
> IMHO the RPC rate limiting should be considered a best practice, and I
> wouldn't think that it's a "dirty" configuration. You need Slurm 23.02 or
> later for this. Some details are discussed in this Wiki page:
Dirty in the sense that the levels are set so low that they break some other
service in order to protect the controller.
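For reference, the limiter is essentially a per-user token bucket on slurmctld,
enabled through SlurmctldParameters; a rough sketch of turning it on (the numbers
are placeholders, not recommendations):

    # slurm.conf, Slurm 23.02 or later
    SlurmctldParameters=rl_enable,rl_bucket_size=50,rl_refill_rate=10,rl_refill_period=1

rl_bucket_size caps how big a burst a single user can send before being throttled,
and rl_refill_rate/rl_refill_period control how quickly the allowance comes back,
so values that are too small are what produce the "breaks some other service"
effect described above.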
On 5/7/25 10:28, Guillaume COCHARD wrote:
Hi,
Speaking of RPC rate limiting, we recently encountered an issue with Snakemake
making excessive requests to sacct. It seems that the current rate limiting
only applies to controller RPCs. Is there a way to also limit the rate of sacct
calls?
Thanks,
Guillaume
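As far as I know the controller-side limiter does not help here: sacct talks to
slurmdbd, which has no equivalent of rl_enable, so the usual workaround is to
throttle on the client side. A rough sketch of a wrapper (the paths, stamp file
and the 5-second gap are all made up, adjust for your site):

    #!/bin/bash
    # hypothetical sacct wrapper: enforce a minimum gap between real sacct calls per user
    stamp="${TMPDIR:-/tmp}/sacct.ratelimit.$UID"
    min_gap=5                                   # seconds between real calls
    now=$(date +%s)
    last=$(cat "$stamp" 2>/dev/null); last=${last:-0}
    if (( now - last < min_gap )); then
        sleep $(( min_gap - (now - last) ))     # wait out the remainder of the gap
    fi
    date +%s > "$stamp"
    exec /usr/bin/sacct "$@"                    # hand off to the real sacct

Pointing Snakemake (or whatever calls sacct on its behalf) at a script like this
at least keeps the accounting database from being hammered, even if it is not a
real fix.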
On 5/7/25 09:57, Patryk Bełzak via slurm-users wrote:
Hi,
why do you think these are authentication requests? As far as I understand, multiple
UIDs are asking for job and partition info. It's unlikely that all of them
perform that kind of request in the same way and at the same time, so I think you
should look for some external program that may be doing that.
Mike via slurm-users writes:
> Greetings,
>
> We are new to Slurm and we are trying to better understand why we’re seeing
> high-mem jobs stuck in Pending state indefinitely. Smaller (mem) jobs in the
> queue will continue to pass by the high-mem jobs even when we bump priority
> on a pending high-mem job.
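A few commands that usually narrow this down (the job ID is a placeholder):

    scontrol show job <jobid>      # look at the Reason= field (Resources, Priority, ...)
    squeue -j <jobid> --start      # the scheduler's estimated start time, if it has one
    sprio -j <jobid>               # the priority factors actually being applied

If the big job sits with Reason=Resources or Priority while small jobs keep
starting, that is normally the backfill scheduler at work: it only lets small jobs
jump ahead when their time limits say they will finish before the resources
reserved for the big job are needed, so realistic time limits on all jobs (and a
priority setup that actually favours the big job, which sprio will show) are what
keep it from waiting indefinitely.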