Hi,
I have configured Slurm cloud scheduling for OpenStack. I am using CentOS 7
with Slurm version 20.11.8 installed from the EPEL RPMs, and it is working
fine, but I am getting some strange errors in the slurmctld (master) logs
which I think point to a bug.
I am using these options in slurm.conf:
SlurmctldParameters
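A generic sketch of what the cloud-scheduling part of slurm.conf typically
looks like (the node names, script paths and timeout values below are
placeholders for illustration, not my exact configuration) is:

    # Power-save / cloud scheduling (sketch)
    SlurmctldParameters=cloud_dns
    SuspendProgram=/usr/local/sbin/openstack_delete.sh   # placeholder script that deletes the instance
    ResumeProgram=/usr/local/sbin/openstack_create.sh    # placeholder script that boots the instance
    SuspendTime=600
    ResumeTimeout=900
    NodeName=cloud[001-010] CPUs=4 RealMemory=7800 State=CLOUD
    PartitionName=cloud Nodes=cloud[001-010] MaxTime=INFINITE State=UP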
You can check out the sarchive tool:
https://archive.fosdem.org/2020/schedule/event/job_script_archival/
https://github.com/itkovian/sarchive
Regards,
Pablo.
On Fri, Jul 16, 2021 at 8:29 PM Paul Edmon wrote:
> Not in the current version of Slurm. In the next major version long
> term storage of j
Hi,
I am exploring the option of using the Slurm elastic computing support (
https://slurm.schedmd.com/elastic_computing.html ) together with the Slurm
configless support ( https://slurm.schedmd.com/configless_slurm.html ) to
deploy dynamic Slurm clusters on OpenStack which can automatically grow an
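As a minimal sketch of how I understand the two features fit together
(assuming the controller runs on a host called slurm-master, which is just a
placeholder name): the controller enables configless mode and cloud DNS in
slurm.conf, and each cloud node's slurmd fetches its configuration from the
controller at start-up:

    # slurm.conf on the controller (sketch)
    SlurmctldParameters=enable_configless,cloud_dns

    # on each cloud compute node, started at boot time
    slurmd --conf-server slurm-master:6817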
Hi Manuel,
A possible workaround is to configure a per-user cgroup limit on the
frontend node so that a single user cannot allocate more than 1 GB of RAM (or
whatever value you prefer). The user would still be able to abuse the
machine, but as soon as his memory usage goes above the limit his job will
be
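One way to implement this (a sketch assuming a systemd-managed login node
using cgroup v1, where the property is MemoryLimit; on cgroup v2 it is
MemoryMax instead) is to cap the systemd slice of each user, for example:

    # limit the slice of the user with UID 1000 (example UID) to 1 GB of RAM
    systemctl set-property user-1000.slice MemoryLimit=1G

    # on newer systemd versions the same limit can be applied to every user
    # slice with a drop-in, e.g. /etc/systemd/system/user-.slice.d/memory.conf:
    [Slice]
    MemoryLimit=1G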
Hi,
We have upgraded from 17.02.3 to 17.11.0, and after the upgrade we have
noticed that a simple "sacct -j $jobid" takes much longer than before.
Before the upgrade sacct was nearly immediate; now it takes around 1
minute.
After enabling the slow query log in MariaDB, we have found this slow
query
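For reference, the slow query log was enabled in the standard MariaDB way,
nothing Slurm-specific (a sketch; the log path and the 5-second threshold
are placeholder values):

    # /etc/my.cnf.d/server.cnf, [mysqld] section
    slow_query_log = 1
    slow_query_log_file = /var/log/mariadb/slow.log
    long_query_time = 5

Once a statement shows up in that log, EXPLAIN can be run on it to check
whether it is using the expected indexes.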