Can you not also do this with a single configuration file, by configuring
multiple clusters which the user can then choose with the -M option? I suppose
it depends on the use case; if you want to be able to choose a dev cluster over
the production one, to test new config options, then the environment variable
approach is probably the better fit.
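For the multi-cluster route, something like the following would work, assuming
both clusters are registered in slurmdbd (the cluster names "production" and
"dev" are just placeholders):

    sbatch -M production job.sh    # submit to the production cluster
    sbatch -M dev job.sh           # submit to the dev cluster
    sinfo -M production,dev        # query both clusters at once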
Hi Christine,
yes, you can either set the environment variable SLURM_CONF to the full
path of the configuration file you want to use and then run any program,
or set it inline like this:

SLURM_CONF=/your/path/to/slurm.conf sinfo|sbatch|srun|...

But I am not quite sure if this is really the best way to do it.
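If users shouldn't have to remember the variable, one small wrapper per
cluster is an option, too; a sketch, installed as e.g.
/usr/local/bin/sbatch-dev and made executable (the path to the dev config
is made up):

    #!/bin/sh
    # Force the dev cluster's config, pass all arguments through.
    SLURM_CONF=/etc/slurm/dev/slurm.conf exec /usr/bin/sbatch "$@"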
LEROY Christine 208562 writes:
> Is there an env variable in SLURM to tell where the slurm.conf is?
> We would like to have, on the same client node, two possible types of
> submission, addressing two different clusters.
According to man sbatch:

    SLURM_CONF            The location of the Slurm configuration file.
Hi Diego,
sorry for the delay.
On 10/18/21 14:20, Diego Zuccato wrote:
> On 15/10/2021 06:02, Marcus Wagner wrote:
>> Mostly, our problem was that we forgot to add/remove a node to/from
>> the partitions/topology file, which caused slurmctld to refuse to
>> start up. So I wrote a simple checker for that.
On 15/10/2021 06:02, Marcus Wagner wrote:
> Mostly, our problem was that we forgot to add/remove a node to/from the
> partitions/topology file, which caused slurmctld to refuse to start up.
> So I wrote a simple checker for that. Here is the output of a sample run:
Even "just" catching syntax errors would already be a big help.
Mostly, our problem was that we forgot to add/remove a node to/from the
partitions/topology file, which caused slurmctld to refuse to start up. So I
wrote a simple checker for that. Here is the output of a sample run:
reading '../conf/rcc/slurm.conf' ...
reading '../conf/rcc/nodes.conf' ...
reading
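That checker isn't shown here, but a minimal cross-check in the same spirit
might look like this (file names and config layout are assumptions, and
NodeName=DEFAULT lines or Nodes=ALL are not handled; scontrol is only used
to expand host ranges, which works without a running cluster):

    #!/bin/bash
    # Cross-check: every node referenced by a partition must be declared,
    # and every declared node should appear in at least one partition.
    NODES_CONF=../conf/rcc/nodes.conf
    PARTS_CONF=../conf/rcc/partitions.conf

    # Expand host ranges like node[001-100] via scontrol.
    declared=$(grep -oP '^NodeName=\K\S+' "$NODES_CONF" \
               | xargs -n1 scontrol show hostnames | sort -u)
    referenced=$(grep -oP 'Nodes=\K\S+' "$PARTS_CONF" \
                 | xargs -n1 scontrol show hostnames | sort -u)

    # In a partition but never declared: slurmctld will refuse to start.
    comm -13 <(echo "$declared") <(echo "$referenced") | sed 's/^/undeclared: /'
    # Declared but in no partition: usually a forgotten edit.
    comm -23 <(echo "$declared") <(echo "$referenced") | sed 's/^/unused: /'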
Sadly no. There is a feature request for one though:
https://bugs.schedmd.com/show_bug.cgi?id=3435
What we've done in the meantime is put together a GitLab runner which
basically starts up a mini instance of the scheduler and runs slurmctld
on the slurm.conf we want to put in place. We then promote the config only
if the controller comes up cleanly.
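A rough sketch of such a check (paths and the 10-second window are
assumptions, not their actual runner config; $CI_PROJECT_DIR is GitLab's
standard checkout variable):

    #!/bin/bash
    # Start slurmctld in the foreground on the candidate config; if it is
    # still alive after 10 seconds, treat the config as accepted.
    export SLURM_CONF="$CI_PROJECT_DIR/slurm.conf"

    timeout 10 slurmctld -D -v
    rc=$?

    # timeout(1) exits 124 when it had to kill the command, i.e. the
    # controller was still running and had accepted the configuration.
    if [ "$rc" -eq 124 ]; then
        echo "slurm.conf accepted"
        exit 0
    fi
    echo "slurmctld exited early (status $rc); configuration rejected" >&2
    exit 1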