Hi,
I see an example in [1] which looks like
sacctmgr modify user bob set GrpTRES=cpu=1500,mem=200,gres/gpu=50
I want to know how the GrpTRES parameters are set. I read [2], which
explains TRES, but the question still remains. Can someone explain
it in more detail?
[1] https://slurm.schedmd.com/resource_limi
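Not from the thread, just a sketch of how I understand it: GrpTRES limits are
attached to an association (cluster/account/user) with sacctmgr and apply to the
sum of TRES used by all running jobs under that association. The account name and
values below are made up, and memory is counted in MB here:

# cap everything under account hpc_group at 1500 CPUs, ~200 GB of memory
# and 50 GPUs in total across all of its running jobs
sacctmgr modify account hpc_group set GrpTRES=cpu=1500,mem=204800,gres/gpu=50

# check which GrpTRES limits are in effect
sacctmgr show assoc format=cluster,account,user,grptres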
On Friday, 23 March 2018 8:16:01 AM AEDT Andreas Hilboll wrote:
> Is this somehow possible? When I tried the above approach, it
> didn't work (squeue reported the job's name to be 'myscript.sh').
It needs to be set before submission for sbatch to use it; a script cannot
affect the environment of the process that submitted it.
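If the goal is to drive the name through the environment, sbatch also reads input
variables such as SBATCH_JOB_NAME (the environment counterpart of --job-name), so
the export has to happen in the submitting shell. A sketch, with myname and
myscript.sh as placeholders:

# set the name in the shell that runs sbatch, not inside myscript.sh
export SBATCH_JOB_NAME=myname
sbatch myscript.sh

# or as a one-off for a single submission
SBATCH_JOB_NAME=myname sbatch myscript.sh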
On Friday, 23 March 2018 8:49:00 AM AEDT Ryan Novosielski wrote:
> If I’m not mistaken, you may submit with multiple partitions specified and
> it will run on the one that makes the most sense.
More importantly, it'll run in the one it can run in first, based on the priority of
the partitions requested.
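For example (partition names are placeholders), a job can be submitted to several
partitions at once and starts in the first one that can take it:

# submit to three partitions; Slurm starts the job wherever it can run first
sbatch --partition=short,medium,long myscript.sh

# or equivalently inside the batch script
#SBATCH --partition=short,medium,long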
I’m not sure if this is what you are trying to accomplish, but we do something
similar using features and job constraints to get jobs to run on any set of
processor types that are available. We have 4 generations of processors on one
of our clusters, and our MPI jobs need to run on all of one generation.
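As a sketch of that kind of setup (feature and node names are invented), each
generation gets a feature in slurm.conf and the job asks for exactly one of them
with the bracketed OR constraint syntax, which makes all allocated nodes share the
same feature:

# slurm.conf: one feature per CPU generation
#   NodeName=node[001-100] Features=haswell ...
#   NodeName=node[101-200] Features=broadwell ...

# batch script: run on any generation, but only one generation per job
#SBATCH --constraint="[haswell|broadwell|skylake|cascadelake]"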
Is there a reason you can't set the job name with either "sbatch
--job-name=myname myscript.sh" or this in your script?
#SBATCH --job-name=myname
That way SLURM_JOB_NAME should be set to what you expect.
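A minimal sketch of the second form (the name and the rest of the script are just
placeholders):

#!/bin/bash
#SBATCH --job-name=myname
# SLURM_JOB_NAME is set inside the job from the submitted name
echo "job name: ${SLURM_JOB_NAME}"
sleep 120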
If I’m not mistaken, you may submit with multiple partitions specified and it
will run on the one that makes the most sense.
> On Mar 22, 2018, at 5:29 PM, Alexander John Mamach
> wrote:
>
> Hi all,
>
> I’ve been looking into a way to automatically migrate queued jobs from one
> partition to another.
Hi all,
I’ve been looking into a way to automatically migrate queued jobs from one
partition to another. For example, if someone submits in partition A and must
wait for resources, move their job request to partition B and try to run, and
if they must still wait, then try partition C, etc?
Thanks,
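Not something from the thread, but for illustration: a job that is still pending can
be moved by hand (or from a watcher script) with scontrol, and submitting to several
partitions up front avoids the migration entirely. The job id and partition names
below are made up:

# move a pending job from partition A to partition B
scontrol update JobId=12345 Partition=partB

# or let Slurm pick among candidates at submit time
sbatch --partition=partA,partB,partC myscript.sh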
Hi,
I'd like to be able to set the SLURM_JOB_NAME from within the
script I'm submitting to `sbatch`. So, e.g., with the script
`myscript.sh`,
#!/bin/bash
export SLURM_JOB_NAME='myname'
sleep 120
and then `sbatch myscript.sh`, I'd like the job's name to be
'myname'.
Is this somehow possible?
On 03/21/2018 08:44 PM, Michael Jennings wrote:
On Wednesday, 21 March 2018, at 20:14:22 (+0100),
Ole Holm Nielsen wrote:
Thanks for your friendly advice! I keep forgetting about Systemd
details, and your suggestions are really detailed and useful for
others! Do you mind if I add your advice
On 03/22/2018 02:10 PM, Patrick Goetz wrote:
> Or even better, don't think about it. If you type
>
>sudo systemctl edit slurmd
>
> this will open an editor. Type your changes into this and save it and
> systemd will set up the snippet file for you automatically (in
> /etc/systemd/system/slurmd.service.d/).
I forgot to add that you will need to reload the daemon after doing this
(and systemd will probably prompt you to do so).
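For illustration only (the LimitMEMLOCK setting below is just a common slurmd tweak,
not something taken from this thread):

sudo systemctl edit slurmd
# in the editor, add a drop-in such as:
#   [Service]
#   LimitMEMLOCK=infinity
# systemd writes it to /etc/systemd/system/slurmd.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart slurmd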
On 03/22/2018 08:10 AM, Patrick Goetz wrote:
Or even better, don't think about it. If you type
sudo systemctl edit slurmd
this will open an editor. Type your changes
Or even better, don't think about it. If you type
sudo systemctl edit slurmd
this will open an editor. Type your changes into this and save it and
systemd will set up the snippet file for you automatically (in
/etc/systemd/system/slurmd.service.d/).
On 03/21/2018 02:14 PM, Ole Holm Nielsen wrote:
Check config.log: is pkg-config aware of the paths to your Lua shared libraries?
cheers,
Marcin
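For example (the Lua package name depends on the distribution; lua5.3 is just a
guess here):

# see what configure detected
grep -i lua config.log

# check whether pkg-config can resolve the Lua development files
pkg-config --exists lua5.3 && echo found
pkg-config --cflags --libs lua5.3

# if the .pc file lives somewhere non-standard, point pkg-config at it
export PKG_CONFIG_PATH=/opt/lua/lib/pkgconfig:$PKG_CONFIG_PATH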