Hi,
Is there a plan or ETA to add RockyLinux 8 to the supported OSes for slurm-gcp?
Simon
sacctmgr add/delete user basically adds/deletes a Slurm association for
that user/cluster/account.
You need to add (an association for) the user for account B before you can
change their default account to B.
You do *not* need to delete (the association for) the user with account A
unless you want to.
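For example (a rough sketch - "U" and "B" are placeholder user and account names here):

```
# See which associations the user already has
sacctmgr show assoc where user=U

# Add an association for account B first...
sacctmgr add user U account=B

# ...then the default account can be switched
sacctmgr modify user where user=U set defaultaccount=B
```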
I'm not sure what you mean by losing user history?
You did say you want to change the user's association from account 'A' to
account 'B' - well, yes, that means associating the user with B (i.e.
adding them to account B), and removing them from A.
The removing from A is optional of course - u
Hi Chip,
You don't have to delete the user, because a user can be in multiple accounts.
Here's what I'd do:
1. add the user to account B
2. make account B their default
3. remove them from account A
We often swap out user accounts like this.
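In sacctmgr terms that's roughly (a sketch - "someuser", "acct_a" and "acct_b" are placeholders):

```
sacctmgr add user someuser account=acct_b                            # 1. add to account B
sacctmgr modify user where user=someuser set defaultaccount=acct_b   # 2. make B the default
sacctmgr remove user where user=someuser and account=acct_a          # 3. remove from account A
```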
Best,
Joseph
--
Thanks. Guess adding/deleting is the way to go then – I was hoping not to lose
user history, but alas.
From: slurm-users on behalf of "Renfro, Michael"
Reply-To: Slurm User Community List
Date: Friday, August 5, 2022 at 10:17 AM
To: Slurm User Community List
Subject: [ext] Re: [slurm-users
This should work:
sacctmgr add user someuser account=newaccount  # adds user to new account
sacctmgr modify user where user=someuser set defaultaccount=newaccount  # change default
sacctmgr remove user where user=someuser and account=oldaccount  # remove from old account
From: slurm-users on be
I have a user U who is in association with account A, and I want to change that
to account B. The obvious thing does not work:
$ sacctmgr modify user where user="U" set defaultaccount="B"
Can't modify because these users aren't associated with new default account “B”…
OK, fair enough. But I
We are testing Slurm 22.05 and we noticed a behaviour change for
prolog/epilog scripts. We use NHC in the prolog/epilog to check if a
node is healthy. With previous versions (21.08.X and earlier) we had
no problems.
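Roughly, the hook looks like this (a simplified illustration - the script path and contents are placeholders, not our exact setup):

```
# slurm.conf (excerpt)
Prolog=/etc/slurm/prolog.sh
Epilog=/etc/slurm/epilog.sh
```

```
#!/bin/bash
# /etc/slurm/prolog.sh - run NHC before the job starts.
# If NHC exits non-zero, slurmd drains the node and the job is requeued.
exec /usr/sbin/nhc
```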
Now when we do a srun:
* srun -t 1 hostname
```
srun: job 3975 queued and wai
```
Hello,
I think you could use Slurm's power saving mechanism to shut down all your nodes
simultaneously.
Then doing srun -N <N> -C <feature> true (or any other small piece of
work) will wake up N nodes simultaneously.
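A minimal slurm.conf sketch of that power saving setup (the program paths and timings are placeholders for whatever your site uses):

```
# slurm.conf (excerpt) - power saving, values are placeholders
SuspendProgram=/usr/local/sbin/node_poweroff.sh
ResumeProgram=/usr/local/sbin/node_poweron.sh
# idle seconds before a node is powered down
SuspendTime=300
# seconds allowed for a node to power down
SuspendTimeout=60
# seconds allowed for a node to boot and rejoin
ResumeTimeout=600
```

To power everything down right away instead of waiting for SuspendTime, something like "scontrol update nodename=node[001-100] state=power_down" should work, and "srun -N <n> true" then wakes up <n> nodes.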
You can even do srun while your nodes are powering down; Slurm will reboot them
as soon as they're