I mistyped that. "they CAN'T get into the login nodes using SSH keys"
On 5/27/21 10:08 AM, Lloyd Brown wrote:
they get into the login nodes using SSH keys
--
Lloyd Brown
HPC Systems Administrator
Office of Research Computing
Brigham Young University
http://marylou.byu.edu
While that's absolutely a significant issue, here's how we solved it,
despite still using user keys. This basically assures that while people
can SSH around with keys within our cluster, they get into the login
nodes using SSH keys. Combine that with the required enrollment in 2FA,
and I think [...]
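One way to get that split (a sketch only, assuming plain OpenSSH; the exact mechanism used here isn't shown in the message) is to disable public-key authentication in sshd_config on the login nodes while leaving it enabled on the compute nodes:

  # /etc/ssh/sshd_config on the *login* nodes only; compute nodes keep the
  # default "PubkeyAuthentication yes", so passphrase-less user keys still
  # work for node-to-node hops inside the cluster
  PubkeyAuthentication no
  # force the password/2FA path instead (assumes the 2FA module is wired
  # into PAM's keyboard-interactive stack)
  KbdInteractiveAuthentication yes
  AuthenticationMethods keyboard-interactive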
On Thursday, 27 May 2021, at 08:19:14 (+0200),
Loris Bennett wrote:
Thanks for the detailed explanations. I was obviously completely
confused about what MUNGE does. Would it be possible to say, in very
hand-waving terms, that MUNGE performs a similar role for the access of
processes to nodes [...]
Hi Ole,
Ole Holm Nielsen writes:
> Hi Loris,
>
> On 5/27/21 8:19 AM, Loris Bennett wrote:
>> Regarding keys vs. host-based SSH, I see that host-based would be more
>> elegant, but would involve more configuration. What exactly are the
>> simplification gains you see? I just have a single cluster [...]
Ward Poelmans writes:
> On 27/05/2021 08:19, Loris Bennett wrote:
>> Thanks for the detailed explanations. I was obviously completely
>> confused about what MUNGE does. Would it be possible to say, in very
>> hand-waving terms, that MUNGE performs a similar role for the access of
>> processes to nodes [...]
On 27/05/2021 08:19, Loris Bennett wrote:
> Thanks for the detailed explanations. I was obviously completely
> confused about what MUNGE does. Would it be possible to say, in very
> hand-waving terms, that MUNGE performs a similar role for the access of
> processes to nodes as SSH does for the access of users to nodes? [...]
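Very hand-wavingly: MUNGE authenticates the messages that the Slurm daemons exchange (so slurmd can trust which user a request or job belongs to), but it plays no part in interactive SSH logins. A quick way to see MUNGE's actual job is the standard munge/unmunge round trip ('node001' is a placeholder hostname):

  munge -n | unmunge              # create a credential and validate it locally
  munge -n | ssh node001 unmunge  # validate it on another node; this only
                                  # works if both nodes share the same munge.key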
Hi Loris,
On 5/27/21 8:19 AM, Loris Bennett wrote:
Regarding keys vs. host-based SSH, I see that host-based would be more
elegant, but would involve more configuration. What exactly are the
simplification gains you see? I just have a single cluster and naively I
would think dropping a script in [...]
Hi Michael,
Michael Jennings writes:
> On Tuesday, 25 May 2021, at 14:09:54 (+0200),
> Loris Bennett wrote:
>
>> I think my main problem is that I expect logging in to a node with a job
>> to work with pam_slurm_adopt but without any SSH keys. My assumption
>> was that MUNGE takes care of the authentication [...]
On 25-05-2021 18:07, Loris Bennett wrote:
PS Am I wrong to be surprised that this is something one needs to roll
oneself? It seems to me that most clusters would want to implement
something similar. Is that incorrect? If not, are people doing
something else? Or did some vendor setting things [...]
On 25-05-2021 19:03, Patrick Goetz wrote:
On 5/25/21 11:07 AM, Loris Bennett wrote:
PS Am I wrong to be surprised that this is something one needs to roll
oneself? It seems to me that most clusters would want to implement
something similar. Is that incorrect? If not, are people doing
something else? Or did some vendor setting things [...]
On Tuesday, 25 May 2021, at 14:09:54 (+0200),
Loris Bennett wrote:
> I think my main problem is that I expect logging in to a node with a job
> to work with pam_slurm_adopt but without any SSH keys. My assumption
> was that MUNGE takes care of the authentication, since users' jobs start
> on nodes without the need for keys. [...]
On Tue, 25 May 2021 14:09:54 +0200
"Loris Bennett" wrote:
> to work with pam_slurm_adopt but without any SSH keys. My assumption
> was that MUNGE takes care of the authentication, since users' jobs
> start on nodes without the need for keys.
>
> Can someone confirm that this expectation is wrong [...]
On 5/25/21 11:07 AM, Loris Bennett wrote:
PS Am I wrong to be surprised that this is something one needs to roll
oneself? It seems to me that most clusters would want to implement
something similar. Is that incorrect? If not, are people doing
something else? Or did some vendor setting things [...]
...I really didn't want to wade in on this, but why not set up host
based ssh? It's not exactly as if passphraseless keys give better security?
Tina
On 25/05/2021 17:23, Brian Andrus wrote:
Your mistake is that munge has nothing to do with sshd, which is the
daemon you are connecting to. [...]
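For reference, the usual OpenSSH host-based recipe looks roughly like this (a sketch; the option names are standard OpenSSH, but file contents and the list of trusted hosts are site-specific):

  # on every node, /etc/ssh/sshd_config (server side):
  HostbasedAuthentication yes

  # on every node, /etc/ssh/ssh_config (client side):
  HostbasedAuthentication yes
  EnableSSHKeysign yes

  # plus: list the trusted cluster hostnames in /etc/ssh/shosts.equiv and
  # make sure every node's host key is in /etc/ssh/ssh_known_hosts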
Your mistake is that munge has nothing to do with sshd, which is the
daemon you are connecting to. It can use PAM (hence the ability to use
pam_slurm_adopt), but munge has no pam integration that I am aware of.
As far as your /etc/skel bits, that is something that is done when a
user's home is created. [...]
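In other words, /etc/skel is only read when a home directory is first created, which would explain why homes migrated from the old cluster still carry the old key files while freshly created accounts don't. For example ('alice' is a placeholder account):

  useradd -m alice   # copies /etc/skel/* into /home/alice at creation time
  # editing /etc/skel afterwards does not touch existing home directories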
Hi Lloyd,
Lloyd Brown writes:
> We had something similar happen, when we migrated away from a Rocks-based
> cluster. We used a script like the one attached, in /etc/profile.d, which was
> modeled heavily by something similar in Rocks.
>
> You might need to adapt it a bit for your situation, but [...]
We had something similar happen, when we migrated away from a
Rocks-based cluster. We used a script like the one attached, in
/etc/profile.d, which was modeled heavily by something similar in Rocks.
You might need to adapt it a bit for your situation, but otherwise it's
pretty straightforward.
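The attached script isn't reproduced here, but the general Rocks-style idea is a profile.d snippet that creates a passphrase-less key pair on first login and authorizes it for the cluster - a hypothetical sketch:

  # hypothetical /etc/profile.d/ssh-key.sh - illustrates the idea only,
  # not the attached script
  if [ ! -f "$HOME/.ssh/id_ed25519" ]; then
      mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
      ssh-keygen -q -t ed25519 -N "" -f "$HOME/.ssh/id_ed25519"
      cat "$HOME/.ssh/id_ed25519.pub" >> "$HOME/.ssh/authorized_keys"
      chmod 600 "$HOME/.ssh/authorized_keys"
  fi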
Hi Ole,
Thanks for the links.
I have discovered that the users whose /home directories were migrated
from our previous cluster all seem to have a pair of keys which were
created along with files like '~/.bash_profile'. Users who have been
set up on the new cluster don't have these files.
Is the [...]
Hi Loris,
I think you need, as pointed out by others, either of:
* SSH keys, see
https://wiki.fysik.dtu.dk/niflheim/SLURM#ssh-keys-for-password-less-access-to-cluster-nodes
* SSH host-based authentication, see
https://wiki.fysik.dtu.dk/niflheim/SLURM#host-based-authentication
/Ole
On 5/25/21 [...]
Hi everyone,
Thanks for all the replies.
I think my main problem is that I expect logging in to a node with a job
to work with pam_slurm_adopt but without any SSH keys. My assumption
was that MUNGE takes care of the authentication, since users' jobs start
on nodes without the need for keys.
Can someone confirm that this expectation is wrong [...]
Oh, you could also use the ssh-agent to manage the keys, then use
'ssh-add ~/.ssh/id_rsa' to type the passphrase once for your whole
session (from that system).
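A typical agent session might look like this ('node001' is a placeholder):

  eval "$(ssh-agent -s)"    # start the agent and export its environment
  ssh-add ~/.ssh/id_rsa     # enter the passphrase once
  ssh node001               # later logins in this session reuse the cached key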
Brian Andrus
On 5/21/2021 5:53 AM, Loris Bennett wrote:
Hi,
We have set up pam_slurm_adopt using the official Slurm documentation
and Ole's information on the subject. [...]
Umm.. Your keys are password protected. If they were not, you would be
getting what you expect:
Enter passphrase for key '/home/loris/.ssh/id_rsa':
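If that prompt is unwanted, the passphrase can also be removed from the existing key (at the cost of an unencrypted private key on disk):

  ssh-keygen -p -f ~/.ssh/id_rsa   # asks for the old passphrase; pressing
                                   # Enter twice sets an empty one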
Brian Andrus
On 5/21/2021 5:53 AM, Loris Bennett wrote:
Hi,
We have set up pam_slurm_adopt using the official Slurm documentation
and Ole's information on the subject. [...]
* Tina Friedrich [210521 16:35]:
> If this is simply about quickly accessing nodes that they have jobs on to
> check on them - we tell our users to 'srun' into a job allocation (srun
> --jobid=XX).
Hi Tina,
sadly, this does not always work in version 20.11.x any more because of the
new non-overlapping job steps. [...]
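If the culprit is 20.11's exclusive-by-default job steps, the usual workaround is to let the extra step share the job's resources with --overlap (a sketch; the job ID is a placeholder):

  srun --jobid=1234567 --overlap --pty /bin/bash
  # --overlap (new in 20.11) lets this interactive step run alongside the
  # steps that already hold the job's resources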
Hi Loris,
pam_slurm_adopt just allows or disallows a user's login to a node,
depending on whether the user has a job running there or not.
You still have to set something up so that the user can log in without a
password, e.g. through host-based authentication.
Best
Marcus
On 21.05.2021 at 14:53, Loris Bennett wrote:
Hi,
We have set up pam_slurm_adopt using the official Slurm documentation [...]
Hi Loris,
I don't know if this would solve your problem, but I think that node SSH
keys should be gathered and distributed. See my notes in
https://wiki.fysik.dtu.dk/niflheim/SLURM#ssh-keys-for-password-less-access-to-cluster-nodes
/Ole
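One common way to do the gathering is ssh-keyscan (a sketch; node names are placeholders, and the page above describes a fuller recipe):

  for n in node001 node002 node003; do
      ssh-keyscan -t ed25519 "$n"
  done >> /etc/ssh/ssh_known_hosts   # then distribute this file to all nodes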
On 21-05-2021 14:53, Loris Bennett wrote:
Hi,
We have set up pam_slurm_adopt using the official Slurm documentation [...]
Hi Loris,
I'm not a PAM expert, but - pam_slurm_adopt doesn't do authentication,
it only verifies that access for the authenticated user is allowed (by
checking there's a job). 'account' not 'auth' in PAM config. As in, it's
got nothing to do with how the user logs in to the server / is
authenticated. [...]
Hi Loris,
this depends largely on whether host-based authentication is
configured (which does not seem to be the case for you) and also on
what exactly the PAM stack for sshd looks like in /etc/pam.d/sshd.
As the rules are worked through in the order they appear in
/etc/pam.d/sshd, pam_slurm_adopt [...]
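For orientation, the account section of /etc/pam.d/sshd typically ends up looking something like this sketch (along the lines of the pam_slurm_adopt documentation; exact contents and ordering are distro- and site-specific):

  account    sufficient   pam_access.so       # e.g. let admins in regardless
  account    required     pam_slurm_adopt.so
  # pam_slurm_adopt is an *account* module: it only decides whether an
  # already-authenticated user may enter the node (i.e. owns a job there)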
Hi,
We have set up pam_slurm_adopt using the official Slurm documentation
and Ole's information on the subject. It works for a user who has SSH
keys set up, although the passphrase is needed:
$ salloc --partition=gpu --gres=gpu:1 --qos=hiprio --ntasks=1 --time=00:30:00 --mem=100
salloc: Granted job allocation [...]