Loic Tortay <[EMAIL PROTECTED]> writes:

> Perry E. Metzger wrote:
> [...]
>>
>> Maybe some sort of strange myth has been going by so long on this that
>> people refuse to believe that the ticket refresh is a single easy
>> command?
>>
> The "myth" is the ability to automatically get a Kerberos ticket on any
> node in a cluster, *especially* for the nodes on which you can neither
> log in nor run cron jobs to renew tickets (which is ugly and likely to
> be impractical and/or insecure in anything but the most simple
> environment anyway).
It is the way virtually all server credentials are handled. If you have
any kerberized service on the network, it almost always works with
stashed creds.

> That's the point of "kstart" and similar tools,

kstart is a modified version of kinit. It is just a more sophisticated
version of what I described already: it uses an srvtab or keytab to get
the tickets, forks the job, waits, and then does a kdestroy at exit.
I'm not going to say it is a stupid program -- it is very useful -- but
it isn't doing anything terribly deep or special.

> as well as specific modifications/extensions to batch queueing
> systems used where a Kerberos ticket is required for jobs (including
> many HEP sites): *transparently* get and renew Kerberos tickets (for
> the local realm) on *any* node in the cluster without the need to
> ever enter a password on the computing nodes.

One doesn't "enter a password" using tools like kstart, because one
uses an srvtab or keytab -- you are putting the crypto key into a file.

> The tickets are discarded when the process/job ends (unlike the
> "kinit" in a cron job thingy).

It appears you are talking about distributing *user* credentials to the
remote systems. What exactly is it that these jobs are doing that
requires user tickets rather than tickets for the locally provided
service? In any case, for user credentials, kstart isn't the
appropriate mechanism; forwarding a ticket from a trusted machine is.

> Everyday use case example: the user job runs a program binary stored in
> CERN's AFS cell with input data in our AFS cell and writes its output in
> BNL's AFS cell (Kerberos tickets for at least two realms/cells required).

Actually, with cross-realm auth you only need tickets for one realm,
along with an appropriate trust relationship between the two KDCs. It
seems like a bad move for performance to use AFS this way, but it seems
reasonably straightforward.
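To make it concrete, here is a rough sketch of the keytab-based pattern
such tools implement -- get tickets from the stashed key, run the job,
discard the cache on exit. The principal name, keytab path, and renewal
interval are placeholders, not anyone's real configuration:

```shell
#!/bin/sh
# Sketch of the kstart-style lifecycle; batch/node1.example.org and
# /etc/krb5.keytab below are hypothetical names for illustration.
PRINCIPAL="batch/node1.example.org"
KEYTAB="/etc/krb5.keytab"

# Whatever happens to the job, discard the credential cache at exit --
# this is the kdestroy step that a bare "kinit in a cron job" omits.
trap 'kdestroy' EXIT INT TERM

# Obtain tickets non-interactively from the stashed key: no password is
# ever typed, the crypto key simply lives in the keytab file.
kinit -k -t "$KEYTAB" "$PRINCIPAL" || exit 1

# Run the actual job (passed as arguments to this wrapper); a renewal
# loop re-running kinit in the background could be added for long jobs.
"$@"
```

The trap is what ties ticket lifetime to job lifetime, which is the only
genuinely useful thing the wrapper adds over running kinit by hand.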
AFS has an advanced ACL mechanism, so it is not necessary to give the
user's normal credentials away to the job -- it is more than sufficient
to set up a distinct instance for the job that has permission to read
and write only the appropriate files, and to forward the credentials
for the segregated instance to the compute nodes. Naturally, the last
thing the job should do is a kdestroy or the moral equivalent.

Perry

--
Perry E. Metzger		[EMAIL PROTECTED]

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf