Be careful with this approach.  You also need the same munge key installed everywhere, and if the developers have root on their own machines, they can submit jobs and run Slurm commands as any user.
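For reference, "installed everywhere" just means copying the controller's /etc/munge/munge.key to each submit host, keeping it owned by the munge user and readable only by it, and restarting munged.  Roughly something like this (the host name is only a placeholder, and service/paths can vary by distro):

  # copy the key from the controller to a developer machine acting as a submit host
  scp /etc/munge/munge.key submit-host:/etc/munge/munge.key
  # fix ownership/permissions and restart munge on that host
  ssh submit-host 'chown munge:munge /etc/munge/munge.key && chmod 0400 /etc/munge/munge.key && systemctl restart munge'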

Giving them ssh access sounds significantly safer.  A quick and easy way to make sure users don't abuse the system is to set limits with pam_limits.so, usually configured in /etc/security/limits.conf.  A CPU-time limit of one minute should prevent users from running real work there.  If I'm reading it right, you do want jobs running on that system but don't want people launching work over ssh; in that case, you would need to make sure that pam_limits.so is enabled for ssh but not for Slurm.
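For example, with a "developers" group (the group name is just an assumption; use whatever group your users are in), something like this in /etc/security/limits.conf caps CPU time for their logins:

  # hard CPU-time limit, in minutes, for interactive logins
  @developers    hard    cpu    1

Then check that a "session required pam_limits.so" line is active in /etc/pam.d/sshd (on most distros it is pulled in via the common-session or system-auth includes), and keep it out of whatever PAM stack Slurm uses (only relevant if you set UsePAM in slurm.conf), so batch jobs on the node aren't subject to the same one-minute cap.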

Ryan

On 12/12/19 2:01 AM, Nguyen Dai Quy wrote:
On Thu, Dec 12, 2019 at 5:53 AM Ryan Novosielski <novos...@rutgers.edu> wrote:

    Sure; they’ll need to have the appropriate part of SLURM installed
    and the config file. This is similar to having just one login node
    per user. Typically login nodes don’t run either daemon.


Hi,
It's interesting! Do you have any link/tutorial for this kind of setup?
Thanks,



    On Dec 11, 2019, at 22:41, Victor (Weikai) Xie
    <xiewei...@gmail.com> wrote:

    
    Hi,

    We are trying to set up a tiny Slurm cluster to manage shared
    access to the GPU server in our team. Both slurmctld and slurmd
    are going to run on this GPU server. But here is a problem. On
    one hand, we don't want to give developers ssh access to that
    box, because otherwise they might bypass the Slurm job queue and
    launch jobs directly on the box. On the other hand, if developers
    don't have ssh access to the box, how can they run the 'sbatch'
    command to submit jobs?

    Does Slurm provide an option to allow developers to submit jobs
    right from their own PCs?

    Regards,

    Victor (Weikai)  Xie

