Regarding NFS shares and Python (and plenty of other packages too): pay attention to where the NFS server is located on your network. The NFS server should be part of your cluster, or at least have a network interface on your cluster fabric.
If you have a home directory server which is a campus NFS server and you are
NATting via your head node, then every time a parallel multinode job starts up
you will pull in libraries multiple times, and this will be a real performance
bottleneck.

You do have to have a home directory mounted on the nodes - either the user's
real home directory or something which looks like a home directory. Oodles of
software packages depend on dot files in the home directory, and you won't get
far without one.

Eric, my advice would be to definitely learn the Modules system and implement
modules for your users. Also, if you could give us some idea of your storage
layout, that would be good.

On 11 May 2018 at 08:55, Miguel Gutiérrez Páez <mgutier...@gmail.com> wrote:

> Hi,
>
> I install all my apps on shared storage, and change environment
> variables (path, vars, etc.) with Lmod. It's very useful.
>
> Regards.
>
> On Fri, 11 May 2018 at 6:19, Eric F. Alemany (<ealem...@stanford.edu>)
> wrote:
>
>> Hi Lachlan,
>>
>> Thank you for sharing your environment. Everyone has their own set of
>> rules and I appreciate everyone's input.
>> It seems as if the NFS share is a great place to start.
>>
>> Best,
>> Eric
>> _____________________________________________________________________
>>
>> *Eric F. Alemany*
>> *System Administrator for Research*
>>
>> Division of Radiation & Cancer Biology
>> Department of Radiation Oncology
>>
>> Stanford University School of Medicine
>> Stanford, California 94305
>>
>> Tel: 1-650-498-7969 (No Texting)
>> Fax: 1-650-723-7382
>>
>>
>> On May 10, 2018, at 4:23 PM, Lachlan Musicman <data...@gmail.com> wrote:
>>
>> On 11 May 2018 at 01:35, Eric F. Alemany <ealem...@stanford.edu> wrote:
>>
>>> Hi All,
>>>
>>> I know this might sound like a very basic question: where in the
>>> cluster should I install Python and R?
>>> Headnode?
>>> Execute nodes?
>>>
>>> And is there a particular directory (path) I need to install Python
>>> and R in?
>>>
>>> Background:
>>> SLURM on Ubuntu 18.04
>>> 1 headnode
>>> 4 execute nodes
>>> NFS shared drive among all nodes.
>>
>> Eric,
>>
>> To echo the others: we have a /binaries NFS share that utilises the
>> standard Environment Modules software so that researchers can manipulate
>> their $PATH on the fly with module load/module unload. That share is
>> mounted on all the nodes.
>>
>> For Python, I use virtualenvs, but instead of activating them, the path
>> is changed by the Module file. Personally, I find conda doesn't work very
>> well in a shared environment. It's fine on a personal level.
>>
>> For R, we have resorted to only installing the main point release because
>> we have >700 libraries installed within R and I don't want to reinstall
>> them every time. We do also have packrat installed so researchers can
>> install their own libraries locally as well.
>>
>> Cheers
>> L.
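For completeness, making such a share visible on every execute node is a one-line /etc/fstab entry per node; the hostname and paths below are hypothetical, and the mount options are just a common starting point, not a recommendation for every workload:

```
# /etc/fstab on each execute node (hypothetical host and paths)
headnode:/binaries  /binaries  nfs  defaults,hard,intr  0  0
```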
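The virtualenv-plus-modulefile approach described in the thread can be sketched roughly as below. All paths and names are hypothetical (a real setup would use the NFS share, e.g. /binaries, mounted on every node), and this assumes the classic Tcl-based Environment Modules; Lmod reads the same Tcl modulefiles, so the sketch applies to either.

```shell
# SHARE would be the NFS share in production (e.g. /binaries); a temp
# directory is used here so the sketch runs anywhere.
SHARE=${SHARE:-/tmp/binaries}

# 1. Create a virtualenv on the shared storage, once, from any node
#    that mounts the share.
python3 -m venv "$SHARE/python/myenv"

# 2. Write a modulefile (Environment Modules uses Tcl syntax) that
#    prepends the virtualenv's bin/ to PATH instead of "activating" it.
mkdir -p "$SHARE/modulefiles/python"
cat > "$SHARE/modulefiles/python/myenv" <<EOF
#%Module1.0
## Hypothetical modulefile: put the shared virtualenv first on PATH.
prepend-path PATH $SHARE/python/myenv/bin
EOF

# 3. Users would then pick it up with:
#      module use $SHARE/modulefiles
#      module load python/myenv
# Until then, the interpreter is reachable by absolute path:
"$SHARE/python/myenv/bin/python" -c 'import sys; print(sys.prefix)'
```

Because the modulefile only prepends to PATH, `module unload` cleanly reverses it, which is the main advantage over sourcing the virtualenv's activate script in a shared environment.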