[slurm-users] Fwd: Spank Plugin Prolog Issue with Slurm ≥ 20.02.2

2020-09-17 Thread Kaizaad Bilimorya
I thought I would mention this issue on the mailing list to make it easier for people to search for if they have issues with their spank plugin after upgrading to Slurm 20.02.2 or newer. Seems that the slurm_spank_job_prolog (or epilog) functions are not found or called if you don't have "Plugst…
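The preview is cut off, but it appears to refer to pointing slurmd at the SPANK plugin stack configuration. A minimal sketch of what that looks like, assuming the standard `PlugStackConfig` parameter and an illustrative plugin path (both path and plugin name here are examples, not from the original message):

```
# slurm.conf — tell Slurm where the SPANK plugin stack config lives
PlugStackConfig=/etc/slurm/plugstack.conf

# /etc/slurm/plugstack.conf — one line per SPANK plugin
# "optional" lets the job proceed if the plugin fails to load;
# use "required" to fail the job instead
optional /usr/lib64/slurm/myplugin.so arg1=value
```

If slurmd on the compute nodes cannot find this file, the `slurm_spank_job_prolog`/`slurm_spank_job_epilog` callbacks are never invoked, which matches the symptom described above.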

Re: [slurm-users] Fair share per partition

2020-09-17 Thread Mark Dixon
On Thu, 17 Sep 2020, Paul Edmon wrote: So the way we handle it is that we give a blanket fairshare to everyone but then dial in our TRES charge back on a per partition basis based on hardware. Our fairshare doc has a fuller explanation: https://docs.rc.fas.harvard.edu/kb/fairshare/ -Paul Ed…

Re: [slurm-users] Fair share per partition

2020-09-17 Thread Paul Edmon
So the way we handle it is that we give a blanket fairshare to everyone but then dial in our TRES charge back on a per partition basis based on hardware. Our fairshare doc has a fuller explanation: https://docs.rc.fas.harvard.edu/kb/fairshare/ -Paul Edmon- On 9/17/2020 9:30 AM, Mark Dixon wr…
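The per-partition TRES charge-back Paul describes is typically done with `TRESBillingWeights` on each partition in slurm.conf. A hedged sketch, assuming hypothetical node ranges and weight values chosen only for illustration (the real weights would depend on the site's hardware costs):

```
# slurm.conf — make expensive hardware "cost" more fairshare usage.
# Weights below are illustrative, not from the original message.
PartitionName=standard Nodes=node[001-100] TRESBillingWeights="CPU=1.0,Mem=0.25G"
PartitionName=highmem  Nodes=hm[01-04]     TRESBillingWeights="CPU=2.0,Mem=1.0G"
```

With this approach everyone keeps the same raw fairshare, but a CPU-hour on the highmem partition draws down an account's fairshare faster than one on the standard partition.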

[slurm-users] Fair share per partition

2020-09-17 Thread Mark Dixon
Hi all, Clusters sometimes have a couple of different types of hardware, e.g. lots of standard plus small amounts of highmem - with a partition per type. Sometimes one partition, e.g. "standard", is much busier than e.g. "highmem". In a fair share set up with multiple accounts and multiple…
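For readers following this thread, the fairshare factor being discussed is Slurm's classic formula, roughly F = 2^(-usage/shares): an account that has consumed exactly its normalized share gets a factor of 0.5, under-users trend toward 1.0, and over-users toward 0.0. A minimal sketch (the dampening parameter and exact normalization vary by site configuration, so treat this as an approximation, not Slurm's internal code):

```python
import math

def fairshare_factor(effective_usage, norm_shares, dampening=1.0):
    """Approximate Slurm's classic fairshare factor:
    F = 2 ** (-effective_usage / (norm_shares * dampening)).
    effective_usage and norm_shares are both normalized to [0, 1]."""
    if norm_shares <= 0:
        return 0.0
    return 2.0 ** (-effective_usage / (norm_shares * dampening))

# An account using exactly its share lands in the middle:
print(fairshare_factor(0.25, 0.25))  # 0.5
# An idle account gets the full factor:
print(fairshare_factor(0.0, 0.25))   # 1.0
```

Because usage is pooled across partitions by default, heavy use of a busy "standard" partition also depresses an account's priority on an idle "highmem" partition, which is the crux of the question above.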