On 29/08/18 09:10, Priedhorsky, Reid wrote:
> This is surprising to me, as my interpretation is that the first srun
> should allocate only one CPU, leaving 35 for the second srun, which
> also only needs one CPU and need not wait.
> Is this behavior expected? Am I missing something?
That's odd.
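For reference, a minimal sketch of the scenario as I read it (node size and exact flags are illustrative, not taken from the original script):

    # inside a single 36-CPU allocation
    salloc -N1 -n36
    srun -n1 -c1 sleep 600 &   # first step: should claim only 1 CPU
    srun -n1 -c1 hostname      # second step: expected to start immediately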
On Tuesday, 28 August 2018 11:43:54 PM AEST Umut Arus wrote:
> It seems the main problem is: slurmctld: fatal: No front end nodes defined
Frontend nodes are for IBM BlueGene and Cray systems, where you cannot run
slurmd on the compute nodes themselves, so a proxy system must be used
instead.
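For what it's worth, that fatal error usually means the daemons were built
with front-end support; a sketch of what that looks like (the configure flag
and hostname here are from memory, so double-check against your build):

    # a build configured like this requires a front-end entry in slurm.conf:
    ./configure --enable-front-end
    # which in turn needs something like:
    FrontendName=fe01

On an ordinary cluster you want neither of these.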
Thanks for your reply. I'll change the NodeName line to the output of slurmd
-C.
Yes, both the compute node and the ControlMachine are the same machine for
this first test setup. Are any other config parameters needed in the config
file?
thanks...
On Tue, Aug 28, 2018 at 6:26 PM Raymond Wan wrote:
>
> Hi,
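For reference, a minimal single-host slurm.conf along the lines being
discussed (hostname and values are assumptions; the NodeName line should come
from slurmd -C) might look like:

    ClusterName=test
    ControlMachine=umuta
    AuthType=auth/munge
    # replace the next line with the first line printed by "slurmd -C":
    NodeName=umuta CPUs=1 State=UNKNOWN
    PartitionName=debug Nodes=umuta Default=YES MaxTime=INFINITE State=UP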
Registration for the 2018 Slurm User Group Meeting is ending soon. You can
register at https://slug18.eventbrite.com
The meeting will be held on 25-26 September 2018 in Madrid, Spain, at CIEMAT.
- *Standard registration*
- Ends August 31
- $350 USD
- *Late registration*
- Sep
Hi,
On Tuesday, August 28, 2018 09:43 PM, Umut Arus wrote:
> # COMPUTE NODES
> NodeName=umuta CPUs=1 State=UNKNOWN
I'm not sure what the cause of your problem is, but one thing
I noticed is that the line above should be replaced with the
first line of the output of "slurmd -C".
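For reference, on a small test box the output of "slurmd -C" looks roughly
like this (the numbers are machine-specific, of course):

    NodeName=umuta CPUs=4 Boards=1 SocketsPerBoard=1 CoresPerSocket=4 ThreadsPerCore=1 RealMemory=7821
    UpTime=0-02:13:08

Only the first line belongs in slurm.conf.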
>
> Hi,
>
> I'm trying to install and configure the slurm-wlm 17.11.2 package. First I
> wanted to configure it as a single host. munge, slurmd and slurmctld were
> installed. munge and slurmd come up and run properly, but I couldn't get
> slurmctld up and running!
>
> It seems the main problem is: slurmctld: fatal: No front end nodes defined
On Tuesday, 28 August 2018 10:21:45 AM AEST Chris Samuel wrote:
> That won't happen on a well configured Slurm system as it is Slurm's role to
> clear up any processes from that job left around once that job exits.
Sorry Reid, for some reason I misunderstood your email and what you were
talking about.
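(For context, the cleanup behaviour described above relies on Slurm's process
tracking; a common setting, sketched here, is:

    ProctrackType=proctrack/cgroup   # track all of a job's processes in a cgroup

so that nothing escapes Slurm's notice when the job exits.)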
Hi,
We intend to oversubscribe our GPU nodes with OverSubscribe=YES,
ExclusiveUser=YES and Gres=gpu:<type>:<count> (and of course with
gres.conf and cgroup.conf properly configured). We're not sure yet how
to approach accounting with this setup. We want to charge users for the
whole node, whether they are using all of its resources or not.
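A sketch of the setup we have in mind (GPU type, counts and device paths are
placeholders, not our real values):

    # slurm.conf
    NodeName=gpu01 Gres=gpu:v100:4 CPUs=36 RealMemory=190000
    PartitionName=gpu Nodes=gpu01 OverSubscribe=YES ExclusiveUser=YES State=UP

    # gres.conf
    NodeName=gpu01 Name=gpu Type=v100 File=/dev/nvidia[0-3]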