We are pleased to announce the availability of Slurm version 17.11.8.
This includes over 30 fixes made since 17.11.7 was released at the end
of May. Among them is a change to the slurmd.service file used with
systemd; this fix prevents systemd from destroying the cgroup
hierarchies slurmd/slur
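For context, the standard systemd mechanism for telling the service manager
not to touch a daemon's cgroups is the Delegate= directive; a minimal sketch
of a slurmd.service drop-in along those lines, assuming that is the mechanism
the release uses, would be:

  # /etc/systemd/system/slurmd.service.d/override.conf
  # Ask systemd to leave cgroups created under this unit alone
  [Service]
  Delegate=yes

followed by "systemctl daemon-reload" and a restart of slurmd.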
Hello,
Michael Di Domenico writes:
> did you copy the mca parameters file to all the compute nodes as well?
>
No need: my home directory is shared between the submit machine & the
nodes.
Cheers,
Roger
did you copy the mca parameters file to all the compute nodes as well?
On Thu, Jul 19, 2018 at 11:37 AM, Roger Mason wrote:
> Hello Gilles,
>
> gil...@rist.or.jp writes:
>
>> is the home directory mounted at the same place regardless of whether this
>> is a frontend or a compute node?
>
> One host serves as
Hi all,
We currently have a small Slurm cluster for a research group here where, as
part of that setup, we run a Slurm DBD instance that we use for fair-share
scheduling. The current server platform this runs on is an older Dell PowerEdge
R210 system with a single 4-core Intel Xeon X3430 C
Hello Gilles,
gil...@rist.or.jp writes:
> is the home directory mounted at the same place regardless of whether this
> is a frontend or a compute node?
One host serves as both a frontend and compute node and is used to PXE-boot
the other compute nodes. On the frontend machine (192.168.0.100) I
have:
mo
Roger,
is the home directory mounted at the same place regardless of whether this is
a frontend or a compute node?
I noted you use --export=ALL, so there is a risk you export a $HOME that is
not reachable on the compute nodes.
Cheers,
Gilles
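A quick way to check what the compute nodes actually see as $HOME is to run a
small shell snippet on them; a sketch, assuming a partition named "compute":

  # Print hostname and $HOME, and verify the directory exists, on two nodes
  srun -p compute -N2 bash -c 'hostname; echo $HOME; ls -ld $HOME'

Alternatively, stop exporting the submit-side environment altogether with
--export=NONE.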
- Original Message -
> Hello Paul,
>
> Paul Edmon wri
Interesting. Haven't hit that one before. The file has always worked
for me.
-Paul Edmon-
On 07/19/2018 10:28 AM, Roger Mason wrote:
Hello Paul,
Paul Edmon writes:
So the recommendation I've gotten in the past is to use option number 4
from this FAQ:
https://www.open-mpi.org/faq/?category=tuning#setting-mca-params
Hello Paul,
Paul Edmon writes:
> So the recommendation I've gotten in the past is to use option number 4
> from this FAQ:
>
> https://www.open-mpi.org/faq/?category=tuning#setting-mca-params
>
> This works for both mpirun and srun in Slurm because it's a flat file
> that is read rather than options that are passed in.
So the recommendation I've gotten in the past is to use option number 4 from
this FAQ:
https://www.open-mpi.org/faq/?category=tuning#setting-mca-params
This works for both mpirun and srun in Slurm because it's a flat file
that is read rather than options that are passed in.
-Paul Edmon-
On 07/1
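For anyone following along: the flat-file approach means putting the MCA
settings into a parameter file that Open MPI reads at startup. A minimal
sketch, assuming the per-user location $HOME/.openmpi/mca-params.conf and
reusing the TCP interface restriction from this thread:

  # $HOME/.openmpi/mca-params.conf
  # Restrict the TCP BTL to the cluster network
  btl_tcp_if_include = 192.168.0.0/24

Because every Open MPI process reads this file at startup, the same setting
applies whether the job is launched with mpirun or with srun.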
Thank you Peter,
Bill
-- Original --
From: Peter Kjellström
Date: Thu, Jul 19, 2018 9:51 PM
To: Bill
Cc: Slurm User Community List
Subject: Re: [slurm-users] default memory request
On Thu, 19 Jul 2018 18:57:09 +0800
"Bill" wrote:
> Hi,
>
>
> I just found t
On Thu, 19 Jul 2018 18:57:09 +0800
"Bill" wrote:
> Hi,
>
>
> I just found the way: set "DefMemPerCPU=4096" for the partition in
> slurm.conf.
>
> It will then use a 4G memory request.
That is how we do it too (except not for a specific partition but
globally).
You can also add custom logic to a submi
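For reference, a minimal sketch of both variants in slurm.conf (the 4096
value, node list and partition name are only examples):

  # Global default: 4 GB per allocated CPU unless --mem/--mem-per-cpu is given
  DefMemPerCPU=4096

  # Or per partition:
  PartitionName=compute Nodes=node[01-10] Default=YES DefMemPerCPU=4096

After editing slurm.conf, apply the change with "scontrol reconfigure".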
Hello,
I've run into a problem passing MCA parameters to openmpi2. This runs
fine on the command-line:
/usr/local/mpi/openmpi2/bin/mpirun --mca btl_tcp_if_include \
192.168.0.0/24 -np 10 -hostfile ~/ompi.hosts \
~/Software/Gulp/gulp-5.0/gulp.ompi example2
If I put the MCA parameters in ~/op
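One way to check whether Open MPI is actually picking a value up from a
parameter file rather than only from the command line is to query it with
ompi_info; a sketch, reusing the install prefix and parameter name from the
command above:

  # Show the btl_tcp_if_include parameter as Open MPI currently sees it
  /usr/local/mpi/openmpi2/bin/ompi_info --param btl tcp --level 9 | grep btl_tcp_if_include

This is only a sketch; the exact ompi_info options vary a little between
Open MPI versions.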
Hi,
I just found the way: set "DefMemPerCPU=4096" for the partition in slurm.conf.
It will then use a 4G memory request.
Regards,
Bill
-- Original --
From: "Bill";
Date: Thu, Jul 19, 2018 06:39 PM
To: "Slurm User Community";
Subject: [slurm-users]
Hi,
How do I set a default memory request when using srun without --mem?
For example,
$ srun --mem=40G sleep 120 will request 40G of memory, but if I run
$ srun sleep 120, it may use all of the node's memory.
How do I set up the default srun memory request without the --mem argument?
Many thanks,
Bill
Hi Slurm users,
We have found the need to execute a parallel command on all nodes
running jobs belonging to a particular user.
I have made a configuration for the excellent ClusterShell tool as
documented in https://wiki.fysik.dtu.dk/niflheim/SLURM#clustershell
If you add a "slurmuser" secti
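For anyone trying the same thing, the idea on that wiki page is a Slurm-backed
node-group source for ClusterShell. A rough sketch of such a section (the file
path and squeue format strings here are illustrative, not taken from the wiki):

  # e.g. /etc/clustershell/groups.conf.d/slurm.conf
  [slurmuser,su]
  map: squeue -h -u $GROUP -o "%N" -t running
  list: squeue -h -o "%u" -t running
  cache_time: 60

With that in place you can run a command on every node where a given user has
running jobs, e.g. "clush -bw @su:someuser uptime".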