The 2019 Slurm User Group Meeting will be held in Salt Lake City at the
University of Utah on September 17-18.
Registration for this user group meeting typically opens in May.
Jacob
On Mon, Mar 25, 2019 at 2:57 PM David Baker wrote:
>
> Hello,
>
> I was searching the web to see if there was going to be a Slurm users'
> meeting this year, but couldn't find anything.
Hello,
I was searching the web to see if there was going to be a Slurm users’ meeting
this year, but couldn’t find anything. Does anyone know if there is a users’
meeting planned for 2019? If so, is it most likely going to be held as part of
Supercomputing in Denver? Please let me know if you…
I have created a small group of 4 nodes using my lab mates' computers to perform
calculations overnight.
The algorithm has a random component. I have to run the same program with the
same input data several thousand times. To distinguish the executions I have
tried to create a folder with the co…
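A Slurm job array is the usual fit for this pattern. A minimal sketch, assuming a batch script along these lines ('my_program' and 'input.dat' are placeholders, and very large arrays may need splitting to respect the site's MaxArraySize limit):

    #!/bin/bash
    #SBATCH --job-name=random-runs
    #SBATCH --array=1-1000        # one array task per execution
    #SBATCH --ntasks=1

    # Give every execution its own folder, named after the array index,
    # so the thousands of runs stay distinguishable afterwards.
    RUN_DIR="run_${SLURM_ARRAY_TASK_ID}"
    mkdir -p "$RUN_DIR"
    cd "$RUN_DIR"
    ../my_program ../input.dat > output.log 2>&1

Since the algorithm is randomized, each task can also record its seed (or derive one from SLURM_ARRAY_TASK_ID) so individual runs remain reproducible.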
On Monday, 25 March 2019, at 12:57:46,
Ryan Novosielski wrote:
> If the error message is accurate, the fix may be to have the VNC
> server not set DISPLAY to localhost:10.0 or similar, as SSH normally
> does these days, but instead configure it to set DISPLAY to
> fqdn:10.0. We had to do something similar with FastX.
Hi Mahmood,
If your Slurm version is at least 18.08, then you should be able to do it
with a heterogeneous job. See
https://slurm.schedmd.com/heterogeneous_jobs.html
Cheers,
Rafael.
On Mon, 25 Mar 2019 at 13:10, Mahmood Naderan
wrote:
> Hi
> Is it possible to submit a multinode MPI job with the following config:
> Node1: 16 cpu, 90GB
> Node2: 8 cpu, 20GB
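For illustration, a heterogeneous job along the lines Rafael suggests might look like this with the 18.08 batch syntax, where '#SBATCH packjob' separates the two components ('./my_mpi_app' is a placeholder):

    #!/bin/bash
    #SBATCH --nodes=1 --ntasks=16 --mem=90G
    #SBATCH packjob
    #SBATCH --nodes=1 --ntasks=8 --mem=20G

    # Launch the application on both components of the heterogeneous job;
    # the colon separates the per-component commands.
    srun ./my_mpi_app : ./my_mpi_app

Whether the two components share a single MPI_COMM_WORLD depends on the site's MPI stack and Slurm configuration, so treat this as a sketch rather than a guaranteed recipe.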
If the error message is accurate, the fix may be to have the VNC
server not set DISPLAY to localhost:10.0 or similar, as SSH normally
does these days, but instead configure it to set DISPLAY to
fqdn:10.0. We had to do something similar with FastX.
On 3…
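As a rough illustration of the fix being described (this is not any particular VNC server's configuration; the display number 10 is just an example):

    # Have the session advertise the machine's full hostname rather than
    # localhost, so X11 clients on other hosts can resolve the display.
    export DISPLAY=$(hostname -f):10.0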
Hi Chris,
Christopher Benjamin Coffey writes:
> Loris,
>
> Glad you've made some progress.
>
> We finally got it working as well, and have two findings:
>
> 1. the login node fqdn must be the same as the compute nodes
This is the case on our system.
> 2. --x11 is not required to be added to srun…
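For background, Slurm's built-in X11 forwarding (available since 17.11) is switched on in slurm.conf; a minimal sketch:

    # slurm.conf: enable Slurm's built-in X11 forwarding
    PrologFlags=x11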
Hi
Is it possible to submit a multinode MPI job with the following config:
Node1: 16 cpu, 90GB
Node2: 8 cpu, 20GB
?
Regards,
Mahmood
Hello Doug,
Thank you for your detailed reply regarding how to set up backfill. There's
quite a lot to take in there. Fortunately, I now have a day or two to read up
and digest the ideas now that our cluster is down due to a water-cooling
failure. In the first instance, I'll certainly implement…
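For anyone following the thread, backfill is configured in slurm.conf through SchedulerType and SchedulerParameters; the values below are illustrative rather than recommendations:

    # slurm.conf: backfill scheduling with a few common tuning knobs
    SchedulerType=sched/backfill
    SchedulerParameters=bf_window=10080,bf_continue,bf_max_job_user=50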
Dear all,
Using these config files,
https://github.com/psteinb/docker-centos7-slurm/blob/7bdb89161febacfd2dbbcb3c5684336fb73d7608/gres.conf
https://github.com/psteinb/docker-centos7-slurm/blob/7bdb89161febacfd2dbbcb3c5684336fb73d7608/slurm.conf
I observed some weird behavior of the '--gres-flags=…' option.
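For readers without the linked files to hand: gres.conf typically maps each GPU to the CPU cores local to it, and --gres-flags=enforce-binding asks Slurm to honour that mapping strictly. An illustrative sketch (device paths and core ranges are invented):

    # gres.conf: two GPUs, each bound to the cores of one socket
    Name=gpu File=/dev/nvidia0 Cores=0-7
    Name=gpu File=/dev/nvidia1 Cores=8-15

    # Example request with strict binding:
    #   sbatch --gres=gpu:1 --gres-flags=enforce-binding job.sh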