Going off topic, but if you want an ssh client and an X server on a Windows
workstation or laptop, I highly recommend MobaXterm.
You can open a remote desktop easily.
Session types are ssh, VNC, RDP, Telnet(!), Mosh and anything else you can
think of. Including a serial terminal for those times when you…
Why would you run a slurmctld on a *submit* host? You only need the
controller daemon on, well, the controllers (what I would still call
'queue masters' :) ). Personally I'd make quite sure that no-one apart
from admins has rights to log in to those, really!
In fact, you don't need to run any daemons from Slurm itself on a submit
host; the client commands, a copy of slurm.conf and munge are enough…
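(A minimal sketch of setting up a submit-only host, assuming an RPM-based
distro and the usual file locations -- package names and paths may differ,
and 'controller' is a placeholder for your slurmctld host:

  yum install -y slurm munge          # client commands; no slurmd/slurmctld
  scp controller:/etc/slurm/slurm.conf /etc/slurm/slurm.conf
  scp controller:/etc/munge/munge.key /etc/munge/munge.key
  systemctl enable --now munge        # munged must run for authentication
  sinfo                               # sanity check against the controller
)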
I'm a little confused about how this would work. For example, where
does slurmctld run? And if on each submit host, why aren't the control
daemons stepping all over each other?
On 11/22/18 6:38 AM, Stu Midgley wrote:
> indeed.
>
> All our workstations are submit hosts and in the queue, so people can run
> jobs on their local host if they want…
I posted about the local display issue a while back ("Built in X11
forwarding in 17.11 won't work on local displays").
I agree that having some local managed workstations that can also act as
submit nodes is not so uncommon. However we also ran into this on our
official "login nodes" because we use…
Hi Chris,
I really think it is not that uncommon, but in a different way from what
Tina explained.
We HAVE special login nodes to the cluster; no institute can submit from
their workstations, they have to log in to our login nodes.
BUT, they can do it not only by logging in via ssh, but also via FastX…
On Saturday, 24 November 2018 9:12:26 AM AEDT Mark Hahn wrote:
> I think it makes sense. Traditionally, DISPLAY=:0 means "the X server on
> the machine where the client is running". You can trivially
> export DISPLAY=`hostname`$DISPLAY
> and Slurm will be happy, won't it? IE, you have given it…
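(E.g. with the :1 display seen later in this thread, and assuming
`hostname` returns the short name rocks7:

  [mahmood@rocks7 ~]$ echo $DISPLAY
  :1
  [mahmood@rocks7 ~]$ export DISPLAY=`hostname`$DISPLAY
  [mahmood@rocks7 ~]$ echo $DISPLAY
  rocks7:1
)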
Sadly that's exactly what I'm saying. Your $DISPLAY variable is : followed
by a number, and that is the form Slurm forbids, though I'm not clear why.
The code checks (roughly) like this:

    if (display[0] == ':') {
        error("Cannot forward to local display. "
              "Can only use X11 forwarding with network displays.");
        exit(-1);
    }
I think it makes sense. Traditionally, DISPLAY=:0 means "the X server on
the machine where the client is running". You can trivially…
Hi Mahmood,
On Saturday, 24 November 2018 6:52:54 AM AEDT Mahmood Naderan wrote:
> >I suspect if you do:
> >echo $DISPLAY
> >it will say something like :0 and Slurm doesn't allow that at present.
>
> Actually that is not applicable here. Please see below
>
> [mahmood@rocks7 ~]$ echo $DISPLAY
>
> :1
>I suspect if you do:
>echo $DISPLAY
>it will say something like :0 and Slurm doesn't allow that at present.
Actually that is not applicable here. Please see below
[mahmood@rocks7 ~]$ echo $DISPLAY
:1
[mahmood@rocks7 ~]$ srun --x11 --nodelist=compute-0-3 -n 1 -c 6 --mem=8G -A y8 -p RUBY xclock
On Friday, 23 November 2018 7:34:42 PM AEDT Mahmood Naderan wrote:
> Now, the question is, why does the following error happen when we know
> that x11 support had been enabled during the compilation?
>
> [mahmood@rocks7 ~]$ srun --x11 --nodelist=compute-0-5 -n 1 -c 6 --mem=8G -A y8 -p RUBY xclock
>
>You would need to manipulate the xauth and DISPLAY settings to make them
>in a different form (hostname:number or IP:number). This is not hard when
>you know the trick...
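(Presumably the trick is something along these lines -- a sketch, assuming
a local :1 display as seen elsewhere in the thread and a short hostname of
rocks7; the xauth line copies the existing cookie to the new display name:

  export DISPLAY=$(hostname)$DISPLAY        # :1 -> rocks7:1
  xauth add $DISPLAY . $(xauth list :1 | awk '{print $3}')
  srun --x11 xclock
)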
Can you give me a keyword to search for? I cannot understand what is
going to be done.
Regards,
Mahmood
From: slurm-users on behalf of Mahmood Naderan
Sent: Friday, November 23, 2018 8:13:59 PM
To: Slurm User Community List
Subject: Re: [slurm-users] About x11 support
>Then I'd say you run something like:
>srun --var=DISPLAY xterm
There is no such option when I search…
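(For reference, the srun option that does exist is --export; something
like the following would pass a rewritten DISPLAY through to the job,
though on its own it does not copy the xauth cookie:

  srun --export=ALL,DISPLAY=$(hostname)$DISPLAY xterm
)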
> *From:* slurm-users on behalf of
> Mahmood Naderan
> *Sent:* Friday, November 23, 2018 6:34:42 PM
> *To:* Slurm User Community List
> *Subject:* Re: [slurm-users] About x11 support
>
> Hi Gareth,
> Thanks for the info. My cluster is not a big one and I have configured it
> in the following way.
>
…network accessible from the compute nodes.
>
> Gareth
>
> Get Outlook for Android <https://aka.ms/ghei36>
>
> --
> *From:* slurm-users on behalf of
> Mahmood Naderan
> *Sent:* Friday, November 23, 2018 6:34:42 PM
> *To:* Slurm User Community List
> *Subject:* Re: [slurm-users] About x11 support
Hi Gareth,
Thanks for the info. My cluster is not a big one and I have configured it in
the following way.
1- A frontend which runs Rocks 7 (based on CentOS 7) with GNOME. Users log in
to this node *only* via vncviewer.
2- While a user is connected to his GNOME desktop, he opens a terminal and…
Hi Gareth,
Thanks for the info. My cluster is not a big one and I have configured it in
the following way.
1- A frontend which runs Rocks 7 (based on CentOS 7) with GNOME. Users log in
to this node *only* via vncviewer.
2- While a user is connected to his GNOME desktop, he opens a terminal and
may run…
X11 comes up on this list now and then. I'm often tempted to describe our
site's approach and will do so now in case it helps others (or someone wants
to say why it is a terrible approach).
First some preamble:
* we offer 'login' nodes with limits on what can be run, as a
highest-reliability…
indeed.
All our workstations are submit hosts and in the queue, so people can run
jobs on their local host if they want.
We have a GUI tightly integrated with our environment for our staff to
submit and monitor their jobs from (they don't have to touch a single job
script).
On Thu, Nov 22, 2018…
On Thursday, 22 November 2018 9:24:50 PM AEDT Tina Friedrich wrote:
> I really don't want to start a flaming discussion on this - but I don't
> think it's an unusual situation.
Oops sorry, I wasn't intending to imply it wasn't a valid way to do it, it's
just that across the many organisations I've…
I really don't want to start a flaming discussion on this - but I don't
think it's an unusual situation. I have, likewise in roughly 15 years
of doing this, not ever worked anywhere where people didn't have a GUI
to submit from. It's always been a case of 'Want to use the cluster?
We'll make you…
On 22/11/18 5:04 am, Mahmood Naderan wrote:
The idea is to have a job manager that finds the best node for a newly
submitted job. If the user has to manually ssh to a node, why should one
use Slurm or anything else?
You are in a really really unusual situation - in 15 years I've not come
across…
I agree with you on that one - I'd forgotten about that detail. Having to
actually do an 'ssh -X' before you can do 'srun --x11' is quite silly, and
a bit aggravating.
You can do 'ssh -X localhost' and then try the srun; that should work,
as well.
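(For example - the forwarded display number here is illustrative:

  $ ssh -X localhost
  $ echo $DISPLAY
  localhost:10.0
  $ srun --x11 xclock
)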
Tina
On 21/11/2018 18:04, Mahmood Naderan wrote:
>The 'fix' for Mahmood would be to ssh to another host and then submit
>the X11 job.
The idea is to have a job manager that finds the best node for a newly
submitted job. If the user has to manually ssh to a node, why should one
use Slurm or anything else?
Regards,
Mahmood
Hi Chris,
On 11/20/2018 09:09 PM, Chris Samuel wrote:
On Wednesday, 21 November 2018 12:16:04 AM AEDT Mahmood Naderan wrote:
So, I am *guessing* that the latest version of slurm is not compatible with
1804 from CentOS. In other words, something has been added/fixed in the ssh
library which is now causing some mismatches.
On Wednesday, 21 November 2018 12:16:04 AM AEDT Mahmood Naderan wrote:
> So, I am *guessing* that the latest version of slurm is not compatible with
> 1804 from CentOS. In other words, something has been added/fixed in the ssh
> library which is now causing some mismatches.
It's not getting that far…
On Wednesday, 21 November 2018 2:27:15 AM AEDT Christopher Benjamin Coffey
wrote:
> Are you using the built-in slurm x11 support? Or that spank plugin? We
> haven't been able to get the right combo of things in place to get the
> built-in x11 to work.
We're using the built-in X11 support with SS…
Hi Chris,
Are you using the built-in slurm x11 support? Or that spank plugin? We haven't
been able to get the right combo of things in place to get the built-in x11 to
work.
Best,
Chris
—
Christopher Coffey
High-Performance Computing
Northern Arizona University
928-523-1167
On 11/15/18, 5…
I think I know what is going wrong. Actually the bug is not related to
slurm or rocks itself. It is a result of some mismatches due to updates
of software including ssh, CentOS, Rocks and slurm.
Recently, I have updated my rocks using "yum update". The result was
fetching the latest packages…
On Tuesday, 20 November 2018 2:51:26 AM AEDT Mahmood Naderan wrote:
> With and without --x11, I am not able to see xclock on a compute node.
>
> [mahmood@rocks7 ~]$ srun --x11 --nodelist=compute-0-3 -n 1 -c 6 --mem=8G -A y8 -p RUBY xclock
> srun: error: Cannot forward to local display. Can only use X11 forwarding
> with network displays.
On 11/19/18 4:51 PM, Mahmood Naderan wrote:
> With and without --x11, I am not able to see xclock on a compute node.
>
> [mahmood@rocks7 ~]$ srun --x11 --nodelist=compute-0-3 -n 1 -c 6 --mem=8G -A y8 -p RUBY xclock
> srun: error: Cannot forward to local display. Can only use X11
> forwarding with network displays.
Excuse me, the last email was sent by mistake.
The ssh config seems to be fine:
[root@rocks7 ~]# grep X11 /etc/ssh/sshd_config
X11Forwarding yes
X11UseLocalhost no
Werner,
I am running that command on a TTY. As you can see, I can run xclock while
sshing to the node:
[mahmood@rocks7 ~]$ ssh -Y …
With and without --x11, I am not able to see xclock on a compute node.
[mahmood@rocks7 ~]$ srun --x11 --nodelist=compute-0-3 -n 1 -c 6 --mem=8G -A y8 -p RUBY xclock
srun: error: Cannot forward to local display. Can only use X11 forwarding
with network displays.
[mahmood@rocks7 ~]$ srun --nodelist…
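(Per Mark Hahn's suggestion elsewhere in this thread, rewriting DISPLAY
into hostname:number form should at least get past that check -- a sketch,
assuming `hostname` returns rocks7:

  [mahmood@rocks7 ~]$ DISPLAY=$(hostname)$DISPLAY srun --x11 --nodelist=compute-0-3 -n 1 -c 6 --mem=8G -A y8 -p RUBY xclock
)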
On Sunday, 18 November 2018 4:24:08 AM AEDT Mahmood Naderan wrote:
> >What does this command say?
> >
> >scontrol show config | fgrep PrologFlags
>
> [root@rocks7 ~]# scontrol show config | fgrep PrologFlags
> PrologFlags = Alloc,Contain,X11
>
> That means x11 has been compiled in the code (while Werner created the
> roll).
Hello,
Two things: you don't actually seem to have the '--x11' flag on your
srun command? I.e. does 'srun --x11 --nodelist=compute-0-5 -n 1 -c 6
--mem=8G -A y8 -p RUBY xclock' get you any further?
I had some trouble getting the inbuilt X forwarding to work, which had
to do with hostnames & xauth…
Hi,
On the head-node, I have set
X11UseLocalhost no
in /etc/ssh/sshd_config.
Then I log in from my workstation to the head-node with the command
ssh -Y …
Then I simply run:
srun xclock
and this is working.
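(Condensed, that sequence is -- 'head-node' is a placeholder, and
restarting sshd after the config change is implied:

  # on the head-node, in /etc/ssh/sshd_config:
  #   X11UseLocalhost no
  systemctl restart sshd
  # then, from the workstation:
  ssh -Y head-node
  srun xclock
)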
On 11/17/2018 06:24 PM, Mahmood Naderan wrote:
scontrol show config | fgrep PrologFlags…
>What does this command say?
>scontrol show config | fgrep PrologFlags
[root@rocks7 ~]# scontrol show config | fgrep PrologFlags
PrologFlags = Alloc,Contain,X11
That means x11 has been compiled in the code (while Werner created the
roll).
>Check your slurmd logs on the compute nodes…
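(e.g. something like the following -- the log path depends on the
SlurmdLogFile setting in slurm.conf:

  grep -i x11 /var/log/slurmd.log
)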
On Friday, 16 November 2018 10:26:31 PM AEDT Mahmood Naderan wrote:
> So, is it still possible to use spank even when the code is compiled for
> x11?
No. You need to recompile Slurm without X11 support.
What does this command say?
scontrol show config | fgrep PrologFlags
> Does that mean every…
So, is it still possible to use spank even when the code is compiled for
x11?
It seems that Rocks uses RSA keys. It also uses HostbasedAuthentication.
[root@rocks7 ~]# cd /etc/ssh/
[root@rocks7 ssh]# ls
authorized_keys  shosts.equiv  ssh_host_ecdsa_key
ssh_host_ed25519_key.pub  ssh_known_hosts  moduli…
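(Hostbased authentication is usually wired up with entries along these
lines -- a sketch, not necessarily the Rocks defaults:

  # /etc/ssh/sshd_config on the target nodes
  HostbasedAuthentication yes
  # /etc/ssh/ssh_config on the clients
  HostbasedAuthentication yes
  EnableSSHKeysign yes

plus the shosts.equiv and ssh_known_hosts files seen above.)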
It is compiled with x11 support, otherwise you would get an error message
that the option is not supported. As said, it only supports RSA keys
(hardcoded in the src; libssh2 can handle more formats) and you must use
short hostnames. Another thing is we had to set:
* X11Parameters=local_xauthority
because we…
>You can (apparently) still use the external plugin if you build Slurm
>without its internal X11 support.
Is there any way to query slurm to see if the x11 module has been compiled?
Currently, I am using the slurm roll on rocks 7. Previously, I was able to
use spank with slurm roll 17.
While the…
On Thursday, 15 November 2018 9:36:08 PM AEDT Mahmood Naderan wrote:
> Is there any update about native support of x11 in slurm v18?
It works here...
$ srun --x11 xdpyinfo
srun: job 1744869 queued and waiting for resources
srun: job 1744869 has been allocated resources
name of display: loc…
Hi,
Is there any update about native support of x11 in slurm v18?
Prior to that, I used spank-x11 where an rpm file was installed on the
nodes to support x11.
Now that I removed the rpm, I cannot use srun with x11 support.
[mahmood@rocks7 ~]$ srun --nodelist=rocks7 -n 1 -c 4 --mem=4G --x11 -A y8 …