Hi,
On the head-node, I have set
X11UseLocalhost no
in /etc/ssh/sshd_config.
Then I log in from my workstation to the head-node with the command
ssh -Y <head-node>
Then I simply run:
srun xclock
and this is working.
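For reference, a minimal sketch of the whole setup (user and host names are placeholders, and X11Forwarding yes is an assumption on my part, since it is also needed and not always enabled by default):

# on the head-node, in /etc/ssh/sshd_config:
X11Forwarding yes
X11UseLocalhost no

# restart sshd so the change takes effect (service name may differ):
systemctl restart sshd

# from the workstation:
ssh -Y user@head-node
srun xclock

With X11UseLocalhost no, sshd binds the forwarded X11 port to the wildcard address instead of 127.0.0.1, so the compute node on which srun starts xclock can reach the DISPLAY on the head-node.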
On 11/17/2018 06:24 PM, Mahmood Naderan wrote:
scontrol show config | fgrep Prolo
Hi Mahmood,
this question is related to the slurm-roll.
The command rocks sync slurm performs several tasks:
1. a rebuild of 411 is forced
2. on the compute nodes, the command /etc/slurm/slurm-prep.sh start is executed
3. on the compute nodes, slurmd is restarted
4. slurmctld is restarted
Steps 1 and 2 are required.
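For illustration, a rough manual equivalent of those steps under the slurm-roll layout (the 411 rebuild invocation and the rocks run host syntax are from memory and should be treated as assumptions):

# 1. force a rebuild of the 411 files on the head-node (invocation assumed):
make -C /var/411 force
# 2. + 3. run the prep script and restart slurmd on the compute nodes:
rocks run host compute command="/etc/slurm/slurm-prep.sh start; systemctl restart slurmd"
# 4. restart slurmctld on the head-node:
systemctl restart slurmctld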
Hi,
I tried scontrol reconfigure some years ago, but this didn't work in all
cases. Presumably this is because scontrol reconfigure only makes the
running daemons re-read slurm.conf, while some changes (for example
altered node definitions) require a full restart of slurmctld and slurmd.
Best regards
Werner
Hi Mahmood,
I think the problem was that the Python script
/opt/rocks/lib/python2.7/site-packages/rocks/commands/sync/slurm/__init__.py,
which is called by the command rocks sync slurm,
did not restart slurmd on the head-node.
After the restart of slurmctld, slurmd on the head-node still had the old
configuration.
Hi Mahmood,
Please try the following commands on rocks7:
systemctl restart slurmd
systemctl restart slurmctld
scontrol update nodename=rocks7 state=undrain
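Afterwards you can verify that the node has left the drain state, for example with:

scontrol show node rocks7
sinfo -N -n rocks7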
Best regards
Werner
On 05/06/2018 02:09 PM, Mahmood Naderan wrote:
Still I think for some reason, slurm puts the frontend in drain
state
Hi,
what is the output of the following command on rocks7:
slurmd -C
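On a correctly detected node this prints the hardware configuration in slurm.conf syntax, along these lines (the values here are only illustrative):

NodeName=rocks7 CPUs=32 Boards=1 SocketsPerBoard=2 CoresPerSocket=8 ThreadsPerCore=2 RealMemory=64000
UpTime=...

Comparing this with the NodeName entry in /etc/slurm/slurm.conf usually shows the mismatch.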
Best regards
Werner
On 05/05/2018 06:56 PM, Mahmood Naderan wrote:
Quick follow-up.
I see that Sockets for the head node is 1 while for the compute nodes
it is 32. I think that is the reason why Slurm sees only one CPU on the
head node (CPUTot=1).
Mahmood
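Slurm computes CPUTot from the node definition in slurm.conf as Sockets x CoresPerSocket x ThreadsPerCore, so entries along these lines (the node names and the remaining values are assumptions) would explain what Mahmood sees:

NodeName=rocks7 Sockets=1 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
NodeName=compute-0-[0-2] Sockets=32 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN

The first line yields CPUTot=1 for the head-node, the second CPUTot=32 for each compute node. Correcting the head-node line (for example with the values reported by slurmd -C) and restarting the daemons fixes the CPU count.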