Re: [slurm-users] siesta jobs with slurm, an issue

2018-07-22 Thread Mahmood Naderan
Thanks for the hint. In fact the siesta user wasted my time too!! :/ Regards, Mahmood

On Sun, Jul 22, 2018 at 11:13 PM, Renfro, Michael wrote:
> You’re getting the same fundamental error in both the interactive and
> batch version, though.
>
> The ‘reinit: Reading from standard input’ line …

Re: [slurm-users] siesta jobs with slurm, an issue

2018-07-22 Thread Renfro, Michael
You’re getting the same fundamental error in both the interactive and batch version, though. The ‘reinit: Reading from standard input’ line seemed off, since you were providing an argument for the input file. But all the references I find to running Siesta in their manual (section 3 and section …
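If SIESTA prints ‘reinit: Reading from standard input’, it is ignoring the file names on the command line and waiting on stdin instead. A minimal sketch of the redirected invocation this points to, using the binary path and file names that appear elsewhere in the thread:

    mpirun /share/apps/chem/siesta-4.0.2/spar/siesta < dimer1prime.fdf > dimer1prime.out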

Re: [slurm-users] siesta jobs with slurm, an issue

2018-07-22 Thread Mahmood Naderan
Yes. Since I cannot log in to the nodes with my user account, I first ssh to the node as root and then su there.

[root@rocks7 ~]# ssh compute-0-3
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Last login: Sun Jul 22 21:40:09 2018 from rocks7.local
Rocks Compute Node …
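For reference, the two-step login described above as a sketch, assuming root access on the frontend (the username is taken from the prompts elsewhere in the thread):

    [root@rocks7 ~]# ssh compute-0-3
    [root@compute-0-3 ~]# su - mahmood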

Re: [slurm-users] siesta jobs with slurm, an issue

2018-07-22 Thread John Hearns
Are you very sure that the filesystem with the input file is mounted on the compute nodes? Try to cat the file.

On 22 July 2018 at 19:11, Mahmood Naderan wrote:
> I am able to directly run the command on the node. Please note in the
> following output that I have pressed ^C after some minutes. …
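A quick check along these lines, run from the frontend, assuming the input file sits in the user's home directory on the shared filesystem:

    ssh compute-0-3 'ls -l dimer1prime.fdf && head -3 dimer1prime.fdf'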

Re: [slurm-users] siesta jobs with slurm, an issue

2018-07-22 Thread Mahmood Naderan
I am able to run the command directly on the node. Please note in the following output that I pressed ^C after some minutes, so the errors are related to ^C.

[mahmood@compute-0-3 ~]$ mpirun -np 4 /share/apps/chem/siesta-4.0.2/spar/siesta dimer1prime.fdf dimer1prime.out
Siesta Version : v4. …
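Note that the two file names here are passed as arguments, which this SIESTA build does not read; per the ‘reinit: Reading from standard input’ diagnosis in the replies above, it waits on stdin instead, which would explain why the run sat idle until ^C. A sketch of the same direct run with redirection:

    [mahmood@compute-0-3 ~]$ mpirun -np 4 /share/apps/chem/siesta-4.0.2/spar/siesta < dimer1prime.fdf > dimer1prime.out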

Re: [slurm-users] siesta jobs with slurm, an issue

2018-07-22 Thread Bill Barth
That doesn't look like a Slurm problem to me necessarily. Looks like SIESTA quit of its own volition (thus the call to MPI_ABORT()). I suggest you ask your local site support to take a look or go to the SIESTA developers. I doubt you'll find any SIESTA experts here to help you. All I can suggest …

[slurm-users] siesta jobs with slurm, an issue

2018-07-22 Thread Mahmood Naderan
Hi, I don't know why siesta jobs are aborted by Slurm.

[mahmood@rocks7 sie]$ cat slurm_script.sh
#!/bin/bash
#SBATCH --output=siesta.out
#SBATCH --job-name=siesta
#SBATCH --ntasks=8
#SBATCH --mem=4G
#SBATCH --account=z3
#SBATCH --partition=EMERALD
mpirun /share/apps/chem/siesta-4.0.2/spar/siesta …
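For comparison, a minimal corrected version of this script with the input file redirected on standard input, which is where the replies above converge; the input file name dimer1prime.fdf is taken from another message in the thread:

    #!/bin/bash
    #SBATCH --output=siesta.out
    #SBATCH --job-name=siesta
    #SBATCH --ntasks=8
    #SBATCH --mem=4G
    #SBATCH --account=z3
    #SBATCH --partition=EMERALD
    mpirun /share/apps/chem/siesta-4.0.2/spar/siesta < dimer1prime.fdf > dimer1prime.out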