On Thursday, 17 October 2019, at 16:50:29,
Goetz, Patrick G wrote:
> Are applications even aware when they've been hit by a SIGSTOP? This
> idea of a license being released under these circumstances just
> seems very unlikely.
No, which is why SIGSTOP cannot be caught. The action is carried out
entirely by the kernel, so the process is never aware of it.
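A quick way to see this from a shell (a minimal sketch; sleep stands in
for any application):

sleep 300 &
pid=$!
kill -STOP "$pid"               # handled by the kernel; no handler ever runs
ps -o pid,stat,comm -p "$pid"   # STAT column shows "T" (stopped)
kill -CONT "$pid"               # resume it
kill "$pid"                     # clean up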
Are applications even aware when they've been hit by a SIGSTOP? This
idea of a license being released under these circumstances just seems
very unlikely.
On 10/15/19 1:57 PM, Brian Andrus wrote:
> It seems there are some details that would need to be addressed.
>
> A suspend signal is nothing more than a signal sent to the job's processes
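For reference, the Slurm-level operation under discussion looks like
this (job ID 12345 is hypothetical); suspend stops the job's processes
but keeps the allocation, and resume sends SIGCONT:

scontrol suspend 12345    # processes show state "T" in ps; cores stay allocated
scontrol resume 12345     # SIGCONT; the job continues where it left off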
We have been using:
https://github.com/fasrc/slurm-diamond-collector
for our setup, though it gives more of an overall look. We also use
this:
https://github.com/fasrc/lsload
-Paul Edmon-
On 10/16/19 4:53 PM, Will Dennis wrote:
Hi all,
We run a few Slurm clusters here, all using Slurm
Hello,
I'm testing X11 forwarding and it seems it runs *much* slower if I run
it through Slurm as opposed to running it via ssh forwarding.
For example, running ANSYS via srun gives me extremely laggy, unusable
output:
login ~]$ env --unset LD_PRELOAD srun --pty --x11 bash
node ~]$ unset SLURM_G
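For comparison, the baseline I am measuring against is plain ssh X11
forwarding to the same node (node name assumed):

login ~]$ ssh -X node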
Brian Andrus writes:
> When running a report to try to get jobs that start during a particular
> day, sacct is returning a number of jobs that show as starting/ending
> outside the range.
> What could cause this?
sacct selects jobs that were eligible to run (including actually
running) between the start and end times you specify, not only jobs
that started within that window.
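For example (the dates are hypothetical; -X collapses job steps to one
line per job):

sacct -S 2019-10-16T00:00 -E 2019-10-17T00:00 -X --format=JobID,Start,End,State

A job that became eligible before the window and finished after it will
still be listed, so to keep only jobs that actually started inside the
window you have to filter on the Start column yourself.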
Hi,
I used to run a hello-world MPI job for testing purposes. Now I see that it
doesn't work: the log file shows a memory allocation problem, yet squeue
shows the job in the R state endlessly.
[mahmood@hpc ~]$ cat slurm_script1.sh
#!/bin/bash
#SBATCH --job-name=hello_mpi
#SBATCH --output=hellompi.log
#SBAT
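The script above is cut off; for reference, here is a minimal sketch of
a runnable script of the same shape (the task count and memory request
are assumptions, and hello_mpi is the user's own binary):

#!/bin/bash
#SBATCH --job-name=hello_mpi
#SBATCH --output=hellompi.log
#SBATCH --ntasks=4            # assumed task count
#SBATCH --mem-per-cpu=1G      # explicit memory request; a too-small limit is
                              # a common cause of allocation failures
mpirun ./hello_mpi            # assumes hello_mpi is built against the
                              # cluster's MPI library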