Hello,
I'm trying to run parallel on multiple nodes. Each node may have a different
number of CPUs. It appears the best syntax for this is from the man page --slf
section:
8/my-8-cpu-server.example.com
2/[email protected]
My problem is that I'm running in the SLURM environment [...]
On 11/10/22 12:49, Ken Mankoff wrote:
> Hello,
> I'm trying to run parallel on multiple nodes. Each node may have a different
> number of CPUs. It appears the best syntax for this is from the man page --slf
> section:
> 8/my-8-cpu-server.example.com
> 2/[email protected]
> My problem is that I'm running in the SLURM environment [...]
Hi,
Take a look here for a template:
https://mogonwiki.zdv.uni-mainz.de/dokuwiki/start:working_on_mogon:workflow_organization:node_local_scheduling#running_on_several_hosts
Of course, you need to adjust the partition names and the like, and the
example is unmaintained, but it worked for me [...]
Hello,
On 2022-11-10 at 21:27 +01, Christian Meesters wrote:
> https://mogonwiki.zdv.uni-mainz.de/dokuwiki/start:working_on_mogon:workflow_organization:node_local_scheduling#running_on_several_hosts
That example uses "SLURM_CPUS_PER_TASK". From
https://slurm.schedmd.com/sbatch.html
SLURM_CPUS_PER_TASK [...]
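For context, here is a sketch of how SLURM_CPUS_PER_TASK could be used to build an `--slf` file from an allocation. The node names and CPU count are faked so the transformation itself runs outside a job; inside a real sbatch script you would take them from the SLURM variables shown in the comments:

```shell
# Sketch: turn an allocation's node list into "ncpu/host" lines for --slf.
# Inside a real sbatch job you would use:
#   nodes=$(scontrol show hostnames "$SLURM_JOB_NODELIST")
#   cpus=$SLURM_CPUS_PER_TASK
# Faked here so the transformation can run anywhere:
nodes="node001
node002"
cpus=8

# Prefix every hostname with the per-node CPU count
printf '%s\n' $nodes | sed "s|^|${cpus}/|" > hostfile
cat hostfile
```

Note that SLURM_CPUS_PER_TASK is a single value for the whole allocation, so by itself it cannot express a different CPU count per node, which is the situation in the original question.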
Hi Rob,
On 2022-11-10 at 21:21 +01, Rob Sargent wrote:
> I do this, in slurm bash script, to get the number of jobs I want to
> run (turns out it's better for me to not load the full hyper-threaded
> count)
>
> cores=`grep -c processor /proc/cpuinfo`
> cores=$(( $cores / 2 ))
>
> parallel [...]
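Rob's halving trick, expanded into a self-contained sketch. The assumption of two hardware threads per core, and the final `parallel` command, are illustrative:

```shell
# Logical processors as seen by the kernel (Linux-specific /proc interface)
cores=$(grep -c processor /proc/cpuinfo)

# Halve to target physical cores, assuming 2 hardware threads per core
cores=$(( cores / 2 ))

# Use the result as GNU parallel's job-slot count (command is illustrative):
#   parallel -j "$cores" 'do_work {}' ::: input*
echo "$cores"
```

GNU parallel can also do the counting for you: `parallel --number-of-cores` prints the number of physical CPU cores, which avoids parsing /proc/cpuinfo by hand.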
I'll try to simplify my original question...
If I run
parallel --slf hostfile -j 1000 [...]
On 11/11/22 00:05, Ken Mankoff wrote:
> Hi Rob,
> On 2022-11-10 at 21:21 +01, Rob Sargent wrote:
>> I do this, in slurm bash script, to get the number of jobs I want to
>> run (turns out it's better for me to not load the full hyper-threaded
>> count)
>>
>> cores=`grep -c processor /proc/cpuinfo`
>> cores=$(( $cores / 2 ))