Hi,
I'm using Slurm together with cfncluster autoscaling.
I just have a problem and thought that you may help.
When I run a script:
#Script.sh
#!/bin/bash
./myprogram --threads=5 inputfile outputfile
The program uses 5 threads. Assuming only 1 thread per CPU is launched,
it would requi
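A minimal submission sketch for this case, assuming the program should get one CPU per thread (the program name and arguments are taken from the post; everything else is a standard sbatch directive):

```shell
#!/bin/bash
#SBATCH --job-name=myprogram
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=5   # one CPU for each of the program's 5 threads

# program and arguments as in the original post
./myprogram --threads=5 inputfile outputfile
```

With --cpus-per-task=5 Slurm allocates 5 CPUs on one node, so the 5 threads are not packed onto a single CPU.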
Hi folks,
Not sure if I should post this here, but thought you may have seen this
problem before.
I'm running Slurm (16.05) together with cfncluster from AWS and using
autoscaling. It seems to work except for dependencies.
I always get an error:
sbatch: error: Batch job submission failed: Jo
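For reference, a dependency chain is normally submitted like this (a sketch; the script names are placeholders, and --parsable makes sbatch print only the job id):

```shell
# Submit the first job and capture its id
jid=$(sbatch --parsable first_step.sh)

# Submit the second job so that it starts only after the first one succeeds
sbatch --dependency=afterok:${jid} second_step.sh
```

If sbatch rejects this at submission time, the job id passed to --dependency is usually wrong or refers to a job the controller no longer knows about.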
Hi,
I'm running a batch array script and would like to execute a command
after the last task:
#SBATCH --array=1-10:1%10
Rscript myscript.R inputdir/file.${SLURM_ARRAY_TASK_ID}
# Would like to run a command after the last task
For example, when I was using SGE there was something like this:
| if($
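One common Slurm pattern for this is to submit a separate job that depends on the whole array (a sketch; array_job.sh and finalize.sh are placeholder names):

```shell
# Submit the array job; --parsable prints just the job id
jid=$(sbatch --parsable array_job.sh)

# afterok on an array job id waits for ALL array tasks to complete successfully
sbatch --dependency=afterok:${jid} finalize.sh
```

Alternatively, Slurm exports SLURM_ARRAY_TASK_ID and SLURM_ARRAY_TASK_MAX inside each task, so the last task can compare the two and run the extra command itself.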
Hi,
I would like to submit a job that requires 3GB. The problem is that I
have 70 nodes available, each node with 2GB memory.
So the command sbatch --mem=3G will wait for resources to become available.
Can I run sbatch and tell the cluster to use the 3GB out of the 70GB
available or is
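Note that --mem is a per-node limit, so a single-node job asking for 3GB can never start on 2GB nodes. Memory can only be spread over several nodes if the program itself runs as multiple tasks (e.g. MPI). A sketch under that assumption (my_mpi_program is a placeholder for a distributed-capable binary):

```shell
#!/bin/bash
#SBATCH --ntasks=2            # two tasks, which Slurm may place on two nodes
#SBATCH --mem-per-cpu=1500M   # ~1.5GB per task, ~3GB in total across nodes

# srun launches one copy of the (assumed MPI-capable) program per task
srun ./my_mpi_program
```

For an ordinary single-process program this does not help; its 3GB must fit in one node's RAM.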
Hi,

I have a single physical server with:

* 63 cpus (each cpu has 16 cores)
* 480Gb total memory

NodeNAME= Sockets=1 CoresPerSocket=16 ThreadsPerCore=1 Procs=63
REALMEMORY=48

This configuration will not work. What should it be?
Benjamin Redling wrote:
On 16.02.2018 at 15:28, david martin wrote:
> I have a single physical server with:
> * 64 cpus (each cpu has 16 cores)
> * 480Gb total memory
> NodeNAME= Sockets=1 CoresPerSocket=16 ThreadsPerCore=1 Procs=63
> REALMEMORY=48
> This configuration will not work.
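The posted line is internally inconsistent: 1 socket × 16 cores × 1 thread describes 16 CPUs, while Procs claims 63. Assuming the machine really has 64 cores (4 sockets of 16 cores, one thread per core) and 480GB of RAM, a slurm.conf line could look like this (the node name is a placeholder; RealMemory is given in MB):

```shell
# Hypothetical slurm.conf node definition: Sockets x CoresPerSocket x
# ThreadsPerCore must equal CPUs (4 x 16 x 1 = 64)
NodeName=myserver CPUs=64 Sockets=4 CoresPerSocket=16 ThreadsPerCore=1 RealMemory=480000 State=UNKNOWN
```

`slurmd -C` run on the node prints the topology Slurm actually detects, which is the safest source for these numbers.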