Re: [slurm-users] Database Compression

2021-12-09 Thread Paul Edmon
Just to put a resolution on this. I did some testing and compression does work, but to get extant tables to compress you have to reimport your database. So the procedure would be to: 1. Turn on compression in my.cnf following the doc. 2. mysqldump the database you want to compress. 3. Recreate ...
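
For anyone following the same path, a rough sketch of that dump-and-reload cycle is below. The database name slurm_acct_db is the Slurm default, but the my.cnf setting shown is only illustrative; check the Slurm accounting docs and your MariaDB/MySQL version for the exact compression option to use.

    # 1. Enable compression in my.cnf (illustrative; the exact setting depends
    #    on your MariaDB/MySQL version), e.g. under [mysqld]:
    #      innodb_file_per_table = ON
    #
    # 2. Dump the accounting database (slurm_acct_db is the Slurm default):
    mysqldump slurm_acct_db > slurm_acct_db.sql

    # Stop slurmdbd while the database is rebuilt:
    systemctl stop slurmdbd

    # 3. Drop, recreate, and reimport so the tables are rewritten with
    #    compression enabled:
    mysql -e "DROP DATABASE slurm_acct_db; CREATE DATABASE slurm_acct_db;"
    mysql slurm_acct_db < slurm_acct_db.sql

    systemctl start slurmdbd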

Re: [slurm-users] Job array start time and SchedNodes

2021-12-09 Thread Thekla Loizou
Dear Loris, Yes, it is indeed a bit odd. At least now I know that this is how SLURM behaves and not something to do with our configuration. Regards, Thekla On 9/12/21 1:04 p.m., Loris Bennett wrote: Dear Thekla, Yes, I think you are right. I have found a similar job on my system ...

Re: [slurm-users] Job array start time and SchedNodes

2021-12-09 Thread Loris Bennett
Dear Thekla, Yes, I think you are right. I have found a similar job on my system, and this does seem to be the normal, if slightly confusing, behaviour. It looks as if the pending elements of the array get assigned a single node but then start on other nodes: $ squeue -j 8536946 -O jobid,jobarray...
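
The squeue command is cut off in the archive preview. A plausible reconstruction, given the thread's subject, uses the --Format (-O) fields for the array ID, scheduled nodes, and start time; the field names below are standard squeue --Format specifiers, but the exact list Loris used is an assumption.

    $ squeue -j 8536946 -O jobid,jobarrayid,state,schednodes,starttime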

Re: [slurm-users] Job array start time and SchedNodes

2021-12-09 Thread Thekla Loizou
Dear Loris, Thank you for your reply. To be honest, I don't believe there is anything wrong with the job configuration or the node configuration. I have just submitted a simple sleep script: #!/bin/bash sleep 10 as below: sbatch --array=1-10 --ntasks-per-node=40 --time=09:00:00 test.s...
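
Pieced together from the preview (the script name test.sh is an assumption, since the archive truncates it), the test case looks like this:

    $ cat test.sh
    #!/bin/bash
    sleep 10

    $ sbatch --array=1-10 --ntasks-per-node=40 --time=09:00:00 test.sh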