Just to put a resolution on this. I did some testing and compression
does work, but to get existing tables to compress you have to re-import
your database. So the procedure would be to:
1. Turn on compression in my.cnf following the doc.
2. mysqldump the database you want to compress
3. recreate the database and re-import the dump (see the sketch below).
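For reference, a minimal sketch of those steps, assuming the accounting
database is called slurm_acct_db (the name is a placeholder; adjust to
your setup, and stop slurmdbd before dumping):

  # 1. Enable InnoDB compression in my.cnf per the docs, then restart mysqld.
  # 2. Dump the database you want to compress:
  mysqldump slurm_acct_db > slurm_acct_db.sql
  # 3. Drop and recreate the database, then re-import so the tables are
  #    rebuilt with compression enabled:
  mysql -e "DROP DATABASE slurm_acct_db; CREATE DATABASE slurm_acct_db;"
  mysql slurm_acct_db < slurm_acct_db.sql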
Dear Loris,
Yes, it is indeed a bit odd. At least now I know that this is how SLURM
behaves and not something to do with our configuration.
Regards,
Thekla
On 9/12/21 1:04 p.m., Loris Bennett wrote:
Dear Thekla,
Yes, I think you are right. I have found a similar job on my system and
this does seem to be the normal, slightly confusing behaviour. It looks
as if the pending elements of the array get assigned a single node,
but then start on other nodes:
$ squeue -j 8536946 -O jobid,jobarray
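For completeness, the node each array task actually ran on can also be
confirmed after the fact with sacct, e.g. for the job above (the format
fields are standard sacct columns):

  sacct -j 8536946 --format=JobID,JobName,State,NodeList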
Dear Loris,
Thank you for your reply. To be honest, I don't believe there is
anything wrong with the job configuration or the node configuration.
I have just submitted a simple sleep script:
#!/bin/bash
sleep 10
as below:
sbatch --array=1-10 --ntasks-per-node=40 --time=09:00:00 test.sh
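To watch where each array element lands, the array can be expanded in
the queue so that each task appears on its own line (a sketch; -r and
the -O format fields are standard squeue options, and <jobid> is
whatever sbatch returns):

  squeue -r -j <jobid> -O jobid,jobarrayid,state,nodelist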