Here is a bash script which may be used to run multiple copies of
GraphicsMagick in parallel, all working on the same algorithm. The
specified total number of iterations is subdivided across a specified
number of GM processes, and any GM subcommand may be specified. This
can be useful for understanding Linux VM loading and for finding the
OpenMP options which avoid wasting precious shared VM CPU cycles
while still achieving good throughput.
The subcommand used for the example runs below is one which normally
consumes 100% of the available CPU.
Bob
% cat ./benchparallel.sh
#!/bin/bash
# Run gm in parallel with enough processes to subdivide the work to
# achieve the requested number of total iterations. Useful to
# determine OpenMP parameters in a forking-server environment.
#
# See http://gcc.gnu.org/onlinedocs/libgomp/index.html#toc_Environment-Variables
# for libgomp tunables.
#
# Usage:
#   ./benchparallel.sh total_iter nprocs [subcommand] [input_image] [output_image]
#
# Typical usage:
# ./benchparallel.sh 12 3 '-gaussian 0x1'
typeset -i i=0
typeset -i total_iter=$1
typeset -i nprocs=$2
subcommand=${3:-'-noop'}
input_image=${4:-'-size 4000x3000 tile:logo:'}
output_image=${5:-'null:'}
typeset -i iter_per_proc
let iter_per_proc=total_iter/nprocs
command="gm benchmark -iterations ${iter_per_proc} convert ${input_image} ${subcommand} ${output_image}"
echo "Executing ${nprocs}x: ${command} ..."
time (
  while ((i < nprocs))
  do
    eval "${command}" &
    let i=i+1
  done
  # Wait for all background processes to finish
  wait
)
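Note that the iteration split is plain integer division, so a total
which is not evenly divisible by nprocs silently loses the remainder
iterations. A minimal sketch of the arithmetic (pure bash, no gm
required):

```shell
# Sketch of the iteration split used by the script above; with integer
# division any remainder iterations are dropped rather than distributed.
total_iter=25
nprocs=4
let iter_per_proc=total_iter/nprocs
echo "${iter_per_proc} iterations per process"       # 6
echo "$((iter_per_proc * nprocs)) iterations total"  # 24, not 25
```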
% ./benchparallel.sh 24 4 '-gaussian 0x1'
Executing 4x: gm benchmark -iterations 6 convert -size 4000x3000 tile:logo: -gaussian 0x1 null: ...
Results: 24 threads 6 iter 59.09s user 10.04s total 0.598 iter/s (0.102 iter/s cpu)
Results: 24 threads 6 iter 58.96s user 10.07s total 0.596 iter/s (0.102 iter/s cpu)
Results: 24 threads 6 iter 59.06s user 10.08s total 0.595 iter/s (0.102 iter/s cpu)
Results: 24 threads 6 iter 58.76s user 10.09s total 0.595 iter/s (0.102 iter/s cpu)
real 0m10.100s
user 3m51.462s
sys 0m4.468s
% env OMP_NUM_THREADS=6 ./benchparallel.sh 24 4 '-gaussian 0x1'
Executing 4x: gm benchmark -iterations 6 convert -size 4000x3000 tile:logo: -gaussian 0x1 null: ...
Results: 6 threads 6 iter 57.05s user 9.81s total 0.612 iter/s (0.105 iter/s cpu)
Results: 6 threads 6 iter 56.76s user 9.83s total 0.610 iter/s (0.106 iter/s cpu)
Results: 6 threads 6 iter 56.80s user 9.83s total 0.610 iter/s (0.106 iter/s cpu)
Results: 6 threads 6 iter 57.11s user 9.99s total 0.601 iter/s (0.105 iter/s cpu)
real 0m9.990s
user 3m43.742s
sys 0m4.024s
% env OMP_NUM_THREADS=4 ./benchparallel.sh 24 4 '-gaussian 0x1'
Executing 4x: gm benchmark -iterations 6 convert -size 4000x3000 tile:logo: -gaussian 0x1 null: ...
Results: 4 threads 6 iter 54.94s user 13.88s total 0.432 iter/s (0.109 iter/s cpu)
Results: 4 threads 6 iter 55.77s user 14.18s total 0.423 iter/s (0.108 iter/s cpu)
Results: 4 threads 6 iter 55.94s user 14.23s total 0.422 iter/s (0.107 iter/s cpu)
Results: 4 threads 6 iter 55.95s user 14.23s total 0.422 iter/s (0.107 iter/s cpu)
real 0m14.242s
user 3m39.142s
sys 0m3.492s
% env OMP_NUM_THREADS=4 OMP_WAIT_POLICY=PASSIVE ./benchparallel.sh 24 4 '-gaussian 0x1'
Executing 4x: gm benchmark -iterations 6 convert -size 4000x3000 tile:logo: -gaussian 0x1 null: ...
Results: 4 threads 6 iter 56.54s user 14.36s total 0.418 iter/s (0.106 iter/s cpu)
Results: 4 threads 6 iter 56.04s user 14.46s total 0.415 iter/s (0.107 iter/s cpu)
Results: 4 threads 6 iter 56.27s user 14.55s total 0.412 iter/s (0.107 iter/s cpu)
Results: 4 threads 6 iter 56.43s user 14.55s total 0.412 iter/s (0.106 iter/s cpu)
real 0m14.560s
user 3m41.766s
sys 0m3.564s
% env OMP_NUM_THREADS=24 OMP_WAIT_POLICY=PASSIVE ./benchparallel.sh 24 1 '-gaussian 0x1'
Executing 1x: gm benchmark -iterations 24 convert -size 4000x3000 tile:logo: -gaussian 0x1 null: ...
Results: 24 threads 24 iter 237.87s user 11.30s total 2.124 iter/s (0.101 iter/s cpu)
real 0m11.306s
user 3m52.047s
sys 0m5.840s
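For comparison, the aggregate throughput of the runs above can be
computed directly from the reported per-process iter/s figures (a
quick check with plain awk arithmetic):

```shell
# Aggregate throughput of the default 4-process run vs. the single
# 24-thread process, using the iter/s figures reported above.
awk 'BEGIN {
  printf "4 processes combined:  %.3f iter/s\n", 4 * 0.598
  printf "1 process, 24 threads: %.3f iter/s\n", 2.124
}'
```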
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/