I think memory footprint doesn't really matter as long as it stays far
below the RAM normally available...
But the *time required to ``fork''* could matter:
# Run "$@" 1000 times, print total real ( user + sys ) time,
# and warn if the last result is not 12.
QuickTest() {
    local TIMEFORMAT='%R ( %U + %S )'
    printf "%-10s: " "${1##*/}"
    time for i in {1..1000}; do
        res=$("$@")
    done
    [ "$res" != "12" ] && echo "WARN: $1: '$res' != 12."
}
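For a quick standalone check, the function can be called directly on one
interpreter; the command only has to print 12 to keep the sanity check
quiet (timings will of course vary from host to host):

QuickTest /bin/dash -c 'echo $((3*4))'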
CacheHostName=$(</etc/hostname)  # Pre-cache this file for sed
for cmd in /bin/{"busybox ",ba,da}"sh -c 'echo $((3*4))'" \
           {"/usr/bin/perl -e ","/usr/bin/python -c "}"'print 4*3'" \
           "/usr/bin/awk 'BEGIN{print 3*4}' /dev/null" \
           "/bin/sed -e 's/.*/12/' /etc/hostname"; do
    . <(echo QuickTest $cmd) 2>&1
done |
    sort -nk 3
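Each brace expansion builds one command string per interpreter, and sourcing
the output of "echo QuickTest $cmd" lets bash re-parse the quotes embedded in
that string; the first pass, for example, ends up running:

QuickTest /bin/busybox sh -c 'echo $((3*4))'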
After ~23 seconds, the loop above prints (on my host):
dash : 1.667 ( 1.206 + 0.519 )
busybox : 1.868 ( 1.264 + 0.665 )
sed : 2.150 ( 1.364 + 0.841 )
bash : 2.201 ( 1.520 + 0.719 )
perl : 2.608 ( 1.679 + 0.978 )
awk : 2.662 ( 1.666 + 1.041 )
python : 9.881 ( 6.867 + 3.052 )
Of course, all of this was run without loading any modules, so these times
represent something like the *minimal cost of forking to each interpreter*.
They say nothing about efficiency in real use, for example as a filter over
a big volume of data, for doing complex calculations, etc...
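To illustrate that last point, here is a minimal sketch (the one-million-line
input and the one-liners are my own arbitrary choice, not something measured
for this mail): once a single process is started, the fork cost is amortized
and the ranking may look quite different:

# Time one long-running filter summing a million integers from seq.
SumTest() {
    local TIMEFORMAT='%R ( %U + %S )'
    printf "%-10s: " "${1##*/}"
    time seq 1000000 | "$@" >/dev/null
}
SumTest /usr/bin/awk '{s+=$1} END{print s}'
SumTest /usr/bin/perl -ne '$s+=$_; END{print "$s\n"}'
SumTest /usr/bin/python -c 'import sys; print(sum(int(l) for l in sys.stdin))'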
On Wed, Mar 24, 2021 at 10:52:16PM -0400, Dale R. Worley wrote:
> Andreas Schwab <[email protected]> writes:
> > $ size /usr/bin/perl /bin/bash
> >    text  data   bss     dec    hex filename
> > 2068661 27364   648 2096673 1ffe21 /usr/bin/perl
> > 1056850 22188 61040 1140078 11656e /bin/bash
> $ size /usr/bin/perl /bin/bash
>   text  data   bss    dec   hex filename
>   8588   876     0   9464  24f8 /usr/bin/perl
> 898672 36064 22840 957576 e9c88 /bin/bash
>
> I do suspect that's because perl is using more loadable modules than
...
--
Félix Hauri - <[email protected]> - http://www.f-hauri.ch