My guess is that getting rid of '-g' (i.e., debugging symbols) would be
the most profitable "optimization." My understanding is that the debug
symbols cannot be stripped from library code, so you are probably
thrashing your CPU cache unnecessarily when running Debian binary
packages (at least for I/O-bound processes that run lots of dynamically
linked code). (Does anyone have benchmark results?) If I remember
correctly, it is Debian policy to compile with '-g' and then strip
non-library binaries. I'm sure I'll get howls for suggesting it, but I
think the policy should be not to use '-g' in the stable distribution.
Cheers,
Tim
Andras BALI wrote:
> On Wed, Nov 14, 2001 at 01:53:20PM +0100, spear wrote:
> > I was wondering about the fact that some Linux distributions are
> > "optimized" for i586 processors: what does it really change? Are
> > there any benchmarks comparing a distribution that offers the choice
> > of both i386/i586?
> I don't know of any, but you may find some interesting results at
> <http://www.geocities.com/SiliconValley/Vista/6552/compila.html>.
> IIRC the average benefit is only about 5%, which may be somewhat
> higher in some cases (e.g. gzip compresses about 10-12% faster).
> In case you're interested, there's a project recompiling potato
> for the i586 at <http://debian.fsn.hu/>. The status report says that
> they've already recompiled ~85 percent of the distribution with
> Pentium optimization.
> Regards,
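For anyone wanting to reproduce that sort of gzip comparison on their
own machine, a rough sketch of the measurement (the file name is
arbitrary; a real comparison would run an i386-built and an i586-built
gzip binary over the same input):

```shell
# Generate a fixed-size test file, then time compression on it.
# Repeat with each differently-compiled gzip binary to compare;
# this sketch only shows the measurement itself.
dd if=/dev/urandom of=sample.dat bs=1024 count=4096 2>/dev/null
time gzip -9 -c sample.dat > sample.dat.gz
ls -l sample.dat sample.dat.gz
```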
--
Timothy H. Keitt
Department of Ecology and Evolution
State University of New York at Stony Brook
Stony Brook, New York 11794 USA
Phone: 631-632-1101, FAX: 631-632-7626
http://life.bio.sunysb.edu/ee/keitt/