I wrote:
> > I'm glad you're looking at speeding up the linker.  Please make sure to
> > look at memory consumption as well, since performance falls off a cliff
> > once the working set exceeds physical memory.  A good test would be to
> > bootstrap gcc on a machine with 256M, or one that is artificially limited to
> > 256M (I seem to recall that you can tell a Linux kernel you have only a
> > given amount of memory).
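
(For the record, the knob I had in mind is the mem= kernel boot
parameter; booting with, say, mem=256M should cap the RAM the kernel
will use, though check the documentation for your kernel version.)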

On Wed, May 04, 2005 at 09:17:19AM -0700, H. J. Lu wrote:
> Given the current hardware, I would say 512MB is a reasonable number.
> If I need more than 256M to get a 5% speedup, I will take it.

We're not talking about 5% speedup; if the linker starts thrashing because
of insufficient memory you pay far more than that.  And certainly anyone
with an older computer who is dissatisfied with its performance, but
doesn't have a lot of money, should look into getting more memory before
anything else.  Still, the GNU project shouldn't be telling people in the
third world with cast-off machines that they are out of luck; to many of
them, 256M is more than they have.

So the basic issue is this: why the hell does the linker need so much
memory?  Sure, if you have tons available, it pays to trade memory for
time, mmap everything, then build all the hashes you want to look up
relationships in every direction.  But if it doesn't really fit, it's
a big lose.  Ideally ld, ar and the like could detect and adapt if there
isn't enough physical memory to hold everything.
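
To make that last point concrete, here is a rough sketch of the kind of
check I mean; this is not anything in the current ld sources, and the
names and the working-set estimate are made up:

#include <stdio.h>
#include <unistd.h>

/* Return physical memory in bytes, or -1 if it can't be determined.
   _SC_PHYS_PAGES is a glibc extension; on other systems this just
   reports "unknown" and the caller keeps its current behavior.  */
static long long
physical_memory_bytes (void)
{
  long pages = sysconf (_SC_PHYS_PAGES);
  long page_size = sysconf (_SC_PAGE_SIZE);
  if (pages < 0 || page_size < 0)
    return -1;
  return (long long) pages * page_size;
}

int
main (void)
{
  /* Hypothetical estimate of how much the link keeps live at once;
     a real linker would compute this from the sizes of its inputs.  */
  long long working_set = 200LL * 1024 * 1024;
  long long phys = physical_memory_bytes ();

  if (phys > 0 && working_set > phys)
    printf ("working set exceeds physical memory: use a streaming pass\n");
  else
    printf ("fits in RAM: mmap the inputs and build hash tables up front\n");
  return 0;
}

Where the check can't be made, nothing is lost: the tool simply behaves
as it does today.  Where it can, it would pick the strategy that avoids
thrashing instead of assuming everything fits.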
