> >
> > IMHO there needs to be a maximum size (maybe related to the sum of
> > caches of all CPUs in the system?)
> >
> > Best would be to fix this for all large system hashes together.
> 
> How about using an algorithm like this: up to a certain "size"
> (memory size, cache size, ...), scale the hash tables linearly,
> but for larger sizes, scale logarithmically (or approximately
> logarithmically)?
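
(Just to make the quoted suggestion concrete: it would amount to something
like the userspace sketch below. The 4GB threshold, the buckets-per-MB
factor and the growth increment are all made up purely for illustration;
this is not actual kernel code, only the shape of the curve.)

#include <stdio.h>

#define LINEAR_LIMIT_MB   (4UL * 1024)   /* assumed: scale linearly up to 4GB */
#define BUCKETS_PER_MB    16UL           /* assumed linear factor */

static unsigned long hash_buckets(unsigned long mem_mb)
{
	unsigned long buckets, excess;

	if (mem_mb <= LINEAR_LIMIT_MB)
		return mem_mb * BUCKETS_PER_MB;        /* linear region */

	/* past the limit: add a fixed increment per doubling of memory */
	buckets = LINEAR_LIMIT_MB * BUCKETS_PER_MB;
	for (excess = mem_mb / LINEAR_LIMIT_MB; excess > 1; excess >>= 1)
		buckets += LINEAR_LIMIT_MB * BUCKETS_PER_MB / 4;

	return buckets;
}

int main(void)
{
	unsigned long mb;

	for (mb = 1024; mb <= 1024UL * 1024; mb *= 4)
		printf("%8lu MB -> %lu buckets\n", mb, hash_buckets(mb));
	return 0;
}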

I don't think it makes sense to continue scaling at all after some
point: the hash chains don't get any shorter, and the large hash tables
actually cause problems of their own, e.g. there are situations where we
walk the complete tables, and that takes longer and longer.

Also, does a 1TB machine really need bigger hash tables than a 100GB one?

The problem is to find out what a good boundary is.
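
For comparison, once you pick a boundary the cap itself is trivial. The
sketch below just clamps the size; the 64GB cut-off and the scale factor
are again arbitrary placeholders, not anything measured:

#include <stdio.h>

#define MAX_HASH_MB    (64UL * 1024)   /* assumed boundary: 64GB */
#define BUCKETS_PER_MB 16UL            /* assumed linear factor */

static unsigned long capped_hash_buckets(unsigned long mem_mb)
{
	if (mem_mb > MAX_HASH_MB)
		mem_mb = MAX_HASH_MB;  /* everything above the cap gets the same table */
	return mem_mb * BUCKETS_PER_MB;
}

int main(void)
{
	printf("100GB machine: %lu buckets\n", capped_hash_buckets(100UL * 1024));
	printf("  1TB machine: %lu buckets\n", capped_hash_buckets(1024UL * 1024));
	return 0;
}

With a clamp like that, the 1TB box simply ends up with the same table
as the 64GB one; the open question is still where to put the cut-off.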

-Andi