Whoops. I made some mistakes in the previous post.

Toke Eskildsen [t...@statsbiblioteket.dk]:

> Extrapolating from 1.4M documents and 180 clients, let's say that
> there are 1.4M/180/5 unique terms for each sort-field and that their
> average length is 10. We thus have
> 1.4M*log2(1500*10*8) + 1500*10*8 bit ~= 23MB
> per sort field or about 4GB for all the 180 fields.

That average length of 10 is in bytes and thus 80 bits. The results were correct, though.
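
In case anyone wants to plug their own numbers into the formula, here is a quick
sketch. The class and variable names are just for illustration, and since this
correction is about bytes versus bits, it prints both units:

// Sketch of the quoted per-sort-field estimate. Adjust the inputs for your own index.
public class SortFieldEstimate {
    public static void main(String[] args) {
        long docs = 1_400_000;              // documents in the index
        long uniqueTerms = docs / 180 / 5;  // ~1500 unique terms per sort field
        long avgTermBytes = 10;             // average term length: 10 bytes = 80 bits

        long termDataBits = uniqueTerms * avgTermBytes * 8;  // 1500*10*8 bit of term data
        double pointerBits = docs * log2(termDataBits);      // log2(1500*10*8) bit per document
        double perFieldBits = pointerBits + termDataBits;

        System.out.printf("per field : %.1f Mbit = %.1f MB%n",
                perFieldBits / 1e6, perFieldBits / 8 / 1e6);
        System.out.printf("180 fields: %.1f Gbit = %.2f GB%n",
                180 * perFieldBits / 1e9, 180 * perFieldBits / 8 / 1e9);
    }

    static double log2(double x) { return Math.log(x) / Math.log(2); }
}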

> So 1 active searcher and 2 warming searchers. Ignoring that one of
> the warming searchers is highly likely to finish well ahead of the other
> one, that means that your heap must hold 3 times the structures for
> a single searcher.

This should be taken with a grain of salt, as it depends on whether any segments are
re-used between the searchers. There might be such re-use for the sorting structures.
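
To make the worst case concrete, using the quoted ~4GB-per-searcher figure and the
factor of 3 from the second excerpt (again, the class is just for illustration):

// Worst case: one live searcher plus two warming searchers, no re-use between them.
public class WarmingHeapEstimate {
    public static void main(String[] args) {
        double perSearcherGB = 4.0;  // ~4GB of sort structures per searcher (from the quote)
        int searchers = 1 + 2;       // 1 live + 2 warming
        System.out.printf("worst-case heap for sort structures: ~%.0f GB%n",
                searchers * perSearcherGB);
    }
}

If the sort structures are built per segment, a warming searcher that shares most of
its segments with the live one should only pay for the new or changed segments, which
would be the kind of re-use meant above.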

Apologies for any confusion,
Toke Eskildsen
