On Thu, 2017-10-05 at 21:56 -0700, S G wrote:
> So for large indexes, there is a chance that filterCache of 128 can
> cause bad GC.

Large indexes measured in document count, yes. Or you could argue that
a large index is likely to be served with a much larger heap and that
it will offset the increased filterCache requirements.
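To make that concrete: in the worst case each filterCache entry is a bitset with one bit per document in the index, so the heap cost scales with document count, not index size on disk. A back-of-the-envelope sketch (the document counts below are illustrative assumptions, not measurements):

```python
# Rough worst-case filterCache heap usage: each cached filter can be
# a bitset of one bit per document, i.e. maxDoc / 8 bytes per entry.

def filter_cache_bytes(max_doc: int, entries: int) -> int:
    """Worst-case bytes for a bitset-backed filterCache."""
    return entries * (max_doc // 8)

# A 100M-document index with the default 128 entries:
print(filter_cache_bytes(100_000_000, 128) / 2**30)  # ~1.5 GiB

# A 1M-document index with the same 128 entries:
print(filter_cache_bytes(1_000_000, 128) / 2**20)    # ~15 MiB
```

So the same default of 128 is close to harmless on a small-document-count index and a real GC concern on a large one.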

> And for smaller indexes, it would really not matter that much because
> well, the index size is small and probably whole of it is in OS-cache 
> anyways.

Fuzzier. You can easily have an index that is small in document count
but large in bytes (i.e. large documents), and have complex (slow)
filters.

> So perhaps a default of 64 would be a much saner choice to get the
> best of both the worlds?

Hard to say without empirical measurements. At this point it is all
hand-waving, made worse by the fact that Solr indexes differ a lot in
where the scale & complexity lie. I am told that PostgreSQL has the
same problem with default tuning parameters.

Letting the default use maxSizeMB would be better IMO. But I assume
that FastLRUCache is used for a reason, so that would have to be
extended to support that parameter first.
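As a sketch, byte-bounded sizing is already possible by switching the cache class, since solr.LRUCache accepts a maxRamMB parameter (assuming Solr 5.2 or later; FastLRUCache does not support it, which is the extension mentioned above). The numbers here are illustrative, not recommendations:

```xml
<!-- solrconfig.xml: bound the filterCache by RAM instead of entry
     count. Requires solr.LRUCache; when maxRamMB is set, the size
     attribute is not the limiting factor. -->
<filterCache class="solr.LRUCache"
             maxRamMB="64"
             initialSize="64"
             autowarmCount="16"/>
```

The trade-off is FastLRUCache's cheaper concurrent gets, which is presumably why it is the default.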


Looking much further ahead, the whole caching system would benefit from
having constraints that encompass all the shards & collections served
by the same Solr instance. Unfortunately it is a daunting task just to
figure out the overall principles for this.

- Toke Eskildsen, Royal Danish Library
