Hmmm, I think you may be looking at the wrong thing here. Generally, a filterCache entry will be maxDoc/8 bytes (plus some overhead), so in your case each entry really shouldn't be all that large, on the order of 3MB per filter. That size doesn't vary with the number of docs that match the fq; it's just a bitset over all the documents in the index. To see whether that makes any sense, take a look at the admin page and the number of evictions in your filterCache. If that is > 0, you're probably already using all the memory the filterCache is going to use during the day.
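To make the arithmetic concrete, here is a quick back-of-the-envelope sketch. The 25M document count comes from the thread below; the 512-entry maxSize is a made-up figure for illustration, not anything from Pawel's config:

```python
# Rough filterCache sizing for a 25M-doc index.
# Each cached filter is stored as a bitset: one bit per document in the
# index, so the entry size is maxDoc / 8 bytes regardless of how many
# documents actually match the fq.

max_doc = 25_000_000           # documents in the index (from the thread)
entry_bytes = max_doc // 8     # one bit per doc

print(f"per-entry size: {entry_bytes / 1024**2:.1f} MiB")

# With a hypothetical filterCache maxSize of 512 entries, the
# worst-case memory footprint of the cache would be:
cache_size = 512
print(f"worst case: {cache_size * entry_bytes / 1024**3:.2f} GiB")
```

That is why a 15M-hit filter and a 5k-hit filter cost the same amount of cache memory once they are stored as bitsets.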
But you haven't indicated what version of Solr you're using; I'm going from a relatively recent 3.x knowledge base. Have you put a memory analyzer against your Solr instance to see where the memory is being used?

Best
Erick

On Wed, Jun 13, 2012 at 1:05 PM, Pawel <pawelmis...@gmail.com> wrote:
> Hi,
> I have a Solr index with about 25M documents. I tuned the filterCache size
> to get the best performance, given the traffic characteristics my Solr
> handles. I see that the only way to limit the size of the filterCache is
> to set the number of document sets that Solr can cache. There is no way to
> set a memory limit (e.g. 2GB, 4GB, or something like that). When I process
> standard traffic (during the day) everything is fine. But when Solr
> handles night traffic (and the characteristics of the requests change),
> some problems appear: the JVM runs out of memory. I know what the reason
> is. Some filters on some fields are quite poor filters. They return 15M
> documents or even more. You could say 'Just put that into q'. I tried to
> put those filters into the "query" part, but then the request processing
> time statistics (during the day) became much worse. Reducing the
> filterCache maxSize is also not a good solution, because during the day
> cached filters are very, very helpful.
> You may be interested in the type of filters I use. These are range
> filters (I tried standard range filters and frange), e.g. price:[* TO
> 10000]. Some fq's with price can return a few thousand results (e.g.
> price:[40 TO 50]), but some (e.g. price:[* TO 10000]) can return millions
> of documents. I'd also like to avoid a solution which introduces fixed
> ranges the user must choose from.
> Do you have any suggestions what I can do? Is there any way to limit, for
> example, the maximum size of a docSet cached in the filterCache?
>
> --
> Pawel
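For reference, the two patterns being compared in the quoted message (range restriction as a cached fq vs. folded into q) look roughly like this as HTTP requests; the host, port, and core name are placeholders, not anything from the thread:

```shell
# Range filter as an fq: the matching docSet is cached in the filterCache
# (host, port, and core name are placeholders).
curl 'http://localhost:8983/solr/collection1/select?q=*:*&fq=price:[*+TO+10000]'

# The same restriction folded into q instead: nothing is added to the
# filterCache, but the main query does more work per request, which is
# the daytime slowdown described above.
curl 'http://localhost:8983/solr/collection1/select?q=price:[*+TO+10000]'
```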