If a filter-cache entry of roughly 125 MB (one bit per document, for one billion
documents) could push Solr out of memory, it would already have happened during
the initial upload of the Solr documents. Imagine the amount of memory you need
for one billion documents.
A filter cache would be the least of your problems. 125 MB per entry is small in
comparison to the entire Solr index.
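
For a rough sense of scale, here is a quick back-of-the-envelope sketch (plain
Java, purely illustrative; the class name is made up):

// Worst-case size of a single filterCache entry stored as a plain bitmap:
// one bit per document in the index.
public class FilterCacheMath {
    public static void main(String[] args) {
        long maxDoc = 1_000_000_000L;     // one billion documents
        long bytesPerEntry = maxDoc / 8;  // 125,000,000 bytes
        System.out.printf("bitmap per cached filter: ~%d MB%n", bytesPerEntry >> 20);
    }
}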

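Regarding the two guesses in the mail below: Lucene itself ships both kinds of
structures, so the difference in heap cost is easy to see. The sketch below is
not Solr's filterCache code, only an illustration using Lucene's FixedBitSet and
RoaringDocIdSet utility classes (it assumes lucene-core on the classpath):

import org.apache.lucene.util.FixedBitSet;
import org.apache.lucene.util.RoaringDocIdSet;

// Compares the heap cost of a dense bitmap vs. a roaring-style doc id set
// for a sparse filter result over one billion documents.
public class DocSetSizes {
    public static void main(String[] args) {
        int maxDoc = 1_000_000_000;                     // needs ~150 MB of heap

        FixedBitSet dense = new FixedBitSet(maxDoc);
        RoaringDocIdSet.Builder sparse = new RoaringDocIdSet.Builder(maxDoc);
        for (int doc = 0; doc < maxDoc; doc += 1000) {  // filter matches 1 in 1000 docs
            dense.set(doc);
            sparse.add(doc);                            // docs added in increasing order
        }

        System.out.printf("FixedBitSet:     %d MB%n", dense.ramBytesUsed() >> 20);
        System.out.printf("RoaringDocIdSet: %d MB%n", sparse.build().ramBytesUsed() >> 20);
    }
}

The dense bitmap stays at the full ~120 MB no matter how few documents match,
while the roaring set for a one-in-a-thousand filter should come out in the low
single-digit MB range.
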
> On 17 Feb 2020, at 10:13, Hongxu Ma <inte...@outlook.com> wrote:
> 
> Hi
> I want to understand the internals of the Solr filter cache, especially its memory usage.
> 
> I found these pages via Google:
> https://teaspoon-consulting.com/articles/solr-cache-tuning.html
> https://lucene.472066.n3.nabble.com/Solr-Filter-Cache-Size-td4120912.html 
> (Erick Erickson's answer)
> 
> All of them say the structure is: fq => a bitmap (one bit per document in the
> index), but I think it cannot be that simple. Reason:
> With a total of 1 billion documents, each filter cache entry would use about
> 125 MB (1,000,000,000 bits / 8). That seems very large and would make it easy to
> drive Solr out of memory, yet my 1-billion-doc cluster appears to work fine.
> 
> I also checked a Solr node, but could not find the details (I only saw that a
> DocSet structure is used).
> 
> So far, my guesses are:
> 
>  *   it degenerates into a doc id array/list when the bitmap is sparse
>  *   it uses some compressed bitmap, e.g. roaring bitmaps
> 
> Which one is correct? Or is there another answer? Thank you very much!
> 
