Dynamic fields don’t make any difference, they’re just like fixed fields as far
as merging is concerned.
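For instance, a dynamic field declaration like the one below (a generic
example, not taken from your schema) only pattern-matches field names at
index time; once a document actually uses it, the resulting field is indexed
and merged exactly like an explicitly declared field:

<dynamicField name="*_txt"
              type="text_general"
              indexed="true"
              stored="true"/>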

So this is almost certainly merging being kicked off by your commits. The more
documents and the more terms you have, the more work Lucene has to do during a
merge, so I suspect this is just how things work.
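
For what it's worth, the common pattern for keeping commits cheap is to let
hard commits run automatically with openSearcher=false and rely on soft
commits for visibility. Something along these lines in solrconfig.xml (the
intervals here are only illustrative, not a prescription for your setup):

<autoCommit>
  <maxTime>300000</maxTime>          <!-- hard commit every 5 minutes -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>60000</maxTime>           <!-- soft commit for search visibility -->
</autoSoftCommit>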

I’ll add parenthetically that your cache settings, while not adding to this
problem, are suspiciously high. filterCache in particular can take up maxDoc/8
bytes _per entry_, and you allow up to 2048 entries. I’d recommend you think
about reducing the size here while monitoring your hit ratio.
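To put a rough number on it: with, say, 10 million documents (an assumed
figure, just for illustration), each filterCache entry is a bitset of roughly
10,000,000 / 8 ≈ 1.2 MB, so 2048 entries could pin down around 2.5 GB in the
worst case. Something smaller, for instance:

<filterCache class="solr.FastLRUCache"
              size="256"
              initialSize="256"
              autowarmCount="0"/>

and then watch the hit ratio in the admin UI before shrinking it further.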

Oh, and if you use NOW in filter clauses, that’s an anti-pattern, see:

https://dzone.com/articles/solr-date-math-now-and-filter
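
The short version: a raw NOW changes every millisecond, so each request
produces a filter that can never be reused from the filterCache. Rounding the
date math lets the cache actually work, e.g. (timestamp here is just a
placeholder field name):

fq=timestamp:[NOW-1DAY TO NOW]             <- new cache entry every request
fq=timestamp:[NOW/DAY-1DAY TO NOW/DAY]     <- one reusable entry per day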

Best,
Erick

> On Jun 18, 2019, at 8:20 AM, Venu <thotavenumadhav...@gmail.com> wrote:
> 
> Thanks Erick. 
> 
> I see the above pattern only at the time of commit.
> 
> I have many fields (around 250, of which around 100 are dynamic fields,
> plus around 3 n-gram fields and text fields; many of them are stored as
> well as indexed). Will a merge take a lot of time in this kind of case?
> I mean, is it CPU intensive because of the many dynamic fields or because
> of the huge amount of data?
> 
> Also, I am doing a hard commit every 5 minutes and openSearcher is true
> in my case. I am not doing a soft commit.
> 
> And below are the configurations for the filter, query and document caches.
> Should I try reducing initialSize?
> 
> <filterCache class="solr.FastLRUCache"
>                 size="2048"
>                 initialSize="512"
>                 autowarmCount="0"/>
> <documentCache class="solr.LRUCache"
>                   size="2048"
>                   initialSize="512"
>                   autowarmCount="0"/>
> <queryResultCache class="solr.LRUCache"
>                      size="2048"
>                      initialSize="512"
>                      autowarmCount="0"/>
> 
> 
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
