With frequent commits, autowarming isn’t very useful. Even with a daily bulk 
update, I use explicit warming queries.

For our textbooks collection, I configure the twenty top queries and the twenty 
most common words in the index as warming queries. Neither list changes much. 
If we used facets, I’d warm those, too.
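
Roughly, that kind of warming lives in newSearcher/firstSearcher listeners in 
solrconfig.xml. A minimal sketch (the queries and the facet field name are 
placeholders; substitute your own query log and schema):

<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <!-- a few of the top user queries (placeholders) -->
    <lst><str name="q">introduction to biology</str><str name="rows">10</str></lst>
    <lst><str name="q">calculus</str><str name="rows">10</str></lst>
    <!-- a few of the most common index terms (placeholders) -->
    <lst><str name="q">student</str><str name="rows">10</str></lst>
    <!-- if you facet, warm the underlying facet structures too -->
    <lst>
      <str name="q">*:*</str>
      <str name="rows">0</str>
      <str name="facet">true</str>
      <str name="facet.field">subject</str>
    </lst>
  </arr>
</listener>
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">*:*</str><str name="rows">10</str></lst>
  </arr>
</listener>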

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)


> On Sep 19, 2017, at 12:18 AM, Toke Eskildsen <t...@kb.dk> wrote:
> 
> On Mon, 2017-09-18 at 20:47 -0700, shamik wrote:
>> I did bring the heap size down to 8 GB, changed to G1, and reduced the
>> cache params. The memory has been holding up so far, but I will wait
>> a while before passing judgment.
> 
> Sounds reasonable.
> 
>> <filterCache class="solr.FastLRUCache" size="256" initialSize="256"
>> autowarmCount="0"/>
> [...]
> 
>> The change seems to have increased the number of slow queries (1000
>> ms), but I'm willing to prioritize fixing the OOM over performance at
>> this point.
> 
> You over-compensated by switching from an enormous cache with excessive
> warming to a small cache with no warming. Try setting autowarmCount to
> 20 or something like that. Also make an explicit warming query that
> facets on all your facet-fields, to initialize the underlying
> structures.
> 
>> One thing I realized is that I provided the wrong index size earlier.
>> It's 49 GB instead of 25; I mistakenly reported the size of a single shard.
> 
> Quite independently of all this, your index is not a large one; it
> might work better for you to store it as a single shard (with
> replicas), to avoid the overhead of the distributed processing needed
> for multi-shard requests. The overhead is especially visible when
> doing a lot of String faceting.
> 
>> I hope the heap size will continue to be sufficient for the index size. 
> 
> You can check the memory usage in the admin GUI.
> 
> - Toke Eskildsen, Royal Danish Library
> 
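
For the cache side of Toke's suggestion, a minimal sketch (the sizes are 
placeholders; watch the hit ratio and eviction counts in the admin GUI and 
tune from there):

<filterCache class="solr.FastLRUCache" size="512" initialSize="512"
             autowarmCount="20"/>

The explicit facet-warming query he mentions would go in a newSearcher 
listener like the sketch above.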
