Hi Shamik,
Please see inline comments/questions.
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> On 24 Oct 2017, at 07:41, shamik wrote:
>
> Thanks Emir and Zisis.
>
> I added the maxRamMB for filterCache and reduced the size.
Thanks Emir and Zisis.
I added the maxRamMB for filterCache and reduced the size. I could see the
benefit immediately: the hit ratio went up to 0.97. Here's the configuration:
It seemed to be stable for a few days; the cache hits and JVM pool utilization
seemed to be well within the expected range. But th
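(The configuration snippet itself did not survive in the archive. A minimal
sketch of a filterCache entry with maxRamMB in solrconfig.xml, with
illustrative values rather than the actual ones from this thread:

  <filterCache class="solr.FastLRUCache"
               size="512"
               initialSize="512"
               autowarmCount="128"
               maxRamMB="200"/>

With maxRamMB set, FastLRUCache evicts based on the estimated RAM footprint
of the cached entries rather than only on the entry count.)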
You mentioned that you are on v. 6.6, but in case someone else uses this, just
to add that maxRamMB is added to FastLRUCache in version 6.4.
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> On 23 Oct 2017, shamik wrote:
> I was not aware of maxRamMB parameter, looks like it's only available for
> queryResultCache. Is that what you are referring to? Can you please share
> your cache configuration?
I've set up the filterCache entry inside solrconfig.xml as follows:
[configuration elided]
I had a look inside the FastLRUCache code
Hi Shamik,
I agree that your filter cache is not the reason for OOMs. Can you confirm
that your fieldCache and fieldValueCache sizes are not consuming too much
memory?
The next on the list would be some heavy faceting with pivots, but you
mentioned that all fields are low cardinality. Do you see
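(One way to confirm, assuming Solr 6.4+ and the default port, is the Metrics
API; this URL is a sketch:

  curl "http://localhost:8983/solr/admin/metrics?group=core&prefix=CACHE.searcher"

which reports hit ratio and current size for each of the searcher caches.)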
Zisis, thanks for chiming in. This is really interesting information and
probably in line with what I'm trying to fix. In my case, the facet fields
are certainly not high-cardinality ones. Most of them have a finite set of
values, the max being 200 (though it has a low usage percentage). Earlier I
had fac
Thanks Eric, in my case, each replica is running on its own JVM, so even if
we consider 8 GB of filter cache, it still has 27 GB to play with. Isn't that
a decent amount of memory to handle the rest of the JVM operations?
Here's an example of the implicit filters that get applied to almost all the
queries:
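(The filter examples were truncated in the archive. As an illustration only,
with made-up field names, implicit filters in Solr normally arrive as fq
parameters:

  fq=language:english
  fq=accesslevel:public
  fq=source:(help OR forum)

Each distinct fq string is cached as its own filterCache entry, so a wide
variety of such combinations inflates the cache quickly.)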
I'll post my experience too, as I believe it might be related to the low
filterCache hit ratio issue. Please let me know if you think I'm off topic
here and I'll create a separate thread.
I've run search stress tests on 2 different Solr 6.5.1 installations,
sending distributed search queries with facets (
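(The query details were cut off. A sketch of the kind of distributed faceted
request meant here, with a hypothetical collection and field name:

  http://localhost:8983/solr/products/select?q=*:*&rows=0&facet=true&facet.field=brand&facet.limit=100

In SolrCloud such a request fans out to all shards and the per-shard facet
counts are merged, with a refinement round trip when needed.)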
Once you hit an OOM, the behavior of Java is indeterminate. There's no
expectation that things will just pick up where they left off when
memory is freed up. Lots of production systems have OOM killer
scripts that automatically kill/restart Java apps that OOM for just
that reason.
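(Solr itself ships with this pattern on Linux: the start script registers an
OnOutOfMemoryError hook roughly along these lines, path and arguments being
illustrative:

  -XX:OnOutOfMemoryError="/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs"

oom_solr.sh simply kills the JVM hard so a supervisor such as systemd can
start a fresh, healthy process instead of one limping along after the OOM.)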
Yes, each replica
Thanks Emir. The index is equally split between the two shards, each having
approx. 35 GB. The total number of documents is around 11 million, which
should be distributed equally among the two shards. So each core should take
3 GB of the heap for a full cache. Not sure I get the "multiply it by number
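(The arithmetic behind that 3 GB figure, for reference:

  11,000,000 docs / 2 shards ≈ 5,500,000 docs per core
  5,500,000 bits / 8         ≈ 0.69 MB per cached filter
  0.69 MB × 4096 entries     ≈ 2.8 GB per core for a full filterCache

If one JVM hosts several cores/replicas, that worst case applies per core,
which is presumably the "multiply it by number of cores" point quoted above.)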
Hi Shamik,
I am pleased to see you find SPM useful!
I think that your problems might be related to caches exhausting your memory.
You mentioned that your index is 70GB, but how many documents does it have?
Remember that filter caches can take up to 1 bit/doc per entry. With a 4096
filter cache size it means that f
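(In general the worst case is roughly size × maxDoc / 8 bytes per core, since
each cached filter can be a bitset holding one bit per document in the index.)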