On 4/22/2019 3:19 AM, vishal patel wrote:
> -- 228634803 maxDoc of one shard [we have 26 collections in production, with 2 shards and 2 replicas]
228 million is quite a lot of documents.
Can you gather and share the screenshot described on the following wiki page?
There seem to be two Solr instances on this machine, not one. You said there's one shard ... so why would you need more than one Solr instance?
If I'm reading things correctly, it looks like your 228 million documents are about 30 gigabytes in size. They must be very small documents.
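Back-of-the-envelope, assuming that 30GB is the on-disk size of the index with 228634803 docs:

  30 GB / 228,634,803 docs ≈ 130-140 bytes per document (depending on whether that's GB or GiB)

That average covers all the Lucene index structures, not just the stored fields, so the documents themselves must be tiny.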
> I have also attached solrconfig.xml of one collection.
There are no caches -- they're all commented out. Caches are the other thing that can need large amounts of memory. Take the filterCache you have commented out: if a filterCache of size 10240 were actually enabled, then with 228 million documents that one cache alone would require more memory than you have in the whole server.
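The math on that, assuming each filterCache entry is a bitset of maxDoc bits (the usual worst case):

  228,634,803 docs / 8 bits per byte ≈ 28.5 MB per cache entry
  28.5 MB * 10240 entries ≈ 290 GB for a completely full cache

If you do re-enable it, start far smaller than 10240 and grow it only if the hit ratio justifies it. Something like this in solrconfig.xml (the sizes here are only illustrative, not tuned for your install):

  <!-- illustrative starting point, not a recommendation --
       tune size/autowarmCount from your actual cache statistics -->
  <filterCache class="solr.FastLRUCache"
               size="128"
               autowarmCount="8"/>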
Thoughts after seeing all this: even with 228 million documents, it doesn't seem like you would need a 50GB heap. But the GC log snippet seems to indicate that you do, and that even that size might not be big enough ... so I'm wondering what is using all that memory. Are you doing massively complex queries, like huge facets or grouping? If you are, you might need an even larger heap, which I think means you're going to need to run only one Solr instance on this machine, not two, so that you have additional memory available.
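If you do consolidate to one instance, the heap is set via SOLR_HEAP in solr.in.sh (solr.in.cmd on Windows). Something like the following, where the actual number has to come from analyzing your GC logs rather than from me (60g is only a placeholder, not a recommendation):

  # placeholder value -- size this from real GC log analysis
  SOLR_HEAP="60g"

Whatever memory the heap doesn't claim is not wasted -- the OS uses it to cache the index files on disk, and that matters as much as the heap itself for Solr performance.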
Thanks,
Shawn