On 8/10/2015 5:05 PM, rohit wrote:
> I have just started on a new project using SolrCloud, and during my
> performance testing I have been running into OOM issues. What I
> notice most is that physical memory keeps increasing and never
> returns to its original level.
> 
> I'm indexing 10 million documents and have 4 leader nodes and 4
> replicas. I have set the Java heap to 4 GB out of the 8 GB of total RAM.
> 
> Currently using StandardDirectoryFactory and
> 
> <autoCommit>
>   <maxDocs>25000</maxDocs>
>   <maxTime>${solr.autoCommit.maxTime:1000000}</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
> <autoSoftCommit>
>   <maxDocs>25000</maxDocs>
>   <maxTime>${solr.autoSoftCommit.maxTime:1000000}</maxTime>
> </autoSoftCommit>
>
> Can anyone help me find where I'm going wrong?

If you've told Java that the max heap is 4GB, then Java cannot use more
than 4GB plus a small amount of extra overhead (probably several
megabytes).  If more than that is used, the bug is in *Java* ... not Solr.

What numbers are you looking at that show a memory problem, and where
are you looking at them?
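If the number you're watching is the resident size of the whole Solr
process as the OS reports it, compare it against what the JVM itself
says.  As a minimal sketch (a throwaway class, nothing Solr-specific),
something like this prints the heap figures that -Xmx actually bounds:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapCheck {
        public static void main(String[] args) {
            // Heap as the JVM reports it.  "max" is the -Xmx ceiling;
            // the OS process size will always be somewhat larger than
            // "committed" because of non-heap overhead.
            MemoryUsage heap =
                ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            long mb = 1024 * 1024;
            System.out.println("max heap (-Xmx): " + heap.getMax() / mb + " MB");
            System.out.println("committed:       " + heap.getCommitted() / mb + " MB");
            System.out.println("used:            " + heap.getUsed() / mb + " MB");
        }
    }

Attaching jconsole or jvisualvm to the Solr process will show you the
same numbers over time.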

It is completely normal for a computer to use all of its physical
memory.  This is simply how a modern operating system works.

https://en.wikipedia.org/wiki/Page_cache
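On Linux you can see this directly in /proc/meminfo: most of the
"missing" memory shows up as Cached (page cache), and the kernel hands
it back the moment a program needs it.  A small sketch (Linux-specific,
assumes /proc is mounted; class name is mine) that prints the relevant
lines:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class MemInfo {
        public static void main(String[] args) throws IOException {
            // MemFree looks alarmingly small on a busy box, but the
            // Cached figure is reclaimable page cache, not leaked memory.
            for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
                if (line.startsWith("MemTotal") || line.startsWith("MemFree")
                        || line.startsWith("Cached:")) {
                    System.out.println(line);
                }
            }
        }
    }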

FYI, you should use NRTCachingDirectoryFactory with Solr 4.0 and later.
It is the default in Solr's example configs.
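In solrconfig.xml that setting looks something like this (the class
name is the important part; the system property wrapper is how the
examples ship it):

    <directoryFactory name="DirectoryFactory"
                      class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>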

Thanks,
Shawn
