Thanks, and yeah, I thought it might be crazy. The image is just the JVM memory 
usage you get from the dashboard on the Solr admin pages; the JVM one has what 
appears to be a light grey band, then a dark grey band, then some blank space. 
Those are the numbers I referred to, if that makes sense?

Bit of quick ASCII art to represent the JVM memory usage image:

##########====----------------

As I look now:
-       Ends at 127.81GB
=       Ends at 67.30GB
#       Ends at 39.24GB

My guess was that the light grey is memory that has been used but not yet 
garbage collected, and that the total bar is equivalent to the max heap setting?
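
Incidentally, -Xmx131072m works out to 128GB, which would line up with the total 
bar ending at 127.81GB. If it helps I can cross-check the dashboard numbers with 
the JDK tools on the node (placeholder pid, and assuming jmap/jstat are available 
there):

    jmap -heap <solr-pid>      # max heap size plus current usage per generation
    jstat -gc <solr-pid> 5000  # capacity vs. used for each region, every 5000ms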

Will digest the blog.

Cheers

Si

-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: 06 October 2014 16:58
To: solr-user@lucene.apache.org
Subject: Re: Solr configuration, memory usage and MMapDirectory

First, the e-mail programs tend to strip attachments, so your screenshot didn't 
come through. You can post it up somewhere and provide a link if you still need 
us to see it.

That said....

-Xmx131072m

This is insane; you're absolutely right to focus on that first. Here's Uwe's 
excellent blog on the subject, with hints on how to read top:

http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
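
Purely to illustrate the shape of what that blog recommends (the numbers below 
are made up, you'll need to tune them against your own index and query load): 
give the JVM only the heap it actually needs and leave the rest of the 132GB for 
the OS to cache the index files, e.g.

    -Xms16g -Xmx16g

instead of 128GB. And if you want to force MMapDirectory explicitly it's a 
one-line change in solrconfig.xml:

    <directoryFactory name="DirectoryFactory" class="solr.MMapDirectoryFactory"/>

although on a 64-bit JVM the NRTCachingDirectoryFactory you have now should 
already be wrapping an MMap-based directory under the covers, so the heap size 
is the more important knob.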

Meanwhile, Shawn gave you some very good info so I won't repeat any....

On Mon, Oct 6, 2014 at 8:24 AM, Simon Fairey <sifai...@gmail.com> wrote:

> Hi
>
> I've inherited a Solr config and am doing some sanity checks before 
> making some updates; I'm concerned about the memory settings.
>
> The system has 1 index in 2 shards split across 2 Ubuntu 64-bit nodes; 
> each node has 32 CPU cores and 132GB RAM. We index around 500k files a 
> day, spread out over the day in batches every 10 minutes; a portion of 
> these, maybe 5-10%, are updates to existing content. Currently 
> MergeFactor is set to 2 and the commit settings are:
>
> <autoCommit>
>
>     <maxTime>60000</maxTime>
>
>     <openSearcher>false</openSearcher>
>
> </autoCommit>
>
> <autoSoftCommit>
>
>     <maxTime>900000</maxTime>
>
> </autoSoftCommit>
>
> Currently each node has around 25M docs with an index size of 45GB; we 
> prune the data every few weeks so it never gets much above 35M docs 
> per node.
>
> From my reading I've seen a recommendation that we should be using 
> MMapDirectory; currently it's set to NRTCachingDirectoryFactory. 
> However, the JVM is currently configured with -Xmx131072m, and for 
> MMapDirectory I've read you should use less memory for the JVM so 
> there is more available for OS caching.
>
> Looking at the JVM memory usage on the dashboard I see:
>
> [image: JVM memory usage]
>
> Not sure I understand the 3 bands; I assume 127.81 is the max, dark grey 
> is what's in use at the moment, and the light grey is allocated because 
> it was used previously but hasn't been cleaned up yet?
>
> I'm trying to understand whether this will help me pick a good value to 
> change Xmx to, i.e. say 64GB based on the light grey?
>
> Additionally, once I've changed the max heap size, is it a simple case 
> of changing the config to use MMapDirectory, or are there things I need 
> to watch out for?
>
> Thanks
>
> Si
>
>
>
