Cool, thank you very much, Erick and Walter.
On Wed, Feb 22, 2017 at 12:32 PM, Walter Underwood
wrote:
I’ve run with 8GB for years for moderate data sets (250K to 15M docs). Faceting
can need more space.
Make -Xms equal to -Xmx. The heap will grow to the max size regardless and
you’ll get pauses while it grows. Starting at the max will avoid that pain.
Solr uses lots and lots of short-lived allocations.
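Walter's advice maps to a one-line change in the Solr startup config. A minimal sketch, assuming a standard install where `solr.in.sh` is read by `bin/solr` (the 8g value is illustrative, not a recommendation for every index):

```shell
# solr.in.sh (path varies by install, e.g. /etc/default/solr.in.sh)
# Setting -Xms equal to -Xmx means the JVM claims the full heap at
# startup, avoiding pauses while the heap grows to its max size.
SOLR_JAVA_MEM="-Xms8g -Xmx8g"
```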
Solr is very memory-intensive. 1g is still a very small heap. For any
sizeable data store people often run with at least 4G, often 8G or
more. If you facet or group or sort on fields that are _not_
docValues="true" fields you'll use up a lot of JVM memory. The
filterCache uses up maxDoc/8 bytes for each entry.
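As a rough sketch of the filterCache arithmetic above (the 15M document count is hypothetical, taken from the "moderate data sets" range mentioned earlier in the thread): each cached filter can be stored as a bitset with one bit per document, so memory per entry is maxDoc/8 bytes.

```shell
# Hypothetical index with 15M docs: one bit per doc in the bitset,
# so each filterCache entry can cost maxDoc / 8 bytes.
maxdoc=15000000
bytes_per_entry=$((maxdoc / 8))
echo "${bytes_per_entry} bytes per filterCache entry"
```

Multiply by the configured filterCache size to estimate the worst-case heap cost of the cache alone.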
Thanks Erick. It looked like garbage collection was blocking the other
processes.
I updated SOLR_JAVA_MEM to "-Xms1g -Xmx4g", since it was at the default
before and it looked like garbage collection was being triggered too
frequently.
Let's see how it goes now.
Thanks again for the support.
On Mon, Feb 20
The first place to look for something like this is garbage collection.
Are you hitting any really long stop-the-world GC pauses?
Best,
Erick
On Sun, Feb 19, 2017 at 2:21 PM, Sadheera Vithanage wrote:
> Hi Experts,
>
> I have a solr cloud node (Just 1 node for now with a zookeeper running on
> the