On 11/20/2017 6:17 PM, Walter Underwood wrote:
When I ran load benchmarks with 6.3.0, an overloaded cluster would get super 
slow but keep functioning. With 6.5.1, we hit 100% CPU, then start getting 
OOMs. That is really bad, because it means we need to reboot every node in the 
cluster.

Also, the JVM OOM hook isn’t running the process killer (JVM 1.8.0_121-b13). 
Using the G1 collector with the Shawn Heisey settings in an 8G heap.
<snip>
This is not good behavior in prod. The process goes to the bad place, then we 
need to wait until someone is paged and kills it manually. Luckily, it usually 
drops out of the live nodes for each collection and doesn’t take user traffic.

There was a bug where the OOM killer script wasn't working because the arguments that enable it were in the wrong place on the startup commandline. It was fixed in 5.5.1 and 6.0, long before 6.3.0.

https://issues.apache.org/jira/browse/SOLR-8145

If the scripts that you are using to start Solr originated with a much older version of Solr than the one you are currently running, you may have those arguments in the wrong order, as the sketch below illustrates.
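
To show why placement matters, here is a simplified sketch (not the actual bin/solr launch line; paths and the port are placeholders): any -XX option that comes after "-jar start.jar" is handed to the application as a plain argument, so the JVM never registers the OOM hook.

  # Correct: the JVM option appears before -jar, so the hook is registered.
  java -Xmx8g \
    -XX:OnOutOfMemoryError="/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs" \
    -jar start.jar

  # Broken (the SOLR-8145 symptom): everything after "-jar start.jar" is
  # treated as a program argument, so the JVM never sees the option.
  java -Xmx8g -jar start.jar \
    -XX:OnOutOfMemoryError="/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs"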

Do you see the commandline argument for the OOM killer (only available on *NIX systems, not Windows) on the admin UI dashboard? If the argument is properly placed you will see it there; if it isn't, you won't. This is what it looks like for one of my Solr installs:

-XX:OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs
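
In case it's useful, here is roughly what that script does (a rough approximation, not the shipped bin/oom_solr.sh, which differs in the details): it logs the event and hard-kills the Solr instance on the given port so it can be restarted cleanly.

  #!/bin/bash
  # Approximation of bin/oom_solr.sh: log the OOM, then kill -9 the
  # Solr (Jetty) process listening on the given port.
  SOLR_PORT=$1
  SOLR_LOGS_DIR=$2

  # Find the PID of the Solr process started with -Djetty.port=<port>.
  SOLR_PID=$(ps auxww | grep start.jar | grep "jetty.port=$SOLR_PORT" \
    | grep -v grep | awk '{print $2}')

  if [ -n "$SOLR_PID" ]; then
    NOW=$(date +"%F_%H_%M_%S")
    echo "Killing Solr on port $SOLR_PORT (pid $SOLR_PID) after OutOfMemoryError" \
      >> "$SOLR_LOGS_DIR/solr_oom_killer-$SOLR_PORT-$NOW.log"
    kill -9 "$SOLR_PID"
  fi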

Something you probably already know: if you're hitting OOM, you either need a larger heap or you need to adjust the configuration so Solr uses less memory. There are no other ways to "fix" OOM problems.
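
If you do decide to raise the heap, on a standard service install that is usually done in the include script (commonly /etc/default/solr.in.sh, though the location varies by install method). A minimal example, with the 12g value purely a placeholder:

  # solr.in.sh -- adjust the heap for the Solr JVM.
  SOLR_HEAP="12g"

  # Or set min/max explicitly instead of SOLR_HEAP:
  # SOLR_JAVA_MEM="-Xms12g -Xmx12g"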

Thanks,
Shawn
