[ https://issues.apache.org/jira/browse/SOLR-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959513#comment-16959513 ]

Bernd Wahlen edited comment on SOLR-13862 at 10/25/19 7:45 AM:
---------------------------------------------------------------

Thanks for all the advice and explanations. I know the 50% rule of thumb, and there 
are 128 GiB of memory, but the small/large pointer issue is new to me - I will 
try reducing the heap to 31 GiB on one node and compare after my holiday. We never 
tried 31 GiB; I tested only two setups, 64 GiB memory with a 32 GiB heap and 
128 GiB memory with a 64 GiB heap, and the latter performed better, which is why 
the heap is that large.
We have been using Solr with Shenandoah in production since JDK 8 (Shipilev 
builds), and with JDK 11 and JDK 12, without any issues.
I had the same experience with Shenandoah: overhead/CPU usage is higher, but 
throughput is not relevant for us (load is normally low, and we need 3 nodes for 
reliability anyway).
With G1 (and older GCs) we had some pauses (1-2 seconds on Solr after 
optimization) producing many more slow requests (500-1000 slow requests per 
1 million requests, nearly 0 with Shenandoah).
What do you think about ZGC instead of Shenandoah?
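For reference, switching the GC_TUNE line quoted below from Shenandoah to ZGC on JDK 13 would be a one-line change - a sketch only, not a tested recommendation; note that ZGC was still experimental before JDK 15:

```shell
# solr.in.sh sketch: ZGC instead of Shenandoah on JDK 13.
# ZGC is experimental before JDK 15, hence the unlock flag.
GC_TUNE="-XX:+UnlockExperimentalVMOptions -XX:+UseZGC"
# Note: ZGC uses 64-bit colored pointers, so the compressed-oops
# benefit of a sub-32 GiB heap does not apply to it.
```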
You can close the issue if you like, and I accept that our configuration is 
unusual.



> JDK 13 stability/recovery problems
> ----------------------------------
>
>                 Key: SOLR-13862
>                 URL: https://issues.apache.org/jira/browse/SOLR-13862
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>    Affects Versions: 8.2
>            Reporter: Bernd Wahlen
>            Priority: Major
>
> after updating my cluster (CentOS 7.7, Solr 8.2, JDK 12) to JDK 13 (3 nodes, 4 
> collections, 1 shard) everything was running well (with lower p95) for some 
> hours. Then 2 nodes (not the leader) went into recovery state with "Recovery 
> failed Error opening new searcher". I tried a rolling restart of the cluster, 
> but recovery did not work. After I switched to JDK 11, recovery worked again. 
> In summary, JDK 11 and JDK 12 ran stably; JDK 13 did not.
> This is my solr.in.sh:
> GC_TUNE="-XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC"
>  SOLR_TIMEZONE="CET"
>  
> GC_LOG_OPTS="-Xlog:gc*:file=/var/log/solr/solr_gc.log:time:filecount=9,filesize=20M:safepoint"
> I also tried ADDREPLICA during my attempt to repair the cluster, which 
> caused an OutOfMemory error on JDK 13 and worked after going back to JDK 11.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
