In case anyone is interested, I made the memory changes as well as two
changes to the GC flags:
-XX:ParallelGCThreads 8 -> 20
-XX:ConcGCThreads 4 -> 5
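For context, flags like these usually go into Solr's solr.in.sh via the GC_TUNE variable. A sketch of what the changed settings might look like there (the flag names are real HotSpot options; the surrounding collector choice and layout are assumptions, not the poster's actual config):

```shell
# solr.in.sh (sketch only -- collector and other flags assumed)
GC_TUNE="-XX:+UseConcMarkSweepGC \
  -XX:ParallelGCThreads=20 \
  -XX:ConcGCThreads=5"
```

ParallelGCThreads controls the stop-the-world worker threads, ConcGCThreads the concurrent-phase threads; raising them only helps if the machine has cores to spare.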
old:
https://gceasy.io/diamondgc-report.jsp?p=c2hhcmVkLzIwMTkvMTIvNi8tLXNvbHJfZ2MubG9nLjAuY3VycmVudC0tMTQtMjEtMTA=&channel=WEB
now:
https://gceasy.io/diamondgc-re
Thank you guys, this has been educational. I uploaded the logs up to now; the
server was restarted after adding the extra memory, so
https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTkvMTIvNi8tLXNvbHJfZ2MubG9nLjAuY3VycmVudC0tMTQtMjEtMTA=&channel=WEB
is what I'm looking at. Tuning the JVM is new to me, so
A replication shouldn't have consumed that much heap. It's mostly I/O, just a
write-through. If replication really consumes huge amounts of heap, we need to
look at that more closely. Personally I suspect/hope it's coincidental, but
that's only a guess. You can attach jconsole to the running proc
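Attaching jconsole (or the command-line jstat) is straightforward when the JDK tools are on the path; the PID below is hypothetical:

```shell
jps -l                  # list local JVM PIDs to find the Solr process
jconsole 12345          # attach the GUI monitor to that PID
jstat -gcutil 12345 5s  # print heap/GC utilization every 5 seconds
```

jstat is handy on headless servers where the jconsole GUI isn't an option.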
Actually, at about that time the replication finished and added about 20-30 GB to
the index from the master. My current setup goes:
indexing master -> indexer slave/production master (only replicated on
command) -> three search slaves (replicate every 15 minutes)
We added about 2.3m docs, then I re
On 12/5/2019 12:57 PM, David Hastings wrote:
That probably isn't enough data, so if you're interested:
https://gofile.io/?c=rZQ2y4
The previous one was less than 4 minutes, so it doesn't reveal anything
useful.
This one is a little bit less than two hours. That's more useful, but
still pret
Hi David,
Your Xmx seems to be overkill, though without usage stats this cannot be
verified. I think you should analyze the long GC pauses, given that you have so
much difference between the min and max. I prefer making the min and max the
same before stressing about the exact values. You can start with 20G but wha
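As an aside, GC logs like the solr_gc.log being analyzed here are enabled by Solr's start script; on Java 8 the relevant flags look roughly like the following (approximate, from memory -- exact flags and the log path vary by Solr version and install):

```shell
# Fragment of what bin/solr passes to the JVM on Java 8 (approximate)
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
-XX:+PrintGCApplicationStoppedTime \
-Xloggc:/var/solr/logs/solr_gc.log \
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M
```

The rotation flags explain the `.0.current` suffix seen in the uploaded log name.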
and if this may be of use:
https://imgur.com/a/qXBuSxG
just been more or less winging the options since Solr 1.3
On Thu, Dec 5, 2019 at 2:41 PM Shawn Heisey wrote:
> On 12/5/2019 11:58 AM, David Hastings wrote:
> > as of now we do an xms of 8gb and xmx of 60gb, generally through the
> > dashbo
That probably isn't enough data, so if you're interested:
https://gofile.io/?c=rZQ2y4
On Thu, Dec 5, 2019 at 2:52 PM David Hastings
wrote:
> I know there's no hard answer, and I know the Xms and Xmx should be the
> same, but it was a set-it-and-forget-it sort of thing from years ago. I
> will def
I know there's no hard answer, and I know the Xms and Xmx should be the
same, but it was a set-it-and-forget-it sort of thing from years ago. I
will definitely be changing it, but figured I may as well learn as
much as possible from this user group resource.
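The min/max change discussed here is usually a one-line edit in solr.in.sh; SOLR_HEAP sets -Xms and -Xmx to the same value (the 20g figure below is an illustration, not a recommendation):

```shell
# solr.in.sh: SOLR_HEAP sets both -Xms and -Xmx to the same value
SOLR_HEAP="20g"
# equivalent to: SOLR_JAVA_MEM="-Xms20g -Xmx20g"
```

Setting them equal avoids heap resize pauses and makes GC reports easier to read, since the committed heap no longer moves around.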
as far as the raw GC data goes:
http
On 12/5/2019 11:58 AM, David Hastings wrote:
As of now we do an Xms of 8 GB and Xmx of 60 GB; generally through the
dashboard the JVM hangs around 16 GB. I know Xms and Xmx are supposed to be
the same, so that's change #1 on my end. I am just concerned about dropping
it from 60, as thus far over the