Brett, it’s probably because you hit the 5GB default maximum segment size in
Solr, and in order for a segment to be merged away, a large share of the docs
within the segment must be marked as deleted. So even if large numbers of docs
within the segment are deleted, the segment is still there, happily taking
up space.
Greg
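One hedged sketch of reclaiming that space, assuming a local Solr and a collection named `mycollection` (both hypothetical names): issue a commit with `expungeDeletes=true`, which asks the merge policy to merge away segments with a high ratio of deleted docs.

```shell
# Hypothetical host and collection name; adjust to your cluster.
# expungeDeletes asks Lucene to merge segments with many deleted
# docs, reclaiming the space those docs still occupy.
curl 'http://localhost:8983/solr/mycollection/update?commit=true&expungeDeletes=true'
```

Note this can trigger heavy merge I/O on a large index, so it is usually run off-peak.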
On Fri, Jun 7, 2019 at 11:30 AM John Davis
wrote:
> What would be the best way to understand where heap is being used?
>
> On Tue, Jun 4, 2019 at 9:31 PM Greg Harris wrote:
>
> > Just a couple of points I’d make here. I did some testing a while back in
> > which if no
Just a couple of points I’d make here. I did some testing a while back in
which, if no commit is made (hard or soft), there are internal memory
structures holding tlogs, and it continues to get worse the more docs
that come in. I don’t know if that’s changed in later versions. I’d
recommend doing periodic hard commits.
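A minimal sketch of that recommendation, assuming a collection named `mycollection` on a local node (hypothetical names): set a hard autoCommit interval through the Config API so tlogs get rolled over instead of growing unbounded.

```shell
# Hypothetical host/collection. Hard-commit every 60s without opening
# a new searcher, so tlogs are capped but caches aren't churned.
curl -X POST 'http://localhost:8983/solr/mycollection/config' \
  -H 'Content-Type: application/json' \
  -d '{"set-property": {"updateHandler.autoCommit.maxTime": 60000}}'
```

`updateHandler.autoCommit.maxTime` is one of the editable properties the Config API exposes; the 60s value here is illustrative, not a recommendation for every workload.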
Just to chime in a few quick thoughts.
I think my experience to this point is that G1 (barring unknown Lucene bug
risk) is actually a lower-risk, easier collector to use. However, that doesn’t
necessarily mean better. You don’t have to set the space sizes or any
of the various other parameters.
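As a sketch of how little tuning that means in practice, assuming a Solr install that reads `solr.in.sh` (the flags shown are illustrative, not a recommendation):

```shell
# In solr.in.sh -- GC_TUNE is the variable Solr's start script reads.
# With G1, a heap size plus a pause goal is often the whole config;
# no survivor/eden sizing needed.
SOLR_HEAP="8g"
GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250"
```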
You can lose access to zk from either the solr side or the zk side. You
need to determine which is which. No hard and fast rules. If you're
restarting solr and everything comes back online, my bet is zk is fine,
which in the grand scheme of things is usually but not always the case
On Dec 29,
Your gun (not quite smoking yet, we still need the fingerprints) is this:
[Times: user=0.00 sys=94.28, real=97.19 secs]
Normal GC pauses are generally almost entirely user CPU, very short, and
spread across multiple processors. Something else is happening with either the
JVM or the OS which is causing this problem.
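To make the pattern concrete, here is a quick (hypothetical) one-liner that flags GC log `[Times: ...]` lines where sys time exceeds user time, the inverse of a healthy pause:

```shell
# On a healthy parallel collector, user should dominate and sys should
# be near zero; sys >> user points at the OS (swapping, THP, etc.).
line='[Times: user=0.00 sys=94.28, real=97.19 secs]'
echo "$line" | awk -F'[= ,]+' '{ if ($5 + 0 > $3 + 0) print "suspect: sys > user" }'
# -> suspect: sys > user
```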
Hi Monti,
As pointed out, there is a huge gap with no information. There are two primary
possibilities. One is that something about your resources is depleted; as
Shawn has pointed out, watch them as you start up. Two, Solr is somehow
locked or waiting on something. Since there is no information at
Hi,
All your stats show is that Solr has large memory requirements. There is no
direct mapping from number of documents and queries to memory requirements
as requested in that article. Different Solr projects can yield extremely,
extremely different requirements. If you want to understand your memory
usage better
Here is a quick way you can identify which thread is taking up all your CPU.
1) Look at top (or htop) sorted by CPU usage and with threads toggled on
(hit capital 'H')
2) Get the native process ids of the threads taking up a lot of CPU
3) Convert that number to hex using a converter:
http://www.m
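The steps above can be sketched without a web converter, assuming a thread dump is available via `jstack` (the thread id below is an example value):

```shell
# Step 3 in shell: printf does the decimal -> hex conversion, and the
# hex id matches the nid= field in a jstack thread dump.
tid=12345                     # native thread id taken from top -H
hex=$(printf '0x%x' "$tid")
echo "$hex"                   # -> 0x3039
# jstack <solr-pid> | grep -A 20 "nid=$hex"   # the hot thread's stack
```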
You have to be careful looking at the QTimes. They do not include garbage
collection. I’ve run into issues where QTime was short (because it was); it
just happened that the query came in during a long garbage collection where
everything was paused. So you can get into situations where once the 15
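One hedged way to catch those cases, assuming a Java 9+ JVM and an install that reads `solr.in.sh` (the log path is illustrative): enable timestamped GC logging, so a slow wall-clock query can be matched against a pause that its QTime never saw.

```shell
# In solr.in.sh -- unified GC logging with time/uptime decorators,
# so pause windows can be lined up against request timestamps.
GC_LOG_OPTS="-Xlog:gc*:file=/var/solr/logs/solr_gc.log:time,uptime"
```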