Did you try to see which component (query, facet, highlight, ...) is
taking time by setting debugQuery=on when performance is slow? Just to rule
out that some other component is the culprit...
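For reference, such a debug request can be sketched like this (host and collection names are placeholders, not from the thread):

```shell
# Placeholder host and collection; adjust to your environment.
SOLR="${SOLR:-http://localhost:8983/solr}"
COLL="${COLL:-mycollection}"
# debug=true (the newer form of debugQuery=on) adds per-component
# timings (query, facet, highlight, ...) to the response.
URL="${SOLR}/${COLL}/select?q=*:*&rows=10&debug=true&wt=json"
echo "$URL"   # fetch with: curl -s "$URL"
```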
Thnx
On Mon, Jun 25, 2018 at 2:06 PM, Chris Troullis wrote:
FYI to all, just as an update, we rebuilt the index in question from
scratch for a second time this weekend and the problem went away on 1 node,
but we were still seeing it on the other node. After restarting the
problematic node, the problem went away. Still makes me a little uneasy as
we weren't
Thanks Shawn,
As mentioned previously, we are hard committing every 60 seconds, which we
have been doing for years, and have had no issues until enabling CDCR. We
have never seen large tlog sizes before, and even manually issuing a hard
commit to the collection does not reduce the size of the tlog
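For reference, the manual hard commit mentioned above can be issued against the update handler; a sketch with placeholder host and collection names:

```shell
# Placeholder host and collection; adjust to your environment.
SOLR="${SOLR:-http://localhost:8983/solr}"
COLL="${COLL:-mycollection}"
# An explicit hard commit. Normally old tlogs rotate out after a hard commit;
# with CDCR they are also retained until the target has consumed them.
URL="${SOLR}/${COLL}/update?commit=true"
echo "$URL"   # issue with: curl -s "$URL"
```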
On 6/12/2018 12:06 PM, Chris Troullis wrote:
> The issue we are seeing is with 1 collection in particular, after we set up
> CDCR, we are getting extremely slow response times when retrieving
> documents. Debugging the query shows QTime is almost nothing, but the
> overall responseTime is like 5x w
Hi Susheel,
It's not drastically different, no. There are other collections with more
fields and more documents that don't have this issue. And the collection is
not sharded. Just 1 shard with 2 replicas. Both replicas are similar in
response time.
Thanks,
Chris
On Wed, Jun 13, 2018 at 2:37 PM, S
Is this collection in any way drastically different from the others in terms
of schema, number of fields, total documents, etc.? Is it sharded, and if so,
can you look at which shard is taking more time with shards.info=true?
Thnx
Susheel
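A sketch of such a query (the parameter is spelled shards.info; host and collection are placeholders):

```shell
# Placeholder host and collection; adjust to your environment.
SOLR="${SOLR:-http://localhost:8983/solr}"
COLL="${COLL:-mycollection}"
# shards.info=true reports per-shard QTime so a slow shard stands out.
URL="${SOLR}/${COLL}/select?q=*:*&shards.info=true"
echo "$URL"   # fetch with: curl -s "$URL"
```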
On Wed, Jun 13, 2018 at 2:29 PM, Chris Troullis wrote:
Thanks Erick,
Seems to be a mixed bag in terms of tlog size across all of our indexes,
but currently the index with the performance issues has 4 tlog files
totaling ~200 MB. This still seems high to me since the collections are in
sync, and we hard commit every minute, but it's less than the ~8GB i
First, nice job of eliminating all the standard stuff!
About tlogs: Sanity check: They aren't growing again, right? They
should hit a relatively steady state. The tlogs are used as a queueing
mechanism for CDCR to durably store updates until they can
successfully be transmitted to the target. So I
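One quick way to sanity-check tlog growth is to watch the tlog directory on disk; the path below is an assumption for a typical install, not taken from the thread:

```shell
# Path is an assumption; a SolrCloud core usually keeps its tlogs under
# <SOLR_HOME>/<core_name>/data/tlog. Override TLOG_DIR for your install.
TLOG_DIR="${TLOG_DIR:-/var/solr/data/mycollection_shard1_replica_n1/data/tlog}"
OUT=$(du -sh "$TLOG_DIR" 2>/dev/null || echo "tlog dir not found: $TLOG_DIR")
echo "$OUT"
```

Re-running this after a hard commit shows whether the tlogs ever shrink, or whether CDCR is holding them.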
Thanks Erick. A little more info:
-We do have buffering disabled everywhere, as I had read multiple posts on
the mailing list regarding the issue you described.
-We soft commit (with openSearcher=true) pretty frequently (every 15 seconds) as
we have some NRT requirements. We hard commit every 60 seconds
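For reference, that commit cadence would look roughly like this in solrconfig.xml; this is a sketch matching the numbers above, not a copy of the actual config:

```shell
# Sketch: 15s soft commits for NRT visibility, 60s hard commits for durability.
COMMITS=$(cat <<'EOF'
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>15000</maxTime>
</autoSoftCommit>
EOF
)
echo "$COMMITS"
```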
Having the tlogs be huge is a red flag. Do you have buffering enabled
in CDCR? This was something of a legacy option that's going to be
removed, it's been made obsolete by the ability of CDCR to bootstrap
the entire index. Buffering should be disabled always.
Another reason tlogs can grow is if yo
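Buffering can be turned off per collection via the CDCR API's DISABLEBUFFER action; a sketch with placeholder host and collection names:

```shell
# Placeholder host and collection; adjust to your environment.
SOLR="${SOLR:-http://localhost:8983/solr}"
COLL="${COLL:-mycollection}"
# CDCR API: DISABLEBUFFER stops updates from being buffered in the tlogs.
URL="${SOLR}/${COLL}/cdcr?action=DISABLEBUFFER"
echo "$URL"   # issue with: curl -s "$URL"
```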
Hi all,
Recently we have gone live using CDCR on our two-node SolrCloud cluster
(7.2.1). From a CDCR perspective, everything seems to be working
fine...collections are staying in sync across the cluster, everything looks
good.
The issue we are seeing is with 1 collection in particular, after we se