Hi Michael,
I also verified the patch in SOLR-14471 with 8.4.1, and it fixed the issue
with shards.preference=replica.location:local,replica.type:TLOG in my
setup. Thanks!
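For reference, the preference in question is passed as a plain query
parameter; a minimal example (host and collection name are placeholders, not
my actual setup):

  http://host:8983/solr/mycollection/select?q=*:*&shards.preference=replica.location:local,replica.type:TLOG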
Wei
On Thu, May 21, 2020 at 12:09 PM Phill Campbell wrote:
Yes, JVM heap settings.
> On May 19, 2020, at 10:59 AM, Wei wrote:
Hi Phill,
What is the RAM config you are referring to, JVM size? How is that related
to the load balancing, if each node has the same configuration?
Thanks,
Wei
On Mon, May 18, 2020 at 3:07 PM Phill Campbell wrote:
In my previous report I had it configured to use as much RAM as possible.
With that configuration it seemed it was not load balancing.
So, I reconfigured and redeployed to use 1/4 the RAM. What a difference for the
better!
10.156.112.50 load average: 13.52, 10.56, 6.46
10.156.116.34 load average: ...
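(For anyone tuning the same thing: per-instance heap is normally set via
SOLR_HEAP in solr.in.sh. The values below are purely illustrative, not the
numbers used here.)

  # solr.in.sh
  # before: SOLR_HEAP="32g"
  SOLR_HEAP="8g"        # roughly 1/4 of the previous setting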
I have been testing 8.5.2 and it looks like the load has moved but is still on
one machine.
Setup:
3 physical machines.
Each machine hosts 8 instances of Solr.
Each instance of Solr hosts one replica.
Another way to say it:
Number of shards = 8. Replication factor = 3.
Here is the cluster state
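(The cluster state itself was cut off above. For reference, it can be dumped
with the Collections API; the collection name below is a placeholder.)

  curl "http://host:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=mycollection"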
I just backported Michael’s fix to be released in 8.5.2
On Fri, May 15, 2020 at 6:38 AM Michael Gibney wrote:
Hi Wei,
SOLR-14471 has been merged, so this issue should be fixed in 8.6.
Thanks for reporting the problem!
Michael
On Mon, May 11, 2020 at 7:51 PM Wei wrote:
Thanks Michael! Yes in each shard I have 10 Tlog replicas, no other type
of replicas, and each Tlog replica is an individual solr instance on its
own physical machine. In the jira you mentioned 'when "last place matches"
== "first place matches" – e.g. when shards.preference specified matches
*a
FYI: https://issues.apache.org/jira/browse/SOLR-14471
Wei, assuming you have only TLOG replicas, your "last place" matches
(to which the random fallback ordering would not be applied -- see
above issue) would be the same as the "first place" matches selected
for executing distributed requests.
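To make the failure mode concrete, here is a small self-contained sketch
(not Solr's actual code, just an illustration): when every replica ties on
the preference, a deterministic sort with no random tie-break for that tier
produces the same ordering on every node, so the same replica is always
queried first.

  import java.util.*;

  public class PreferenceSketch {
      public static void main(String[] args) {
          // Three replicas that all match replica.type:TLOG, so the
          // preference comparator considers them equal.
          List<String> replicas =
              new ArrayList<>(Arrays.asList("replica1", "replica2", "replica3"));
          Comparator<String> byPreference = Comparator.comparingInt(r -> 0);

          // No shuffle of the tied tier: every node computes the same
          // ordering, so the same replica is hit first every time.
          replicas.sort(byPreference);
          System.out.println("always picks: " + replicas.get(0));

          // Conceptually what the fix restores: break ties randomly so
          // load spreads across the tier.
          Collections.shuffle(replicas);
          replicas.sort(byPreference);
          System.out.println("randomized pick: " + replicas.get(0));
      }
  }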
Wei, probably no need to answer my earlier questions; I think I see
the problem here, and believe it is indeed a bug, introduced in 8.3.
Will file an issue and submit a patch shortly.
Michael
On Mon, May 11, 2020 at 12:49 PM Michael Gibney wrote:
Hi Wei,
In considering this problem, I'm stumbling a bit on terminology
(particularly, where you mention "nodes", I think you're referring to
"replicas"?). Could you confirm that you have 10 TLOG replicas per
shard, for each of 6 shards? How many *nodes* (i.e., running solr
server instances) do you have?
Update: after I removed the shards.preference parameter from
solrconfig.xml, the issue is gone and internal shard requests are now
balanced. The same parameter works fine with Solr 7.6. Still not sure of
the root cause, but I observed a strange coincidence: the nodes that are
most frequently picked f
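For reference, a typical way such a default is set in solrconfig.xml is in
the request handler defaults, roughly as below (handler name and placement
are illustrative, not a verbatim copy of the actual config):

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="shards.preference">replica.location:local,replica.type:TLOG</str>
    </lst>
  </requestHandler>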
Hi Eric,
I am measuring the number of shard requests, and it's for queries only, no
indexing requests. I have an external load balancer and see that each node
received about an equal number of external queries. However, for the
internal shard queries the distribution is uneven: 6 nodes (one in
each
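For reference, one way to count just the internal fan-out traffic on a node
is to grep its request log for the isShard flag that distributed sub-requests
carry (log path is illustrative):

  grep -c "isShard=true" /var/solr/logs/solr.log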
Wei:
How are you measuring utilization here? The number of incoming requests or CPU?
The leader for each shard is certainly handling all of the indexing requests
since they're TLOG replicas, so that's one thing that might be skewing your
measurements.
Best,
Erick
> On Apr 27, 2020, at 7:13 PM, Wei wrote:
Hi everyone,
I have a strange issue after upgrading from 7.6.0 to 8.4.1. My cloud has 6
shards with 10 TLOG replicas per shard. After the upgrade I noticed that one
of the replicas in each shard is handling most of the distributed shard
requests, so 6 nodes are heavily loaded while the other nodes are idle.
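(For anyone hitting the same thing: one low-effort way to see where the
fan-out lands is the per-core query counters in the Metrics API; host and
metric prefix below are illustrative.)

  curl "http://host:8983/solr/admin/metrics?group=core&prefix=QUERY./select.requests"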