Bernd:
Short form: Worrying about which node is the leader is wasting your
time. Details below:
Why do you care what nodes the leaders are on? There has to be some
concern you have about co-locating the leaders on the same node or you
wouldn't be spending the time on it. Please articulate that concern.
Myself, I am still in the old camp. For critical machines, I want to know that
it is my machine, with my disks, and exactly what software is installed. But
maybe the cloud provider's fast network is more important? Cheers -- Rick
On May 10, 2017 6:13:27 AM EDT, Bernd Fehling wrote:
Hi Rick,
yes, I have distributed 5 virtual servers across 5 physical machines.
So each virtual server is on a separate physical machine.
Splitting each virtual server (64GB RAM) into two (32GB RAM), which would
then be 10 virtual servers across 5 physical machines, is not an option
because there is no ...
Bernd,
Yes, cloud, ahhh. As you say, the world changed. Do you have any hint
from the cloud provider as to which physical machine your virtual server
is on? If so, you can hopefully distribute your replicas across physical
machines. This is not just for reliability: in a sharded system, each ...
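
If the provider does tell you the layout, one way to act on it is to pin
placement at creation time with the Collections API's createNodeSet. A
minimal sketch, assuming one Solr JVM per machine; the collection name and
all node names are placeholders, not from this thread:

# Sketch: create a collection only on an explicit set of nodes, so replicas
# can be kept on known physical machines. Node names are placeholders; take
# the real ones from CLUSTERSTATUS.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "http://localhost:8983/solr/admin/collections"

params = {
    "action": "CREATE",
    "name": "boss",                # placeholder collection name
    "numShards": 5,
    "replicationFactor": 2,
    "maxShardsPerNode": 2,         # 10 cores spread over 5 nodes
    "createNodeSet": ",".join(f"server{i}:8983_solr" for i in range(1, 6)),
    "wt": "json",
}
with urlopen(BASE + "?" + urlencode(params)) as resp:
    print(resp.read().decode("utf-8"))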
Bernd:
You rarely have to worry about who the leader is unless and until you
get into many hundreds of shards. The extra work a leader does is usually
minimal, and spending time trying to control where the leaders live is
usually time wasted. Leaders will shift from replica to replica
anyway. Say your leader ...
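
That said, for the rare case where it matters, leadership can be nudged
through the Collections API: ADDREPLICAPROP with the preferredLeader
property, followed by REBALANCELEADERS. A minimal sketch; the collection,
shard, and replica names are placeholders:

# Sketch: mark one replica as preferred leader, then rebalance leadership.
# "boss", "shard1" and "core_node1" are placeholders; read the real names
# from CLUSTERSTATUS first.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "http://localhost:8983/solr/admin/collections"

def collections_api(params):
    """Send one Collections API call and return the raw JSON response."""
    query = urlencode({**params, "wt": "json"})
    with urlopen(BASE + "?" + query) as resp:
        return resp.read().decode("utf-8")

# Mark the replica we would like to lead shard1 ...
print(collections_api({
    "action": "ADDREPLICAPROP", "collection": "boss", "shard": "shard1",
    "replica": "core_node1", "property": "preferredLeader",
    "property.value": "true"}))

# ... then ask Solr to move leadership to the preferred replicas.
print(collections_api({"action": "REBALANCELEADERS", "collection": "boss"}))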
On 5/9/2017 1:44 AM, Bernd Fehling wrote:
> From my point of view it is a good solution to have 5 virtual 64GB
> servers on 5 different huge physical machines and start 2 instances on
> each virtual server.
If the total amount of memory in the virtual machine is 64GB, then I
would run one Solr node ...
Hi Erick,
just went through
https://cwiki.apache.org/confluence/display/solr/Rule-based+Replica+Placement
I might be wrong, but I didn't see anything to identify the "leader".
To solve my problem I would need a rule like:
--> "do not create the replica on the same host where its leader exists"
Maybe something ...
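
For comparison, the closest the rule framework seems to get is keeping
replicas of one shard on separate hosts; the rule tags (node, host, port,
...) have no notion of a leader. A sketch, with a placeholder collection
name:

# Sketch: create a collection with the placement rule
# "shard:*,replica:<2,host:*", i.e. fewer than 2 replicas of any shard on
# the same host. Nothing here can reference leadership.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "http://localhost:8983/solr/admin/collections"

params = {
    "action": "CREATE",
    "name": "boss",                # placeholder collection name
    "numShards": 5,
    "replicationFactor": 2,
    "rule": "shard:*,replica:<2,host:*",
    "wt": "json",
}
with urlopen(BASE + "?" + urlencode(params)) as resp:
    print(resp.read().decode("utf-8"))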
I would call your solution more of a workaround, like any similar solution
of this kind.
The issue SOLR-6027 has now been open for 3 years and the world has changed.
Instead of racks full of blades, where you had many dedicated bare-metal
servers, you now have huge machines with 256GB RAM and many CPUs. Virtualization ...
Also, you can specify custom placement rules, see:
https://cwiki.apache.org/confluence/display/solr/Rule-based+Replica+Placement
But Shawn's statement is the nub of what you're seeing: by default,
multiple JVMs on the same physical machine are considered separate
Solr instances.
Also note that if ...
On 5/8/2017 5:38 AM, Bernd Fehling wrote:
> boss -- shard1 -- server2:7574
>               |-- server2:8983 (leader)
This happened because you've got two nodes running on
every server. From SolrCloud's perspective, there are ten distinct
nodes, not five.
SolrCloud ...
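
This is easy to verify from the live_nodes list returned by CLUSTERSTATUS,
which has one entry per host:port pair. A quick sketch (host names are
placeholders):

# Sketch: print the nodes SolrCloud actually sees. With two JVMs per
# physical server you get entries like "server1:8983_solr" and
# "server1:7574_solr" -- ten nodes for five machines.
import json
from urllib.request import urlopen

URL = "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json"

with urlopen(URL) as resp:
    status = json.loads(resp.read().decode("utf-8"))

for node in sorted(status["cluster"]["live_nodes"]):
    print(node)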
And then delete replica shard2-->server1:8983 and add replica
shard2-->server2:7574?
It would be nice to have some automatic logic like ES (_cluster/reroute with move).
Regards
Bernd
On 08.05.2017 at 14:16, Amrit Sarkar wrote:
Bernd,
When you create a collection via the Collections API, the internal logic
tries its best to distribute the replicas equally across the nodes, but
sometimes that doesn't happen.
The best thing about SolrCloud is that you can manipulate its cloud
architecture on the fly using the Collections API. You can delete ...
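
A sketch of that delete-and-add "move" against the Collections API; the
collection, replica, and node names are placeholders, and adding the new
replica before deleting the old one is the safer order:

# Sketch of a manual "move": add a replica of shard2 on the target node,
# then drop the old one. Names are placeholders; read the real core_node
# names from CLUSTERSTATUS first.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "http://localhost:8983/solr/admin/collections"

def collections_api(params):
    """Send one Collections API call and return the raw JSON response."""
    query = urlencode({**params, "wt": "json"})
    with urlopen(BASE + "?" + query) as resp:
        return resp.read().decode("utf-8")

# 1. Add the new replica where it should live.
print(collections_api({"action": "ADDREPLICA", "collection": "boss",
                       "shard": "shard2", "node": "server2:7574_solr"}))

# 2. Once the new replica is active, delete the one on the crowded node.
print(collections_api({"action": "DELETEREPLICA", "collection": "boss",
                       "shard": "shard2", "replica": "core_node3"}))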