On 4/18/2016 11:22 AM, John Bickerstaff wrote:
> So - my IT guy makes the case that we don't really need Zookeeper / Solr
> Cloud...
<snip>
> I'm biased in terms of using the most recent functionality, but I'm aware
> that bias is not necessarily based on facts and want to do my due
> diligence...
>
> Aside from the obvious benefits of spreading work across nodes (which may
> not be a big deal in our application and which my IT guy proposes is more
> transparently handled with a load balancer he understands) are there any
> other considerations that would drive a choice for Solr Cloud (zookeeper
> etc)?

Erick has a point.  If your IT guy feels comfortable with a load
balancer, he should go ahead and set that up.

For a new install like you're describing, I would probably still use
SolrCloud on the back end, even with a load balancer.

As Daniel said, a non-cloud replicated setup requires configuration of
masters and slaves.  Instead of replication, you could go with a build
system that sends updates to each copy of the index independently.
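For anyone unfamiliar with the master/slave setup, it's driven by the ReplicationHandler in solrconfig.xml -- roughly like the sketch below (hostname, core name, and poll interval are placeholders, adjust for your install):

```
<!-- On the master, in solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
  </lst>
</requestHandler>

<!-- On each slave, in solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/corename/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```

Note that the master/slave roles are baked into each server's config, which is exactly why swapping them during a failure is painful.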

When using replication, switching master/slave roles in the event of a
master server failure is not trivial.  SolrCloud handles all that,
making multi-server management a LOT easier.  Initial setup is slightly
more complicated due to zookeeper, and configuration management requires
an "upload to zookeeper" step ... but I do not think these are high
hurdles considering how much easier it is to manage multiple servers.
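To give you an idea of what that step looks like, uploading a configset is a single command (config name and paths here are just examples):

```
# Recent Solr releases (5.3 and later):
bin/solr zk upconfig -n myconfig -d /path/to/configdir \
  -z zk1:2181,zk2:2181,zk3:2181

# Older releases use the zkcli script shipped with Solr:
server/scripts/cloud-scripts/zkcli.sh -cmd upconfig \
  -confname myconfig -confdir /path/to/configdir \
  -zkhost zk1:2181,zk2:2181,zk3:2181
```

After the upload, collections reference the config by name, so one upload serves every replica.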

With the deployment you have described (which I trimmed out of this
reply), I think you'd be fine running a standalone zookeeper process on
three of your Solr servers, so you won't even need a bunch of extra
hardware.
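A three-node standalone ensemble only needs a short zoo.cfg, identical on each of the three hosts (hostnames and paths below are placeholders):

```
# zoo.cfg on all three ZooKeeper hosts
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=solr1:2888:3888
server.2=solr2:2888:3888
server.3=solr3:2888:3888
```

Each host also needs a myid file in dataDir containing its server number (1, 2, or 3), and then you start Solr pointing at the whole ensemble, e.g. "bin/solr start -c -z solr1:2181,solr2:2181,solr3:2181".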

When combining a load balancer with SolrCloud, the handler definitions
in solrconfig.xml should set preferLocalShards to true (which Tom
mentioned) so the load balancer target is the machine that actually
processes the request.  Troubleshooting becomes more difficult if you
don't do this, and avoiding the extra network hop will help performance.
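Concretely, that means something like this in solrconfig.xml (handler name and any other defaults should match whatever your config already has):

```
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <bool name="preferLocalShards">true</bool>
  </lst>
</requestHandler>
```

With that in place, a node that holds a replica of every shard will answer the whole query itself instead of fanning out to its neighbors.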

Thanks,
Shawn
