What I think was mentioned on this list a while back is that the index
stops working if one of the "nodes" goes down, unless it has a replica.

You have 2 "nodes" running with numShards=2? So if one goes down, the
entire index is inoperable. In the future I'm hoping this changes so
that the cluster keeps operating, just without results from the downed
shard. Maybe this has changed in recent trunk updates, though. Not
sure.
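
If you want to survive a node failure today, the wiki's Example B
approach is to start a second node per shard against the same ZK, so
it comes up as a replica. Roughly, as a sketch (assuming the stock
Jetty example layout; the ports are arbitrary):

  # two shard leaders; the first also runs embedded ZK on 9983
  java -Dbootstrap_confdir=./solr/conf -Dcollection.configName=myconf \
       -DzkRun -DnumShards=2 -jar start.jar
  java -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar

  # two more nodes on the same zkHost attach as replicas of the
  # existing shards, so either leader can go down and queries still work
  java -Djetty.port=8900 -DzkHost=localhost:9983 -jar start.jar
  java -Djetty.port=7500 -DzkHost=localhost:9983 -jar start.jar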

On Mon, 2012-03-05 at 20:49 -0800, Ranjan Bagchi wrote:
> Hi Mark,
> 
> So I tried this: started up one instance w/ zookeeper, and started a second
> instance defining a shard name in solr.xml -- it worked: searches covered
> both indices, and looking at the zookeeper ui, I'd see the second shard.
> However, when I brought the second server down, the first one stopped
> working: it didn't kick the second shard out of the cluster.
> 
> Any way to do this?
> 
> Thanks,
> 
> Ranjan
> 
> 
> > From: Mark Miller <markrmil...@gmail.com>
> > To: solr-user@lucene.apache.org
> > Cc:
> > Date: Wed, 29 Feb 2012 22:57:26 -0500
> > Subject: Re: Building a resilient cluster
> > Doh! Sorry - this was broken - I need to fix the doc or add it back.
> >
> > The shard id is actually set in solr.xml since it's per core - the sys prop
> > was a sugar option we had set up. So either add 'shard' to the core in
> > solr.xml, or to make it work like it does in the doc, do:
> >
> >  <core name="collection1" shard="${shard:}" instanceDir="." />
> >
> > That sets shard to the 'shard' system property if it's set, or, as a
> > default, acts as if it wasn't set.
> >
> > I've been working with custom shard ids mainly through solrj, so I hadn't
> > noticed this.
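> >
> > For example (just a sketch -- the port and shard name here are
> > arbitrary), with the solr.xml line above you could bring a core up
> > on a specific shard with:
> >
> >   java -Dshard=shard5 -Djetty.port=7575 -DzkHost=localhost:9983 -jar start.jar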
> >
> > - Mark
> >
> > On Wed, Feb 29, 2012 at 10:36 AM, Ranjan Bagchi <ranjan.bag...@gmail.com
> > >wrote:
> >
> > > Hi,
> > >
> > > At this point I'm OK with one zk instance being a point of failure; I
> > > just want to create sharded solr instances, bring them into the cluster,
> > > and be able to shut them down without bringing down the whole cluster.
> > >
> > > According to the wiki page, I should be able to bring up a new shard by
> > > using shardId [-DshardId], but when I did that, the logs showed it
> > > replicating an existing shard.
> > >
> > > Ranjan
> > > Andre Bois-Crettez wrote:
> > >
> > > > You have to run ZK on at least 3 different machines for fault
> > > > tolerance (a ZK ensemble).
> > > >
> > > > http://wiki.apache.org/solr/SolrCloud#Example_C:_Two_shard_cluster_with_shard_replicas_and_zookeeper_ensemble
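> > > >
> > > > With an ensemble, every node gets the full list of ZK hosts, e.g.
> > > > (illustrative host:port values only):
> > > >
> > > >   java -DzkHost=zk1:2181,zk2:2181,zk3:2181 -DnumShards=2 -jar start.jar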
> > > >
> > > > Ranjan Bagchi wrote:
> > > > > Hi,
> > > > >
> > > > > I'm interested in setting up a solr cluster where each machine [at
> > > > > least initially] hosts a separate shard of a big index [too big to
> > > > > sit on the machine].  I'm able to put a cloud together by telling it
> > > > > that I have (to start out with) 4 nodes, and then starting up nodes
> > > > > on 3 machines pointing at the zkInstance.  I'm able to load my
> > > > > sharded data onto each machine individually and it seems to work.
> > > > >
> > > > > My concern is that it's not fault tolerant:  if one of the
> > > > > non-zookeeper machines falls over, the whole cluster won't work.
> > > > > Also, I can't create a shard with more data, and have it work within
> > > > > the existing cloud.
> > > > >
> > > > > I tried using -DshardId=shard5 [on an existing 4-shard cluster], but
> > > > > it just started replicating, which doesn't seem right.
> > > > >
> > > > > Are there ways around this?
> > > > >
> > > > > Thanks,
> > > > > Ranjan Bagchi
> > > > >
> > > > >
> > >
> >
> >
> >
> > --
> > - Mark
> >
> > http://www.lucidimagination.com
> >
> >

