On Mar 5, 2012, at 11:49 PM, Ranjan Bagchi wrote:
> it didn't kick the second shard out of the cluster.
>
> Any way to do this?
If you unload a core rather than just shut down the instance, that core will
remove its info from zookeeper.
Currently, that won't make it forget about a logical shard.
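For reference, unloading is done through the CoreAdmin API -- roughly like
this, where the port and core name are placeholders for your own setup:

  curl 'http://localhost:8983/solr/admin/cores?action=UNLOAD&core=core1'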
When I shut the second server down -- the first one stopped working: it
didn't kick the second shard out of the cluster.

Any way to do this?

Thanks,

Ranjan

> From: Mark Miller
> To: solr-user@lucene.apache.org
> Cc:
> Date: Wed, 29 Feb 2012 22:57:26 -0500
> Subject: Re: Building a resilient cluster
One other fault-tolerance issue is that you'll need at least one replica
per shard. As I understand it, at least *one* machine has to be running
for each shard for the cluster to work.
This doesn't address the shardId issue, but is something to keep in
mind when testing.
Best
Erick
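To make Erick's point concrete: with two shards, surviving the loss of any
one node means at least four instances, two per shard. A rough sketch,
where the ports and embedded-ZooKeeper setup are assumptions modeled on
the wiki examples, not taken from this thread:

  # node1 runs embedded ZK (-DzkRun) and becomes shard1; node2 becomes shard2
  cd node1 && java -DzkRun -DnumShards=2 -jar start.jar
  cd node2 && java -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar
  # node3 and node4 come up as replicas once numShards is reached
  cd node3 && java -Djetty.port=8900 -DzkHost=localhost:9983 -jar start.jar
  cd node4 && java -Djetty.port=7500 -DzkHost=localhost:9983 -jar start.jar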
Doh! Sorry - this was broken - I need to fix the doc or add it back.
The shard id is actually set in solr.xml since it's per core - the sys prop
was a sugar option we had set up. So either add 'shard' to the core in
solr.xml, or, to make it work like it does in the doc, reference the sys
prop from solr.xml; that sets shard to the value of the system property.
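In the legacy solr.xml format that would look something like the following
(the core name and the ${shard:} default-substitution syntax here are my
assumptions, not quoted from the original message):

  <cores adminPath="/admin/cores">
    <!-- shard is read from the 'shard' sys prop; left empty, it is assigned
         by the cluster -->
    <core name="collection1" instanceDir="." shard="${shard:}"/>
  </cores>

so that starting a node with java -Dshard=shard2 -jar start.jar pins that
core to shard2.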
Rereading your email, perhaps this doesn't answer the question, though.
Can you provide your solr.xml so we can get a better idea of your
configuration?
On Wed, Feb 29, 2012 at 10:41 AM, Jamie Johnson wrote:
That is correct, the cloud does not currently elastically expand.
Essentially when you first start up you define something like
numShards, once numShards is reached all else goes in as replicas. If
you manually specify the shards using the create core commands you can
define the layout however you like.
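For illustration, manually assigning a new core to a specific shard with
the CoreAdmin CREATE command looks roughly like this (the core, collection,
and shard names are placeholders):

  curl 'http://localhost:8983/solr/admin/cores?action=CREATE&name=core2&collection=collection1&shard=shard2'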
Hi,
At this point I'm ok with one zk instance being a point of failure, I just
want to create sharded solr instances, bring them into the cluster, and be
able to shut them down without bringing down the whole cluster.
According to the wiki page, I should be able to bring up a new shard by
using shardId.
You have to run ZK on at least 3 different machines for fault
tolerance (a ZK ensemble).
http://wiki.apache.org/solr/SolrCloud#Example_C:_Two_shard_cluster_with_shard_replicas_and_zookeeper_ensemble
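Each Solr node then points at the whole ensemble rather than a single ZK
instance -- schematically, with placeholder hosts and ports:

  java -DzkHost=zk1:2181,zk2:2181,zk3:2181 -jar start.jar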
Ranjan Bagchi wrote:
Hi,
I'm interested in setting up a solr cluster where each machine [at l