Rereading your email, perhaps this doesn't answer the question, though.
 Can you provide your solr.xml so we can get a better idea of your
configuration?

On Wed, Feb 29, 2012 at 10:41 AM, Jamie Johnson <jej2...@gmail.com> wrote:
> That is correct, the cloud does not currently elastically expand.
> Essentially when you first start up you define something like
> numShards; once numShards is reached, everything else comes in as a replica.  If
> you manually specify the shards using the create core commands you can
> define the layout however you please, but that still doesn't change
> the fact that SolrCloud doesn't support elastically expanding after
> initially provisioning the cluster.
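>
> As a rough sketch (the parameter names follow the SolrCloud wiki, and the
> core/collection names here are just placeholders), pinning a core to a
> specific shard with the CoreAdmin create command looks something like:
>
>   http://localhost:8983/solr/admin/cores?action=CREATE&name=myCore&collection=collection1&shard=shard2
>
> Anything created without an explicit shard just falls into whatever layout
> numShards produced at first start-up.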
>
> I've seen this on the roadmap before, but don't know where it falls on
> the current wish list; it's high on mine :)
>
> On Wed, Feb 29, 2012 at 10:36 AM, Ranjan Bagchi <ranjan.bag...@gmail.com> 
> wrote:
>> Hi,
>>
>> At this point I'm OK with one ZK instance being a point of failure; I just
>> want to create sharded solr instances, bring them into the cluster, and be
>> able to shut them down without bringing down the whole cluster.
>>
>> According to the wiki page, I should be able to bring up a new shard by
>> using shardId [-DshardId], but when I did that, the logs showed it replicating
>> an existing shard.
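>>
>> Concretely, what I ran was roughly the following (host and port are
>> placeholders):
>>
>>   java -DzkHost=zkhost:9983 -DshardId=shard5 -jar start.jar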
>>
>> Ranjan
>> Andre Bois-Crettez wrote:
>>
>>> You have to run ZK on at least 3 different machines for fault
>>> tolerance (a ZK ensemble).
>>>
>>> http://wiki.apache.org/solr/SolrCloud#Example_C:_Two_shard_cluster_with_shard_replicas_and_zookeeper_ensemble
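>>>
>>> As a minimal sketch (hostnames and ports are placeholders), each Solr node
>>> would then be started pointing at all three ZK instances:
>>>
>>>   java -DzkHost=zk1:2181,zk2:2181,zk3:2181 -jar start.jar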
>>>
>>> Ranjan Bagchi wrote:
>>> > Hi,
>>> >
>>> > I'm interested in setting up a solr cluster where each machine [at least
>>> > initially] hosts a separate shard of a big index [too big to sit on one
>>> > machine].  I'm able to put a cloud together by telling it that I have (to
>>> > start out with) 4 nodes, and then starting up nodes on 3 machines pointing
>>> > at the zkInstance.  I'm able to load my sharded data onto each machine
>>> > individually and it seems to work.
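>>> >
>>> > Roughly, I started the first node with something like (conf dir and config
>>> > name are placeholders):
>>> >
>>> >   java -Dbootstrap_confdir=./solr/conf -Dcollection.configName=myconf -DzkRun -DnumShards=4 -jar start.jar
>>> >
>>> > and the other machines with
>>> >
>>> >   java -DzkHost=<zk-host>:9983 -jar start.jar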
>>> >
>>> > My concern is that it's not fault tolerant:  if one of the non-zookeeper
>>> > machines falls over, the whole cluster won't work.  Also, I can't create a
>>> > shard with more data, and have it work within the existing cloud.
>>> >
>>> > I tried using -DshardId=shard5 [on an existing 4-shard cluster], but it
>>> > just started replicating, which doesn't seem right.
>>> >
>>> > Are there ways around this?
>>> >
>>> > Thanks,
>>> > Ranjan Bagchi
>>> >
>>> >
