Thanks for the explanation, Erick! I will try out your recommendation.

On Sun, Apr 17, 2016 at 3:34 PM, Erick Erickson <erickerick...@gmail.com>
wrote:

> bq: So in order for me to move the shards to their own instances, I
> will have to take downtime and move the newly created shards &
> replicas to their own instances.
>
> No, this is not true.
>
> The easiest way to move things around is to use the collections API
> ADDREPLICA command after splitting.
>
> Let's call this particular shard S1 on machine M1, and the results of
> the SPLITSHARD command S1.1 and S1.2. Further, let's say that your goal
> is to move _one_ of the subshards from machine M1 to M2.
>
> So the sequence is:
>
> 1> Issue SPLITSHARD and wait for it to complete. This requires no
> downtime; after the split the old shard becomes inactive and the
> two new subshards service all requests. I'd probably stop
> indexing during this operation just to be on the safe side, although
> that's not necessary. So now you have both S1.1 and S1.2 running on M1.
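>
> For illustration -- assuming the collection is named "mycollection"
> and S1 is actually named shard1 (substitute your own names and
> host:port) -- the call looks something like:
>
> http://host:8983/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1
>
> SPLITSHARD derives the subshard names from the parent, so S1.1 and
> S1.2 above would really be shard1_0 and shard1_1.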
>
> 2> Use the ADDREPLICA command to add a replica of S1.2 to M2. Again,
> no downtime required. Wait until the new replica is "active", at which
> point it's fully operational. So now we have S1.1 and S1.2 running on
> M1 and a second copy of S1.2 running on M2.
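>
> Continuing the illustration, if the subshard is shard1_1 and M2's
> node name is something like 192.168.1.3:8983_solr (the exact name is
> shown on the Cloud page of the admin UI):
>
> http://host:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1_1&node=192.168.1.3:8983_solr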
>
> 3> Use the DELETEREPLICA command to remove S1.2 from M1. Now you have
> S1.1 running on M1 and S1.2 running on M2. No downtime during any of
> this.
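>
> The replica to remove is identified by its core node name, which you
> can get from the admin UI or the CLUSTERSTATUS command; it might look
> something like:
>
> http://host:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycollection&shard=shard1_1&replica=core_node2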
>
> 4> You should be able to delete S1 now from M1 just to tidy up.
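>
> Since the parent shard is inactive after the split, DELETESHARD
> should remove it, something like:
>
> http://host:8983/solr/admin/collections?action=DELETESHARD&collection=mycollection&shard=shard1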
>
> 5> Repeat for the other shards.
>
> Best,
> Erick
>
>
> On Sun, Apr 17, 2016 at 3:09 PM, Jay Potharaju <jspothar...@gmail.com>
> wrote:
> > Erick, thanks for the reply. In my current prod setup I anticipate
> > the number of documents to grow almost 5 times by the end of the
> > year, and I am therefore planning how to scale when required. We
> > have high query volume and a growing dataset; that is why I would
> > like to scale by sharding & replication.
> >
> > In my dev sandbox, I have 2 replicas & 2 shards created using
> > compositeId as my routing option. If I split a shard, it will create
> > 2 new shards on each of the solr instances, including replicas, and
> > my requests will start going to the new shards.
> > So in order for me to move the shards to their own instances, I will
> > have to take downtime and move the newly created shards & replicas
> > to their own instances. Is that a correct interpretation of how
> > shard splitting would work?
> >
> > I was hoping that solr would automagically split the existing shard
> > & create replicas on the new instances rather than on the existing
> > nodes. That is why I said the current shard splitting will not work
> > for me.
> > Thanks
> >
> > On Sat, Apr 16, 2016 at 8:08 PM, Erick Erickson <erickerick...@gmail.com>
> > wrote:
> >
> >> Why don't you think splitting the shards will do what you need?
> >> Admittedly it will have to be applied to each shard and will
> >> double the number of shards you have; that's the current
> >> limitation. At the end, though, you will have 4 shards when
> >> you used to have 2 and you can move them around to whatever
> >> hardware you can scrape up.
> >>
> >> This assumes you're using the default compositeId routing
> >> scheme and not implicit routing. If you are using compositeId,
> >> there is no provision to add another shard.
> >>
> >> As far as SOLR-5025 is concerned, nobody's working on that
> >> that I know of.
> >>
> >> I have to ask though whether you've tuned your existing
> >> machines. How many docs are on each? Why do you think
> >> you need more shards? Query speed? OOMs? Java heaps
> >> getting too big?
> >>
> >> Best,
> >> Erick
> >>
> >> On Fri, Apr 15, 2016 at 10:50 PM, Jay Potharaju <jspothar...@gmail.com>
> >> wrote:
> >> > I found ticket https://issues.apache.org/jira/browse/SOLR-5025, which
> >> > talks about sharding in solrcloud. Are there any plans to address
> >> > this issue in the near future?
> >> > Can any of the users on the forum comment on how they are handling this
> >> > scenario in production?
> >> > Thanks
> >> >
> >> > On Fri, Apr 15, 2016 at 4:28 PM, Jay Potharaju <jspothar...@gmail.com>
> >> > wrote:
> >> >
> >> >> Hi,
> >> >> I have an existing collection which has 2 shards, one on each node
> >> >> in the cloud. Now I want to split the existing collection into 3
> >> >> shards because of an increase in the volume of data, and create
> >> >> this new shard on a new node in the solrCloud.
> >> >>
> >> >> I read about splitting a shard & creating a shard, but I am not
> >> >> sure it will work.
> >> >>
> >> >> Any suggestions on how others are handling this scenario in production?
> >> >> --
> >> >> Thanks
> >> >> Jay
> >> >>
> >> >>
> >> >
> >> >
> >> >
> >> > --
> >> > Thanks
> >> > Jay Potharaju
> >>
> >
> >
> >
> > --
> > Thanks
> > Jay Potharaju
>



-- 
Thanks
Jay Potharaju
