Out of curiosity, why are you manually deleting nodes in zookeeper?

It's always seemed to me that the majority (though definitely not all)
of the modifications needed during normal operations can be done
through Solr's APIs.
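For example, a startup script can ask the Collections API whether a
collection already exists before (re)creating it, instead of inferring
state from ZooKeeper by hand. A minimal sketch — the hostname, port,
and sample JSON below are illustrative, not taken from this thread:

```shell
# Hypothetical endpoint -- adjust to your cluster.
SOLR="http://localhost:8983/solr"

# The Collections API LIST action returns JSON of the form:
#   {"responseHeader":{...},"collections":["test","other"]}
# On a live cluster you would fetch it with:
#   curl "$SOLR/admin/collections?action=LIST"

# Crude existence check: look for the quoted collection name in the
# LIST response. Good enough for a sketch; a real script should use a
# proper JSON parser such as jq.
collection_exists() {
  # $1 = LIST response JSON, $2 = collection name
  printf '%s' "$1" | grep -q "\"$2\""
}

# Illustrative response, as if returned by the LIST call above:
sample='{"responseHeader":{"status":0},"collections":["test","other"]}'

if collection_exists "$sample" "test"; then
  echo "test already exists"   # prints this for the sample above
else
  echo "creating test"         # only now would you call action=CREATE
fi
```

Only creating the collection when the check fails avoids the failure
mode described further down in the thread, where re-creating a name
whose state was lost from ZooKeeper wipes the existing index.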

Thanks,
Chris

On Thu, Jan 10, 2019 at 12:04 AM Yogendra Kumar Soni <
yogendra.ku...@dolcera.com> wrote:

> I have an existing collection
> http://10.2.12.239:11080/solr/test/select?q=*:*&rows=0
>
> {
>   "responseHeader":{
>     "zkConnected":true,
>     "status":0,
>     "QTime":121,
>     "params":{
>       "q":"*:*",
>       "rows":"0"}},
>   "response":{"numFound":150,"start":0,"maxScore":1.0,"docs":[]
>   }}
>
> ls data?/index/shard?/
>
> data1/index/shard1:
> test_shard2_replica_n2  solr
>
> data1/index/shard2:
> test_shard4_replica_n5
>
> data2/index/shard1:
> test_shard3_replica_n3
>
> data2/index/shard2:
> test_shard1_replica_n1
>
> ...
>
>
>
> 1. deleted /collections from zookeeper
>
> bin/solr zk rm -r /collections -z localhost:2181
>
> 2. restart solr cloud
>
> bin/solr stop -all
>
> bin/solr start -c -s data1/index/shard1/ -p 11080 -z localhost:2181
> bin/solr start -c -s data1/index/shard2/ -p 12080 -z localhost:2181
> bin/solr start -c -s data2/index/shard1/ -p 13080 -z localhost:2181
> bin/solr start -c -s data2/index/shard2/ -p 14080 -z localhost:2181
> bin/solr start -c -s data3/index/shard1/ -p 15080 -z localhost:2181
> bin/solr start -c -s data3/index/shard2/ -p 16080 -z localhost:2181
> bin/solr start -c -s data4/index/shard1/ -p 17080 -z localhost:2181
> bin/solr start -c -s data4/index/shard2/ -p 18080 -z localhost:2181
>
> 3. checked again for the data
>
> ls data?/index/shard?/
>
>
> data1/index/shard1/:
>
> data1/index/shard2/:
>
> data2/index/shard1/:
>
> data2/index/shard2/:
>
> data3/index/shard1/:
>
> data3/index/shard2/:
>
> data4/index/shard1/:
>
> data4/index/shard2/:
>
>
> All cores are wiped
>
>
>
>
> On Wed, Jan 9, 2019 at 11:39 PM lstusr 5u93n4 <lstusr...@gmail.com> wrote:
>
> > We've seen the same thing on solr 7.5 by doing:
> >  - create a collection
> >  - add some data
> >  - stop solr on all servers
> >  - delete all contents of the solr node from zookeeper
> >  - start solr on all nodes
> >  - create a collection with the same name as in the first step
> >
> > When doing this, solr wipes out the previous collection data and starts
> > fresh.
> >
> > In our case, this was due to a startup script that checked for the
> > existence of a collection and created it if non-existent.  When not
> > present in ZK, solr (as it should) didn't return that collection in its
> > list of collections, so we created it...
> >
> > Possible that you have something similar in your workflow?
> >
> > Kyle
> >
> > On Wed, 9 Jan 2019 at 10:22, Erick Erickson <erickerick...@gmail.com>
> > wrote:
> >
> > > Solr doesn't just remove directories, this is very likely
> > > something in your environment that's doing this.
> > >
> > > In any case, there's no information here to help
> > > diagnose. You must tell us _exactly_ what steps
> > > you take in order to have any hope of helping.
> > >
> > > Best,
> > > Erick
> > >
> > > On Wed, Jan 9, 2019 at 2:48 AM Yogendra Kumar Soni
> > > <yogendra.ku...@dolcera.com> wrote:
> > > >
> > > > We are running a solr cloud cluster using solr 7.4 with 8 shards.
> > > > When we started our solr cloud with a zookeeper node (with only
> > > > solr.xml and the configs, but without the collections directory),
> > > > our data directory containing core.properties and the cores' data
> > > > became empty.
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > *Thanks and Regards,*
> > > > *Yogendra Kumar Soni*
> > >
> >
>
>
> --
> *Thanks and Regards,*
> *Yogendra Kumar Soni*
>
