A little more data. Note that the cloud status shows the black bubble for a leader. See http://i.imgur.com/k2MhGPM.png.
org.apache.solr.common.SolrException: No registered leader was found after waiting for 4000ms , collection: rni slice: shard4
	at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:568)
	at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:551)
	at org.apache.solr.update.processor.DistributedUpdateProcessor.doDeleteByQuery(DistributedUpdateProcessor.java:1358)
	at org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:1226)
	at org.apache.solr.update.processor.UpdateRequestProcessor.processDelete(UpdateRequestProcessor.java:55)
	at org.apache.solr.update.processor.LogUpdateProcessor.processDelete(LogUpdateProcessorFactory.java:121)
	at org.apache.solr.update.processor.UpdateRequestProcessor.processDelete(UpdateRequestProcessor.java:55)

On Wed, Feb 25, 2015 at 9:44 AM, Benson Margulies <bimargul...@gmail.com> wrote:
> On Wed, Feb 25, 2015 at 8:04 AM, Shawn Heisey <apa...@elyograg.org> wrote:
>> On 2/25/2015 5:50 AM, Benson Margulies wrote:
>>> So, found the following line in the guide:
>>>
>>>   java -DzkRun -DnumShards=2
>>>     -Dbootstrap_confdir=./solr/collection1/conf
>>>     -Dcollection.configName=myconf -jar start.jar
>>>
>>> using a completely clean, new solr_home.
>>>
>>> In my own bootstrap dir, I have my own solrconfig.xml and schema.xml,
>>> and I modified the command to have:
>>>
>>>   -DnumShards=8 -DmaxShardsPerNode=8
>>>
>>> When I went to start loading data into this, I failed:
>>>
>>> Caused by: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:
>>> No registered leader was found after waiting for 4000ms , collection: rni slice: shard4
>>> 	at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:554)
>>> 	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
>>> 	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
>>> 	at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
>>> 	at org.apache.solr.client.solrj.SolrServer.deleteByQuery(SolrServer.java:285)
>>> 	at org.apache.solr.client.solrj.SolrServer.deleteByQuery(SolrServer.java:271)
>>> 	at com.basistech.rni.index.internal.SolrCloudEvaluationNameIndex.<init>(SolrCloudEvaluationNameIndex.java:53)
>>>
>>> with corresponding log traffic in the Solr log.
>>>
>>> The cloud page in the Solr admin app shows the IP address in green.
>>> It's a bit hard to read in general; it's all squished up to the top.
>>
>> The way I would do it would be to start Solr *only* with the zkHost
>> parameter. If you're going to use embedded zookeeper, I guess you would
>> use zkRun instead.
>>
>> Once I had Solr running in cloud mode, I would upload the config to
>> zookeeper using zkcli, and create the collection using the Collections
>> API, including things like numShards and maxShardsPerNode on that CREATE
>> call, not as startup properties. Then I would completely reindex my
>> data into the new collection. It's a whole lot cleaner than trying to
>> convert non-cloud to cloud and split shards.
>
> Shawn, I _am_ starting from clean.
> However, I didn't find a recipe for what you suggest as a process, and
> (following Hoss' suggestion) I found the recipe above with the
> bootstrap_confdir scheme.
>
> I am mostly confused as to how I supply my solrconfig.xml and
> schema.xml when I follow the process you are suggesting. I know I'm
> verging on vampirism here, but if you could possibly find the time to
> turn your paragraph into either a pointer to a recipe or the command
> lines in a bit more detail, I'd be exceedingly grateful.
>
> Thanks,
> benson
>
>> Thanks,
>> Shawn
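For reference, the process Shawn describes can be sketched roughly as below. This is an illustrative sketch, not a tested recipe: it assumes Solr is already running in cloud mode on port 8983 with -DzkRun (whose embedded ZooKeeper listens on port 9983, i.e. the Solr port plus 1000), and the paths to zkcli.sh and the conf directory will vary by Solr version and layout. The collection name "rni" and config name "myconf" are taken from the thread above; replicationFactor=1 is an assumption.

```shell
# Step 1: upload the config set (solrconfig.xml, schema.xml, etc.) to
# ZooKeeper with the zkcli script that ships with Solr. The script's
# location varies by release (e.g. example/scripts/cloud-scripts/).
./zkcli.sh -zkhost localhost:9983 \
    -cmd upconfig -confdir ./solr/collection1/conf -confname myconf

# Step 2: create the collection via the Collections API, passing the
# shard parameters on the CREATE call instead of as startup properties.
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=rni&numShards=8&maxShardsPerNode=8&replicationFactor=1&collection.configName=myconf'
```

After the CREATE call succeeds, indexing would go to the new "rni" collection directly; no bootstrap_confdir or numShards startup properties are needed.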