I am trying to sort out what updating a relatively simple SolrCloud 4.1
deployment (one shard, 500 collections, 2 replicas per collection) looks like.
From experience and from reading other accounts, just restarting both Solr
instances is a coin toss - both instances get tied up trying to recover.
Solr 4.7.1
I am trying to orchestrate a fast restart of a SolrCloud (4.7.1) cluster. I was
hoping that clusterstate.json would reflect the up/down state of each core, as
well as whether or not a given core is the leader. clusterstate.json is not
kept in sync with what I actually see going on in my logs, though.
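For reference, this is roughly how I was planning to poll it - reading
/clusterstate.json straight out of ZooKeeper. Rough, untested sketch; kazoo and
the ZooKeeper address here are just placeholders for whatever tooling we end up
using:

    # Sketch: dump per-replica state and leader flag from the 4.x
    # single-file /clusterstate.json (collection -> shards -> replicas).
    import json
    from kazoo.client import KazooClient   # third-party: pip install kazoo

    zk = KazooClient(hosts="localhost:2181")   # placeholder ZK address
    zk.start()
    data, _stat = zk.get("/clusterstate.json")
    state = json.loads(data.decode("utf-8"))
    zk.stop()

    for coll, coll_state in state.items():
        for shard, shard_state in coll_state.get("shards", {}).items():
            for replica, info in shard_state.get("replicas", {}).items():
                print(coll, shard, replica,
                      info.get("state"),    # active / recovering / down ...
                      "LEADER" if info.get("leader") == "true" else "")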
I see something similar where, given ~1000 shards, both nodes spend a LOT of
time sorting through the leader election process. Roughly 30 minutes.
I too am wondering - if I force all leaders onto one node, then shut down both,
then start up the node with all of the leaders on it first, then start the
other node, will the cluster come back up faster?
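To check whether the leaders really did all end up on one node before shutting
anything down, I had something like this in mind (untested sketch; it assumes a
local dump of clusterstate.json in the 4.x layout):

    # Count leaders per node from a dumped clusterstate.json
    # (collection -> shards -> replicas; the leader flag is the string "true").
    import json
    from collections import Counter

    with open("clusterstate.json") as f:    # e.g. dumped with zkCli.sh
        state = json.load(f)

    leaders = Counter()
    for coll_state in state.values():
        for shard_state in coll_state.get("shards", {}).values():
            for info in shard_state.get("replicas", {}).values():
                if info.get("leader") == "true":
                    leaders[info.get("node_name")] += 1

    for node, count in leaders.most_common():
        print(node, count)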
Shawn Heisey-4 wrote
> What are you trying to achieve with your restart? Can you just reload
> the collections one by one instead?
We restart when we update a handler, schema, or solrconfig for our cores.
I've tried just shutting down both nodes, updating both, and restarting
both. With 1,000 cores, that takes a very long time to settle.
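Reloading them one by one, as you suggest, would look something like this on
our side - rough sketch, with placeholder host/port, and assuming the updated
config has already been re-uploaded to ZooKeeper:

    # Sketch: RELOAD every collection, one at a time, via the Collections API.
    # Collection names are taken from a local dump of clusterstate.json.
    import json
    from urllib.request import urlopen

    SOLR = "http://localhost:8983/solr"          # placeholder host/port

    with open("clusterstate.json") as f:         # top-level keys = collections
        collections = sorted(json.load(f).keys())

    for name in collections:
        url = "%s/admin/collections?action=RELOAD&name=%s&wt=json" % (SOLR, name)
        resp = json.loads(urlopen(url).read())
        print(name, resp.get("responseHeader", {}).get("status"))

Whether a reload alone actually picks up all of our changes, and so replaces
the full restart, is what we would still have to verify.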
Shawn Heisey-4 wrote
> I can envision two issues for you to file in Jira. The first would be
> an Improvement issue, the second would be a Bug:
>
> * SolrCloud: Add API to move leader off a Solr instance
> * SolrCloud: LotsOfCollections takes a long time to stabilize
I've created:
* SOLR-5990 -
I was actually going to try orchestrating SolrCloud restart myself using
loadOnStartup="false".
Did you pursue this any further?
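Concretely, with core discovery that would just be a line in each core's
core.properties, something like the below (the core name is made up, and I
haven't verified how lazily loaded cores behave under SolrCloud):

    # core.properties, one per core
    name=mycollection_shard1_replica1
    loadOnStartup=false
    # optionally transient=true if the core may be unloaded again after use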
With Solr 4.7.1...
I found that the core admin LOAD, RELOAD, and CREATE commands do not take a
"down" replica to "active". What I've found so far is that I can start up a
collection, but only with workarounds.
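The only per-core call I know of that explicitly asks a replica to go back
through recovery is CoreAdmin's REQUESTRECOVERY; I haven't confirmed whether it
actually moves a "down" replica to "active" in this situation, but the call
itself is just (placeholder host and core name):

    # Untested sketch: ask one core to re-enter recovery via the CoreAdmin API.
    from urllib.request import urlopen

    core = "mycollection_shard1_replica2"        # placeholder core name
    url = ("http://localhost:8983/solr/admin/cores"
           "?action=REQUESTRECOVERY&core=%s&wt=json" % core)
    print(urlopen(url).read())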
You're right, we're basically working around inherent problems. SolrCloud and
large numbers of cores is not a combination that yields reliable restarts.
Even under the best of conditions - a completely silent (no updates, no
selects) environment - if I restart two nodes, each containing ~800
replicas, it still takes a very long time for the cluster to stabilize.
So I have basic master/slave replication set up with Solr 4.1. After startup,
however, no replication happens. Here is the relevant part of my solrconfig.xml:
  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
      <str name="enable">${enable.master:false}</str>
      <str name="replicateAfter">startup</str>
      <str name="replicateAfter">commit</str>
      <str name="commitReserveDuration">00:00:10</str>
    </lst>
    <lst name="slave">
      <str name="enable">${enable.slave:false}</str>
      <str name="masterUrl">http://localhost:19081/solr/$</str>
    </lst>
  </requestHandler>
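Given the ${enable.master:false} / ${enable.slave:false} defaults above, both
sides stay disabled unless the corresponding property is set when Solr starts,
e.g. (standard 4.x jetty start, other options omitted):

    java -Denable.master=true -jar start.jar    # on the master
    java -Denable.slave=true -jar start.jar     # on the slave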