“Take backups” is whatever you need for your environment. In AWS, we snapshot
the EBS volumes, and so on.
Backing up the Solr install and home directories would be good. There are some
core.properties files in there that seem to be useful. Honestly, I don't have a
complete handle on the details.
Hi,
Yes, ZooKeeper is external, and yes, we'll definitely wait until after Solr
has stopped to bring it down.
Thanks for the tip about disabling `autoAddReplicas`; we definitely don't
want the shards moving around during the process.
Wunder, your point 3 mentions "take backups". Given that our d
If you are using a recent Solr 7.x version with collections that have
autoAddReplicas=true, you should disable the auto-add-replicas feature
before powering off, so that Solr does not decide to move replicas around
because nodes have been lost. See
https://lucene.apache.org/solr/guide/7_4/solrc
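A sketch of how that could look with the Collections API's MODIFYCOLLECTION action; the host, port, and collection name here are placeholders for your environment, not values from this thread:

```shell
# Hedged sketch -- adjust host, port, and collection name for your cluster.
# Disable autoAddReplicas on the collection before the planned shutdown:
curl "http://localhost:8983/solr/admin/collections?action=MODIFYCOLLECTION&collection=mycollection&autoAddReplicas=false"

# After the move, once every node has rejoined the cluster, turn it back on:
curl "http://localhost:8983/solr/admin/collections?action=MODIFYCOLLECTION&collection=mycollection&autoAddReplicas=true"
```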
I agree.
1. Shut down each Solr server process using the “bin/solr” script.
2. Shut down the Zookeeper ensemble.
3. Take backups.
4. Shut down the OS.
Do that in reverse to get going.
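On each box, the four steps might look something like the following. This is a sketch, not a tested runbook: the install paths, backup destination, and per-environment backup command are assumptions.

```shell
# Hedged sketch of the shutdown sequence -- paths are assumptions.
# 1. Stop all Solr processes on this host gracefully:
/opt/solr/bin/solr stop -all

# 2. Stop the external ZooKeeper ensemble, one node at a time,
#    only after every Solr instance is down:
/opt/zookeeper/bin/zkServer.sh stop

# 3. Take backups appropriate to your environment, e.g. snapshot the
#    volumes, or archive the Solr install and home directories:
tar czf /backup/solr-home.tgz /var/solr

# 4. Shut down the OS:
shutdown -h now
```

To start back up, reverse the order: power on, start ZooKeeper on every ensemble node (`zkServer.sh start`), then start Solr (`bin/solr start`).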
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Oct 30, 2018, at
bin/solr stop
As long as you don't kill it with extreme prejudice (e.g. kill -9 or
pulling the plug) it should be fine. Assuming you're running ZooKeeper
in an external ensemble, I'd certainly stop those after all the Solr
instances were stopped.
Powering the nodes up is irrelevant to Solr, the bin/
Hi All,
We have a SolrCloud cluster with 3 shards on 3 hosts, 6 NRT replicas in
total, and the data directory on HDFS. The index has 950 million documents,
occupying 700GB of disk space.
We need to completely power off the system to move it.
Are there any actions we should take on shutdown to help t