On 7/4/2016 7:46 AM, Lorenzo Fundaró wrote:
> I am trying to run Solr on my infrastructure using docker containers
> and Mesos. My problem is that I don't have a shared filesystem. I have
> a cluster of 3 shards and 3 replicas (9 nodes in total) so if I
> distribute well my nodes I always have 2 fallbacks of my data for
> every shard. Every solr node will store the index in its internal
> docker filesystem. My problem is that if I want to relocate a certain
> node (maybe an automatic relocation because of a hardware failure), I
> need to create the core manually in the new node because it's
> expecting to find the core.properties file in the data folder and of
> course it won't because the storage is ephemeral. Is there a way to
> make a new node join the cluster with no manual intervention ?
The things you're asking about sound like SolrCloud. The rest of this message assumes that you're running cloud. If you're not, then we may need to start over.

When you start a new node, it automatically joins the cluster described by the Zookeeper database that you point it to.

SolrCloud will NOT automatically create replicas when a new node joins the cluster. There's no way for SolrCloud to know what you actually want to use that new node for, so anything it did automatically might be completely the wrong thing.

Once you add a new node, you can replicate existing data to it with the ADDREPLICA action on the Collections API:

https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api_addreplica

If the original problem was a down node, you might also want to use the DELETEREPLICA action to delete any replicas on the lost node that are marked down:

https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api9

Creating cores manually in your situation is not advisable. The CoreAdmin API should not be used when you're running in cloud mode.

Thanks,
Shawn
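P.S. As a rough sketch only (the host, collection, shard, node, and replica names below are made up for illustration, not taken from your setup), the two Collections API calls would look something like this:

http://anynode:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=newhost:8983_solr

http://anynode:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycollection&shard=shard1&replica=core_node5

The first call asks SolrCloud to build a new copy of shard1 on the node registered as newhost:8983_solr; the second removes the registration for a replica that no longer exists. You can find the core_nodeN replica names in the output of the CLUSTERSTATUS action.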