: What I'm finding is that now and then base_url for the replica in
: state.json is set to the internal IP of the AWS node, i.e.:
:
: "base_url":"http://10.29.XXX.XX:8983/solr",
:
: On other attempts it's set to the public DNS name of the node:
:
: "base_url":"http://ec2_host:8983/solr",
:
: In my /etc/defaults/solr.in.sh I have:
:
: SOLR_HOST="ec2_host"
:
: which I thought is what I needed to get the public DNS name set in base_url.
I believe you are correct. The "now and then" part of your question is the odd bit -- it seems to indicate that sometimes the "correct" thing is happening, and other times it is not.

/etc/defaults/solr.in.sh isn't the canonical path for solr.in.sh according to the docs/install script for running a production Solr instance...

https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production#TakingSolrtoProduction-ServiceInstallationScript

...so how *exactly* are you running Solr on all of your nodes? My guess is that you've got some kind of inconsistent setup where sometimes when you start up (or restart) a node it reads your solr.in.sh file, and other times it does not -- so sometimes Solr never sees your SOLR_HOST option. In those cases, when it registers itself with ZooKeeper it uses the current IP as a fallback, and that info gets baked into the metadata for the replicas that get created on that node at that point in time.

FWIW, you should be able to spot check that SOLR_HOST is being applied correctly by looking at the java process command line args (using ps, or loading the Solr UI in your browser) and checking for the "-Dhost=..." option -- if it's not there, then your solr.in.sh probably wasn't read in correctly.

-Hoss
http://www.lucidworks.com/
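
As a rough sketch of that spot check (the exact ps/grep incantation is just an illustration; adjust for your platform), running something like this on a node should print the option if it was picked up:

    ps auxww | grep '[j]ava' | grep -o '\-Dhost=[^ ]*'

If solr.in.sh was read in, you should see something like "-Dhost=ec2_host" in the output; if nothing prints, the Solr start script never saw your SOLR_HOST setting on that node.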