I had that problem. It's very annoying, and we should probably require a 
special flag to use localhost.

We need to start Solr like this:

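# -c runs in SolrCloud mode; -h sets the hostname Solr publishes to ZooKeeper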
./solr start -c -h `hostname`

If anybody ever forgets, we get a 127.0.0.1 node that shows as down in 
cluster status. I have no idea how to get rid of it.
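
To see where it got registered, the Collections API cluster status works; 
something like this (host and port assumed):

  curl "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS"

The response lists every replica with its state, which is where that down 
127.0.0.1 entry shows up.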

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Mar 29, 2018, at 7:46 AM, Shawn Heisey <apa...@elyograg.org> wrote:
> 
> On 3/29/2018 8:25 AM, Abhi Basu wrote:
>> "Operation create caused
>> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>> Cannot create collection ems-collection. Value of maxShardsPerNode is 1,
>> and the number of nodes currently live or live and part of your
> 
> I'm betting that all your nodes are registering themselves with the same 
> name, and that name is probably either 127.0.0.1 or 127.0.1.1 -- an address 
> on the loopback interface.
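
A quick way to check what the hostname resolves to, assuming a Linux box:

  getent hosts `hostname`

If that prints 127.0.0.1 or 127.0.1.1, every node will register itself on 
the loopback interface.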
> 
> Usually this problem (on an OS other than Windows, at least) is caused by an 
> incorrect /etc/hosts file that maps your hostname to a loopback address 
> instead of a real address.
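
For example, a Debian-style /etc/hosts that triggers this might look like 
(hostname and addresses assumed):

  127.0.0.1     localhost
  127.0.1.1     solr1.example.com solr1

Pointing the second entry at the machine's real address fixes the 
registration:

  192.168.1.10  solr1.example.com solr1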
> 
> You can override the value that SolrCloud uses to register itself in 
> ZooKeeper so it doesn't depend on the OS configuration.  In solr.in.sh, 
> this is the SOLR_HOST variable, which gets translated into -Dhost=XXX on 
> the java command line.  It can also be configured in solr.xml.
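
A minimal sketch of both spots, hostname assumed:

  # solr.in.sh
  SOLR_HOST="solr1.example.com"

  <!-- solr.xml: the stock file reads the same system property -->
  <solrcloud>
    <str name="host">${host:}</str>
  </solrcloud>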
> 
> Thanks,
> Shawn
> 
