Sounds like you should file 3 JIRA issues. At a glance, they all look like legit 
stuff we should dig into.

-- 
Mark Miller
about.me/markrmiller

On August 24, 2014 at 12:35:13 PM, ralph tice (ralph.t...@gmail.com) wrote:
> Hi all,
> 
> Two issues. First, when I issue an ADDREPLICA call like so:
> 
> 
> http://localhost:8983/solr/admin/collections?action=ADDREPLICA&shard=myshard&collection=mycollection&createNodeSet=solr18.mycorp.com:8983_solr
> 
> It does not seem to respect the 8983_solr designation in the createNodeSet
> parameter and instead places the replica on any JVM on the node. On the first
> attempt I got a replica on 8994_solr, and a second attempt to place a replica
> on 8983 put it on 8992_solr instead.
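> 
> (For completeness, the Collections API also documents a "node" parameter for
> ADDREPLICA, so the equivalent call would look something like:
> 
> http://localhost:8983/solr/admin/collections?action=ADDREPLICA&shard=myshard&collection=mycollection&node=solr18.mycorp.com:8983_solr
> 
> I haven't confirmed whether that code path places the replica any differently,
> so treat it only as a point of comparison.)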
> 
> As an aside, is there any particular reason why DELETEREPLICA asks for the
> ZK "shard id" (node_###) instead of the same syntax as createNodeSet? I
> can't recall any other instance in which the ZK "shard id" is exposed via
> query parameter and I've only ever seen it in clusterstate.json /
> CLUSTERSTATUS calls.
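> 
> (For comparison, the DELETEREPLICA call I mean looks something like this, with
> placeholder collection/shard names, and with core_node3 being the ZK core node
> name I have to dig out of clusterstate.json beforehand:
> 
> http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycollection&shard=myshard&replica=core_node3
> 
> so the two calls identify the target replica in two different ways.)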
> 
> The 2nd issue is as follows:
> 
> I am running Solr built off branch_4x, and thanks to some help from IRC we've
> determined that we have an incompatible index situation: indexes built with
> 4.9 that we can read but not index into or update further. Understandable, and
> going forward we don't intend to run off of master. In this situation, if I
> try to add a replica, that also fails; however, the only log output (at WARN
> threshold) is:
> 
> 16:21:58.156 [RecoveryThread] WARN org.apache.solr.update.PeerSync - no
> frame of reference to tell if we've missed updates
> 
> ...and the replica comes up green. I think this might indicate a missing
> integrity check on replication, but certainly, IMO, a replica should not
> report as green/active if it is not on the same revision as the leader, or at
> least not if it has never been on the same revision as the leader.
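> 
> (One rough way to check this by hand -- assuming the usual
> collection_shardN_replicaN core naming, which is a guess on my part -- is to
> ask the replication handler on the leader and on the new replica what they
> actually have on disk:
> 
> http://solr18.mycorp.com:8983/solr/mycollection_shard1_replica1/replication?command=indexversion
> 
> A replica that never actually pulled anything from the leader should
> presumably report an essentially empty indexversion/generation there, but
> that's a manual check, not the integrity check I'd hope replication itself
> would do.)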
> 
> Thanks for any assistance/validation/advice,
> 
> --Ralph
> 
