I think the issue is that the distributed update processor isn't configured in your custom update chain.
It's what SolrCloud uses to forward documents to the other replicas.
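A sketch of what the chain could look like with the distributed processor added explicitly (the WiT processor and its parameters are copied from the config quoted below; the part that matters is solr.DistributedUpdateProcessorFactory appearing before solr.RunUpdateProcessorFactory):

   <updateRequestProcessorChain name="WitStandardUpdater">
     <processor class="WiT.ir.solrcomponents.WitStandardUpdateProcessorFactory">
       <str name="urlParam">url</str>
       <str name="batchStatusParam">batchStatus</str>
       <str name="successStatusStr">0 </str>
       <str name="failStatusStr">1 </str>
       <str name="enabled">true</str>
     </processor>
     <processor class="solr.LogUpdateProcessorFactory" />
     <processor class="solr.DistributedUpdateProcessorFactory" />
     <processor class="solr.RunUpdateProcessorFactory" />
   </updateRequestProcessorChain>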

   Erik 

> On Feb 21, 2016, at 15:20, Erick Erickson <erickerick...@gmail.com> wrote:
> 
> Why are you using old-style replication with SolrCloud? I suggest you turn
> it off and just let SolrCloud do all the work. The reason (probably) that
> restarting catches things up is that it may be doing a full replication. Mixing
> SolrCloud with old-style replication is tricky, so please explain what the
> reasoning is...
> 
> Best
> Erick
>> On Feb 21, 2016 21:24, "Ilan Schwarts" <ila...@gmail.com> wrote:
>> 
>> Hi, we had a running Solr 4.3.1 with 1 core and no replication.
>> We are migrating to SolrCloud 5.2.1 with 2 shards; each shard has 1
>> leader and 1 replica, 4 nodes in total. The replication is not working.
>> I have updated solrconfig.xml and schema.xml. When I add a document I
>> can retrieve it, so it is being added,
>> but it is not being replicated to the replica node.
>> [image: Inline image 1]
>> 
>> This is the cluster, and this is what I see in the cloud state.json under
>> collection1:
>> 
>> {"collection1":{
>>    "replicationFactor":"2",
>>    "shards":{
>>      "shard1":{
>>        "range":"80000000-ffffffff",
>>        "state":"active",
>>        "replicas":{
>>          "core_node3":{
>>            "core":"collection1_shard1_replica2",
>>            "base_url":"http://10.171.3.106:8984/solr",
>>            "node_name":"10.171.3.106:8984_solr",
>>            "state":"active",
>>            "leader":"true"},
>>          "core_node4":{
>>            "core":"collection1_shard1_replica1",
>>            "base_url":"http://10.171.3.106:8986/solr",
>>            "node_name":"10.171.3.106:8986_solr",
>>            "state":"active"}}},
>>      "shard2":{
>>        "range":"0-7fffffff",
>>        "state":"active",
>>        "replicas":{
>>          "core_node1":{
>>            "core":"collection1_shard2_replica1",
>>            "base_url":"http://10.171.3.106:8983/solr",
>>            "node_name":"10.171.3.106:8983_solr",
>>            "state":"active",
>>            "leader":"true"},
>>          "core_node2":{
>>            "core":"collection1_shard2_replica2",
>>            "base_url":"http://10.171.3.106:8985/solr",
>>            "node_name":"10.171.3.106:8985_solr",
>>            "state":"active"}}}},
>>    "router":{"name":"compositeId"},
>>    "maxShardsPerNode":"1",
>>    "autoAddReplicas":"false"}}
>> 
>> 
>> 
>> What is weird: if I stop all Solr cores and then start them again,
>> everything gets synced and the documents are on both nodes.
>> 
>> I am using a custom update handler; maybe the problem is there? I have
>> set it up as before:
>> 
>> 
>> *Custom update handler:*
>>   <requestHandler name="/witupdate" class="solr.UpdateRequestHandler">
>>     <lst name="defaults">
>>       <str name="update.chain">WitStandardUpdater</str>
>>     </lst>
>>   </requestHandler>
>>   <updateRequestProcessorChain name="WitStandardUpdater" default="false">
>>     <processor class="WiT.ir.solrcomponents.WitStandardUpdateProcessorFactory">
>>       <str name="urlParam">url</str>
>>       <str name="batchStatusParam">batchStatus</str>
>>       <str name="successStatusStr">0 </str>
>>       <str name="failStatusStr">1 </str>
>>       <str name="enabled">true</str>
>>     </processor>
>>     <processor class="solr.RunUpdateProcessorFactory" />
>>     <processor class="solr.LogUpdateProcessorFactory" />
>>   </updateRequestProcessorChain>
>>   <queryResponseWriter name="tcp"
>>       class="WiT.ir.solrcomponents.TcpResponseWriter">
>>     <str name="hostParam">host</str>
>>     <str name="portParam">port</str>
>>     <str name="queryIdParam">queryId</str>
>>   </queryResponseWriter>
>> 
>> --
>> 
>> 
>> -
>> Ilan Schwarts
>> 
