Hi,

I want to configure one core as a master and the other as a slave.
This is my existing configuration:

In my SOLR_HOME I have conf/schema.xml, conf/solrconfig.xml, and the other
files that were there before any core existed. SOLR_HOME also contains
solr.xml and coreA, which I created with the CoreAdmin CREATE command.

My other core, coreB, keeps its index in a different dataDir.
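
For reference, my solr.xml is roughly the following (the dataDir value is a
placeholder, not my real path):

<solr persistent="true">
  <cores adminPath="/admin/cores">
    <!-- both cores currently share {SOLR_HOME}/conf -->
    <core name="coreA" instanceDir="." />
    <!-- placeholder dataDir; the real path differs -->
    <core name="coreB" instanceDir="." dataDir="/path/to/coreB/data" />
  </cores>
</solr>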

I believe that in this configuration both cores share the same schema.xml
and solrconfig.xml. I added the master/slave replication configuration to my
{SOLR_HOME}/conf/solrconfig.xml:

<requestHandler name="/replication" class="solr.ReplicationHandler" >
    <lst name="master">
        <!-- Replicate on 'startup' and 'commit'. 'optimize' is also a valid
             value for replicateAfter. -->
        <!-- <str name="replicateAfter">startup</str> -->
        <str name="replicateAfter">optimize</str>
        <!-- Create a backup after 'optimize'. Other values can be 'commit' or
             'startup'. It is possible to have multiple entries of this config
             string. Note that this is just for backup; replication does not
             require it. -->
        <!-- <str name="backupAfter">optimize</str> -->
        <!-- If configuration files need to be replicated, give the names here,
             separated by commas. -->
        <!-- <str name="confFiles">schema.xml,stopwords.txt,elevate.xml</str> -->
        <!-- The default reservation is 10 seconds; see the documentation below.
             Normally you should not need to specify this. -->
        <str name="commitReserveDuration">00:00:10</str>
    </lst>
</requestHandler>


Just below that, I specified the slave section in a second /replication
handler declaration:

<requestHandler name="/replication" class="solr.ReplicationHandler" >
    <lst name="slave">
        <!-- Fully qualified URL for the replication handler of the master. It
             is possible to pass this as a request param for the fetchindex
             command. -->
        <str name="masterUrl">{specified the instanceDir}/coreA/replication</str>
        <!-- Interval at which the slave should poll the master. Format is
             HH:mm:ss. If this is absent, the slave does not poll automatically,
             but a fetchindex can be triggered from the admin UI or the HTTP
             API. -->
        <!-- <str name="pollInterval">00:00:20</str> -->

        <!-- THE FOLLOWING PARAMETERS ARE USUALLY NOT REQUIRED -->
        <!-- To use compression while transferring the index files. The
             possible values are internal|external. If the value is 'external',
             make sure that your master Solr has the settings to honour the
             Accept-Encoding header; see
             http://wiki.apache.org/solr/SolrHttpCompression for details.
             If it is 'internal', everything is taken care of automatically.
             USE THIS ONLY IF YOUR BANDWIDTH IS LOW; IT CAN ACTUALLY SLOW DOWN
             REPLICATION ON A LAN. -->
        <str name="compression">internal</str>
        <!-- The following values are used when the slave connects to the
             master to download the index files. Defaults are implicitly 5000ms
             and 10000ms respectively; you do not need to specify these unless
             the bandwidth is extremely low or the latency extremely high. -->
        <str name="httpConnTimeout">5000</str>
        <str name="httpReadTimeout">10000</str>
        <!-- If HTTP Basic authentication is enabled on the master, the slave
             can be configured with the following. -->
        <str name="httpBasicAuthUser">username</str>
        <str name="httpBasicAuthPassword">password</str>
    </lst>
</requestHandler>

When I optimize coreA, replication to coreB doesn't happen: coreA (which is
supposed to be the master here) gets the new data, but coreB does not. When
I tried the *startup* option in the first replication block, I got a Lucene
write error on the index, so I went with *optimize* instead.
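
(I know from the comments in the config that a fetchindex can be triggered
manually, presumably via something like
http://host:port/solr/coreB/replication?command=fetchindex, but I want
replication to happen automatically.)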

Is there something wrong here, or do I need a separate solrconfig.xml for
coreA and coreB, each including only one of the replication sections, to
clearly indicate which core is the master and which is the slave, rather
than having a common solrconfig.xml that specifies both?
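
On http://wiki.apache.org/solr/SolrReplication I also noticed an "enable"
property that apparently lets one shared solrconfig.xml carry both sections,
with the role switched per core through properties (for example in each
core's solrcore.properties). My rough, untested understanding is something
like this:

<requestHandler name="/replication" class="solr.ReplicationHandler" >
    <lst name="master">
        <!-- set enable.master=true only on the master core -->
        <str name="enable">${enable.master:false}</str>
        <str name="replicateAfter">commit</str>
    </lst>
    <lst name="slave">
        <!-- set enable.slave=true only on the slave core -->
        <str name="enable">${enable.slave:false}</str>
        <!-- host and port below are placeholders -->
        <str name="masterUrl">http://master_host:8983/solr/coreA/replication</str>
        <str name="pollInterval">00:00:60</str>
    </lst>
</requestHandler>

Is that the intended way to keep a single common solrconfig.xml?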

If I do need a separate solrconfig.xml for each core, how do I set that up?
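
What I imagine is something like the following in solr.xml, with each
instanceDir holding its own conf/solrconfig.xml (one containing only the
master section, the other only the slave section). The directory names and
the dataDir path are just illustrative:

<solr persistent="true">
  <cores adminPath="/admin/cores">
    <!-- coreA/conf/solrconfig.xml would hold only the master section -->
    <core name="coreA" instanceDir="coreA" />
    <!-- coreB/conf/solrconfig.xml would hold only the slave section;
         the dataDir below is a placeholder -->
    <core name="coreB" instanceDir="coreB" dataDir="/path/to/coreB/data" />
  </cores>
</solr>

Would that be the right way to give each core its own solrconfig.xml?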

Any help is appreciated.

Thanks and Rgds,
Mark
