Let us look at what the desired behavior is.

When s1 comes back online, s2 must download a fresh copy of the index
from s1, because s2 is the slave and s2 has a newer version of the index
than s1.

Are you suggesting that s2 downloads the index files and then the
commit fails? The code is written as follows:

boolean freshDownloadNeeded = myIndexGeneration >= mastersIndexGeneration;

If that is what happens, then it is indeed a problem.
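For illustration only, a minimal sketch of that kind of check (hypothetical names, not the actual replication handler source):

```java
// Hypothetical sketch of the generation comparison; not the actual Solr code.
public class FreshDownloadCheck {

    // If the slave's generation has caught up with or passed the master's,
    // an incremental pull is impossible, so a fresh full copy is triggered.
    static boolean freshDownloadNeeded(long myIndexGeneration, long mastersIndexGeneration) {
        return myIndexGeneration >= mastersIndexGeneration;
    }

    public static void main(String[] args) {
        // s2 (slave) at generation 42, s1 (master) back online at generation 40:
        System.out.println(freshDownloadNeeded(42L, 40L)); // prints "true"
    }
}
```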

Can you post the stack trace?

On Thu, May 21, 2009 at 11:45 PM, Otis Gospodnetic
<otis_gospodne...@yahoo.com> wrote:
>
> Aha, I see.  Perhaps you can post the error message/stack trace?
>
> As for the sanity check, I bet a call to
> http://host:port/solr/replication?command=indexversion could be used to ensure
> only newer versions of the index are being pulled.  We'll see what Paul says
> when he wakes up. :)
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> ----- Original Message ----
>> From: Damien Tournoud <dam...@tournoud.net>
>> To: solr-user@lucene.apache.org
>> Sent: Thursday, May 21, 2009 1:26:30 PM
>> Subject: Re: No sanity checks before replicating files?
>>
>> Hi Otis,
>>
>> Thanks for your answer.
>>
>> On Thu, May 21, 2009 at 7:14 PM, Otis Gospodnetic
>> wrote:
>> > Interesting, this is similar to my suggestion to another person I just
>> > replied to here on solr-user.
>> > Have you actually run into this problem?  I haven't tried it, but I'd
>> > think the next replication (copying index from s1 to s2) would not
>> > necessarily fail, but would simply overwrite any changes that were made
>> > on s2 while it was serving as the master.  Is that not what happens?
>>
>> No, it doesn't. For some reason, Solr downloads all the files of the
>> index, but fails to commit the changes locally. At the next poll, the
>> process restarts. Not only does this clog the network, but it also
>> unnecessarily uses resources on the newly promoted slave, until we
>> change its configuration.
>>
>> > If that's what happens, then I think what you'd simply have to do is to:
>> >
>> > 1) bring s1 back up, but don't make it a master immediately
>> > 2) take away the master role from s2
>> > 3) make s1 copy the index from s2, since s2 might have a more up-to-date
>> > index now
>> > 4) make s1 the master
>>
>> Once s2 is the master, we want it to stay this way. We will reassign
>> s1 as the slave at a later stage, when resources allow. What worries
>> me is that strange behavior of Solr 1.4 replication when the "slave"
>> index is fresher than the "master" one.
>>
>> Damien
>
>
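For what it's worth, a rough sketch of the sanity check suggested above — ask the master for its indexversion and pull only a strictly newer index — could look like the following. The URL shape and the naive JSON parsing are illustrative assumptions on my part, not the actual replication handler internals:

```java
// Hypothetical sanity-check sketch; URL and parsing are assumptions, not Solr API code.
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class IndexVersionCheck {

    // Naively extracts "indexversion":<n> from a JSON response body.
    static long parseIndexVersion(String json) {
        String key = "\"indexversion\":";
        int i = json.indexOf(key);
        if (i < 0) throw new IllegalArgumentException("no indexversion in response");
        int start = i + key.length();
        int end = start;
        while (end < json.length() && Character.isDigit(json.charAt(end))) end++;
        return Long.parseLong(json.substring(start, end));
    }

    // Only replicate if the master's index is strictly newer than ours.
    static boolean shouldReplicate(long masterVersion, long myVersion) {
        return masterVersion > myVersion;
    }

    // Queries the master before pulling (requires a live master to run).
    static long fetchMasterIndexVersion(String masterBaseUrl) throws IOException {
        URL url = new URL(masterBaseUrl + "/replication?command=indexversion&wt=json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            return parseIndexVersion(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
    }
}
```

A slave could call fetchMasterIndexVersion before each poll and simply skip the pull when shouldReplicate returns false, which would avoid the repeated full downloads described above.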



-- 
-----------------------------------------------------
Noble Paul | Principal Engineer| AOL | http://aol.com
