Did you look at the new in-built replication?
http://wiki.apache.org/solr/SolrReplication#head-0e25211b6ef50373fcc2f9a6ad40380c169a5397

It can help you decide where to replicate from at runtime. Look at
the snappull command; you can pass the masterUrl at the time of
replication.
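
For example, a slave can be told which master to pull from with a single
HTTP request to its replication handler (host names and ports here are
just placeholders):

  http://slave_host:8983/solr/replication?command=snappull&masterUrl=http://master_host:8983/solr/replication

(In later builds the same command is named fetchindex; the wiki page above
has the exact syntax for your version.)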



On Fri, Jan 23, 2009 at 7:55 PM, edre...@ha <edre...@homeaway.com> wrote:
>
> Thanks for the response. Let me clarify things a bit.
>
> Regarding the Slaves:
> Our project is a web application, and we want to embed Solr into it.
> Each web application instance runs a local embedded Solr instance
> configured as a slave, pointing at a remote Solr instance configured
> as the master.
>
> We have a requirement for real-time updates to the Solr indexes.  Our
> strategy is to use the local embedded Solr instance as a read-only
> repository.  Any time a write is made, we send it to the remote Master.
> Once a user pushes a write operation to the remote Master, all
> subsequent read operations for that user are made against the Master
> for the duration of the session.  This approximates "real-time" updates
> and seems to work for our purposes.  Writes to our system are a small
> percentage of read operations.
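
For reference, with the new ReplicationHandler pointed to at the top of
this mail, the local embedded instance would carry a slave section roughly
like this (master URL and poll interval are placeholders):

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="slave">
      <str name="masterUrl">http://master_host:8983/solr/replication</str>
      <str name="pollInterval">00:00:60</str>
    </lst>
  </requestHandler>

The masterUrl configured here is only the default; it can be overridden on
each request as shown above.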
>
> Now, back to the original question.  We're simply looking for a
> failover solution in case the Master server goes down.  Oh, and we are
> using the replication scripts to sync the servers.
>
>
>
>> It seems like you are trying to write to Solr directly from your front end
>> application. This is why you are thinking of multiple masters. I'll let
>> others comment on how easy/hard/correct the solution would be.
>>
>
> Well, yes.  We have business requirements that updates to Solr be
> real-time, or as close to that as possible, so when a user changes
> something, our strategy is to save it to the DB and push it to the Solr
> Master as well.  We will also have a background application that helps
> ensure Solr stays in sync with the DB for times when Solr is down and
> the DB is not.
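
A rough sketch of that dual write with SolrJ, assuming the 1.3-style
CommonsHttpSolrServer client; the class, method and field names and the
master URL are made up for illustration, and the DB save is elided:

  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
  import org.apache.solr.common.SolrInputDocument;

  public class MasterWriter {
      private final SolrServer master;

      public MasterWriter(String masterUrl) throws Exception {
          // e.g. "http://master_host:8983/solr" -- placeholder host and port
          master = new CommonsHttpSolrServer(masterUrl);
      }

      public void save(String id, String title) throws Exception {
          // 1) persist the change to the DB first (elided)
          // 2) push the same change to the remote Master; the local embedded
          //    slave stays read-only and picks it up through replication
          SolrInputDocument doc = new SolrInputDocument();
          doc.addField("id", id);
          doc.addField("title", title);
          master.add(doc);
          // per-write commits are expensive; an autoCommit section on the
          // Master is the usual way to bound visibility latency instead
      }
  }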
>
>
>
>> But, do you really need to have live writes?  Can they be channeled
>> through a background process?  Since you cannot commit per write
>> anyway, the advantage of live writes is minimal.  Moreover, you would
>> need to invest a lot of time in handling availability concerns to
>> avoid losing updates.  If you log/record the write requests to an
>> intermediate store (or queue), you can do with one master (with
>> another host on standby acting as a slave).
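
A minimal sketch of that intermediate-queue idea, with made-up names, an
in-memory queue standing in for a durable store, and a placeholder master
URL:

  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.LinkedBlockingQueue;
  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
  import org.apache.solr.common.SolrInputDocument;

  public class QueuedWriter implements Runnable {
      private final BlockingQueue<SolrInputDocument> queue =
          new LinkedBlockingQueue<SolrInputDocument>();
      private final SolrServer master;

      public QueuedWriter(String masterUrl) throws Exception {
          master = new CommonsHttpSolrServer(masterUrl);
          new Thread(this).start();
      }

      // called by the web application; returns immediately
      public void enqueue(SolrInputDocument doc) {
          queue.add(doc);
      }

      public void run() {
          while (true) {
              SolrInputDocument doc = null;
              try {
                  doc = queue.take();
                  master.add(doc);
              } catch (Exception e) {
                  // Master unreachable (or interrupted): keep the doc and
                  // retry later.  A durable on-disk queue would also survive
                  // a crash of this process.
                  if (doc != null) {
                      queue.add(doc);
                  }
                  try { Thread.sleep(5000); } catch (InterruptedException ignore) {}
              }
          }
      }
  }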
>>
>
> We do need to have live writes, as I mentioned above.  The concern you
> mention about losing live writes is exactly why we are looking at a Master
> Solr server failover strategy.  We thought about having a backup Solr server
> that is a Slave to the Master and could be easily reconfigured as a new
> Master in a pinch.  Our operations team has pushed us to come up with a
> solution that would be more seamless.  This is why we came up with a
> Master/Master solution where both Masters are also slaves to each other.
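
With the new ReplicationHandler that kind of pairing can be expressed
directly, since a node may carry both a master and a slave section (the
"repeater" setup on the wiki page above).  A rough sketch, with placeholder
hosts:

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
      <str name="replicateAfter">commit</str>
      <str name="confFiles">schema.xml,stopwords.txt</str>
    </lst>
    <lst name="slave">
      <str name="masterUrl">http://other_master_host:8983/solr/replication</str>
      <str name="pollInterval">00:00:60</str>
    </lst>
  </requestHandler>

Each Master would point its slave section at the other, so whichever box
receives the writes, the other keeps a current copy of the index.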
>
>
>
>>>
>>> To test this, I ran the following scenario.
>>>
>>> 1) Slave 1 (S1) is configured to use M2 as its master.
>>> 2) We push an update to M2.
>>> 3) We restart S1, now pointing to M1.
>>> 4) We wait for M1 to sync from M2
>>> 5) We then sync S1 to M1.
>>> 6) Success!
>>>
>>
>> How do you co-ordinate all this?
>>
>
> This was just a test scenario I ran manually to see if the setup I described
> above would even work.
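
For what it's worth, with the HTTP API mentioned at the top of this mail,
steps 3-5 would not need a restart of S1 (hosts below are placeholders):

  http://m1_host:8983/solr/replication?command=snappull&masterUrl=http://m2_host:8983/solr/replication
  http://s1_host:8983/solr/replication?command=snappull&masterUrl=http://m1_host:8983/solr/replication

The first call forces M1 to pull the latest index from M2 and the second
forces S1 to pull from M1, without repointing S1's configuration.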
>
> Is there a Wiki page that outlines typical web application Solr deployment
> strategies?  There are a lot of questions on the forum about this type of
> thing (including this one).  For those who have expertise in this area, I'm
> sure there are many who could benefit from this (hint hint).
>
> As before, any comments or suggestions on the above would be much
> appreciated.
>
> Thanks,
> Erik
> --
> View this message in context: 
> http://www.nabble.com/Master-failover---seeking-comments-tp21614750p21625324.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>



-- 
--Noble Paul
