Hello,
----- Original Message -----

> From: Robert Stewart <bstewart...@gmail.com>
> To: solr-user@lucene.apache.org
> Cc: 
> Sent: Tuesday, October 11, 2011 3:37 PM
> Subject: Re: Replication with an HA master
> 
> In the case of using a shared (SAN) index between 2 masters, what happens if
> the live master fails in such a way that the index remains "locked" (e.g., a
> hardware failure meant it never unlocked/closed the index)?  Will the other
> master be able to open/write to the index as new documents are added?


You'd use native locks, which should disappear if the JVM dies.  If the lock 
somehow survives, then I'm not 100% sure what happens, but in the worst case 
you'd need a quick manual (or scripted) intervention to remove the stale lock.  
But your index would be up to date!
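
For reference, the lock type is a one-liner in solrconfig.xml.  A sketch of 
just that element (depending on your Solr version it lives under <indexConfig> 
or <mainIndex>):

  <!-- "native" uses OS-level file locks, which the OS releases when the
       JVM process dies, unlike "simple" lock files left behind on disk -->
  <lockType>native</lockType>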

> Also, if that can work OK, would it work to put an LB (VIP) in front of both 
> the indexing and replication sides of the 2 masters, such that the same VIP 
> is used by SolrJ for indexing new documents via HTTP and by slave searchers 
> for replication?  That sounds like it would work.


Precisely what you should do.  E.g., "master-vip" would be the "hostname" that 
SolrJ posts new docs to and that the slaves poll for index changes, regardless 
of which physical master is currently behind the VIP.
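
On the slave side that might look like this (just a sketch, assuming the VIP 
resolves as "master-vip" and the default port/paths; adjust the URL to your 
core layout):

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="slave">
      <!-- poll the VIP, never an individual master's hostname -->
      <str name="masterUrl">http://master-vip:8983/solr/replication</str>
      <str name="pollInterval">00:00:30</str>
    </lst>
  </requestHandler>

SolrJ would likewise use http://master-vip:8983/solr as its base URL, so 
neither indexing clients nor slaves need to know which master is live.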

Otis
----

Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/




> On Oct 11, 2011, at 3:16 PM, Otis Gospodnetic wrote:
> 
>>  Hello,
>> 
>>  Yes, you've read about NFS, which is why I gave the example of a SAN
>>  (which can have multiple power supplies, controllers, etc.)
>> 
>>  Yes, it should be OK for multiple Solr instances to have the same index
>>  open, since only one of them will actually be writing to it, thanks to the LB.
>> 
>>  Otis
>>  ----
>>  Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
>>  Lucene ecosystem search :: http://search-lucene.com/
>> 
>> 
>>>  ________________________________
>>>  From: Brandon Ramirez <brandon_rami...@elementk.com>
>>>  To: "solr-user@lucene.apache.org" 
> <solr-user@lucene.apache.org>
>>>  Sent: Tuesday, October 11, 2011 2:55 PM
>>>  Subject: RE: Replication with an HA master
>>> 
>>>  Using a shared volume crossed my mind too, but I discarded the idea
>>>  because of literature I have read about Lucene performing poorly against
>>>  remote file systems.  But then I suppose a SAN wouldn't be a remote file
>>>  system in the same sense as an NFS-mounted NAS or similar.
>>> 
>>>  Should I be concerned about two Solr instances on two machines having
>>>  the same SAN-based index open, as long as only one of them is receiving
>>>  requests?  I would think in theory it would work, but I don't have any
>>>  production-level experience with Solr yet, only textbook knowledge.
>>> 
>>> 
>>>  Brandon Ramirez | Office: 585.214.5413 | Fax: 585.295.4848 
>>>  Software Engineer II | Element K | www.elementk.com
>>> 
>>> 
>>>  -----Original Message-----
>>>  From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com] 
>>>  Sent: Tuesday, October 11, 2011 2:28 PM
>>>  To: solr-user@lucene.apache.org
>>>  Subject: Re: Replication with an HA master
>>> 
>>>  A few alternatives:
>>>  * Have the master keep the index on a shared disk (e.g. SAN)
>>>  * Use LB to easily switch between masters, potentially even
>>>  automatically if the LB can detect the primary is down
>>> 
>>>  Otis
>>>  ----
>>>  Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
>>>  Lucene ecosystem search :: http://search-lucene.com/
>>> 
>>> 
>>>>  ________________________________
>>>>  From: Robert Stewart <bstewart...@gmail.com>
>>>>  To: solr-user@lucene.apache.org
>>>>  Sent: Friday, October 7, 2011 10:22 AM
>>>>  Subject: Re: Replication with an HA master
>>>> 
>>>>  Your idea sounds like the correct path.  Set up 2 masters, one running
>>>>  in "slave" mode which pulls replicas from the live master.  When/if the
>>>>  live master goes down, you just reconfigure and restart the backup
>>>>  master to be the live master.  You'd also then need to start data
>>>>  import on the backup master (enable/start a cron job?), and redirect
>>>>  slave searchers to pull replicas from the new live master.  All of that
>>>>  could be done using scripts, or possibly something like Puppet.
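>>>> 
>>>>  I suppose the backup master could even carry both a "master" and a
>>>>  "slave" section in its solrconfig.xml (repeater-style), so failover is
>>>>  mostly a matter of flipping which role is active.  A sketch, with
>>>>  "live-master" as a stand-in hostname:
>>>> 
>>>>  <requestHandler name="/replication" class="solr.ReplicationHandler">
>>>>    <lst name="master">
>>>>      <str name="replicateAfter">commit</str>
>>>>    </lst>
>>>>    <lst name="slave">
>>>>      <!-- while on standby, mirror the live master every 30 seconds -->
>>>>      <str name="masterUrl">http://live-master:8983/solr/replication</str>
>>>>      <str name="pollInterval">00:00:30</str>
>>>>    </lst>
>>>>  </requestHandler>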
>>>> 
>>>>  Another solution may be to run 2 "live" masters, which both index the
>>>>  same content from the same data source.  If one goes down, then you
>>>>  just need to redirect slave searchers to the backup master for
>>>>  replication.
>>>> 
>>>>  I am also starting a similar project which needs some disaster recovery
>>>>  processes in place, so any other info would be useful to me as well.
>>>> 
>>>>  Bob
>>>> 
>>>>  On Oct 7, 2011, at 9:53 AM, Brandon Ramirez wrote:
>>>> 
>>>>>  We are getting ready to start a project using Solr as our backend
>>>>>  search engine and I am trying to devise a deployment architecture that
>>>>>  works for us.  We definitely need a master/slave replication strategy,
>>>>>  that's for sure, but my concern is that the master becomes a single
>>>>>  point of failure.
>>>>> 
>>>>>  Fortunately, real-time search is not a requirement for us.  If search
>>>>>  results are a few minutes out of sync with our database, it's not a
>>>>>  big deal.
>>>>> 
>>>>>  So what I would like to do is have a set of query servers (slaves)
>>>>>  that are only used for querying, no indexing, and have them use Solr's
>>>>>  HTTP replication mechanism on a 2 or 3 minute interval.  To get HA
>>>>>  indexing, I'd like to have 2 masters: a primary and a standby.  All
>>>>>  indexing requests go to the primary unless it's taken out of service.
>>>>>  To keep the standby ready to take over if it needs to, it needs to be
>>>>>  more up to date than the slaves.  I'd like to have it replicate every
>>>>>  30 seconds or so.
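>>>>> 
>>>>>  I imagine that with the built-in ReplicationHandler this would just be
>>>>>  a different pollInterval (HH:MM:SS) in each box's slave config,
>>>>>  something like:
>>>>> 
>>>>>  <!-- on the query slaves -->
>>>>>  <str name="pollInterval">00:03:00</str>
>>>>> 
>>>>>  <!-- on the standby master -->
>>>>>  <str name="pollInterval">00:00:30</str>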
>>>>> 
>>>>>  The reason I'm asking about it on this list is that I haven't seen
>>>>>  any Solr documentation or even anything that talks about this.  I
>>>>>  can't be the only one concerned about having a single point of
>>>>>  failure, so I'm reaching out to see what others have done in this
>>>>>  case before I go with my own solution.
>>>>> 
>>>>> 
>>>>>  Brandon Ramirez | Office: 585.214.5413 | Fax: 585.295.4848
>>>>>  Software Engineer II | Element K | www.elementk.com
