The only problem I can see happening is that you may end up committing more often than Solr can open and warm new searchers. This happens during the peak of the day on our servers, leaving us with 5-10 searchers just hanging out waiting for warming to finish, only to be closed as soon as they're registered because another searcher is already waiting behind them.

That said, I need to tune my cache. A lot.
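The warming backlog described above is usually tuned in solrconfig.xml. This is only a sketch of the relevant knobs, not values from our setup; the right numbers depend entirely on your cache hit rates and commit frequency:

```
<!-- Hypothetical solrconfig.xml fragment: cap concurrent warming searchers
     and reduce autowarmCount so new searchers open faster. -->
<maxWarmingSearchers>2</maxWarmingSearchers>

<filterCache class="solr.LRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>  <!-- lower this if warming lags commits -->
```

A smaller autowarmCount means less time prewarming each new searcher, at the cost of colder caches right after a commit.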

+--------------------------------------------------------+
 | Matthew Runo
 | Zappos Development
 | [EMAIL PROTECTED]
 | 702-943-7833
+--------------------------------------------------------+


On Oct 4, 2007, at 9:07 AM, John Reuning wrote:

Apologies if this has been covered. I searched the archives and didn't see a thread on this topic.

Has anyone experimented with a near real-time replication scheme similar to RDBMS replication? Using rsync to copy the Lucene index files to slaves is very efficient, but what if you want index changes to propagate in a few seconds instead of a few minutes?

Is it feasible to have a Solr manager take update requests and forward them to the slaves as it receives them? (I guess they're not really slaves in this case.) The manager could issue commits every 10-30 seconds to reduce the write load. Write overhead would still exist on all the read servers, but at least the read requests would be spread across the pool.
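The manager idea above could be sketched roughly like this. Everything here is hypothetical (the `UpdateFanout` class, the slave interface with `update`/`commit` methods, the injected clock); it only illustrates the fan-out-plus-throttled-commit logic, not a real Solr client:

```python
import time

class UpdateFanout:
    """Hypothetical manager: forward each update to every read server,
    but issue commits at most once per commit_interval seconds."""

    def __init__(self, slaves, commit_interval=10.0, clock=time.monotonic):
        self.slaves = slaves              # objects exposing update(doc) and commit()
        self.commit_interval = commit_interval
        self.clock = clock                # injectable for testing
        self._last_commit = clock()

    def update(self, doc):
        # Forward the raw update to every read server immediately.
        for slave in self.slaves:
            slave.update(doc)
        # Commit only when the interval has elapsed, batching the write load.
        if self.clock() - self._last_commit >= self.commit_interval:
            for slave in self.slaves:
                slave.commit()
            self._last_commit = self.clock()
```

With a 10-30 second interval, documents reach the slaves within seconds, while the expensive commit (and the searcher warming it triggers) happens far less often than once per update.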

Thanks,

-John R.

