Hi Brian,

If you have enough servers, take the machine that's about to run snapinstall out of the pool for a bit, run snapinstall, warm it up well, and then put it back in the pool. You really do need enough spare capacity for this, so that while one slave is out you don't end up with multiple live query slaves serving different index versions (and therefore different results).
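Roughly, the rotate-out / install / warm / rotate-in cycle on each slave could look like the sketch below. To be clear about the assumptions: it presumes the Solr 1.x collection-distribution scripts live under /opt/solr/bin, Solr listens on localhost:8983, and "lb-ctl" is a hypothetical stand-in for whatever command your load balancer uses to drain and re-enable a backend:

  #!/bin/sh
  # Sketch of the rotate-out / snapinstall / warm / rotate-in cycle.
  # Assumptions: Solr 1.x collection-distribution scripts in /opt/solr/bin,
  # Solr on localhost:8983, and "lb-ctl" is a hypothetical stand-in for
  # your load balancer's drain/enable command.

  HOST=$(hostname)

  # 1. Take this slave out of the live pool (hypothetical LB command).
  lb-ctl disable "$HOST" || exit 1

  # 2. Install the snapshot pulled earlier by snappuller; this commits and
  #    opens a new searcher -- the expensive part we want off the live path.
  /opt/solr/bin/snapinstaller || exit 1

  # 3. Warm the new searcher with a handful of representative queries
  #    before taking traffic again; real production queries work best.
  for q in 'foo' 'bar baz' 'quick brown fox'; do
    curl -s "http://localhost:8983/solr/select?q=$(echo "$q" | sed 's/ /+/g')&rows=10" >/dev/null
  done

  # 4. Put the slave back in the pool.
  lb-ctl enable "$HOST"

The warming step is what actually protects your QTimes: it populates the new searcher's caches before any live query can hit them cold.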
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch

----- Original Message ----
> From: Brian Whitman <br...@echonest.com>
> To: solr-user@lucene.apache.org
> Sent: Tuesday, February 24, 2009 7:16:20 AM
> Subject: general survey of master/replica setups
>
> Say you have a bunch of solr servers that index new data, and then some
> replica/"slave" setup that snappulls from the master on a cron or some
> other schedule. Live internet-facing queries hit the replica, not the
> master, since indexing/commits on the master slow down queries.
>
> But even the query-only solr installs need to snapinstall every so often,
> triggering a commit, and queries slow down when that happens: average
> QTimes are around 400ms normally, but jump into the seconds during a
> commit/snapinstall. Say in the 5m between snappulls 1000 documents have
> been updated/deleted/added.
>
> How do people mitigate the effect of the commit on replica query instances?
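For reference, the cron-driven snappull described in the quoted message is usually just a crontab entry on each slave. The paths, log location, and 5-minute interval below are assumptions (the stock Solr 1.x scripts read the master's host and port from conf/scripts.conf):

  # Pull the latest snapshot from the master every 5 minutes; the
  # commit-triggering snapinstaller can then run separately, e.g. via
  # the rotation sketch earlier in the thread.
  */5 * * * * /opt/solr/bin/snappuller >>/var/log/solr/snappuller.log 2>&1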