This was answered yesterday on the list:

http://www.nabble.com/Re%3A-exceeded-limit-of-maxWarmingSearchers-p17165631.html

regards,
-Mike

On 12-May-08, at 6:12 PM, David Stevenson wrote:

We have a table that has roughly 1M rows.

If we run a query against the table and order by a string field that has a
large number of unique values, then subsequent commits of any other document
take much longer.

If we don't run the query, or if we order on a string field with very few
unique values (or don't order at all), the commits are unaffected.

Our question is: why does running a query ordered on a string field
with a large number of unique values affect all subsequent commits?
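
A likely explanation (our reading, not from the linked answer): sorting on a string field makes Lucene build a FieldCache entry for that field, and the cache is per-searcher. For a high-cardinality field over ~1M rows that build is expensive. Every commit opens a new searcher, and with waitSearcher=true (the default) the commit blocks until that searcher's warming finishes, so the sort-cache rebuild shows up as commit time. One common mitigation is to warm the sort explicitly with a newSearcher listener in solrconfig.xml; a sketch using the field names from the post (Solr 1.2-era syntax, where the sort is appended to q after a semicolon):

```xml
<!-- solrconfig.xml: pre-warm the FieldCache for the sort field
     whenever a new searcher is opened (field names assumed from the post) -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">type_t:Ratable;solr_title_s asc</str>
      <str name="start">0</str>
      <str name="rows">10</str>
    </lst>
  </arr>
</listener>
```

This doesn't make the cache build cheaper; it moves it into searcher warming. If commit latency itself is the concern, issuing `<commit waitSearcher="false"/>` returns before warming completes; the thread linked above discusses the related maxWarmingSearchers limit that is hit when commits arrive faster than searchers can finish warming.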

The only way we've found to fix the problem, once it has started, is to
restart solr.


Our test begins by executing a SOLR add like so:

<add>
 <doc>
   <field name="type_t">User</field>
   <field name="pk_i">13</field>
   <field name="id">User:13</field>
   ...
 </doc>
</add>
<commit/>
==> This takes approx 0.3 sec

Then we do a SOLR select:
wt=ruby&q=%28solr_categories_s%3Arestaurant%29%20AND%20type_t%3ARatable%3Bsolr_title_s%20asc&start=0&fl=pk_i%2Cscore&qt=standard
(decoded: q=(solr_categories_s:restaurant) AND type_t:Ratable;solr_title_s asc, fl=pk_i,score)
==> This takes approx 2 sec

Then we execute the SAME SOLR <add> command above and <commit/>
==> This takes approx 3 sec
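
The jump from ~0.3 sec to ~3 sec is consistent with autowarming: once a sorted query is in the queryResultCache, each commit's new searcher replays it during autowarm, rebuilding the sort FieldCache every time. If that is the cause, lowering (or zeroing) autowarmCount on the caches in solrconfig.xml should restore fast commits, at the cost of a slow first sorted query per searcher. A sketch, assuming the default cache configuration:

```xml
<!-- solrconfig.xml: stop replaying cached (sorted) queries at commit time -->
<queryResultCache
    class="solr.LRUCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>
```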

CONFIG INFO:
512mb heap, JVM 1.5, lucene-core-2007-05-20_00-04-53.jar, solr 1.2


-Chris & David
