The query time increases because in order to compute the set of documents that belongs on page N, Solr must first collect and order all of the documents on the pages prior to page N, and that work is not saved between requests.
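A toy Python sketch (not Solr's actual code) of why deep pages cost more: to return the 30 hits at a given offset, the engine still has to find and order the top `start + rows` candidates first, so the work grows with `start` even though the page size stays constant.

```python
import heapq
import random

def page(scores, start, rows):
    """Return one page of results the way a search engine must:
    it cannot hand back docs [start, start+rows) without first
    finding and ordering the top start+rows candidates overall."""
    top = heapq.nlargest(start + rows, scores)  # work grows with start
    return top[start:start + rows]

random.seed(0)
scores = [random.random() for _ in range(100_000)]

# Page 1 orders 30 candidates' worth of work; a deep page orders 30,000.
first = page(scores, 0, 30)
deep = page(scores, 29_970, 30)
```

Nothing here is cached between calls, which mirrors why each successive deep request pays the full cost again.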
Two ways of speeding this up are to request bigger pages, and/or to use filter queries over some orderable field in your index to do the paging. For example, if you have a timestamp field in your index and your data spans 100 days, doing 100 queries, one for each day, is much better than doing 100 queries that walk the same result set with start/rows.

Michael Della Bitta
Applications Developer
o: +1 646 532 3062 | c: +1 917 477 7906
appinions inc. “The Science of Influence Marketing”
18 East 41st Street
New York, NY 10017
t: @appinions <https://twitter.com/Appinions> | g+: plus.google.com/appinions<https://plus.google.com/u/0/b/112002776285509593336/112002776285509593336/posts>
w: appinions.com <http://www.appinions.com/>

On Mon, Nov 4, 2013 at 8:43 AM, michael.boom <my_sky...@yahoo.com> wrote:

> I saw that some time ago there was a JIRA ticket discussing this, but I
> still found no relevant information on how to deal with it.
>
> When working with a big number of docs (70M in my case), I'm using
> start=0&rows=30 in my requests.
> For the first request the query time is OK, the next one is visibly
> slower, the third even slower, and so on until I get some huge query
> times of up to 140 secs after a few hundred requests. My tests were done
> with SolrMeter at a rate of 1000 qpm. The same thing happens at 100 qpm,
> though.
>
> Is there a best practice for this situation, or maybe an explanation of
> why the query time increases from request to request?
>
> Thanks!
>
>
>
> -----
> Thanks,
> Michael
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Performance-of-rows-and-start-parameters-tp4099194.html
> Sent from the Solr - User mailing list archive at Nabble.com.
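The per-day range-filter approach Michael describes can be sketched as follows. This is a minimal, hypothetical helper, assuming a Solr date field named `timestamp_dt` and a core URL of the usual `http://localhost:8983/solr/<core>` shape (both names are assumptions, not from the thread); each query pages from `start=0` inside a one-day `fq` window instead of walking `start` deeper into a single giant result set.

```python
from datetime import date, timedelta
from urllib.parse import urlencode

def daily_queries(base_url, q, field, first_day, days, rows=1000):
    """Build one bounded Solr query URL per day using a range filter.

    Every query starts at start=0, so Solr never has to collect and
    order all earlier pages before returning the requested rows."""
    urls = []
    for i in range(days):
        day = first_day + timedelta(days=i)
        nxt = day + timedelta(days=1)
        params = {
            "q": q,
            # [inclusive TO exclusive} range, one calendar day per query
            "fq": f"{field}:[{day}T00:00:00Z TO {nxt}T00:00:00Z}}",
            "start": 0,
            "rows": rows,
        }
        urls.append(f"{base_url}/select?{urlencode(params)}")
    return urls

# e.g. three days' worth of queries against a hypothetical core
urls = daily_queries("http://localhost:8983/solr/mycore", "*:*",
                     "timestamp_dt", date(2013, 1, 1), days=3)
```

If the days are uneven in size you can still use `start`/`rows` within a single day's window, but the offsets stay small, which is the whole point.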