We've been able to speed up deep paging through big result sets by using a
filter query to segment them in combination with start/rows paging.
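
For reference, a rough sketch of the kind of queries I mean (the id field and
the bucket boundaries are just placeholders; adjust them for your own schema):

    q=*:*&fq=id:[0 TO 99999]&sort=id asc&start=0&rows=10
    q=*:*&fq=id:[0 TO 99999]&sort=id asc&start=10&rows=10
    ...
    q=*:*&fq=id:[100000 TO 199999]&sort=id asc&start=0&rows=10

Because each fq bucket is paged independently, the start offset stays small,
so each shard only has to collect (and the coordinator only has to merge)
start+rows documents per request instead of 100,010.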

Michael Della Bitta

------------------------------------------------
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271

www.appinions.com

Where Influence Isn’t a Game


On Tue, Mar 26, 2013 at 4:20 PM, Walter Underwood <wun...@wunderwood.org> wrote:
> That is extremely deep paging. That is page 10,000 with ten hits on each 
> page. No human will look at ten thousand pages of results.
>
> The system really does need to rank the first 100,000 before it knows which 
> document should be at rank 100,001. There is no way around that.
>
> wunder
>
> On Mar 26, 2013, at 1:14 PM, Jack Krupansky wrote:
>
>> (You mean, other than "deep paging".)
>>
>> -- Jack Krupansky
>>
>> -----Original Message----- From: Walter Underwood
>> Sent: Tuesday, March 26, 2013 3:47 PM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Slow performance on distributed search
>>
>> Why on earth are you starting at row 100,000? What use case is that?  --wunder
>>
>> On Mar 26, 2013, at 11:55 AM, qungg wrote:
>>
>>> For start=100,000&rows=10, even though each individual shard takes only
>>> < 10ms to query, the merging process done by the controller takes about a
>>> minute.
>>>
>>> Looking at the logs, each shard is returning 100,010 rows to the controller
>>> shard, and because there are 40 shards in total, the controller is merging
>>> 100,010 * 40 rows of data, which is why the merge takes so long.
>>>
>>> I have not tried SolrCloud; does anyone know how queries with a large start
>>> row perform on SolrCloud?
>>>
>>
>
