Well, you're serving the first set of results very quickly because you're
only looking for, say, the first 1,000. Thereafter you assemble the rest
of the result set in the background (I'd use the export function for that)
so your app has the next N ready for an immediate response to the user.
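
As a concrete illustration of that background pass, here's a minimal
sketch of streaming IDs out of Solr's /export handler with the plain
Java 11+ HTTP client (the host, collection name, and field names are
my assumptions; note that /export requires every field in fl and sort
to have docValues):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ExportAllIds {
    public static void main(String[] args) throws Exception {
        // Assumed host/collection; /export streams the *whole* sorted
        // result set, so nothing huge is buffered server-side.
        String url = "http://localhost:8983/solr/mycollection/export"
                + "?q=" + URLEncoder.encode("*:*", StandardCharsets.UTF_8)
                + "&sort=" + URLEncoder.encode("id asc", StandardCharsets.UTF_8)
                + "&fl=id";

        HttpClient http = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // Read the response as a stream rather than buffering millions
        // of docs in memory; hand each chunk to whatever assembles the
        // pages your app serves.
        HttpResponse<java.io.InputStream> resp =
                http.send(req, HttpResponse.BodyHandlers.ofInputStream());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(resp.body(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw JSON; parse incrementally in real code
            }
        }
    }
}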

But look: you simply will not get a search over 1M IDs to return in 50 ms.
You'll have to cache results (or do something similar) somewhere to make
this work.
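
One way to read "cache somewhere": keep the most recently assembled
pages in memory, keyed by query and page number. A minimal sketch (the
class and sizing here are made up; any real cache library would do the
same job):

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Evicts the least-recently-used page once maxPages is exceeded.
public class PageCache extends LinkedHashMap<String, List<String>> {
    private final int maxPages;

    public PageCache(int maxPages) {
        super(16, 0.75f, true); // access-order == LRU ordering
        this.maxPages = maxPages;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, List<String>> eldest) {
        return size() > maxPages;
    }

    public static String key(String query, int page) {
        return query + "#" + page;
    }
}

On each request you check the cache first and only go to Solr on a
miss, so the 50-100 ms budget is only at risk on cold pages.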

A simple idea: you have to have assembled the list of 1M IDs somewhere,
so why don't _you_ do the paging? That is, fire off your first query with
N IDs (and you _still_ haven't made clear whether you expect to find all
of these IDs or just a subset). Either way, rather than asking Solr to
search 1M doc IDs and throw almost all of them away, only _ask_ for
a few at a time. After all, _you_ control the query and _you_ presumably
have the list of IDs, so why ask for all of them when the user is
(presumably) paging?
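
A sketch of that client-side paging, using the {!terms} query parser so
each request carries only the current slice of IDs (SolrJ 6+ assumed;
the URL, field name, and page size are placeholders):

import java.util.List;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

// The caller holds the full ID list; Solr only ever sees one page of it.
public class IdPager {
    private final SolrClient solr =
            new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build();

    public QueryResponse page(List<String> allIds, int page, int pageSize) throws Exception {
        int from = Math.min(page * pageSize, allIds.size());
        int to = Math.min(from + pageSize, allIds.size());
        List<String> slice = allIds.subList(from, to);

        // {!terms f=id} does a straight lookup on the id field, no
        // scoring, and the query stays tiny instead of carrying 1M IDs.
        SolrQuery q = new SolrQuery("{!terms f=id}" + String.join(",", slice));
        q.setRows(pageSize);
        return solr.query(q);
    }
}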

My point is that you'll have to get creative to meet your requirements;
Solr alone is unlikely to meet them.

Best,
Erick

On Tue, Jan 5, 2016 at 8:25 PM, Mugeesh Husain <muge...@gmail.com> wrote:
> @Erick Erickson thanks for the reply,
>
> Actually, they have given me just this one task: search 1 million IDs
> with good performance; results should appear within 50-100 ms.
>
> Yeah, I will fire off the full query (up to millions of IDs) in the
> background, but what is an efficient way of doing that in terms of
> performance?
