Thanks, that link is very helpful, especially the "Leapfrog, anyone?" section. This actually seems quite slow for my use case. Suppose we have 10,000 users and 1,000,000 documents. We search for "hello" for a particular user, and let's assume that the fq set for the user is cached. "hello" is a common word, so perhaps 10,000 documents will match it. If the user has 100 documents, then finding the intersection requires advancing through each list ~100 times. If the user has 1,000 documents, we advance through each list ~1,000 times. That doesn't scale well.
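To make the cost model concrete, here is a minimal sketch (not Lucene's actual code) of a leapfrog-style intersection of two sorted doc-ID lists: each side repeatedly skips ahead to the other side's current doc ID, so the number of advances is roughly bounded by the size of the smaller list.

```python
import bisect

def leapfrog_intersect(a, b):
    """Intersect two sorted doc-ID lists by repeatedly advancing
    whichever cursor is behind (a simplified leapfrog sketch)."""
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            result.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            # skip ahead in list a to the first doc >= b[j]
            i = bisect.bisect_left(a, b[j], i + 1)
        else:
            # skip ahead in list b to the first doc >= a[i]
            j = bisect.bisect_left(b, a[i], j + 1)
    return result

# e.g. term postings vs. one user's doc list
print(leapfrog_intersect([1, 3, 5, 7, 9], [3, 4, 7, 10]))  # [3, 7]
```

Even with skipping, the work per query grows with the size of the user's doc list, which is the scaling concern above.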
My searches are usually in one user's data. How can I take advantage of that? I could have a separate index for each user, but loading so many indexes at once seems infeasible, and dynamically loading & unloading indexes is a pain. Or I could create a filter that takes tokens and prepends the user id to them. That seems like a good solution, since my keyword searches always include a user id (and usually just one user id). Though I wonder if there is a downside I haven't thought of.

Thanks,
Scott

> -----Original Message-----
> From: Shawn Heisey [mailto:s...@elyograg.org]
> Sent: Tuesday, November 05, 2013 4:35 PM
> To: solr-user@lucene.apache.org
> Subject: Re: fq efficiency
>
> On 11/5/2013 3:36 PM, Scott Schneider wrote:
> > I'm wondering if filter queries are efficient enough for my use
> > cases. I have lots and lots of users in a big, multi-tenant,
> > sharded index. To run a search, I can use an fq on the user id and
> > pass in the search terms. Does this scale well with the # users? I
> > suppose that, since user id is indexed, generating the filter data
> > (which is cached) will be fast. And looking up search terms is
> > fast, of course. But if the search term is a common one that many
> > users have in their documents, then Solr may have to perform an
> > intersection between two large sets: docs from all users with the
> > search term and all of the current user's docs.
> >
> > Also, how about auto-complete and searching with a trailing
> > wildcard? As I understand it, these work well in a single-tenant
> > index because keywords are sorted in the index, so it's easy to
> > get all the search terms that match "foo*". In a multi-tenant
> > index, all users' keywords are stored together. So if Lucene were
> > to look at all the keywords from "foo" to "foozzzzz" (I'm not sure
> > if it actually does this), it would skip over a large majority of
> > keywords that don't belong to this user.
> From what I understand, there's not really a whole lot of difference
> between queries and filter queries when they are NOT cached, except
> that the main query and the filter queries are executed in parallel,
> which can save time.
>
> When filter queries are found in the filterCache, it's a different
> story. They get applied *before* the main query, which means that
> the main query won't have to work as hard. The filterCache stores
> information about which documents in the entire index match the
> filter. By storing it as a bitset, the amount of space required is
> relatively low. Applying filterCache results is very efficient.
>
> There are also advanced techniques, like assigning a cost to each
> filter and creating postfilters:
>
> http://yonik.com/posts/advanced-filter-caching-in-solr/
>
> Thanks,
> Shawn
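To illustrate the bitset point from Shawn's reply, here is a minimal sketch (with made-up doc IDs, not Solr's internals) of why applying a cached filter is cheap: once the filter is materialized as a bitset over the whole index, combining it with the main query's matches is a single bitwise AND, regardless of how many users share the term.

```python
def to_bitset(doc_ids):
    """Pack a collection of doc IDs into one integer used as a bitset."""
    bits = 0
    for d in doc_ids:
        bits |= 1 << d
    return bits

# Cached fq result: all docs belonging to one user (hypothetical IDs).
user_filter = to_bitset([2, 5, 7, 11])

# Main query matches for a common term across all users.
query_matches = to_bitset([1, 5, 6, 7, 90])

# Applying the cached filter is one bitwise AND over the bitsets.
both = user_filter & query_matches
hits = [d for d in range(both.bit_length()) if both >> d & 1]
print(hits)  # [5, 7]
```

A real index stores the bitset as packed words (e.g. 64-bit longs), so the AND costs one machine instruction per 64 documents of index size.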