Is it reasonable to implement a RequestHandler that systematically uses a DocSet as a filter for the restriction queries? I'm under the impression that SolrIndexSearcher.getDocSet(Query, DocSet) would use the filter cache properly, and that calling it in a loop would effectively AND the filters together...
pseudo code (refactored from the Standard & Dismax handlers):

/* * * Restrict Results * * */
List<Query> restrictions = U.parseFilterQueries(req);
DocSet rdocs = myUniqueKeySetThatMayBeNull();
if (restrictions != null) {
  for (Query r : restrictions) {
    rdocs = s.getDocSet(r, rdocs);
  }
}

/* * * Generate Main Results * * */
flags |= U.setReturnFields(req, rsp);
DocListAndSet results = null;
NamedList facetInfo = null;
if (params.getBool(FACET, false)) {
  results = s.getDocListAndSet(query, rdocs,
                               SolrPluginUtils.getSort(req),
                               params.getInt(START, 0),
                               params.getInt(ROWS, 10),
                               flags);
  facetInfo = getFacetInfo(req, rsp, results.docSet);
} else {
  results = new DocListAndSet();
  results.docList = s.getDocList(query, rdocs,
                                 SolrPluginUtils.getSort(req),
                                 params.getInt(START, 0),
                                 params.getInt(ROWS, 10),
                                 flags);
}


Yonik Seeley wrote:
>
> On 6/18/07, Henrib <[EMAIL PROTECTED]> wrote:
>> Thanks Yonik;
>> Let me twist the same question another way; I'm running Solr embedded, the
>> uniqueKey set that pre-exists may be large, is per-query (most likely not
>> useful to cache) and is iterable. I'd rather avoid building a string for
>> the 'fq', getting it parsed, etc.
>> Would it be as safe & more efficient in a (custom) request handler to
>> create a DocSet by fetching termDocs for each key used as a Term & use it
>> as a filter?
>
> Yes, that should work fine.
> Most of the savings will be avoiding the query parsing.
>
> -Yonik
>

--
View this message in context: http://www.nabble.com/Filtering-on-a-%27unique-key%27-set-tf3935694.html#a11195979
Sent from the Solr - User mailing list archive at Nabble.com.
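
For reference, here is a rough, untested sketch of the termDocs approach confirmed above. The helper name uniqueKeyDocSet and the 'keys' collection are just placeholders, and I'm assuming the HashDocSet(int[], int, int) constructor plus the Lucene TermDocs API are available in the Solr/Lucene versions in use:

import java.io.IOException;
import java.util.Collection;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.DocSet;
import org.apache.solr.search.HashDocSet;
import org.apache.solr.search.SolrIndexSearcher;

/** Sketch: build a DocSet from an iterable set of unique-key values. */
DocSet uniqueKeyDocSet(SolrQueryRequest req, Collection<String> keys) throws IOException {
  SolrIndexSearcher searcher = req.getSearcher();
  IndexReader reader = searcher.getReader();
  // look keys up against the schema's declared unique-key field
  String keyField = req.getSchema().getUniqueKeyField().getName();
  int[] ids = new int[keys.size()];   // at most one live doc per unique key
  int n = 0;
  TermDocs termDocs = reader.termDocs();
  try {
    for (String key : keys) {
      termDocs.seek(new Term(keyField, key));
      while (termDocs.next()) {
        ids[n++] = termDocs.doc();    // deleted docs are skipped by termDocs
      }
    }
  } finally {
    termDocs.close();
  }
  return new HashDocSet(ids, 0, n);
}

The resulting DocSet would stand in for myUniqueKeySetThatMayBeNull() above and be passed as the base filter to getDocSet()/getDocList(). For very large key sets a bitset-backed DocSet might be a better fit, depending on which constructors the Solr version at hand exposes.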