Hi Yonik,
Any update on sampling-based facets? The current faceting is really slow
for fields with high cardinality, even with method=uif. Are there
alternative workarounds that only look at N docs when computing facets?
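
In case it's useful context: as a stopgap I've been experimenting with a
purely client-side sample-and-extrapolate workaround, sketched below in
Python. The Solr URL, query, and field name are placeholders, and it
assumes the random_* dynamic field (RandomSortField) from the default
schemas plus a single-valued facet field, so treat it as a rough
illustration rather than a drop-in solution.

    import collections
    import requests  # assumes the 'requests' package is installed

    SOLR = "http://localhost:8983/solr/mycollection/select"  # placeholder URL
    SAMPLE_SIZE = 1000
    FACET_FIELD = "category_s"  # placeholder single-valued field

    # Fetch a random sample of matching docs. This relies on the random_*
    # dynamic field (RandomSortField) configured in the default Solr
    # schemas; changing the numeric seed suffix changes the sample.
    resp = requests.get(SOLR, params={
        "q": "some query",        # placeholder query
        "rows": SAMPLE_SIZE,
        "fl": FACET_FIELD,
        "sort": "random_1337 asc",
    }).json()

    num_found = resp["response"]["numFound"]
    docs = resp["response"]["docs"]

    # Count facet values over the sample only...
    counts = collections.Counter(
        doc[FACET_FIELD] for doc in docs if FACET_FIELD in doc
    )

    # ...then scale up to the full result set. These are estimates, not
    # exact counts.
    scale = num_found / max(len(docs), 1)
    estimated = {value: int(round(c * scale))
                 for value, c in counts.most_common(10)}
    print(estimated)

That's a fair amount of client-side work for something the facet module
could presumably do far more cheaply during collection, hence the question.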

On Fri, Nov 4, 2016 at 4:43 PM, Yonik Seeley <ysee...@gmail.com> wrote:

> Sampling has been on my TODO list for the JSON Facet API.
> How much it would help depends on where the bottlenecks are, but that
> in conjunction with a hashing approach to collection (assuming field
> cardinality is high) should definitely help.
>
> -Yonik
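
(As a concrete reference point on the collection method: the JSON Facet
API's terms facet takes a method option, and my assumption is that
"dvhash" is the hash-based collection approach being referred to here,
with "uif" being the method I mentioned above. A sketch in Python, with
the URL, query, and field name as placeholders:)

    import json
    import requests  # assumes the 'requests' package is installed

    SOLR = "http://localhost:8983/solr/mycollection/query"  # placeholder URL

    # Term facet via the JSON Facet API, requesting hash-based collection
    # over docvalues ("dvhash") instead of the default array-based
    # accumulation. Field name and query are placeholders.
    body = {
        "query": "some query",  # placeholder query
        "limit": 0,
        "facet": {
            "cats": {
                "type": "terms",
                "field": "category_s",
                "limit": 10,
                "method": "dvhash",
            },
        },
    }

    resp = requests.post(SOLR, json=body).json()
    print(json.dumps(resp.get("facets", {}), indent=2))
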
>
>
> On Fri, Nov 4, 2016 at 3:02 PM, John Davis <johndavis925...@gmail.com> wrote:
> > Hi,
> > I am trying to improve the performance of queries with facets. I
> > understand that for queries with high facet cardinality and a large
> > number of results, the current facet computation algorithms can be
> > slow, as they loop across all docs and facet values.
> >
> > Does there exist an option to compute facets by looking at just the
> > top-n results instead of all of them, or at a sample of results based
> > on some query parameters? I couldn't find one, and if it does not
> > exist, has this come up before? This would definitely not be a precise
> > facet count, but with reasonable sampling algorithms we should be able
> > to extrapolate well.
> >
> > Thank you in advance for any advice!
> >
> > John
>
