Hi:

when you start talking about really large data sets, with an extremely
large volume of unique field values for fields you want to facet on, then
"generic" solutions stop being very feasible, and you have to start looking
at solutions more tailored to your dataset.  At CNET, when dealing with
Product data, we don't make any attempt to use the Simple Facet support
Solr provides to facet on things like Manufacturer or Operating System,
because enumerating through every Manufacturer in the catalog on every
query would be too expensive -- instead we have structured metadata that
drives the logic: only compute the constraint counts for this subset of
manufacturers when looking at the Desktops category, only look at the
Operating System facet when in these categories, etc...  Rules like these
need to be defined based on your user experience, and it can be easy to
build them using the metadata in your index -- but they really
need to be precomputed, not calculated on the fly every time.
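
A minimal sketch of what such metadata-driven rules might look like: a
static table mapping a category to the facet fields worth computing for it,
so a request handler only enumerates facets the metadata says are relevant.
All of the category and field names here are invented for illustration --
they are not CNET's actual schema.

```java
import java.util.List;
import java.util.Map;

// Hypothetical rules table: category name -> facet fields to compute.
// In a real deployment this would be built from metadata in the index,
// not hard-coded.
public class FacetRules {
    static final Map<String, List<String>> RULES = Map.of(
        "Desktops", List.of("manufacturer", "operatingSystem"),
        "Cameras",  List.of("manufacturer", "resolution")
    );

    // Return the facet fields to compute for a category; empty if no
    // rule is defined, meaning no facets are shown for that category.
    static List<String> facetsFor(String category) {
        return RULES.getOrDefault(category, List.of());
    }

    public static void main(String[] args) {
        System.out.println(facetsFor("Desktops"));
    }
}
```

The point is just that the expensive decision (which facets matter where)
is made once, up front, rather than by walking every unique term at query
time.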

Sounds interesting. Could you please provide an example of how one would
go about doing a precomputed query?

For something like a Library system, where you might want to facet on
Author but have way too many for that to be practical, systems that either
require a category to be picked first (allowing you to constrain the list
of authors you need to worry about) or precompute the top 1000 authors
for displaying initially (when the user hasn't provided any other
constraints) are examples of the types of things a RequestHandler Solr
Plugin might do -- but the logic involved would probably be domain
specific.
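
The "precomputed top authors" idea above can be sketched like this: given
an (author -> document count) map built offline from the index, keep only
the N most frequent authors to display when the user has supplied no
constraints yet. The map and author names below are made up for the
example, and a real plugin would rebuild this list whenever the index
changes.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical offline step: reduce a full author->count map to the
// top-N list shown on an unconstrained query.
public class TopAuthors {
    static List<String> topN(Map<String, Integer> counts, int n) {
        return counts.entrySet().stream()
            // Sort by document count, most frequent first.
            .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
            .limit(n)
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
            Map.of("Austen", 120, "Tolstoy", 340, "Dickens", 210);
        System.out.println(topN(counts, 2)); // [Tolstoy, Dickens]
    }
}
```

Serving this cached list is cheap; the expensive enumeration of every
author happens once per index rebuild, not once per query.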

Specifically here, without getting any user constraints, how would one do
this? I thought facets needed to have user constraints?

I would appreciate your feedback.

thanks



-Hoss

