I've tried both of your recommendations (using facet.enum.cache.minDf=1000 and optimising the index). The query time is around 0.4-0.5s now, but it's still slow compared to the old "string" type. I haven't tried increasing the filterCache, since 1,000,000 cached items looks like a bit too much for my server at the moment. It's a pity that we can't force Solr to use the FieldCache. I think I might pre-process the "title" field and index it as "string" instead of using an analyser. However, that defeats the purpose of having pluggable analysers, tokenisers...
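The pre-processing idea above could be done client-side before sending documents to Solr. Here is a minimal sketch; the specific character rules and stopword list are assumptions for illustration, not the actual analyser chain in use:

```python
import re

# Hypothetical pre-processing for the "title" field, so it can be indexed
# as a Solr "string" type while keeping the old normalisation behaviour:
# lower-case the value, strip unwanted characters, drop unwanted words.
UNWANTED_CHARS = re.compile(r"[^a-z0-9 ]")  # assumed: keep alphanumerics only
STOPWORDS = {"the", "a", "an"}              # assumed word list

def preprocess_title(title: str) -> str:
    lowered = title.lower()
    cleaned = UNWANTED_CHARS.sub(" ", lowered)
    words = [w for w in cleaned.split() if w not in STOPWORDS]
    return " ".join(words)

print(preprocess_title("The Quick, Brown Fox!"))  # -> "quick brown fox"
```

The trade-off, as noted, is that the normalisation logic now lives outside Solr's pluggable analysis chain and has to be kept in sync by hand.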
On 7/17/07, Yonik Seeley <[EMAIL PROTECTED]> wrote:
On 7/16/07, climbingrose <[EMAIL PROTECTED]> wrote:
> Thanks Yonik. In my case, there is only one "title" field per document so is
> there a way to force Solr to work the old way? My analyser doesn't break up
> the "title" field into multiple tokens. It only tries to format the field
> value (to lower case, remove unwanted chars and words). Therefore, it's no
> different from using the "string" single-valued type.

There is currently no way to force Solr to use the FieldCache method.

Oh, and in "2) expand the size of the fieldcache to 1000000 if you have the
memory", that should have been filterCache, not fieldCache.

-Yonik

> I'll try your first recommendation to see how it goes.

Faceting typically proceeds much faster on an optimized index too.

-Yonik
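For reference, the filterCache Yonik refers to is configured in solrconfig.xml. A sketch of what the enlarged cache might look like; the initialSize and autowarmCount values below are assumptions, not recommended settings:

```xml
<!-- solrconfig.xml: the enum faceting method caches one filter per
     indexed term, so size needs to approach the field's term count -->
<filterCache
    class="solr.LRUCache"
    size="1000000"
    initialSize="512"
    autowarmCount="128"/>
```

Each cached filter is roughly a bitset over the index, so a cache this large can consume substantial heap on a big index, which matches the concern about server memory above.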
--
Regards,
Cuong Hoang