Time to say: Thank you all for your great support!
-Andre
> You need to use an untokenized field for facets.
At least 3 answers in 5 minutes... we should try synchronized swimming ;-)
-Yonik
On 9/13/06, Erik Hatcher <[EMAIL PROTECTED]> wrote:
> Would it ever make sense to generate facets on a tokenized field?
> Maybe the facet implementation could throw an error if the field name
> specified is tokenized?
I think it probably can make sense...
- finding top terms in a full-text field that ...
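A facet over a tokenized field is just an ordinary facet request. A minimal
sketch, assuming the example server at localhost:8983 and a hypothetical
full-text field named "text":

    http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=text

Each indexed term of "text" comes back with a count, which is the
"top terms" use described above.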
On Sep 13, 2006, at 9:37 PM, Chris Hostetter wrote:
> http://www.nabble.com/Error-in-faceted-browsing-tf2267819.html
> ...I'll try to update the docs for facet.field to make this more obvious.
Would it ever make sense to generate facets on a tokenized field?
Maybe the facet implementation could throw an error if the field name
specified is tokenized?
You need to use an untokenized field for facets. I can see we're
going to get this question frequently now - it was mentioned earlier
today in fact. You can use a <copyField> into a field that is
untokenized, such that you can use one field for searching, and one
for facets.
You are obviously using a stemming analyzer...
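A minimal schema.xml sketch of that setup (field names here are made up;
"text" is a tokenized type and "string" the untokenized solr.StrField type
from the example schema):

    <!-- tokenized field: search against this one -->
    <field name="subject" type="text" indexed="true" stored="true"/>
    <!-- untokenized copy: facet against this one -->
    <field name="subject_facet" type="string" indexed="true" stored="false"/>
    <!-- populate the facet field automatically at index time -->
    <copyField source="subject" dest="subject_facet"/>

Then search with q against subject and facet with
facet.field=subject_facet; the facet values come back exactly as indexed.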
Sorry, please ignore that email. Problem solved (I should read more mails...)
Thanks to Jeff.
Hi all,
I just installed the nightly build to try the Faceted Searching. After some
testing I discovered that some characters are missing in the result XML and
that fields with "/" chars are sometimes split into two entries.
On 9/13/06, Andre Basse <[EMAIL PROTECTED]> wrote:
> Example:
> 1 should be France
> 1 should be Culture/Festivals
Hi Andre,
Field faceting works over the indexed terms... so you get back what
was indexed (word splitting, lowercasing, stemming, etc... the
process is not generally reversible).
Perh...
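Concretely, with the two-field setup sketched above, the facet_fields
section of the response would look something like this (the analyzed forms
"franc", "cultur", "festiv" are assumptions about the stemmer; the
untokenized copy returns the original values):

    <lst name="facet_fields">
      <!-- tokenized field: analyzed terms come back -->
      <lst name="subject">
        <int name="franc">1</int>
        <int name="cultur">1</int>
        <int name="festiv">1</int>
      </lst>
      <!-- untokenized string copy: values come back verbatim -->
      <lst name="subject_facet">
        <int name="France">1</int>
        <int name="Culture/Festivals">1</int>
      </lst>
    </lst>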
: I just installed the nightly build to try the Faceted Searching. After
: some testing I discovered that some characters are missing in the result
: XML and that fields with "/" chars are sometimes split into two entries.
I believe what you are encountering is an issue of tokenization (or
analysis)...
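To illustrate the tokenization at work (assuming a typical analysis chain
that lowercases, splits on "/", and stems; the exact terms depend on the
analyzer configured for the field):

    "France"             ->  franc
    "Culture/Festivals"  ->  cultur, festiv

One stored value can become several index terms, which is why "/" values
show up as two facet entries and why the original strings can't be
recovered from the facet counts.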