On Tue, 24 Jun 2008 04:54:46 -0700 (PDT)
Otis Gospodnetic <[EMAIL PROTECTED]> wrote:

> One tokenizer is followed by filters.  I think this all might be a bit 
> clearer if you read the chapter about Analyzers in Lucene in Action if you 
> have a copy.  I think if you try to break down that "the result of all this 
> passed to " into something more concrete and real you will see how things 
> (should) work.


thanks Otis, from this and Ryan's previous reply I understand I was mistaken about
how I was seeing the process - I was expecting the filters / tokenizers to work
as processes, with the output of one going to the input of the next, in the
order shown in the fieldType definition... now that I write this I remember
reading some posts on this list about doing something like this ... open-pipe ?

anyway, it makes sense... not what I was hoping for, but it's what I have to
work with.
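
For anyone following along, the chain is declared inline in the fieldType in schema.xml, and the tokenizer and filters are applied in the order listed - a minimal sketch using standard Solr factories (the fieldType name here is made up for illustration):

```xml
<!-- One tokenizer, then filters applied in the order they appear;
     each filter consumes the token stream produced by the previous stage. -->
<fieldType name="text_pipeline" class="solr.TextField">
  <analyzer>
    <!-- Step 1: split the raw text into tokens -->
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- Step 2+: each filter rewrites the previous stage's tokens -->
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords.txt"/>
  </analyzer>
</fieldType>
```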

Now, if only I could get n-gram to work with search terms longer than minGramSize :P
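
(One approach I've seen suggested - a sketch only, and the gram sizes here are illustrative - is to apply the n-gram filter at index time only, with separate index/query analyzers, so a long query term is matched as-is against the indexed grams rather than being re-grammed at query time. The term still has to fit within maxGramSize for a whole-term match.)

```xml
<!-- Hypothetical fieldType: gram at index time, plain tokens at query time. -->
<fieldType name="text_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="15"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```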

Thanks for your time, help and recommendation of Lucene in Action.

B
_________________________
{Beto|Norberto|Numard} Meijome

"The greatest dangers to liberty lurk in insidious encroachment by men of zeal,
well-meaning but without understanding." Justice Louis D. Brandeis

I speak for myself, not my employer. Contents may be hot. Slippery when wet.
Reading disclaimers makes you go blind. Writing them is worse. You have been
Warned.
