Hi Friends,

I have a question about tokenizers. My scenario is:

During indexing I want to tokenize on all punctuation, so I can use
StandardTokenizer, but at search time I want to treat punctuation as
part of the text.

I don't store the contents, only the index.

What should I use? Any advice?
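For what it's worth, here is the mismatch I'm worried about, sketched in plain Python rather than Lucene (the two functions are rough stand-ins for a StandardTokenizer-style index analyzer and a whitespace-only query analyzer, not Lucene's actual implementations):

```python
import re

def index_tokens(text):
    # Rough stand-in for index-time analysis: split on punctuation
    # and whitespace, as a StandardTokenizer-style analyzer would.
    return [t for t in re.split(r"[\W_]+", text.lower()) if t]

def query_tokens(text):
    # Rough stand-in for the desired search-time behavior:
    # whitespace-only split, so punctuation stays inside the term.
    return text.lower().split()

doc = "Send e-mail to the dev list"
indexed = index_tokens(doc)     # ['send', 'e', 'mail', 'to', 'the', 'dev', 'list']
query = query_tokens("e-mail")  # ['e-mail']

# The punctuated query term can never match the punctuation-free
# index terms, so the query finds nothing:
print(any(q in indexed for q in query))  # False
```

So if the index terms are punctuation-free but the query terms keep their punctuation, the terms can never line up, which is why I'm unsure which analyzer combination to use.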


-- 
Thanks and kind regards,
Abhishek jain
