If your goal is simply to determine if a character occurs within a term, you
would have to use a wildcard such as *&*.
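For example (just a sketch, using a hypothetical field named "content" and the default example core; note that "&" must be percent-encoded as %26 when the query is sent in a URL, and wildcard queries are not analyzed, so they only match if the indexed terms still contain the character):

  http://localhost:8983/solr/collection1/select?q=content:*%26*

Bear in mind that a query with a leading wildcard can be slow on a large index.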
If your goal is to make special characters complete terms in their own right,
you may have to develop a custom tokenizer that emits multiple tokens at the
same position. Or maybe you want to treat these complex terms as completely
separate terms. You need to be clear about what you expect.
You'll have to tell us what your goal really is.
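To make the "multiple tokens at the same position" idea a bit more concrete, here is a rough, untested sketch, written as a Lucene TokenFilter rather than a full tokenizer (the class name is made up): whenever a token contains "&", it also emits a standalone "&" term at the same position (position increment 0). You would wrap it in a factory class and add it to the field type's analyzer chain in schema.xml.

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.util.AttributeSource;

public final class SpecialCharTokenFilter extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final PositionIncrementAttribute posIncAtt =
      addAttribute(PositionIncrementAttribute.class);

  private AttributeSource.State savedState; // attributes of the token just passed through
  private String pendingTerm;               // extra term to emit at the same position

  public SpecialCharTokenFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (pendingTerm != null) {
      // Emit the extra token on top of the previous one (same position).
      restoreState(savedState);
      termAtt.setEmpty().append(pendingTerm);
      posIncAtt.setPositionIncrement(0);
      pendingTerm = null;
      return true;
    }
    if (!input.incrementToken()) {
      return false;
    }
    String term = termAtt.toString();
    if (term.length() > 1 && term.contains("&")) {
      // Remember to also emit a standalone "&" right after this token.
      savedState = captureState();
      pendingTerm = "&";
    }
    return true;
  }

  @Override
  public void reset() throws IOException {
    super.reset();
    savedState = null;
    pendingTerm = null;
  }
}

If you really need to control what becomes a token in the first place, the same position-increment trick applies in a custom tokenizer as well.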
If one of these special characters occurs embedded in a larger term, how
many terms do you wish to generate, and how do you wish them to behave with
respect to things like phrase queries?
You need to be a lot more clear about your use case.
-- Jack Krupansky
-----Original Message-----
From: vsl
Sent: Wednesday, March 13, 2013 6:11 AM
To: solr-user@lucene.apache.org
Subject: Re: Special characters not indexed
After changing to the whitespace tokenizer there are still no results for the
search term "&". Only when the whole word ("§$ %&/( )=? +*#'-<>") was given
as the search term was this document shown in the results.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Special-characters-not-indexed-tp4046630p4046929.html
Sent from the Solr - User mailing list archive at Nabble.com.