Thanks, it works for me.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Multi-Language-Suggester-Solr-Issue-tp4176075p4176324.html
Sent from the Solr - User mailing list archive at Nabble.com.
I noticed that your suggester analyzers include
which seems like a bad idea -- this will strip all those Arabic, Russian,
and Japanese characters entirely, leaving you with probably only
whitespace in your tokens. Try just removing that?
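For reference, a suggester analyzer that keeps non-Latin scripts intact might look something like the sketch below. The fieldType name is illustrative, and this assumes the ICU analysis module (a standard Solr contrib) is on the classpath:

```xml
<!-- Sketch: a script-neutral analyzer for suggestions.
     StandardTokenizerFactory tokenizes Arabic, Cyrillic, and CJK text;
     ICUFoldingFilterFactory (from the ICU contrib module) normalizes
     case and diacritics without discarding non-Latin characters. -->
<fieldType name="text_suggest" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ICUFoldingFilterFactory"/>
  </analyzer>
</fieldType>
```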
-Mike
On 12/24/14 6:09 PM, alaa.abuzaghleh wrote:
Thanks, Eric, for your comment.
If I suggest by full_name I get good results; look at this result set:
http://localhost:9090/solr/people/suggest?q=full_name%3A%D9%85%D8%B3%D8%B9%D9%88%D8%AF&wt=json&indent=true
The result is:
{
  "responseHeader":{
    "status":0,
    "QTime":3,
    "params":{
These are interesting results...
I'm not a Unicode specialist, but a Japanese query cannot match Arabic
documents if both are correctly encoded.
I can't recommend such a use case (a single field for all languages),
but maybe you should inspect the "indexed" (analyzed) tokens, not the
"stored" data.
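One way to inspect the indexed tokens (assuming the default FieldAnalysisRequestHandler is registered, as in the stock solrconfig.xml) is the field analysis endpoint, mirroring the suggest URL above:

```
http://localhost:9090/solr/people/analysis/field?analysis.fieldname=full_name&analysis.fieldvalue=%D9%85%D8%B3%D8%B9%D9%88%D8%AF&wt=json&indent=true
```

The response shows the token stream produced by each stage of the field's analyzer chain, so you can see exactly where non-Latin characters get dropped. The Analysis screen in the Solr admin UI exposes the same information.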
Throwing all the languages into a single field and then searching/suggesting on
it is going to lead to "interesting" results. This is especially true when you
mix very different languages such as Arabic and Japanese -- and, apparently,
English is also in the mix.
The txt_general type isn't going to do much
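A common alternative to one catch-all field is one field per language, each with a language-appropriate analyzer. A sketch (field names here are illustrative; text_ar, text_ja, and text_en are the stock language fieldTypes shipped in Solr's example schema):

```xml
<!-- Sketch: one suggest field per language, populated from full_name
     via copyField, each analyzed with its own language chain. -->
<field name="full_name_ar" type="text_ar" indexed="true" stored="false"/>
<field name="full_name_ja" type="text_ja" indexed="true" stored="false"/>
<field name="full_name_en" type="text_en" indexed="true" stored="false"/>
<copyField source="full_name" dest="full_name_ar"/>
<copyField source="full_name" dest="full_name_ja"/>
<copyField source="full_name" dest="full_name_en"/>
```

Suggestions can then be drawn from the field matching the user's language, avoiding cross-script token collisions.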