Hi,
I don't think you can do it that way, because you cannot give your
analyzer a hint at runtime about which text fragment is more relevant
than another. There is no marker, so a filter cannot know which terms
to boost. You could write your own filter and let it read a file with
some terms to boost.
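
Something along these lines could work. This is only a rough sketch
against the current Lucene attribute API; BoostTermFilter, the term
set, and the one-byte marker payload are all made up here, and scoring
on the payload would still need a payload-aware query or similarity:

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PayloadAttribute;
import org.apache.lucene.util.BytesRef;

import java.io.IOException;
import java.util.Set;

/** Marks terms from a supplied set with a payload so that a
 *  payload-aware scorer can boost them later. */
public final class BoostTermFilter extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final PayloadAttribute payloadAtt = addAttribute(PayloadAttribute.class);
  private final Set<String> boostTerms; // e.g. loaded once from your file

  public BoostTermFilter(TokenStream input, Set<String> boostTerms) {
    super(input);
    this.boostTerms = boostTerms;
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    if (boostTerms.contains(termAtt.toString())) {
      payloadAtt.setPayload(new BytesRef(new byte[] { 1 })); // marker
    }
    return true;
  }
}

You would read the file into the set once and pass it in when you
build your analyzer chain.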
Thank you, Hoss.
Tokenization is not a problem in English, but in some other languages,
like Chinese, there are no spaces to separate the terms in an article.
It is one long string like "AABCDAEFSABS", in which "AA" and "BCD"
each represent a meaningful term, so I want to boost some special and
meaningful terms.
: If the query word is "ABCD", then after being tokenized it is "A" "BC" "D".
: I want to boost the term "BC", so the query becomes "A BC^10 D" plus the
: phrase query "ABCD". All query words users type in will be processed
: like that automatically.
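
The rewriting described above might look like this with a recent
Lucene query API. Again only a sketch: BooleanQuery.Builder,
BoostQuery, and PhraseQuery.Builder postdate this thread, and
BoostedQueryBuilder and the boosts map are invented for illustration:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

import java.io.IOException;
import java.util.Map;

public class BoostedQueryBuilder {

  /** Tokenizes text with the given analyzer, turns each token into a
   *  TermQuery (boosted if it is listed in boosts, e.g. "BC" -> 10f),
   *  and ORs in the whole token sequence as a phrase query. */
  public static Query build(Analyzer analyzer, String field, String text,
                            Map<String, Float> boosts) throws IOException {
    BooleanQuery.Builder bool = new BooleanQuery.Builder();
    PhraseQuery.Builder phrase = new PhraseQuery.Builder();
    try (TokenStream ts = analyzer.tokenStream(field, text)) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        String t = term.toString();
        Query q = new TermQuery(new Term(field, t));
        Float boost = boosts.get(t);
        if (boost != null) {
          q = new BoostQuery(q, boost); // e.g. "BC"^10
        }
        bool.add(q, BooleanClause.Occur.SHOULD);
        phrase.add(new Term(field, t)); // rebuilds the phrase "ABCD"
      }
      ts.end();
    }
    bool.add(phrase.build(), BooleanClause.Occur.SHOULD);
    return bool.build();
  }
}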
It's not really clear from your example how