Digging into the code, I see this:
[code]
public SpanWeight(SpanQuery query, IndexSearcher searcher)
    throws IOException {
  this.similarity = searcher.getSimilarity();
  this.query = query;

  // collects every term of the (possibly nested) span query
  termContexts = new HashMap<>();
  TreeSet<Term> terms = new TreeSet<>();
  query.extractTerms(terms);
  ...
[/code]
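For context, that extractTerms() call just gathers the terms of the
(possibly nested) span query into the TreeSet. Below is a minimal sketch of
what it collects for the query quoted at the bottom of this thread (Lucene
4.x span API; the plain "e"/"commerce" terms are simplified from the dumped
ones):

[code]
import java.util.TreeSet;

import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class ExtractTermsSketch {
  public static void main(String[] args) {
    // inner clause: "e commerce" with slop 0, in order
    SpanQuery inner = new SpanNearQuery(new SpanQuery[] {
        new SpanTermQuery(new Term("Contents", "e")),
        new SpanTermQuery(new Term("Contents", "commerce"))
    }, 0, true);
    // outer clause: "the" within 300 positions of the inner phrase, unordered
    SpanQuery outer = new SpanNearQuery(new SpanQuery[] {
        new SpanTermQuery(new Term("Contents", "the")),
        inner
    }, 300, false);

    // same call as in the SpanWeight constructor above
    TreeSet<Term> terms = new TreeSet<>();
    outer.extractTerms(terms);
    System.out.println(terms); // [Contents:commerce, Contents:e, Contents:the]
  }
}
[/code]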
To clarify further: we use StandardTokenizer & StandardFilter in front
of the WDF. Already after the StandardTokenizer's transformations, "e-tail"
gets split into two consecutive tokens.
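For reference, the positions can also be dumped outside Solr with something
like the sketch below (Lucene 4.x analysis API; GENERATE_WORD_PARTS alone is
illustrative rather than the exact WDF configuration, and StandardFilter is
omitted). Tokens that end up "glued" on one position would print the same
number:

[code]
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

public class PositionDump {
  public static void main(String[] args) throws Exception {
    StandardTokenizer tokenizer =
        new StandardTokenizer(new StringReader("E-Tail commerce"));
    TokenStream ts = new WordDelimiterFilter(
        tokenizer, WordDelimiterFilter.GENERATE_WORD_PARTS, null);

    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posInc =
        ts.addAttribute(PositionIncrementAttribute.class);

    ts.reset();
    int pos = -1;
    while (ts.incrementToken()) {
      pos += posInc.getPositionIncrement();
      System.out.println(term + " @ position " + pos);
    }
    ts.end();
    ts.close();
  }
}
[/code]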
On Mon, Jun 15, 2015 at 10:08 AM, Dmitry Kan wrote:
Thanks, Erick. Analysis page shows the positions are growing => there are no
"glued" words on the same position.
On Sun, Jun 14, 2015 at 6:10 PM, Erick Erickson wrote:
My guess is that you have WordDelimiterFilterFactory in your
analysis chain with parameters that break up E-Tail into both "e" and "tail"
_and_ put them in the same position. This assumes that the result fragment
you pasted is incomplete and "commerce" is in it, i.e.

From E-Tail commerce

or some such.
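One way to check that hypothesis outside Solr is to analyze a single
fragment in a MemoryIndex and run just the inner span clause against it: a
non-zero score means the slop-0 "e commerce" phrase matches that fragment.
This is only a sketch on the Lucene 4.x API; the inline analyzer
(StandardTokenizer -> WDF -> LowerCaseFilter, with guessed flags) is a
stand-in for the field's real analysis chain:

[code]
import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class SloppyMatchCheck {
  public static void main(String[] args) throws Exception {
    // stand-in for the real fieldType; tokenizer, filters and flags are guesses
    Analyzer analyzer = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String field, Reader reader) {
        Tokenizer src = new StandardTokenizer(reader);
        TokenStream tok = new WordDelimiterFilter(
            src, WordDelimiterFilter.GENERATE_WORD_PARTS, null);
        tok = new LowerCaseFilter(tok);
        return new TokenStreamComponents(src, tok);
      }
    };

    // analyze one fragment in memory instead of reindexing
    MemoryIndex index = new MemoryIndex();
    index.addField("Contents", "From E-Tail commerce", analyzer);

    // the inner clause of the parsed query: "e commerce", slop 0, in order
    SpanQuery eCommerce = new SpanNearQuery(new SpanQuery[] {
        new SpanTermQuery(new Term("Contents", "e")),
        new SpanTermQuery(new Term("Contents", "commerce"))
    }, 0, true);

    // 0.0 = no match; anything else = the sloppy phrase matches this fragment
    System.out.println("score = " + index.search(eCommerce));
  }
}
[/code]

Swapping in the field's real tokenizer and WDF parameters should show whether
"e" and "commerce" really end up close enough for the slop-0 clause to match.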
Hi guys,

We observe a strange bug in Solr 4.10.2, whereby a sloppy query hits
words it should not. The query and its parsed span form:

the "e commerce"

SpanNearQuery(spanNear([Contents:the,
  spanNear([Contents:eä, Contents:commerceä], 0, true)], 300, false))