[GitHub] [lucene] zhaih commented on a diff in pull request #11881: Further optimize DrillSideways scoring

2022-11-04 Thread GitBox


zhaih commented on code in PR #11881:
URL: https://github.com/apache/lucene/pull/11881#discussion_r1013684022


##
lucene/facet/src/java/org/apache/lucene/facet/DrillSidewaysScorer.java:
##
@@ -166,89 +160,158 @@ public int score(LeafCollector collector, Bits acceptDocs, int min, int maxDoc)
     return Integer.MAX_VALUE;
   }
 
+  /**
+   * Query-first scoring specialization when there is only one drill-sideways dimension, which is
+   * likely a common scenario.
+   */
+  private void doQueryFirstScoringSingleDim(
+      Bits acceptDocs, LeafCollector collector, DocsAndCost dim) throws IOException {
+    int docID = baseApproximation.docID();
+    while (docID != DocIdSetIterator.NO_MORE_DOCS) {
+      assert docID == baseApproximation.docID();
+
+      if (acceptDocs != null && acceptDocs.get(docID) == false) {
+        docID = baseApproximation.nextDoc();
+        continue;
+      }
+
+      if (baseTwoPhase != null && baseTwoPhase.matches() == false) {
+        docID = baseApproximation.nextDoc();
+        continue;
+      }
+
+      // We have either a near-miss or full match. Check the sideways dim to see which it is:
+      collectDocID = docID;
+      if (advanceIfBehind(docID, dim.approximation) != docID
+          || (dim.twoPhase != null && dim.twoPhase.matches() == false)) {
+        // The sideways dim missed, so we have a "near miss":
+        collectNearMiss(dim.sidewaysLeafCollector);
+      } else {
+        // Hit passed all filters, so it's "real":
+        collectHit(collector, dim);
+      }
+
+      docID = baseApproximation.nextDoc();
+    }
+  }
+
   /**
    * Used when base query is highly constraining vs the drilldowns, or when the docs must be scored
-   * at once (i.e., like BooleanScorer2, not BooleanScorer). In this case we just .next() on base
-   * and .advance() on the dim filters.
+   * at once (i.e., like BooleanScorer2, not BooleanScorer).
    */
   private void doQueryFirstScoring(Bits acceptDocs, LeafCollector collector, DocsAndCost[] dims)
       throws IOException {
     setScorer(collector, ScoreCachingWrappingScorer.wrap(baseScorer));
 
-    List<DocsAndCost> allDims = Arrays.asList(dims);
-    CollectionUtil.timSort(allDims, APPROXIMATION_COMPARATOR);
+    // Specialize the single-dim use-case as we have a more efficient implementation for that:
+    if (dims.length == 1) {
+      doQueryFirstScoringSingleDim(acceptDocs, collector, dims[0]);
+      return;
+    }
+
+    // Sort our sideways dims by approximation cost so we can advance the lower cost ones first:
+    List<DocsAndCost> sidewaysDims = new ArrayList<>(dims.length);
+    sidewaysDims.addAll(List.of(dims));
+    CollectionUtil.timSort(sidewaysDims, APPROXIMATION_COMPARATOR);
 
-    List<DocsAndCost> twoPhaseDims = null;
+    // Maintain (optional) subset of sideways dims that support two-phase iteration, sorted by
+    // matchCost:
+    List<DocsAndCost> sidewaysTwoPhaseDims = null;
     for (DocsAndCost dim : dims) {
       if (dim.twoPhase != null) {
-        if (twoPhaseDims == null) {
-          twoPhaseDims = new ArrayList<>(dims.length);
+        if (sidewaysTwoPhaseDims == null) {
+          sidewaysTwoPhaseDims = new ArrayList<>();
         }
-        twoPhaseDims.add(dim);
+        sidewaysTwoPhaseDims.add(dim);
       }
     }
-    if (twoPhaseDims != null) {
-      CollectionUtil.timSort(twoPhaseDims, TWO_PHASE_COMPARATOR);
+    if (sidewaysTwoPhaseDims != null) {
+      CollectionUtil.timSort(sidewaysTwoPhaseDims, TWO_PHASE_COMPARATOR);
     }
 
+    // We keep track of a "runaway" dimension, which is a previously "near missed" dimension that
+    // has advanced beyond the docID the rest of the dimensions are positioned on. This functions
+    // a bit like the "head" queue in WANDScorer's "min should match" implementation. We use a
+    // single-valued PQ ordered by docID to easily determine the "closest" runaway dim we'll use
+    // for advancing in the case that multiple dim approximations miss.
+    PriorityQueue<DocsAndCost> runawayDim =
+        new PriorityQueue<>(1) {
+          @Override
+          protected boolean lessThan(DocsAndCost a, DocsAndCost b) {
+            return a.approximation.docID() < b.approximation.docID();
+          }
+        };
+
     int docID = baseApproximation.docID();
 
     nextDoc:
-    while (docID != PostingsEnum.NO_MORE_DOCS) {
+    while (docID != DocIdSetIterator.NO_MORE_DOCS) {
       assert docID == baseApproximation.docID();
 
       if (acceptDocs != null && acceptDocs.get(docID) == false) {
         docID = baseApproximation.nextDoc();
         continue;
       }
 
-      DocsAndCost failedDim = null;
-      for (DocsAndCost dim : allDims) {
-        final int dimDocID;
-        if (dim.approximation.docID() < docID) {
-          dimDocID = dim.approximation.advance(docID);
-        } else {
-          dimDocID = dim.approximation.docID();
-        }
-        if (dimDocID != docID) {
-          if (failedDim != null) {
-            int next = Math.min(dimDocID, failedDim
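Note: the single-dim path above calls an `advanceIfBehind` helper that is not part of the quoted hunk. Judging by the inline advance logic it replaces in the multi-dim loop, it plausibly does something like the following sketch (an assumption, not the PR's actual code; `DocIdSetIterator` is org.apache.lucene.search.DocIdSetIterator):

    // Sketch (assumption): advance the iterator to `target` only if it is
    // currently behind it; otherwise report its current position unchanged.
    private static int advanceIfBehind(int target, DocIdSetIterator disi) throws IOException {
      if (disi.docID() < target) {
        return disi.advance(target);
      }
      return disi.docID();
    }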

[GitHub] [lucene] A-U commented on pull request #816: LUCENE-10519: Improvement for CloseableThreadLocal

2022-11-04 Thread GitBox


A-U commented on PR #816:
URL: https://github.com/apache/lucene/pull/816#issuecomment-1303067203

   I tried to replace the `ThreadLocal` in `SolrQueryTimeoutImpl` with this new `CloseableThreadLocal`, and some of the test cases timed out when reading the DocValues. It still happened after switching the lock to `ReentrantReadWriteLock`. So it might be better to add a caveat about using `CloseableThreadLocal` in read-heavy scenarios.
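   To illustrate the concern (a minimal hypothetical sketch, not Solr's or Lucene's actual code): if every per-thread lookup has to take a shared lock, read-heavy paths such as per-document DocValues access serialize on that lock, and a read/write lock only cheapens, not removes, the per-call synchronization:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Hypothetical lock-guarded thread-local stand-in, used only to illustrate
    // the contention pattern described above.
    final class LockedThreadLocal<T> {
      private final Map<Thread, T> values = new HashMap<>();
      private final ReadWriteLock lock = new ReentrantReadWriteLock();

      T get() {
        lock.readLock().lock(); // taken on every read, even when nothing is purging
        try {
          return values.get(Thread.currentThread());
        } finally {
          lock.readLock().unlock();
        }
      }

      void set(T value) {
        lock.writeLock().lock(); // blocks all readers while updating
        try {
          values.put(Thread.currentThread(), value);
        } finally {
          lock.writeLock().unlock();
        }
      }
    }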





[GitHub] [lucene] donnerpeter commented on pull request #11893: hunspell: allow for faster dictionary iteration during 'suggest' by using more memory (opt-in)

2022-11-04 Thread GitBox


donnerpeter commented on PR #11893:
URL: https://github.com/apache/lucene/pull/11893#issuecomment-1303240921

   I've realized that the entry cache should have a longer lifetime than `checkCanceled`, so I moved the suggester stuff into a separate class (where I also plan to add more optimizations). I'd appreciate comments on the new APIs.





[GitHub] [lucene] dweiss commented on a diff in pull request #11893: hunspell: allow for faster dictionary iteration during 'suggest' by using more memory (opt-in)

2022-11-04 Thread GitBox


dweiss commented on code in PR #11893:
URL: https://github.com/apache/lucene/pull/11893#discussion_r1013880109


##
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/Suggester.java:
##
@@ -0,0 +1,221 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+import static org.apache.lucene.analysis.hunspell.Dictionary.FLAG_UNSET;
+import static org.apache.lucene.analysis.hunspell.TimeoutPolicy.NO_TIMEOUT;
+import static org.apache.lucene.analysis.hunspell.WordContext.COMPOUND_BEGIN;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import org.apache.lucene.util.CharsRef;
+
+/**
+ * A generator for misspelled word corrections based on Hunspell flags. The suggestions are
+ * searched for in two main ways:
+ *
+ * <ul>
+ *   <li>Modification: trying to insert/remove/delete/swap parts of the word to get something
+ *       acceptable. The performance of this part depends heavily on the contents of TRY, MAP,
+ *       REP, KEY directives in the .aff file.
+ *   <li>Enumeration: if the modification hasn't produced "good enough" suggestions, the whole
+ *       dictionary is scanned and simple affixes are added onto the entries to check if that
+ *       produces anything similar to the given misspelled word. This depends on the dictionary
+ *       size and the affix count, and it can take noticeable amount of time. To speed this up,
+ *       {@link #withSuggestibleEntryCache()} can be used.
+ * </ul>
+ */
+public class Suggester {
+  private final Dictionary dictionary;
+  private final SuggestibleEntryCache suggestibleCache;
+
+  public Suggester(Dictionary dictionary) {
+    this(dictionary, null);
+  }
+
+  private Suggester(Dictionary dictionary, SuggestibleEntryCache suggestibleCache) {
+    this.dictionary = dictionary;
+    this.suggestibleCache = suggestibleCache;
+  }
+
+  /**
+   * Returns a copy of this suggester instance with better "Enumeration" phase performance (see
+   * {@link Suggester} documentation), but using more memory. With this option, the dictionary
+   * entries are stored as fast-to-iterate plain words instead of highly compressed prefix trees.
+   */
+  public Suggester withSuggestibleEntryCache() {

Review Comment:
   I like the Suggester as a separate entity, but this bit seems a bit awkward to me - it's a chicken-and-egg situation (the constructor accepts a SuggestibleEntryCache, but the SuggestibleEntryCache is created via a non-static method on the suggester). If the cache's lifecycle can be longer than the Suggester's, then perhaps it should just be left as an explicit argument of a public constructor, without the static factory / modifier method on the Suggester? Then folks who need the cache can compile it (for the same dictionary) and pass it to one or more suggesters, however they like.
   
   Which also makes me think that buildCache should accept the dictionary reference, and then a sanity check could be added in the suggester to make sure the cache and the suggester's dictionary are the same... Or even modify the constructor to accept just the entry cache, which would piggyback the dictionary reference consistently.
   
   Finally - I'm not a native speaker, but wouldn't it be simpler to call the cache just an EntryCache (or a SuggestionsCache)? Suggestible sounds a bit awkward to my (Slavic) ear, but maybe it's just me.
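   Roughly the shape I mean, as an illustrative sketch (hypothetical names and bodies, not the actual hunspell classes):

    // Hypothetical: the cache is compiled explicitly for a dictionary, so its
    // lifecycle is independent of any Suggester.
    final class SuggestibleEntryCache {
      final Dictionary dictionary;

      private SuggestibleEntryCache(Dictionary dictionary) {
        this.dictionary = dictionary;
      }

      static SuggestibleEntryCache buildCache(Dictionary dictionary) {
        return new SuggestibleEntryCache(dictionary);
      }
    }

    class Suggester {
      private final Dictionary dictionary;
      private final SuggestibleEntryCache cache; // may be null

      Suggester(Dictionary dictionary) {
        this(dictionary, null);
      }

      // Public constructor with the sanity check mentioned above: the cache
      // must have been built for the same dictionary.
      Suggester(Dictionary dictionary, SuggestibleEntryCache cache) {
        if (cache != null && cache.dictionary != dictionary) {
          throw new IllegalArgumentException("cache was built for a different dictionary");
        }
        this.dictionary = dictionary;
        this.cache = cache;
      }
    }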






[GitHub] [lucene] donnerpeter commented on a diff in pull request #11893: hunspell: allow for faster dictionary iteration during 'suggest' by using more memory (opt-in)

2022-11-04 Thread GitBox


donnerpeter commented on code in PR #11893:
URL: https://github.com/apache/lucene/pull/11893#discussion_r1013968050


##
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/Suggester.java:
##
@@ -0,0 +1,221 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+import static org.apache.lucene.analysis.hunspell.Dictionary.FLAG_UNSET;
+import static org.apache.lucene.analysis.hunspell.TimeoutPolicy.NO_TIMEOUT;
+import static org.apache.lucene.analysis.hunspell.WordContext.COMPOUND_BEGIN;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import org.apache.lucene.util.CharsRef;
+
+/**
+ * A generator for misspelled word corrections based on Hunspell flags. The suggestions are
+ * searched for in two main ways:
+ *
+ * <ul>
+ *   <li>Modification: trying to insert/remove/delete/swap parts of the word to get something
+ *       acceptable. The performance of this part depends heavily on the contents of TRY, MAP,
+ *       REP, KEY directives in the .aff file.
+ *   <li>Enumeration: if the modification hasn't produced "good enough" suggestions, the whole
+ *       dictionary is scanned and simple affixes are added onto the entries to check if that
+ *       produces anything similar to the given misspelled word. This depends on the dictionary
+ *       size and the affix count, and it can take noticeable amount of time. To speed this up,
+ *       {@link #withSuggestibleEntryCache()} can be used.
+ * </ul>
+ */
+public class Suggester {
+  private final Dictionary dictionary;
+  private final SuggestibleEntryCache suggestibleCache;
+
+  public Suggester(Dictionary dictionary) {
+    this(dictionary, null);
+  }
+
+  private Suggester(Dictionary dictionary, SuggestibleEntryCache suggestibleCache) {
+    this.dictionary = dictionary;
+    this.suggestibleCache = suggestibleCache;
+  }
+
+  /**
+   * Returns a copy of this suggester instance with better "Enumeration" phase performance (see
+   * {@link Suggester} documentation), but using more memory. With this option, the dictionary
+   * entries are stored as fast-to-iterate plain words instead of highly compressed prefix trees.
+   */
+  public Suggester withSuggestibleEntryCache() {

Review Comment:
   Thank you!
   
   I've tried to avoid making the cache a publicly-accessible API because I don't think clients need such detail. The cache's lifecycle can be longer than that of the `checkCanceled` runnable (which is now bound to `Hunspell`, unfortunately, but it turned out that we need to create it afresh on each `spell`/`suggest` call), not necessarily longer than the suggester's lifecycle. So far I envision clients creating a single suggester for a dictionary, customizing it in the way they need, and keeping it in memory.
   
   If the cache remains a private API, renaming it to `EntryCache` seems fine to me. If it should be made public, I'd keep something about suggestions in the name. But `SuggestionCache` would mean that suggestions themselves are cached, while here I cache some intermediate results.
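   Concretely, the usage I envision, as a small sketch (the `withSuggestibleEntryCache()` call is the API from the quoted diff; obtaining the `Dictionary` is elided):

    // One long-lived suggester per dictionary; the opt-in cache is compiled once
    // and kept in memory alongside it. withSuggestibleEntryCache() returns a copy
    // of the suggester that trades memory for faster "Enumeration" phase scans.
    Suggester suggester = new Suggester(dictionary).withSuggestibleEntryCache();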






[GitHub] [lucene] msokolov merged pull request #11899: #11896: reduce top k in test to avoid split-graph

2022-11-04 Thread GitBox


msokolov merged PR #11899:
URL: https://github.com/apache/lucene/pull/11899





[GitHub] [lucene] msokolov closed issue #11896: TestLucene91HnswVectorsFormat reproducible failure

2022-11-04 Thread GitBox


msokolov closed issue #11896: TestLucene91HnswVectorsFormat reproducible failure
URL: https://github.com/apache/lucene/issues/11896





[GitHub] [lucene] msokolov commented on issue #11896: TestLucene91HnswVectorsFormat reproducible failure

2022-11-04 Thread GitBox


msokolov commented on issue #11896:
URL: https://github.com/apache/lucene/issues/11896#issuecomment-1303544526

   should be fixed; feel free to re-open if you see it happen again





[GitHub] [lucene] gsmiller commented on a diff in pull request #11881: Further optimize DrillSideways scoring

2022-11-04 Thread GitBox


gsmiller commented on code in PR #11881:
URL: https://github.com/apache/lucene/pull/11881#discussion_r1014106369


##
lucene/facet/src/java/org/apache/lucene/facet/DrillSidewaysScorer.java:
##
@@ -166,89 +160,158 @@ public int score(LeafCollector collector, Bits acceptDocs, int min, int maxDoc)
     return Integer.MAX_VALUE;
   }
 
+  /**
+   * Query-first scoring specialization when there is only one drill-sideways dimension, which is
+   * likely a common scenario.
+   */
+  private void doQueryFirstScoringSingleDim(
+      Bits acceptDocs, LeafCollector collector, DocsAndCost dim) throws IOException {
+    int docID = baseApproximation.docID();
+    while (docID != DocIdSetIterator.NO_MORE_DOCS) {
+      assert docID == baseApproximation.docID();
+
+      if (acceptDocs != null && acceptDocs.get(docID) == false) {
+        docID = baseApproximation.nextDoc();
+        continue;
+      }
+
+      if (baseTwoPhase != null && baseTwoPhase.matches() == false) {
+        docID = baseApproximation.nextDoc();
+        continue;
+      }
+
+      // We have either a near-miss or full match. Check the sideways dim to see which it is:
+      collectDocID = docID;
+      if (advanceIfBehind(docID, dim.approximation) != docID
+          || (dim.twoPhase != null && dim.twoPhase.matches() == false)) {
+        // The sideways dim missed, so we have a "near miss":
+        collectNearMiss(dim.sidewaysLeafCollector);
+      } else {
+        // Hit passed all filters, so it's "real":
+        collectHit(collector, dim);
+      }
+
+      docID = baseApproximation.nextDoc();
+    }
+  }
+
   /**
    * Used when base query is highly constraining vs the drilldowns, or when the docs must be scored
-   * at once (i.e., like BooleanScorer2, not BooleanScorer). In this case we just .next() on base
-   * and .advance() on the dim filters.
+   * at once (i.e., like BooleanScorer2, not BooleanScorer).
   */
   private void doQueryFirstScoring(Bits acceptDocs, LeafCollector collector, DocsAndCost[] dims)
       throws IOException {
     setScorer(collector, ScoreCachingWrappingScorer.wrap(baseScorer));
 
-    List<DocsAndCost> allDims = Arrays.asList(dims);
-    CollectionUtil.timSort(allDims, APPROXIMATION_COMPARATOR);
+    // Specialize the single-dim use-case as we have a more efficient implementation for that:
+    if (dims.length == 1) {
+      doQueryFirstScoringSingleDim(acceptDocs, collector, dims[0]);
+      return;
+    }
+
+    // Sort our sideways dims by approximation cost so we can advance the lower cost ones first:
+    List<DocsAndCost> sidewaysDims = new ArrayList<>(dims.length);
+    sidewaysDims.addAll(List.of(dims));
+    CollectionUtil.timSort(sidewaysDims, APPROXIMATION_COMPARATOR);
 
-    List<DocsAndCost> twoPhaseDims = null;
+    // Maintain (optional) subset of sideways dims that support two-phase iteration, sorted by
+    // matchCost:
+    List<DocsAndCost> sidewaysTwoPhaseDims = null;
     for (DocsAndCost dim : dims) {
       if (dim.twoPhase != null) {
-        if (twoPhaseDims == null) {
-          twoPhaseDims = new ArrayList<>(dims.length);
+        if (sidewaysTwoPhaseDims == null) {
+          sidewaysTwoPhaseDims = new ArrayList<>();
        }
-        twoPhaseDims.add(dim);
+        sidewaysTwoPhaseDims.add(dim);
      }
    }
-    if (twoPhaseDims != null) {
-      CollectionUtil.timSort(twoPhaseDims, TWO_PHASE_COMPARATOR);
+    if (sidewaysTwoPhaseDims != null) {
+      CollectionUtil.timSort(sidewaysTwoPhaseDims, TWO_PHASE_COMPARATOR);
    }
 
+    // We keep track of a "runaway" dimension, which is a previously "near missed" dimension that
+    // has advanced beyond the docID the rest of the dimensions are positioned on. This functions
+    // a bit like the "head" queue in WANDScorer's "min should match" implementation. We use a
+    // single-valued PQ ordered by docID to easily determine the "closest" runaway dim we'll use
+    // for advancing in the case that multiple dim approximations miss.
+    PriorityQueue<DocsAndCost> runawayDim =
+        new PriorityQueue<>(1) {
+          @Override
+          protected boolean lessThan(DocsAndCost a, DocsAndCost b) {
+            return a.approximation.docID() < b.approximation.docID();
+          }
+        };
+
     int docID = baseApproximation.docID();
 
     nextDoc:
-    while (docID != PostingsEnum.NO_MORE_DOCS) {
+    while (docID != DocIdSetIterator.NO_MORE_DOCS) {
       assert docID == baseApproximation.docID();
 
       if (acceptDocs != null && acceptDocs.get(docID) == false) {
         docID = baseApproximation.nextDoc();
         continue;
      }
 
-      DocsAndCost failedDim = null;
-      for (DocsAndCost dim : allDims) {
-        final int dimDocID;
-        if (dim.approximation.docID() < docID) {
-          dimDocID = dim.approximation.advance(docID);
-        } else {
-          dimDocID = dim.approximation.docID();
-        }
-        if (dimDocID != docID) {
-          if (failedDim != null) {
-            int next = Math.min(dimDocID, failed

[GitHub] [lucene] dweiss commented on a diff in pull request #11893: hunspell: allow for faster dictionary iteration during 'suggest' by using more memory (opt-in)

2022-11-04 Thread GitBox


dweiss commented on code in PR #11893:
URL: https://github.com/apache/lucene/pull/11893#discussion_r1014272956


##
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/Suggester.java:
##
@@ -0,0 +1,221 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+import static org.apache.lucene.analysis.hunspell.Dictionary.FLAG_UNSET;
+import static org.apache.lucene.analysis.hunspell.TimeoutPolicy.NO_TIMEOUT;
+import static org.apache.lucene.analysis.hunspell.WordContext.COMPOUND_BEGIN;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import org.apache.lucene.util.CharsRef;
+
+/**
+ * A generator for misspelled word corrections based on Hunspell flags. The suggestions are
+ * searched for in two main ways:
+ *
+ * <ul>
+ *   <li>Modification: trying to insert/remove/delete/swap parts of the word to get something
+ *       acceptable. The performance of this part depends heavily on the contents of TRY, MAP,
+ *       REP, KEY directives in the .aff file.
+ *   <li>Enumeration: if the modification hasn't produced "good enough" suggestions, the whole
+ *       dictionary is scanned and simple affixes are added onto the entries to check if that
+ *       produces anything similar to the given misspelled word. This depends on the dictionary
+ *       size and the affix count, and it can take noticeable amount of time. To speed this up,
+ *       {@link #withSuggestibleEntryCache()} can be used.
+ * </ul>
+ */
+public class Suggester {
+  private final Dictionary dictionary;
+  private final SuggestibleEntryCache suggestibleCache;
+
+  public Suggester(Dictionary dictionary) {
+    this(dictionary, null);
+  }
+
+  private Suggester(Dictionary dictionary, SuggestibleEntryCache suggestibleCache) {
+    this.dictionary = dictionary;
+    this.suggestibleCache = suggestibleCache;
+  }
+
+  /**
+   * Returns a copy of this suggester instance with better "Enumeration" phase performance (see
+   * {@link Suggester} documentation), but using more memory. With this option, the dictionary
+   * entries are stored as fast-to-iterate plain words instead of highly compressed prefix trees.
+   */
+  public Suggester withSuggestibleEntryCache() {

Review Comment:
   Ok, makes sense. Thanks for explaining this to me - looking at the diff only, I didn't get the whole picture. I still feel like that static mutating method is a bit awkward, but if it's an experimental API then it's fine to change it as you go - no problem there.
   
   As for naming - if nobody from the English part of the world expresses their opinion, leave it as it is. I really don't feel like I'm in the position to have the final say here :)


