[GitHub] [lucene-site] atris commented on a change in pull request #33: Fix change log URL
atris commented on a change in pull request #33: URL: https://github.com/apache/lucene-site/pull/33#discussion_r517862652 ## File path: content/core/core_news/2020-11-03-8-7-0-available.md ## @@ -19,8 +19,8 @@ Faster sorting by field. When a doc-value field is also indexed with points, Luc Faster flushing of stored fields when index sorting is enabled. This can significantly speed up indexing when a non-negligible amount of data is stored in the index and index sorting is enabled. -Further details of changes are available in the change log available at: http://lucene.apache.org/core/8_7/changes/Changes.html +Further details of changes are available in the change log available at: http://lucene.apache.org/core/8_7_0/changes/Changes.html Please report any feedback to the mailing lists (http://lucene.apache.org/core/discussion.html) -Note: The Apache Software Foundation uses an extensive mirroring network for distributing releases. It is possible that the mirror you are using may not have replicated the release yet. If that is the case, please try another mirror. This also applies to Maven access. \ No newline at end of file +Note: The Apache Software Foundation uses an extensive mirroring network for distributing releases. It is possible that the mirror you are using may not have replicated the release yet. If that is the case, please try another mirror. This also applies to Maven access. Review comment: Unintended change? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] mocobeta opened a new pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
mocobeta opened a new pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064 This refactors the misc module to resolve package name conflicts between misc and lucene-core. It:
- moves `o.a.l.document`, `o.a.l.index`, `o.a.l.search`, `o.a.l.store`, `o.a.l.util`, and their sub-packages under `o.a.l.misc`.
- makes some refactorings to remove unnecessary accesses to package-private methods in `lucene-core`.
- makes methods in o.a.l.index.DocHelper and o.a.l.util.FSTTester in test-framework publicly accessible for unit testing.
- leaves lucene-core untouched.
[GitHub] [lucene-site] johtani commented on a change in pull request #33: Fix change log URL
johtani commented on a change in pull request #33: URL: https://github.com/apache/lucene-site/pull/33#discussion_r517865206 ## File path: content/core/core_news/2020-11-03-8-7-0-available.md ## @@ -19,8 +19,8 @@ Faster sorting by field. When a doc-value field is also indexed with points, Luc Faster flushing of stored fields when index sorting is enabled. This can significantly speed up indexing when a non-negligible amount of data is stored in the index and index sorting is enabled. -Further details of changes are available in the change log available at: http://lucene.apache.org/core/8_7/changes/Changes.html +Further details of changes are available in the change log available at: http://lucene.apache.org/core/8_7_0/changes/Changes.html Please report any feedback to the mailing lists (http://lucene.apache.org/core/discussion.html) -Note: The Apache Software Foundation uses an extensive mirroring network for distributing releases. It is possible that the mirror you are using may not have replicated the release yet. If that is the case, please try another mirror. This also applies to Maven access. \ No newline at end of file +Note: The Apache Software Foundation uses an extensive mirroring network for distributing releases. It is possible that the mirror you are using may not have replicated the release yet. If that is the case, please try another mirror. This also applies to Maven access. Review comment: Yes. I made this PR in the browser. I don't know why it changed...
[GitHub] [lucene-solr] mocobeta commented on a change in pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
mocobeta commented on a change in pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064#discussion_r517888111 ## File path: lucene/misc/src/java/org/apache/lucene/misc/index/IndexSplitter.java ## @@ -123,8 +118,8 @@ private SegmentCommitInfo getInfo(String name) { public void remove(String[] segs) throws IOException { for (String n : segs) { - int idx = getIdx(n); - infos.remove(idx); + SegmentCommitInfo info = getInfo(n); + infos.remove(info); Review comment: SegmentInfos.remove(int) isn't accessible from external packages. This method call can be switched to SegmentInfos.remove(SegmentCommitInfo), which is already public.
[GitHub] [lucene-solr] mocobeta commented on a change in pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
mocobeta commented on a change in pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064#discussion_r517891574 ## File path: lucene/misc/src/java/org/apache/lucene/misc/index/MultiPassIndexSplitter.java ## @@ -101,7 +111,7 @@ public void split(IndexReader in, Directory[] outputs, boolean seq) throws IOExc .setOpenMode(OpenMode.CREATE)); System.err.println("Writing part " + (i + 1) + " ..."); // pass the subreaders directly, as our wrapper's numDocs/hasDeletetions are not up-to-date - final List sr = input.getSequentialSubReaders(); + final List sr = input.getSequentialSubReadersWrapper(); Review comment: BaseCompositeReader.getSequentialSubReaders() is a protected method so it cannot be directly called from here. A wrapper method is needed to move this class to `o.a.l.misc.index` from `o.a.l.index`.
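The workaround described here, exposing a protected superclass method to callers in another package via a delegating wrapper, can be sketched in plain Java. The class and method names below are illustrative stand-ins, not the actual Lucene signatures:

```java
import java.util.List;

// Stand-in for lucene-core's BaseCompositeReader: the method callers
// need is protected, so it is reachable only from subclasses.
abstract class BaseReaderSketch {
    protected List<String> getSubReaders() {
        return List.of("seg_0", "seg_1");
    }
}

// Stand-in for the class moved to o.a.l.misc.index: it widens access
// by delegating to the protected method it inherits.
class ReaderWrapperSketch extends BaseReaderSketch {
    // public wrapper, analogous to the getSequentialSubReadersWrapper()
    // introduced in this PR
    public List<String> getSubReadersWrapper() {
        return getSubReaders();
    }
}
```

Callers in any package can now invoke `getSubReadersWrapper()`, while `getSubReaders()` itself stays protected.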
[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
cpoerschke commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r517889175 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/search/LTRQParserPlugin.java ## @@ -146,93 +149,114 @@ public LTRQParser(String qstr, SolrParams localParams, SolrParams params, @Override public Query parse() throws SyntaxError { Review comment: 7/n minor comments: * technically not just `modelNames[0]` but all the model names could be `isEmpty` checked * `threadManager.setExecutor` (already mentioned in 6/n) need not be inside the loop, likewise `extractFeatures` and `fvStoreName` and `extractEFIParams(localParams)` could move out. ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/interleaving/TeamDraftInterleaving.java ## @@ -0,0 +1,87 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.solr.ltr.interleaving; + +import java.util.ArrayList; +import java.util.HashSet; +import java.util.LinkedHashSet; +import java.util.Random; +import java.util.Set; + +import org.apache.lucene.search.ScoreDoc; + +public class TeamDraftInterleaving implements Interleaving{ Review comment: 8/n suggestions:
* class level javadocs re: the interleaving algorithm
* comments or javadocs re: any assumptions e.g.
  * must rerankedA.length and rerankedB.length match?
  * can rerankedA and rerankedB contain the same docs?
  * can rerankedA contain the same doc more than once?
  * can rerankedB contain the same doc more than once?
* consider guarding against array-index-out-of-bounds exceptions (even if they shouldn't happen if all assumptions are met)
```
indexA = updateIndex(interleavedResults,indexA,rerankedA);
if (indexA < rerankedA.length) {
  interleavedResults.add(rerankedA[indexA]);
  teamA.add(rerankedA[indexA].doc);
  indexA++;
}
```
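To make the bounds-guard suggestion concrete, here is a minimal, self-contained sketch of a team-draft-style interleaving loop. It uses plain ints instead of ScoreDoc, hypothetical names, and a simplified coin flip; it is not the actual Solr implementation:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

class TeamDraftSketch {
    // Interleaves two ranked lists a and b into at most `size` results,
    // alternating picks by coin flip, skipping docs the other team has
    // already claimed, and guarding every array access against running
    // off the end (the review's suggestion).
    static List<Integer> interleave(int[] a, int[] b, int size, Random rnd) {
        List<Integer> out = new ArrayList<>();
        Set<Integer> picked = new HashSet<>();
        int ia = 0, ib = 0;
        while (out.size() < size && (ia < a.length || ib < b.length)) {
            boolean pickA = ib >= b.length || (ia < a.length && rnd.nextBoolean());
            int[] src = pickA ? a : b;
            int idx = pickA ? ia : ib;
            // skip docs already taken by the other team
            while (idx < src.length && picked.contains(src[idx])) {
                idx++;
            }
            if (idx < src.length) {  // bounds guard before the access
                out.add(src[idx]);
                picked.add(src[idx]);
                idx++;
            }
            if (pickA) ia = idx; else ib = idx;
        }
        return out;
    }
}
```

The guard means duplicate docs across (or within) the two lists, or mismatched list lengths, cannot cause an ArrayIndexOutOfBoundsException; the loop simply exhausts both lists.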
[GitHub] [lucene-solr] mocobeta commented on a change in pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
mocobeta commented on a change in pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064#discussion_r517895258 ## File path: lucene/misc/src/test/org/apache/lucene/misc/index/TestIndexSplitter.java ## @@ -60,9 +67,9 @@ public void test() throws Exception { iw.addDocument(doc); } iw.commit(); -DirectoryReader iwReader = iw.getReader(); -assertEquals(3, iwReader.leaves().size()); -iwReader.close(); +//DirectoryReader iwReader = iw.getReader(); +//assertEquals(3, iwReader.leaves().size()); +//iwReader.close(); Review comment: IndexWriter.getReader() is package-private so it cannot be accessed from here. Would it be better to omit this assertion (these lines don't seem to be the main part of the test anyway) instead of opening IW.getReader() up to public?
[GitHub] [lucene-solr] mocobeta commented on a change in pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
mocobeta commented on a change in pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064#discussion_r517897262 ## File path: lucene/test-framework/src/java/org/apache/lucene/util/fst/FSTTester.java ## @@ -95,7 +95,7 @@ private static BytesRef toBytesRef(IntsRef ir) { return br; } - static String getRandomString(Random random) { + public static String getRandomString(Random random) { Review comment: Looks like this should be publicly accessible for tests in any module? ## File path: lucene/test-framework/src/java/org/apache/lucene/util/fst/FSTTester.java ## @@ -121,7 +121,7 @@ static String simpleRandomString(Random r) { return new String(buffer, 0, end); } - static IntsRef toIntsRef(String s, int inputMode) { + public static IntsRef toIntsRef(String s, int inputMode) { Review comment: Looks like this should be publicly accessible for tests in any module?
[GitHub] [lucene-solr] mocobeta commented on a change in pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
mocobeta commented on a change in pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064#discussion_r517897531 ## File path: lucene/test-framework/src/java/org/apache/lucene/index/DocHelper.java ## @@ -35,7 +35,8 @@ import org.apache.lucene.search.similarities.Similarity; import org.apache.lucene.store.Directory; -class DocHelper { +/** Helper functions for tests that handles documents */ +public class DocHelper { Review comment: Looks like this should be publicly accessible for tests in any module?
[GitHub] [lucene-solr] dweiss commented on a change in pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
dweiss commented on a change in pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064#discussion_r517898320 ## File path: lucene/misc/src/test/org/apache/lucene/misc/index/TestIndexSplitter.java ## @@ -60,9 +67,9 @@ public void test() throws Exception { iw.addDocument(doc); } iw.commit(); -DirectoryReader iwReader = iw.getReader(); -assertEquals(3, iwReader.leaves().size()); -iwReader.close(); +//DirectoryReader iwReader = iw.getReader(); +//assertEquals(3, iwReader.leaves().size()); +//iwReader.close(); Review comment: Or maybe switch to DirectoryReader.open(iw...) and keep the assertion? This is public.
[GitHub] [lucene-solr] dweiss commented on a change in pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
dweiss commented on a change in pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064#discussion_r517899107 ## File path: lucene/test-framework/src/java/org/apache/lucene/util/fst/FSTTester.java ## @@ -95,7 +95,7 @@ private static BytesRef toBytesRef(IntsRef ir) { return br; } - static String getRandomString(Random random) { + public static String getRandomString(Random random) { Review comment: Don't know what it actually does but randomizedtesting has a number of utility methods for random strings: http://labs.carrotsearch.com/download/randomizedtesting/2.0.0/api/randomizedtesting-runner/com/carrotsearch/randomizedtesting/generators/RandomStrings.html
[jira] [Commented] (LUCENE-9600) Clean up package name conflicts for misc module
[ https://issues.apache.org/jira/browse/LUCENE-9600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226594#comment-17226594 ] Tomoko Uchida commented on LUCENE-9600: --- I opened [https://github.com/apache/lucene-solr/pull/2064]. > Clean up package name conflicts for misc module > --- > > Key: LUCENE-9600 > URL: https://issues.apache.org/jira/browse/LUCENE-9600 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/misc >Affects Versions: master (9.0) >Reporter: Tomoko Uchida >Assignee: Tomoko Uchida >Priority: Minor > > misc module shares the package names o.a.l.document, o.a.l.index, > o.a.l.search, o.a.l.store, and o.a.l.util with lucene-core. Those should be > moved under o.a.l.misc (or some classes should be moved to core?). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] mocobeta commented on a change in pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
mocobeta commented on a change in pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064#discussion_r517910420 ## File path: lucene/misc/src/test/org/apache/lucene/misc/index/TestIndexSplitter.java ## @@ -60,9 +67,9 @@ public void test() throws Exception { iw.addDocument(doc); } iw.commit(); -DirectoryReader iwReader = iw.getReader(); -assertEquals(3, iwReader.leaves().size()); -iwReader.close(); +//DirectoryReader iwReader = iw.getReader(); +//assertEquals(3, iwReader.leaves().size()); +//iwReader.close(); Review comment: Thanks! DirectoryReader.open(iw) and IW.getReader() are actually equivalent. https://github.com/apache/lucene-solr/pull/2064/commits/4858a5d8347c72364c4a6572345a3fc52f43cc77
[GitHub] [lucene-solr] mocobeta commented on a change in pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
mocobeta commented on a change in pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064#discussion_r517929357 ## File path: lucene/test-framework/src/java/org/apache/lucene/util/fst/FSTTester.java ## @@ -95,7 +95,7 @@ private static BytesRef toBytesRef(IntsRef ir) { return br; } - static String getRandomString(Random random) { + public static String getRandomString(Random random) { Review comment: Yes, they look similar except that Lucene's TestUtil selects a different codepoint range (`blockStarts` and `blockEnds`) each time it generates a random String instance. I think both of them are okay for the test, what do you think?
o.a.l.util.TestUtil.randomRealisticUnicodeString()
```
/** Returns random string of length between min and max codepoints, all codepoints within the same unicode block. */
public static String randomRealisticUnicodeString(Random r, int minLength, int maxLength) {
  final int end = nextInt(r, minLength, maxLength);
  final int block = r.nextInt(blockStarts.length);
  StringBuilder sb = new StringBuilder();
  for (int i = 0; i < end; i++)
    sb.appendCodePoint(nextInt(r, blockStarts[block], blockEnds[block]));
  return sb.toString();
}
```
com.carrotsearch.randomizedtesting.generators.UnicodeGenerator.ofCodePointsLength()
```
public String ofCodePointsLength(Random r, int minCodePoints, int maxCodePoints) {
  int length = RandomNumbers.randomIntBetween(r, minCodePoints, maxCodePoints);
  int[] chars = new int[length];
  for (int i = 0; i < chars.length; i++) {
    int v = RandomNumbers.randomIntBetween(r, 0, CODEPOINT_RANGE);
    if (v >= Character.MIN_SURROGATE) v += SURROGATE_RANGE;
    chars[i] = v;
  }
  return new String(chars, 0, chars.length);
}
```
[GitHub] [lucene-solr] dweiss commented on a change in pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
dweiss commented on a change in pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064#discussion_r517932599 ## File path: lucene/test-framework/src/java/org/apache/lucene/util/fst/FSTTester.java ## @@ -95,7 +95,7 @@ private static BytesRef toBytesRef(IntsRef ir) { return br; } - static String getRandomString(Random random) { + public static String getRandomString(Random random) { Review comment: Not sure whether the difference matters. It shouldn't at first glance.
[jira] [Updated] (SOLR-8673) o.a.s.search.facet classes not public/extendable
[ https://issues.apache.org/jira/browse/SOLR-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tim Owen updated SOLR-8673: --- Attachment: SOLR-8673.patch > o.a.s.search.facet classes not public/extendable > > > Key: SOLR-8673 > URL: https://issues.apache.org/jira/browse/SOLR-8673 > Project: Solr > Issue Type: Improvement > Components: Facet Module >Affects Versions: 5.4.1 >Reporter: Markus Jelsma >Priority: Major > Fix For: 6.2, 7.0 > > Attachments: SOLR-8673.patch > > > It is not easy to create a custom JSON facet function. A simple function > based on AvgAgg quickly results in the following compilation failures: > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.3:compile (default-compile) > on project openindex-solr: Compilation failure: Compilation failure: > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[22,36] > org.apache.solr.search.facet.FacetContext is not public in > org.apache.solr.search.facet; cannot be accessed from outside package > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[23,36] > org.apache.solr.search.facet.FacetDoubleMerger is not public in > org.apache.solr.search.facet; cannot be accessed from outside package > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[40,32] > cannot find symbol > [ERROR] symbol: class FacetContext > [ERROR] location: class i.o.s.search.facet.CustomAvgAgg > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[49,39] > cannot find symbol > [ERROR] symbol: class FacetDoubleMerger > [ERROR] location: class i.o.s.search.facet.CustomAvgAgg > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[54,43] > cannot find symbol > [ERROR] symbol: class Context > [ERROR] location: class 
i.o.s.search.facet.CustomAvgAgg.Merger > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[41,16] > cannot find symbol > [ERROR] symbol: class AvgSlotAcc > [ERROR] location: class i.o.s.search.facet.CustomAvgAgg > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[46,12] > incompatible types: i.o.s.search.facet.CustomAvgAgg.Merger cannot be > converted to org.apache.solr.search.facet.FacetMerger > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[53,5] > method does not override or implement a method from a supertype > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[60,5] > method does not override or implement a method from a supertype > {code} > It seems lots of classes are tucked away in FacetModule, which we can't reach > from outside. > Originates from this thread: > http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201602.mbox/%3ccab_8yd9ldbg_0zxm_h1igkfm6bqeypd5ilyy7tty8cztscv...@mail.gmail.com%3E > ( also available at > https://lists.apache.org/thread.html/9fddcad3136ec908ce1c57881f8d3069e5d153f08b71f80f3e18d995%401455019826%40%3Csolr-user.lucene.apache.org%3E > )
[jira] [Commented] (SOLR-8673) o.a.s.search.facet classes not public/extendable
[ https://issues.apache.org/jira/browse/SOLR-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226628#comment-17226628 ] Tim Owen commented on SOLR-8673: I've attached a patch that I used and it allowed me to port our custom aggregate functions into our plugin jar instead of being in solr. I added getters only for FacetContext, as I couldn't see any need for setters there (and it's a conservative change). Also I made some changes in SlotAcc because many of the useful base classes e.g. DoubleFuncSlotAcc were package-private too, and we were using those as the superclass for our accumulators. So mostly I've made those inner classes public, and their fields protected, and fixed up some whitespace indentation issues. Also IntSlotAcc wasn't a static inner class, which I think was just an oversight from before. Do you think these are OK to expose now? Could these changes also be put into 8.x?
[GitHub] [lucene-solr] mocobeta commented on a change in pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
mocobeta commented on a change in pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064#discussion_r517941795 ## File path: lucene/test-framework/src/java/org/apache/lucene/util/fst/FSTTester.java ## @@ -95,7 +95,7 @@ private static BytesRef toBytesRef(IntsRef ir) { return br; } - static String getRandomString(Random random) { + public static String getRandomString(Random random) { Review comment: If there's no good reason to change, I'd like to keep the original test code here and make the test utility method publicly accessible (the change shouldn't hurt anything, to me) :) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14923) Indexing performance is unacceptable when child documents are involved
[ https://issues.apache.org/jira/browse/SOLR-14923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226632#comment-17226632 ] Thomas Wöckinger commented on SOLR-14923: - Some comments about: {quote}When there isn't an updateLog (it's technically optional), then there may be a bug because the reader probably needs to be re-opened still. {quote} If there is no update log configured, atomic updates cannot be used; you get an exception in this case. [~dsmiley], [~markrmiller] I will have a look at RTG and how it interacts with 'Atomic Update'; it is apparently required at the moment, because removing the refresh part from DistributedUpdateProcessor makes three unit tests fail, as mentioned in the comments before. > Indexing performance is unacceptable when child documents are involved > -- > > Key: SOLR-14923 > URL: https://issues.apache.org/jira/browse/SOLR-14923 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: update, UpdateRequestProcessors >Affects Versions: master (9.0), 8.3, 8.4, 8.5, 8.6 >Reporter: Thomas Wöckinger >Priority: Critical > Labels: performance > > Parallel indexing does not make sense at the moment when child documents are used. > The org.apache.solr.update.processor.DistributedUpdateProcessor checks at the > end of the method doVersionAdd if Ulog caches should be refreshed. > This check will return true if any child document is included in the > AddUpdateCommand. > If so, ulog.openRealtimeSearcher(); is called; this call is very expensive > and executed in a synchronized block of the UpdateLog instance, therefore all > other operations on the UpdateLog are blocked too. > Because every important UpdateLog method (add, delete, ...) is done using a > synchronized block, almost every operation is blocked. > This reduces multi-threaded index update to single-thread behavior. 
> The described behavior does not depend on any option of the UpdateRequest, > so it does not make any difference if 'waitFlush', 'waitSearcher' or > 'softCommit' is true or false. > The described behavior makes the usage of ChildDocuments useless, because the > performance is unacceptable.
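The blocking behavior described in the report can be reproduced with a toy stand-in for UpdateLog. This is a hypothetical class, not Solr code: while one thread holds the intrinsic lock inside the expensive call, every other synchronized method on the same instance waits, which is exactly how a slow openRealtimeSearcher() serializes concurrent add() calls.

```java
// Hypothetical sketch of why an expensive call inside a synchronized block
// serializes all other synchronized methods on the same monitor.
public class UpdateLogSketch {
    private int adds = 0;

    // Stands in for UpdateLog.add(): cheap, but synchronized on the same monitor.
    public synchronized void add() {
        adds++;
    }

    // Stands in for ulog.openRealtimeSearcher(): while this holds the monitor,
    // every add() on this instance has to wait.
    public synchronized void openRealtimeSearcher() throws InterruptedException {
        Thread.sleep(50); // simulate the expensive searcher reopen
    }

    public synchronized int adds() {
        return adds;
    }

    public static void main(String[] args) throws Exception {
        UpdateLogSketch ulog = new UpdateLogSketch();
        Thread slow = new Thread(() -> {
            try { ulog.openRealtimeSearcher(); } catch (InterruptedException ignored) {}
        });
        slow.start();
        Thread.sleep(5); // let the slow thread take the monitor first
        ulog.add();      // blocks until openRealtimeSearcher() releases the lock
        slow.join();
        System.out.println("adds=" + ulog.adds());
    }
}
```

With many indexing threads, each add() queues behind the monitor in the same way, so throughput degrades to roughly single-threaded behavior.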
[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
cpoerschke commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r517960447 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/LTRRescorer.java ## @@ -166,64 +186,77 @@ public void scoreFeatures(IndexSearcher indexSearcher, TopDocs firstPassTopDocs, docBase = readerContext.docBase; scorer = modelWeight.scorer(readerContext); } - // Scorer for a LTRScoringQuery.ModelWeight should never be null since we always have to - // call score - // even if no feature scorers match, since a model might use that info to - // return a - // non-zero score. Same applies for the case of advancing a LTRScoringQuery.ModelWeight.ModelScorer - // past the target - // doc since the model algorithm still needs to compute a potentially - // non-zero score from blank features. - assert (scorer != null); - final int targetDoc = docID - docBase; - scorer.docID(); - scorer.iterator().advance(targetDoc); - - scorer.getDocInfo().setOriginalDocScore(hit.score); - hit.score = scorer.score(); - if (hitUpto < topN) { -reranked[hitUpto] = hit; -// if the heap is not full, maybe I want to log the features for this -// document + scoreSingleHit(indexSearcher, topN, modelWeight, docBase, hitUpto, hit, docID, scoringQuery, scorer, reranked); Review comment: 9/n observation and suggestions: * very elegant factoring out of methods for use by `LTRInterleavingRescorer` * the `rerank` method already takes a `searcher` and so it could determine its own `leaves` from that * `this.scoringQuery` being passed to the `scoreSingleHit` method as an argument (rather than using `this.scoringQuery` directly) is very subtle. It is of course present as an argument because `LTRInterleavingRescorer` will pass `rerankingQueries[i]` for that argument. The subtlety could be removed by making `scoreSingleHit` a static method. 
https://github.com/cpoerschke/lucene-solr/commit/16512dbddbabe12ca5a2a26a9180ceeaae62cea2
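The static-method suggestion above can be illustrated with a toy example. All names and the scoring formula here are hypothetical, not the actual LTR classes: the point is only that a static helper sees nothing but its parameters, so there is no hidden dependency on an instance field like `this.scoringQuery` that a caller could accidentally shadow.

```java
// Hypothetical sketch: a static helper makes the per-hit query/model an
// explicit input rather than implicit instance state.
public class RescorerSketch {

    // Stand-in for scoreSingleHit: combines an original score with a model
    // score. The weight parameter is invented purely for illustration.
    static double scoreSingleHit(double originalScore, double modelScore, double weight) {
        // Only parameters are visible here, so an interleaving caller can pass
        // a different model's score per hit without touching instance state.
        return originalScore * (1 - weight) + modelScore * weight;
    }

    public static void main(String[] args) {
        System.out.println(scoreSingleHit(1.0, 3.0, 0.5)); // 2.0
    }
}
```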
[jira] [Commented] (LUCENE-9536) Optimize OrdinalMap when one segment contains all distinct values?
[ https://issues.apache.org/jira/browse/LUCENE-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226645#comment-17226645 ] ASF subversion and git services commented on LUCENE-9536: - Commit 95b4ee02ef97387ed18be8e1383267f66e850c4e in lucene-solr's branch refs/heads/branch_8x from Adrien Grand [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=95b4ee0 ] LUCENE-9536: Address test failure. > Optimize OrdinalMap when one segment contains all distinct values? > -- > > Key: LUCENE-9536 > URL: https://issues.apache.org/jira/browse/LUCENE-9536 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Julie Tibshirani >Priority: Minor > Fix For: 8.8 > > Time Spent: 3h 10m > Remaining Estimate: 0h > > For doc values that are not too high cardinality, it seems common to have > some large segments that contain all distinct values (plus many small > segments who are missing some values). In this case, we could check if the > first segment ords map perfectly to global ords and if so store > `globalOrdDeltas` and `firstSegments` as `LongValues.ZEROES`. This could save > a small amount of space. > I don’t think it would help a huge amount, especially since the optimization > might only kick in with small/ medium cardinalities, which don’t create huge > `OrdinalMap` instances anyways? But it is simple and seemed worth mentioning. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
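The optimization proposed in LUCENE-9536 can be illustrated with a toy model of the ordinal mapping. This is hypothetical standalone code, not the real `OrdinalMap`: when the first segment contains every distinct value, each global ordinal equals the corresponding ordinal in that segment, so the per-ordinal delta array is all zeros and could in principle be stored as `LongValues.ZEROES`.

```java
import java.util.List;
import java.util.TreeSet;

// Toy sketch of the global-ordinal delta idea from LUCENE-9536.
public class OrdinalMapSketch {

    // delta[g] = g - (ord of the g-th global term inside segment 0),
    // or -1 when segment 0 is missing the term.
    static long[] deltasAgainstFirstSegment(List<String> segment0, TreeSet<String> globalTerms) {
        long[] deltas = new long[globalTerms.size()];
        int g = 0;
        for (String term : globalTerms) {
            int segOrd = segment0.indexOf(term); // segment0 is sorted; linear scan is fine for a sketch
            deltas[g] = (segOrd < 0) ? -1 : g - segOrd;
            g++;
        }
        return deltas;
    }

    static boolean allZero(long[] deltas) {
        for (long d : deltas) {
            if (d != 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Segment 0 holds {a,b,c,d}, i.e. every distinct value in the index,
        // so every delta is zero and the array compresses to nothing.
        List<String> seg0 = List.of("a", "b", "c", "d");
        TreeSet<String> global = new TreeSet<>(List.of("a", "b", "c", "d"));
        System.out.println(allZero(deltasAgainstFirstSegment(seg0, global))); // true
    }
}
```

Detecting this case is cheap: it holds exactly when the first segment's value count equals the global value count.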
[jira] [Commented] (LUCENE-9536) Optimize OrdinalMap when one segment contains all distinct values?
[ https://issues.apache.org/jira/browse/LUCENE-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226644#comment-17226644 ] ASF subversion and git services commented on LUCENE-9536: - Commit bcd9711ab67339c106c14c407c9d52a181ab1e81 in lucene-solr's branch refs/heads/master from Adrien Grand [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=bcd9711 ] LUCENE-9536: Address test failure.
[jira] [Commented] (SOLR-14749) Provide a clean API for cluster-level event processing
[ https://issues.apache.org/jira/browse/SOLR-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226657#comment-17226657 ] ASF subversion and git services commented on SOLR-14749: Commit bdc6e8247fdb162902c794b73fdc228d526a3a6e in lucene-solr's branch refs/heads/master from Andrzej Bialecki [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=bdc6e82 ] SOLR-14749: Provide a clean API for cluster-level event processing. > Provide a clean API for cluster-level event processing > -- > > Key: SOLR-14749 > URL: https://issues.apache.org/jira/browse/SOLR-14749 > Project: Solr > Issue Type: Improvement > Components: AutoScaling >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Labels: clean-api > Fix For: master (9.0) > > Time Spent: 21h 40m > Remaining Estimate: 0h > > This is a companion issue to SOLR-14613 and it aims at providing a clean, > strongly typed API for the functionality formerly known as "triggers" - that > is, a component for generating cluster-level events corresponding to changes > in the cluster state, and a pluggable API for processing these events. > The 8x triggers have been removed so this functionality is currently missing > in 9.0. However, this functionality is crucial for implementing the automatic > collection repair and re-balancing as the cluster state changes (nodes going > down / up, becoming overloaded / unused / decommissioned, etc). > For this reason we need this API and a default implementation of triggers > that at least can perform automatic collection repair (maintaining the > desired replication factor in presence of live node changes). > As before, the actual changes to the collections will be executed using > existing CollectionAdmin API, which in turn may use the placement plugins > from SOLR-14613. > h3. 
Division of responsibility > * built-in Solr components (non-pluggable): > ** cluster state monitoring and event generation, > ** simple scheduler to periodically generate scheduled events > * plugins: > ** automatic collection repair on {{nodeLost}} events (provided by default) > ** re-balancing of replicas (periodic or on {{nodeAdded}} events) > ** reporting (eg. requesting additional node provisioning) > ** scheduled maintenance (eg. removing inactive shards after split) > h3. Other considerations > These plugins (unlike the placement plugins) need to execute on one > designated node in the cluster. Currently the easiest way to implement this > is to run them on the Overseer leader node.
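The division of responsibility above can be sketched as a strongly typed listener API: built-in components generate typed events, and plugins subscribe to the event types they handle. All names below are invented for illustration and are not the actual SOLR-14749 interfaces.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a strongly typed cluster-event API.
public class ClusterEventsSketch {

    public enum EventType { NODE_LOST, NODE_ADDED, SCHEDULED }

    public static final class ClusterEvent {
        public final EventType type;
        public final String nodeName;
        public ClusterEvent(EventType type, String nodeName) {
            this.type = type;
            this.nodeName = nodeName;
        }
    }

    public interface ClusterEventListener {
        void onEvent(ClusterEvent event);
    }

    // Stand-in for the default "collection repair" plugin reacting to nodeLost.
    public static class CollectionRepairListener implements ClusterEventListener {
        public final List<String> lostNodes = new ArrayList<>();

        @Override
        public void onEvent(ClusterEvent event) {
            if (event.type == EventType.NODE_LOST) {
                // A real plugin would issue CollectionAdmin calls here to
                // restore the desired replication factor.
                lostNodes.add(event.nodeName);
            }
        }
    }

    public static void main(String[] args) {
        CollectionRepairListener repair = new CollectionRepairListener();
        repair.onEvent(new ClusterEvent(EventType.NODE_ADDED, "node1"));
        repair.onEvent(new ClusterEvent(EventType.NODE_LOST, "node2"));
        System.out.println(repair.lostNodes); // [node2]
    }
}
```

As the issue notes, the dispatcher side of such an API would run on one designated node (currently the Overseer leader), while listeners are pluggable.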
[GitHub] [lucene-solr] sigram closed pull request #1962: SOLR-14749 Provide a clean API for cluster-level event processing
sigram closed pull request #1962: URL: https://github.com/apache/lucene-solr/pull/1962
[GitHub] [lucene-solr] sigram commented on pull request #1962: SOLR-14749 Provide a clean API for cluster-level event processing
sigram commented on pull request #1962: URL: https://github.com/apache/lucene-solr/pull/1962#issuecomment-722317372 Merged to master.
[GitHub] [lucene-solr] mocobeta commented on pull request #2064: LUCENE-9600: Clean up package name conflicts between misc and core modules
mocobeta commented on pull request #2064: URL: https://github.com/apache/lucene-solr/pull/2064#issuecomment-722333573 The changes look okay to me; the refactoring on the misc module shouldn't change its behaviour while some classes are moved under the o.a.l.misc package. I'm going to merge it into master after waiting a couple of days, if there are no comments or objections about the changes.
[jira] [Commented] (SOLR-14749) Provide a clean API for cluster-level event processing
[ https://issues.apache.org/jira/browse/SOLR-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226674#comment-17226674 ] Andrzej Bialecki commented on SOLR-14749: - Thanks [~noble.paul] and [~ilan] for comments and reviews!
[jira] [Commented] (SOLR-14749) Provide a clean API for cluster-level event processing
[ https://issues.apache.org/jira/browse/SOLR-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226705#comment-17226705 ] ASF subversion and git services commented on SOLR-14749: Commit 0bfa2a690857099aa3a8a5bd90e4d8db89d5ccc0 in lucene-solr's branch refs/heads/master from Andrzej Bialecki [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0bfa2a6 ] SOLR-14749: Restructure the docs + add some examples.
[GitHub] [lucene-solr] msokolov commented on a change in pull request #2047: LUCENE-9592: Use doubles in VectorUtil to maintain precision.
msokolov commented on a change in pull request #2047: URL: https://github.com/apache/lucene-solr/pull/2047#discussion_r518030964 ## File path: lucene/core/src/java/org/apache/lucene/util/VectorUtil.java ## @@ -25,47 +25,22 @@ private VectorUtil() { } - public static float dotProduct(float[] a, float[] b) { -float res = 0f; -/* - * If length of vector is larger than 8, we use unrolled dot product to accelerate the - * calculation. - */ -int i; -for (i = 0; i < a.length % 8; i++) { - res += b[i] * a[i]; -} -if (a.length < 8) { - return res; -} -float s0 = 0f; -float s1 = 0f; -float s2 = 0f; -float s3 = 0f; -float s4 = 0f; -float s5 = 0f; -float s6 = 0f; -float s7 = 0f; -for (; i + 7 < a.length; i += 8) { - s0 += b[i] * a[i]; - s1 += b[i + 1] * a[i + 1]; - s2 += b[i + 2] * a[i + 2]; - s3 += b[i + 3] * a[i + 3]; - s4 += b[i + 4] * a[i + 4]; - s5 += b[i + 5] * a[i + 5]; - s6 += b[i + 6] * a[i + 6]; - s7 += b[i + 7] * a[i + 7]; + public static double dotProduct(float[] a, float[] b) { Review comment: That's a fair comment. I put this in because we had someone do some extensive work on trying to optimize this code loop, and he found that unrolling like this made it more amenable to being expressed as AVX instructions, leading to some speedup, but it wasn't huge. Still - would you mind running some microbenchmark to see if there is any measurable change in performance with this? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
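The comment above contrasts a simple loop with the 8-way unrolled loop that was found to be more amenable to AVX instructions. As a standalone sketch (not the actual VectorUtil code), the two variants with double accumulation look like this; correctness can be checked directly, while the speed question needs a JMH-style microbenchmark:

```java
// Standalone sketch of the simple vs. unrolled dot product, both
// accumulating into doubles to preserve precision.
public class DotProductSketch {

    static double dotProductSimple(float[] a, float[] b) {
        double res = 0;
        for (int i = 0; i < a.length; i++) {
            res += (double) a[i] * b[i];
        }
        return res;
    }

    // Unrolling exposes eight independent accumulators, which the JIT can map
    // onto SIMD registers; the review thread notes the observed speedup was
    // real but modest.
    static double dotProductUnrolled(float[] a, float[] b) {
        double res = 0;
        int i = 0;
        for (; i < a.length % 8; i++) { // handle the tail first, as in the original code
            res += (double) a[i] * b[i];
        }
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0, s4 = 0, s5 = 0, s6 = 0, s7 = 0;
        for (; i + 7 < a.length; i += 8) {
            s0 += (double) a[i] * b[i];
            s1 += (double) a[i + 1] * b[i + 1];
            s2 += (double) a[i + 2] * b[i + 2];
            s3 += (double) a[i + 3] * b[i + 3];
            s4 += (double) a[i + 4] * b[i + 4];
            s5 += (double) a[i + 5] * b[i + 5];
            s6 += (double) a[i + 6] * b[i + 6];
            s7 += (double) a[i + 7] * b[i + 7];
        }
        return res + s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
    }

    public static void main(String[] args) {
        float[] v = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        System.out.println(dotProductSimple(v, v) + " " + dotProductUnrolled(v, v)); // 285.0 285.0
    }
}
```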
[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
cpoerschke commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r518035475 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/LTRInterleavingRescorer.java ## @@ -0,0 +1,158 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.ltr; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.Set; + +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.Explanation; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.ScoreMode; +import org.apache.lucene.search.TopDocs; +import org.apache.solr.ltr.interleaving.Interleaving; +import org.apache.solr.ltr.interleaving.InterleavingResult; +import org.apache.solr.ltr.interleaving.TeamDraftInterleaving; + +import static org.apache.solr.ltr.search.LTRQParserPlugin.isOriginalRanking; + +/** + * Implements the rescoring logic. The top documents returned by solr with their + * original scores, will be processed by a {@link LTRScoringQuery} that will assign a + * new score to each document. The top documents will be resorted based on the + * new score. 
+ * */ +public class LTRInterleavingRescorer extends LTRRescorer { + + LTRScoringQuery[] rerankingQueries; + Interleaving interleavingAlgorithm = new TeamDraftInterleaving(); + + public LTRInterleavingRescorer(LTRScoringQuery[] rerankingQueries) { +this.rerankingQueries = rerankingQueries; + } + + /** + * rescores the documents: + * + * @param searcher + * current IndexSearcher + * @param firstPassTopDocs + * documents to rerank; + * @param topN + * documents to return; + */ + @Override + public TopDocs rescore(IndexSearcher searcher, TopDocs firstPassTopDocs, + int topN) throws IOException { +if ((topN == 0) || (firstPassTopDocs.scoreDocs.length == 0)) { + return firstPassTopDocs; +} + +int originalRankingIndex = -1; Review comment: 10/n suggestions: * calculate originalRankingIndex in the constructor * use originalRankingIndex to minimise "rerankingQueries[i] is not original ranking" checks * use originalRankingIndex to remove the "original ranking query is always the last query" assumption * the 'rerank' method already takes a `searcher` and so it could determine its own `leaves` from that https://github.com/cpoerschke/lucene-solr/commit/002c31cceaeff411ad35111f00b7cee71c5b11de This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
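The 10/n suggestion above (compute `originalRankingIndex` once in the constructor, and drop the assumption that the original-ranking query is always last) can be sketched as follows. The class and method names are hypothetical stand-ins, and `isOriginalRanking` here is only a placeholder for `LTRQParserPlugin.isOriginalRanking`:

```java
// Hypothetical sketch of moving the originalRankingIndex computation into
// the constructor, with no positional assumption about the queries array.
public class InterleavingSketch {
    final String[] rerankingQueries;
    final int originalRankingIndex; // -1 when no original-ranking query is present

    InterleavingSketch(String[] rerankingQueries) {
        this.rerankingQueries = rerankingQueries;
        int idx = -1;
        for (int i = 0; i < rerankingQueries.length; i++) {
            if (isOriginalRanking(rerankingQueries[i])) {
                idx = i; // any slot may hold the original ranking, not just the last
            }
        }
        this.originalRankingIndex = idx;
    }

    // Placeholder predicate; the real check lives in LTRQParserPlugin.
    static boolean isOriginalRanking(String q) {
        return "_OriginalRanking_".equals(q);
    }
}
```

With the index fixed at construction time, each rescore call can skip the "is this the original ranking?" checks in its hot loop and address the slot directly.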
[GitHub] [lucene-solr] msokolov commented on a change in pull request #2063: LUCENE-9599 Make comparator aware of index sorting
msokolov commented on a change in pull request #2063: URL: https://github.com/apache/lucene-solr/pull/2063#discussion_r518055212 ## File path: lucene/core/src/java/org/apache/lucene/search/comparators/TermOrdValComparator.java ## @@ -0,0 +1,324 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.lucene.search.comparators; + +import org.apache.lucene.index.DocValues; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.search.FieldComparator; +import org.apache.lucene.search.LeafFieldComparator; +import org.apache.lucene.search.Scorable; +import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.BytesRefBuilder; + +import java.io.IOException; + +/** + * Comparator that sorts by field's natural Term sort order using ordinals. + * This is functionally equivalent to + * {@link org.apache.lucene.search.comparators.TermValComparator}, + * but it first resolves the string to their relative ordinal positions + * (using the index returned by + * {@link org.apache.lucene.index.LeafReader#getSortedDocValues(String)}), + * and does most comparisons using the ordinals. 
+ * For medium to large results, this comparator will be much faster + * than {@link org.apache.lucene.search.comparators.TermValComparator}. + * For very small result sets it may be slower. + * + * The comparator provides an iterator that can efficiently skip + * documents when search sort is done according to the index sort. + */ +public class TermOrdValComparator extends FieldComparator { + private final String field; + private final boolean reverse; + private final int[] ords; // ordinals for each slot + private final BytesRef[] values; // values for each slot + private final BytesRefBuilder[] tempBRs; + /* Which reader last copied a value into the slot. When + we compare two slots, we just compare-by-ord if the + readerGen is the same; else we must compare the + values (slower).*/ + private final int[] readers; + private int currentReader = -1; // index of the current reader we are on + private final int missingSortCmp; // -1 – if missing values are sorted first, 1 – if sorted last + private final int missingOrd; // which ordinal to use for a missing value + + private BytesRef topValue; + private boolean topSameReader; + private int topOrd; + + private BytesRef bottomValue; + boolean bottomSameReader; // true if current bottom slot matches the current reader + int bottomSlot = -1; // bottom slot, or -1 if queue isn't full yet + int bottomOrd; // bottom ord (same as ords[bottomSlot] once bottomSlot is set), cached for faster comparison + + protected boolean hitsThresholdReached; + + public TermOrdValComparator(int numHits, String field, boolean sortMissingLast, boolean reverse) { +this.field = field; +this.reverse = reverse; +this.ords = new int[numHits]; +this.values = new BytesRef[numHits]; +tempBRs = new BytesRefBuilder[numHits]; +readers = new int[numHits]; +if (sortMissingLast) { + missingSortCmp = 1; + missingOrd = Integer.MAX_VALUE; +} else { + missingSortCmp = -1; + missingOrd = -1; +} + } + + @Override + public int compare(int slot1, int slot2) { +if 
(readers[slot1] == readers[slot2]) { + return ords[slot1] - ords[slot2]; +} +final BytesRef val1 = values[slot1]; +final BytesRef val2 = values[slot2]; +if (val1 == null) { + if (val2 == null) { +return 0; + } + return missingSortCmp; +} else if (val2 == null) { + return -missingSortCmp; +} +return val1.compareTo(val2); + } + + @Override + public void setTopValue(BytesRef value) { +// null is accepted, this means the last doc of the prior search was missing this value +topValue = value; + } + + @Override + public BytesRef value(int slot) { +return values[slot]; + } + + @Override + public LeafFieldComparator getLeafComparator(LeafReaderContext context) throws IOException { +return new TermOrdValLeafComparator(context); + } + + @Override + public int compareValues(BytesRef val1, B
[jira] [Commented] (SOLR-14683) Review the metrics API to ensure consistent placeholders for missing values
[ https://issues.apache.org/jira/browse/SOLR-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226729#comment-17226729 ] Andrzej Bialecki commented on SOLR-14683: - Prometheus best practices recommend "avoiding missing metrics" (as if that were always possible... what about eg. missing them due to network connectivity?), and recommend reporting 0 or NaN for the missing numeric metrics: {quote} Avoid missing metrics Time series that are not present until something happens are difficult to deal with, as the usual simple operations are no longer sufficient to correctly handle them. To avoid this, export 0 (or NaN, if 0 would be misleading) for any time series you know may exist in advance. Most Prometheus client libraries (including Go, Java, and Python) will automatically export a 0 for you for metrics with no labels. {quote} For frequently occurring events, where the average value of the metric may be high, reporting 0 WILL skew the stats more than reporting NaN. Reporting NaN also clearly indicates that the data is not available, as opposed to 0 which may be a legitimate value of the metric. The problem is that serialization of NaN in JSON is not present in the JSON standard, only in extensions such as JSON 5 (http://json5.org). The current JSON standard ECMA-404 says "Numeric values that cannot be represented as sequences of digits (such as Infinity and NaN) are not permitted." So the only standard option left in JSON to indicate that the data is missing is to return {{null}}. > Review the metrics API to ensure consistent placeholders for missing values > --- > > Key: SOLR-14683 > URL: https://issues.apache.org/jira/browse/SOLR-14683 > Project: Solr > Issue Type: Improvement > Components: metrics >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > > Spin-off from SOLR-14657. Some gauges can legitimately be missing or in an > unknown state at some points in time, eg. 
during SolrCore startup or shutdown. > Currently the API returns placeholders with either impossible values for > numeric gauges (such as index size -1) or empty maps / strings for other > non-numeric gauges. > [~hossman] noticed that the values for these placeholders may be misleading, > depending on how the user treats them - if the client has no special logic to > treat them as "missing values" it may erroneously treat them as valid data. > E.g. numeric values of -1 or 0 may severely skew averages and produce > misleading peaks / valleys in metrics histories. > On the other hand returning a literal {{null}} value instead of the expected > number may also cause unexpected client issues - although in this case it's > clearer that there's actually no data available, so long-term this may be a > better strategy than returning impossible values, even if it means that the > client should learn to handle {{null}} values appropriately. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
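The constraint discussed above can be illustrated with a minimal sketch (plain Java, not Solr code; class and method names are invented): standard JSON (ECMA-404) has no literal for NaN or Infinity, so a serializer that wants to stay standard-compliant has to substitute the literal `null` for such values.

```java
public class NanJsonDemo {
    // Render a numeric metric value as a JSON token. NaN and Infinity have no
    // representation in standard JSON (ECMA-404), so the only standard-compliant
    // placeholder for "no data" is the literal null.
    static String toJsonValue(double v) {
        if (Double.isNaN(v) || Double.isInfinite(v)) {
            return "null";
        }
        return String.valueOf(v);
    }

    public static void main(String[] args) {
        System.out.println(toJsonValue(42.5));       // 42.5
        System.out.println(toJsonValue(Double.NaN)); // null
    }
}
```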
[GitHub] [lucene-solr] alessandrobenedetti commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
alessandrobenedetti commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r500956870 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/search/LTRQParserPlugin.java ## @@ -146,93 +149,114 @@ public LTRQParser(String qstr, SolrParams localParams, SolrParams params, @Override public Query parse() throws SyntaxError { // ReRanking Model - final String modelName = localParams.get(LTRQParserPlugin.MODEL); - if ((modelName == null) || modelName.isEmpty()) { + final String[] modelNames = localParams.getParams(LTRQParserPlugin.MODEL); + if ((modelNames == null) || modelNames.length==0 || modelNames[0].isEmpty()) { throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Must provide model in the request"); } - - final LTRScoringModel ltrScoringModel = mr.getModel(modelName); - if (ltrScoringModel == null) { -throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, -"cannot find " + LTRQParserPlugin.MODEL + " " + modelName); - } - - final String modelFeatureStoreName = ltrScoringModel.getFeatureStoreName(); - final boolean extractFeatures = SolrQueryRequestContextUtils.isExtractingFeatures(req); - final String fvStoreName = SolrQueryRequestContextUtils.getFvStoreName(req); - // Check if features are requested and if the model feature store and feature-transform feature store are the same - final boolean featuresRequestedFromSameStore = (modelFeatureStoreName.equals(fvStoreName) || fvStoreName == null) ? 
extractFeatures:false; - if (threadManager != null) { - threadManager.setExecutor(req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor()); - } - final LTRScoringQuery scoringQuery = new LTRScoringQuery(ltrScoringModel, - extractEFIParams(localParams), - featuresRequestedFromSameStore, threadManager); - - // Enable the feature vector caching if we are extracting features, and the features - // we requested are the same ones we are reranking with - if (featuresRequestedFromSameStore) { -scoringQuery.setFeatureLogger( SolrQueryRequestContextUtils.getFeatureLogger(req) ); + + LTRScoringQuery[] rerankingQueries = new LTRScoringQuery[modelNames.length]; + for (int i = 0; i < modelNames.length; i++) { +final LTRScoringQuery rerankingQuery; +if (!ORIGINAL_RANKING.equals(modelNames[i])) { Review comment: So @cpoerschke, I took a step back with the latest commit; let me explain. First of all, the approach you detailed can definitely work; I don't want to undervalue your effort and contribution, and I am grateful for the time you dedicated to reviewing my proposal. But let's aim for a Keep It Simple, Stupid (KISS) approach here and avoid over-complication in a first release, if possible: 1) Apache Solr makes use of "special names" in various places: - _DEFAULT_ to indicate the default feature/model store - _version_ to indicate a meta-field used for versioning/optimistic concurrency, etc. I notice the '_' prefix and suffix are used for them, so let's follow that convention: "_OriginalRanking_" could be a fair name to identify the Apache Solr ranking that precedes re-ranking. I named it after the "OriginalScore" feature, already in place; I'm happy to use a different name, but wouldn't spend too much time on it.
My proposal would be to use "_OriginalRanking_", or potentially "_OriginalScoreRanking_" if that is more readable. 2) The user interacts with the !ltr query parser as usual, and to use interleaving simply passes a second model (yes, we support only 2 interleaved models right now): model=A model=B To interleave with the original ranking: model=A model=_OriginalRanking_ I think this is quite intuitive and easy to use, quite explicit, and it follows current Apache Solr nomenclature for specially named items. 3) What happens if a user wants to use "_OriginalRanking_" as the name of one of their models? In this unlikely event, a Jira issue can be raised and we can design a solution at that time. A simple approach could be to make the original-ranking name parametric, so the user can specify the name in solrconfig.xml, but I would postpone that until we get requests for the functionality from users. What do you think? If we find a good compromise here, we can move the review to the re-scoring part, which is more delicate, and make sure no mistake happened there (I definitely don't want to slow down normal re-ranking, and I want interleaved re-ranking to be as fast as possible). You'll find the latest changes in the latest commits; I will update the related documentation today as well.
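The sentinel-name idea described above can be sketched as follows. This is a generic illustration, not the actual patch; the constant value `_OriginalRanking_` and the helper name are taken from the discussion and should be treated as assumptions.

```java
public class OriginalRankingCheck {
    // Sentinel model name following Solr's underscore convention for special
    // names (cf. _DEFAULT_, _version_). The exact value is assumed from the
    // PR discussion, not copied from the final committed code.
    static final String ORIGINAL_RANKING = "_OriginalRanking_";

    // Equality written constant-first so a null model name is handled safely.
    static boolean isOriginalRanking(String modelName) {
        return ORIGINAL_RANKING.equals(modelName);
    }

    public static void main(String[] args) {
        // e.g. a request carrying: model=myModelA model=_OriginalRanking_
        String[] modelNames = {"myModelA", "_OriginalRanking_"};
        for (String name : modelNames) {
            System.out.println(name + " -> original ranking? " + isOriginalRanking(name));
        }
    }
}
```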
[GitHub] [lucene-solr] alessandrobenedetti commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
alessandrobenedetti commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r518141759 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/search/LTRQParserPlugin.java ## @@ -146,93 +149,114 @@ public LTRQParser(String qstr, SolrParams localParams, SolrParams params, @Override public Query parse() throws SyntaxError { // ReRanking Model - final String modelName = localParams.get(LTRQParserPlugin.MODEL); - if ((modelName == null) || modelName.isEmpty()) { + final String[] modelNames = localParams.getParams(LTRQParserPlugin.MODEL); + if ((modelNames == null) || modelNames.length==0 || modelNames[0].isEmpty()) { throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Must provide model in the request"); } - - final LTRScoringModel ltrScoringModel = mr.getModel(modelName); - if (ltrScoringModel == null) { -throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, -"cannot find " + LTRQParserPlugin.MODEL + " " + modelName); - } - - final String modelFeatureStoreName = ltrScoringModel.getFeatureStoreName(); - final boolean extractFeatures = SolrQueryRequestContextUtils.isExtractingFeatures(req); - final String fvStoreName = SolrQueryRequestContextUtils.getFvStoreName(req); - // Check if features are requested and if the model feature store and feature-transform feature store are the same - final boolean featuresRequestedFromSameStore = (modelFeatureStoreName.equals(fvStoreName) || fvStoreName == null) ? 
extractFeatures:false; - if (threadManager != null) { - threadManager.setExecutor(req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor()); - } - final LTRScoringQuery scoringQuery = new LTRScoringQuery(ltrScoringModel, - extractEFIParams(localParams), - featuresRequestedFromSameStore, threadManager); - - // Enable the feature vector caching if we are extracting features, and the features - // we requested are the same ones we are reranking with - if (featuresRequestedFromSameStore) { -scoringQuery.setFeatureLogger( SolrQueryRequestContextUtils.getFeatureLogger(req) ); + + LTRScoringQuery[] rerankingQueries = new LTRScoringQuery[modelNames.length]; + for (int i = 0; i < modelNames.length; i++) { +final LTRScoringQuery rerankingQuery; +if (!ORIGINAL_RANKING.equals(modelNames[i])) { + final LTRScoringModel ltrScoringModel = mr.getModel(modelNames[i]); + if (ltrScoringModel == null) { +throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, +"cannot find " + LTRQParserPlugin.MODEL + " " + modelNames[i]); + } + final String modelFeatureStoreName = ltrScoringModel.getFeatureStoreName(); + final boolean extractFeatures = SolrQueryRequestContextUtils.isExtractingFeatures(req); + final String fvStoreName = SolrQueryRequestContextUtils.getFvStoreName(req);// Check if features are requested and if the model feature store and feature-transform feature store are the same + final boolean featuresRequestedFromSameStore = (modelFeatureStoreName.equals(fvStoreName) || fvStoreName == null) ? 
extractFeatures : false; + if (threadManager != null) { + threadManager.setExecutor(req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor()); + } + rerankingQuery = new LTRScoringQuery(ltrScoringModel, + extractEFIParams(localParams), + featuresRequestedFromSameStore, threadManager); + + // Enable the feature vector caching if we are extracting features, and the features + // we requested are the same ones we are reranking with + if (featuresRequestedFromSameStore) { +rerankingQuery.setFeatureLogger( SolrQueryRequestContextUtils.getFeatureLogger(req) ); + } +}else{ + rerankingQuery = new LTRScoringQuery(null); +} + +// External features +rerankingQuery.setRequest(req); +rerankingQueries[i] = rerankingQuery; } - SolrQueryRequestContextUtils.setScoringQuery(req, scoringQuery); + SolrQueryRequestContextUtils.setScoringQuery(req, rerankingQueries); int reRankDocs = localParams.getInt(RERANK_DOCS, DEFAULT_RERANK_DOCS); if (reRankDocs <= 0) { throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, - "Must rerank at least 1 document"); +"Must rerank at least 1 document"); + } + if (rerankingQueries.length == 1) { +return new LTRQuery(rerankingQueries[0], reRankDocs); + } else { +return new LTRQuery(rerankingQueries, reRankDocs); } - - // External features - scoringQuery.setRequest(req); - - return new
[GitHub] [lucene-solr] alessandrobenedetti commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
alessandrobenedetti commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r518145813 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/response/transform/LTRInterleavingTransformerFactory.java ## @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.ltr.response.transform; + +import java.io.IOException; +import org.apache.solr.common.SolrDocument; +import org.apache.solr.common.params.SolrParams; +import org.apache.solr.common.util.NamedList; +import org.apache.solr.ltr.LTRScoringQuery; +import org.apache.solr.ltr.SolrQueryRequestContextUtils; +import org.apache.solr.request.SolrQueryRequest; +import org.apache.solr.response.ResultContext; +import org.apache.solr.response.transform.DocTransformer; +import org.apache.solr.response.transform.TransformerFactory; +import org.apache.solr.util.SolrPluginUtils; + +import static org.apache.solr.ltr.search.LTRQParserPlugin.ORIGINAL_RANKING; +import static org.apache.solr.ltr.search.LTRQParserPlugin.isOriginalRanking; Review comment: Good idea! This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] alessandrobenedetti commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
alessandrobenedetti commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r518147905 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/SolrQueryRequestContextUtils.java ## @@ -47,12 +47,12 @@ public static FeatureLogger getFeatureLogger(SolrQueryRequest req) { /** scoring query accessors **/ - public static void setScoringQuery(SolrQueryRequest req, LTRScoringQuery scoringQuery) { -req.getContext().put(SCORING_QUERY, scoringQuery); + public static void setScoringQuery(SolrQueryRequest req, LTRScoringQuery[] scoringQueries) { Review comment: absolutely reasonable!
[jira] [Created] (SOLR-14985) Slow indexing and search performance when using HttpClusterStateProvider
Shalin Shekhar Mangar created SOLR-14985: Summary: Slow indexing and search performance when using HttpClusterStateProvider Key: SOLR-14985 URL: https://issues.apache.org/jira/browse/SOLR-14985 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: SolrJ Reporter: Shalin Shekhar Mangar HttpClusterStateProvider fetches and caches Aliases and Live Nodes for 5 seconds. The BaseSolrCloudClient caches DocCollection for 60 seconds but only if the DocCollection is not lazy and all collections returned by HttpClusterStateProvider are not lazy which means they are never cached. The BaseSolrCloudClient has a method for resolving aliases which fetches DocCollection for each input collection. This is an HTTP call with no caching when using HttpClusterStateProvider. This resolveAliases method is called twice for each update. So overall, at least 3 HTTP calls are made to fetch cluster state for each update request when using HttpClusterStateProvider. There may be more if aliases are involved or if more than one collection is specified in the request. Similar problems exist on the query path as well. Due to these reasons, using HttpClusterStateProvider causes horrible latencies and throughput for update requests.
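The caching gap described in the issue above (a cluster-state HTTP fetch on every request) is the kind of problem a short-TTL cache addresses. The sketch below is a generic illustration under invented names, not SolrJ code: a value is re-fetched only after its time-to-live expires, so repeated lookups within the TTL window cost no HTTP round trip.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Generic short-TTL cache sketch; class and method names are invented for
// illustration and do not correspond to actual SolrJ classes.
public class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAtNanos;
        Entry(V value, long expiresAtNanos) {
            this.value = value;
            this.expiresAtNanos = expiresAtNanos;
        }
    }

    private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();
    private final long ttlNanos;

    public TtlCache(long ttlMillis) {
        this.ttlNanos = ttlMillis * 1_000_000L;
    }

    // Returns the cached value if still fresh; otherwise invokes the loader
    // (standing in for one HTTP fetch of cluster state) and caches the result.
    public V get(K key, Supplier<V> loader) {
        long now = System.nanoTime();
        Entry<V> e = cache.get(key);
        if (e == null || now - e.expiresAtNanos >= 0) {
            V v = loader.get();
            cache.put(key, new Entry<>(v, now + ttlNanos));
            return v;
        }
        return e.value;
    }

    public static void main(String[] args) {
        TtlCache<String, String> c = new TtlCache<>(60_000);
        System.out.println(c.get("collection1", () -> "fetched-state"));
    }
}
```

With a 5- or 60-second TTL (the figures mentioned in the issue), back-to-back requests would resolve from memory instead of issuing three HTTP calls each.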
[jira] [Commented] (SOLR-14985) Slow indexing and search performance when using HttpClusterStateProvider
[ https://issues.apache.org/jira/browse/SOLR-14985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226782#comment-17226782 ] Shalin Shekhar Mangar commented on SOLR-14985: -- Linking to SOLR-14966 and SOLR-14967 > Slow indexing and search performance when using HttpClusterStateProvider > > > Key: SOLR-14985 > URL: https://issues.apache.org/jira/browse/SOLR-14985 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Reporter: Shalin Shekhar Mangar >Priority: Major > > HttpClusterStateProvider fetches and caches Aliases and Live Nodes for 5 > seconds. > The BaseSolrCloudClient caches DocCollection for 60 seconds but only if the > DocCollection is not lazy and all collections returned by > HttpClusterStateProvider are not lazy which means they are never cached. > The BaseSolrCloudClient has a method for resolving aliases which fetches > DocCollection for each input collection. This is an HTTP call with no caching > when using HttpClusterStateProvider. This resolveAliases method is called > twice for each update. > So overall, at least 3 HTTP calls are made to fetch cluster state for each > update request when using HttpClusterStateProvider. There may be more if > aliases are involved or if more than one collection is specified in the > request. Similar problems exist on the query path as well. > Due to these reasons, using HttpClusterStateProvider causes horrible > latencies and throughput for update requests. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
cpoerschke commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r518156348 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/LTRInterleavingRescorer.java ## @@ -0,0 +1,156 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.ltr; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.Set; + +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.Explanation; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.ScoreMode; +import org.apache.lucene.search.TopDocs; +import org.apache.solr.ltr.interleaving.Interleaving; +import org.apache.solr.ltr.interleaving.InterleavingResult; +import org.apache.solr.ltr.interleaving.TeamDraftInterleaving; + +/** + * Implements the rescoring logic. The top documents returned by solr with their + * original scores, will be processed by a {@link LTRScoringQuery} that will assign a + * new score to each document. The top documents will be resorted based on the + * new score. 
+ * */ +public class LTRInterleavingRescorer extends LTRRescorer { + + LTRScoringQuery[] rerankingQueries; + Interleaving interleavingAlgorithm = new TeamDraftInterleaving(); Review comment: 11.1/n If we go with 4/n then `LTRQParserPlugin.LTRQParser` could pass a `Interleaving interleavingAlgorithm` argument to the `LTRInterleavingQuery` constructor which could pass it to the `LTRInterleavingRescorer` constructor. For now `TeamDraftInterleaving` would be the only supported algorithm but in future other algorithms could then easily be added e.g. based on an additional `ltr` parameter. What do you think? An easy change to make now or something better left for later? ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/interleaving/TeamDraftInterleaving.java ## @@ -0,0 +1,87 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.solr.ltr.interleaving; + +import java.util.ArrayList; +import java.util.HashSet; +import java.util.LinkedHashSet; +import java.util.Random; +import java.util.Set; + +import org.apache.lucene.search.ScoreDoc; + +public class TeamDraftInterleaving implements Interleaving{ + public static Random RANDOM; + + static { +// We try to make things reproducible in the context of our tests by initializing the random instance +// based on the current seed +String seed = System.getProperty("tests.seed"); Review comment: 11.2/n `LTRQParserPlugin.LTRQParser` also has access to the `SolrQueryRequest` and its `SolrCore` object. For some reason I thought that within that some 'official' source of random-ness might be available which could be passed to a `TeamDraftInterleaving(Random)` constructor. And I imagined that our test harnesses would use seeds to make tests reproducible w.r.t. that 'official' source of random-ness. There however doesn't appear to be such a source of non-test official random-ness? `System.getProperty("tests.seed");` being used/available to non-test code seems potentially tricky. @dweiss would you perhaps have any insights around non-test sources of randomness?
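The team-draft algorithm that the class above implements can be sketched as follows. This is a generic illustration over document-ID strings, not the Solr `ScoreDoc`-based implementation; note the `Random` is injected via a parameter, which is the constructor-argument idea raised in the review.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Generic team-draft interleaving sketch: two ranked lists alternately
// "draft" their next not-yet-picked document, with a coin flip deciding
// who drafts first whenever the teams are tied in size.
public class TeamDraftInterleaver {
    public static List<String> interleave(List<String> a, List<String> b, Random random) {
        List<String> interleaved = new ArrayList<>();
        Set<String> picked = new HashSet<>();
        int teamA = 0, teamB = 0;
        int ia = 0, ib = 0;
        while (ia < a.size() || ib < b.size()) {
            // Skip documents the other team has already drafted.
            while (ia < a.size() && picked.contains(a.get(ia))) ia++;
            while (ib < b.size() && picked.contains(b.get(ib))) ib++;
            boolean aTurn;
            if (ia >= a.size()) aTurn = false;          // A exhausted: B drafts
            else if (ib >= b.size()) aTurn = true;      // B exhausted: A drafts
            else if (teamA < teamB) aTurn = true;       // smaller team drafts
            else if (teamB < teamA) aTurn = false;
            else aTurn = random.nextBoolean();          // tie: coin flip
            if (aTurn && ia < a.size()) {
                picked.add(a.get(ia)); interleaved.add(a.get(ia)); teamA++; ia++;
            } else if (!aTurn && ib < b.size()) {
                picked.add(b.get(ib)); interleaved.add(b.get(ib)); teamB++; ib++;
            } else {
                break;  // both lists exhausted
            }
        }
        return interleaved;
    }

    public static void main(String[] args) {
        List<String> modelA = Arrays.asList("d1", "d2", "d3");
        List<String> modelB = Arrays.asList("d2", "d4", "d1");
        System.out.println(interleave(modelA, modelB, new Random()));
    }
}
```

Injecting the `Random` keeps the production code free of test-only properties like `tests.seed` while still letting a test pass a seeded instance for reproducibility.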
[jira] [Updated] (SOLR-14985) Slow indexing and search performance when using HttpClusterStateProvider
[ https://issues.apache.org/jira/browse/SOLR-14985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-14985: - Description: HttpClusterStateProvider fetches and caches Aliases and Live Nodes for 5 seconds. The BaseSolrCloudClient caches DocCollection for 60 seconds but only if the DocCollection is not lazy and all collections returned by HttpClusterStateProvider are not lazy which means they are never cached. The BaseSolrCloudClient has a method for resolving aliases which fetches DocCollection for each input collection. This is an HTTP call with no caching when using HttpClusterStateProvider. This resolveAliases method is called twice for each update. So overall, at least 3 HTTP calls are made to fetch cluster state for each update request when using HttpClusterStateProvider. There may be more if aliases are involved or if more than one collection is specified in the request. Similar problems exist on the query path as well. Due to these reasons, using HttpClusterStateProvider causes horrible latencies and throughput for update and search requests. was: HttpClusterStateProvider fetches and caches Aliases and Live Nodes for 5 seconds. The BaseSolrCloudClient caches DocCollection for 60 seconds but only if the DocCollection is not lazy and all collections returned by HttpClusterStateProvider are not lazy which means they are never cached. The BaseSolrCloudClient has a method for resolving aliases which fetches DocCollection for each input collection. This is an HTTP call with no caching when using HttpClusterStateProvider. This resolveAliases method is called twice for each update. So overall, at least 3 HTTP calls are made to fetch cluster state for each update request when using HttpClusterStateProvider. There may be more if aliases are involved or if more than one collection is specified in the request. Similar problems exist on the query path as well. 
Due to these reasons, using HttpClusterStateProvider causes horrible latencies and throughput for update requests. > Slow indexing and search performance when using HttpClusterStateProvider > > > Key: SOLR-14985 > URL: https://issues.apache.org/jira/browse/SOLR-14985 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Reporter: Shalin Shekhar Mangar >Priority: Major > > HttpClusterStateProvider fetches and caches Aliases and Live Nodes for 5 > seconds. > The BaseSolrCloudClient caches DocCollection for 60 seconds but only if the > DocCollection is not lazy and all collections returned by > HttpClusterStateProvider are not lazy which means they are never cached. > The BaseSolrCloudClient has a method for resolving aliases which fetches > DocCollection for each input collection. This is an HTTP call with no caching > when using HttpClusterStateProvider. This resolveAliases method is called > twice for each update. > So overall, at least 3 HTTP calls are made to fetch cluster state for each > update request when using HttpClusterStateProvider. There may be more if > aliases are involved or if more than one collection is specified in the > request. Similar problems exist on the query path as well. > Due to these reasons, using HttpClusterStateProvider causes horrible > latencies and throughput for update and search requests. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (SOLR-14986) Restrict the properties possible to define with "property.name=value" when creating a collection
Erick Erickson created SOLR-14986: - Summary: Restrict the properties possible to define with "property.name=value" when creating a collection Key: SOLR-14986 URL: https://issues.apache.org/jira/browse/SOLR-14986 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Erick Erickson Assignee: Erick Erickson This came to light when I was looking at two user-list questions where people try to manually define core.properties to define _replicas_ in SolrCloud. There are two related issues: 1> You can do things like "action=CREATE&name=eoe&property.collection=blivet" which results in an opaque error about "could not create replica." I propose we return a better error here like "property.collection should not be specified when creating a collection". What do people think about the rest of the auto-created properties on collection creation? coreNodeName collection.configName name numShards shard collection replicaType "name" seems to be OK to change, although I don't see anywhere anyone can actually see it afterwards 2> Change the ref guide to steer people away from attempting to manually create a core.properties file to define cores/replicas in SolrCloud. There's no warning on the "defining-core-properties.adoc" for instance. Additionally there should be some kind of message on the collections API documentation about not trying to set the properties in <1> on the CREATE command. <2> used to actually work (apparently) with legacyCloud...
[GitHub] [lucene-solr] saatchibhalla commented on pull request #2040: SOLR-14965 add overseer queue size metrics
saatchibhalla commented on pull request #2040: URL: https://github.com/apache/lucene-solr/pull/2040#issuecomment-722489672 @sigram Would you mind taking another look at this PR whenever you have a chance?
[jira] [Comment Edited] (SOLR-9188) BlockUnknown property makes inter-node communication impossible
[ https://issues.apache.org/jira/browse/SOLR-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226817#comment-17226817 ] Yevhen Tienkaiev edited comment on SOLR-9188 at 11/5/20, 4:45 PM: -- Same issue found in 8.6, what is going on? was (Author: hronom): Same issue found in 8.6 > BlockUnknown property makes inter-node communication impossible > --- > > Key: SOLR-9188 > URL: https://issues.apache.org/jira/browse/SOLR-9188 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 6.0 >Reporter: Piotr Tempes >Assignee: Noble Paul >Priority: Critical > Labels: BasicAuth, Security > Fix For: 6.2.1, 6.3, 7.0 > > Attachments: SOLR-9188.patch, solr9188-errorlog.txt > > > When I setup my solr cloud without blockUnknown property it works as > expected. When I want to block non authenticated requests I get following > stacktrace during startup (see attached file).
[jira] [Commented] (SOLR-9188) BlockUnknown property makes inter-node communication impossible
[ https://issues.apache.org/jira/browse/SOLR-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226817#comment-17226817 ] Yevhen Tienkaiev commented on SOLR-9188: Same issue found in 8.6 > BlockUnknown property makes inter-node communication impossible > --- > > Key: SOLR-9188 > URL: https://issues.apache.org/jira/browse/SOLR-9188 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 6.0 >Reporter: Piotr Tempes >Assignee: Noble Paul >Priority: Critical > Labels: BasicAuth, Security > Fix For: 6.2.1, 6.3, 7.0 > > Attachments: SOLR-9188.patch, solr9188-errorlog.txt > > > When I setup my solr cloud without blockUnknown property it works as > expected. When I want to block non authenticated requests I get following > stacktrace during startup (see attached file).
[jira] [Commented] (SOLR-8330) Restrict logger visibility throughout the codebase to private so that only the file that declares it can use it
[ https://issues.apache.org/jira/browse/SOLR-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226823#comment-17226823 ] David Smiley commented on SOLR-8330: I'm looking at logs to debug a problem, and the logger name I see in some logs is {{org.apache.solr.handler.RequestHandlerBase}}. Wouldn't it be superior to see SearchHandler or whatever the subclass might be? This is an abstract base class, yet it declares a static logger. I'm surprised to learn that this was intentional, decided in this issue by [~anshum]. I would like to reverse this decision. Thoughts? I'll file a new issue of course. CC [~erickerickson] as well, who cares about logging. > Restrict logger visibility throughout the codebase to private so that only > the file that declares it can use it > --- > > Key: SOLR-8330 > URL: https://issues.apache.org/jira/browse/SOLR-8330 > Project: Solr > Issue Type: Sub-task >Affects Versions: 6.0 >Reporter: Jason Gerlowski >Assignee: Anshum Gupta >Priority: Major > Labels: logging > Fix For: 5.4, 6.0 > > Attachments: SOLR-8330-combined.patch, SOLR-8330-detector.patch, > SOLR-8330-detector.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, > SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch > > > As Mike Drob pointed out in Solr-8324, many loggers in Solr are > unintentionally shared between classes. Many instances of this are caused by > overzealous copy-paste. This can make debugging tougher, as messages appear > to come from an incorrect location. > As discussed in the comments on SOLR-8324, there also might be legitimate > reasons for sharing loggers between classes. Where any ambiguity exists, > these instances shouldn't be touched.
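One alternative to a shared static logger in an abstract base class is a per-instance logger resolved through `getClass()`, so log records are attributed to the concrete subclass. The sketch below uses `java.util.logging` so it stays self-contained (Solr itself logs via SLF4J); class names are invented for illustration.

```java
import java.util.logging.Logger;

// Stand-in for an abstract handler base class; not actual Solr code.
abstract class RequestHandlerBaseSketch {
    // Per-instance logger: getClass() resolves to the concrete subclass at
    // runtime, so messages are attributed to e.g. "SearchHandlerSketch"
    // rather than to the abstract base class.
    protected final Logger log = Logger.getLogger(getClass().getName());
}

public class SearchHandlerSketch extends RequestHandlerBaseSketch {
    public static void main(String[] args) {
        SearchHandlerSketch handler = new SearchHandlerSketch();
        System.out.println(handler.log.getName()); // SearchHandlerSketch
    }
}
```

The trade-off is one `Logger` lookup per instance instead of one per class; for long-lived handler objects that cost is usually negligible compared with the debugging value of accurate logger names.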
[jira] [Commented] (SOLR-14951) Upgrade Angular JS 1.7.9 to 1.8.0
[ https://issues.apache.org/jira/browse/SOLR-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226862#comment-17226862 ] Cassandra Targett commented on SOLR-14951: -- It looks like this didn't make it into 8.7. Is anything besides available time holding it up? Just curious. > Upgrade Angular JS 1.7.9 to 1.8.0 > - > > Key: SOLR-14951 > URL: https://issues.apache.org/jira/browse/SOLR-14951 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Admin UI >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Angular JS released 1.8.0 to fix some security vulnerabilities.
[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
cpoerschke commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r518217755 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/response/transform/LTRFeatureLoggerTransformerFactory.java ## @@ -210,50 +216,59 @@ public void setContext(ResultContext context) { } // Setup LTRScoringQuery - scoringQuery = SolrQueryRequestContextUtils.getScoringQuery(req); - docsWereNotReranked = (scoringQuery == null); - String featureStoreName = SolrQueryRequestContextUtils.getFvStoreName(req); - if (docsWereNotReranked || (featureStoreName != null && (!featureStoreName.equals(scoringQuery.getScoringModel().getFeatureStoreName() { -// if store is set in the transformer we should overwrite the logger - -final ManagedFeatureStore fr = ManagedFeatureStore.getManagedFeatureStore(req.getCore()); - -final FeatureStore store = fr.getFeatureStore(featureStoreName); -featureStoreName = store.getName(); // if featureStoreName was null before this gets actual name - -try { - final LoggingModel lm = new LoggingModel(loggingModelName, - featureStoreName, store.getFeatures()); - - scoringQuery = new LTRScoringQuery(lm, - LTRQParserPlugin.extractEFIParams(localparams), - true, - threadManager); // request feature weights to be created for all features - -}catch (final Exception e) { - throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, - "retrieving the feature store "+featureStoreName, e); -} - } + rerankingQueries = SolrQueryRequestContextUtils.getScoringQueries(req); - if (scoringQuery.getOriginalQuery() == null) { -scoringQuery.setOriginalQuery(context.getQuery()); + docsWereNotReranked = (rerankingQueries == null || rerankingQueries.length == 0); + if (docsWereNotReranked) { +rerankingQueries = new LTRScoringQuery[]{null}; } - if (scoringQuery.getFeatureLogger() == null){ -scoringQuery.setFeatureLogger( SolrQueryRequestContextUtils.getFeatureLogger(req) ); - } - scoringQuery.setRequest(req); - - featureLogger = 
scoringQuery.getFeatureLogger(); + modelWeights = new LTRScoringQuery.ModelWeight[rerankingQueries.length]; + String featureStoreName = SolrQueryRequestContextUtils.getFvStoreName(req); + for (int i = 0; i < rerankingQueries.length; i++) { +LTRScoringQuery scoringQuery = rerankingQueries[i]; +if ((scoringQuery == null || !(scoringQuery instanceof OriginalRankingLTRScoringQuery)) && (docsWereNotReranked || (featureStoreName != null && !featureStoreName.equals(scoringQuery.getScoringModel().getFeatureStoreName() { Review comment: 12/n observations/thoughts/questions: Most tricky to articulate, hence left until last. Prior to interleaving the existing logic is that if feature vectors are requested and there is no model (or the model is for a different feature store) then a logging model is created. So now then if we have two models: * if both models are for the requested feature store then that's great and each document would have been picked by one of the models and so we use the feature vector already previously calculated by whatever model had picked the document. * if neither model is for the requested feature store then we need to create a logging model, is one logging model sufficient or do we need two? intuitively to me one would seem to be sufficient but that's based on partial analysis only so far. * if one of the two models (modelA) is for the requested feature store then for the documents picked by modelA we can use the feature vector already previously calculated by modelA. what about documents picked by modelB? it could be that modelA actually has the feature vector for that document but that modelB simply managed to pick the document first. or if modelA does not have the feature vector then we could calculate it for modelA. would a logging model help in this scenario? 
intuitively to me it would seem that calculating the missing feature vector via modelA or via the logging model would both be equally efficient and hence no logging model would be needed but again that's only based on partial analysis so far. ## File path: solr/solr-ref-guide/src/learning-to-rank.adoc ## @@ -247,6 +254,81 @@ The output XML will include feature values as a comma-separated list, resembling }} +=== Running a Rerank Query Interleaving Two Models + +To rerank the results of a query, interleaving two models (myModelA, myModelB) add the `rq` parameter to your search, passing two models in input, for example: + +[source,text] +http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModelA model=myModelB reRankDocs=100}&fl=id,score + +To obt
[jira] [Commented] (SOLR-14683) Review the metrics API to ensure consistent placeholders for missing values
[ https://issues.apache.org/jira/browse/SOLR-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226897#comment-17226897 ] Chris M. Hostetter commented on SOLR-14683: --- Whether it's in the spec or not, Solr's JSON Response writer already has long-standing support to output {{Float.NaN}} as a quoted string {{NaN}}. If the metrics libraries recommend using NaN as a standard approach to this then I think that makes sense -- anyone consuming Solr's metrics API using the JSON response writer should be able to deal with the quoted string -- just like if they found it in any other Solr JSON response. (...or switch to using the javabin/XML response writer to get a type-specific {{NaN}}, or if NaN is really enough of a problem for JSON then we can consider adding a {{json.nan}} param making the behavior configurable .. but these are all orthogonal to the metrics API doing "the right thing") > Review the metrics API to ensure consistent placeholders for missing values > --- > > Key: SOLR-14683 > URL: https://issues.apache.org/jira/browse/SOLR-14683 > Project: Solr > Issue Type: Improvement > Components: metrics >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > > Spin-off from SOLR-14657. Some gauges can legitimately be missing or in an > unknown state at some points in time, eg. during SolrCore startup or shutdown. > Currently the API returns placeholders with either impossible values for > numeric gauges (such as index size -1) or empty maps / strings for other > non-numeric gauges. > [~hossman] noticed that the values for these placeholders may be misleading, > depending on how the user treats them - if the client has no special logic to > treat them as "missing values" it may erroneously treat them as valid data. > E.g. numeric values of -1 or 0 may severely skew averages and produce > misleading peaks / valleys in metrics histories.
> On the other hand returning a literal {{null}} value instead of the expected > number may also cause unexpected client issues - although in this case it's > clearer that there's actually no data available, so long-term this may be a > better strategy than returning impossible values, even if it means that the > client should learn to handle {{null}} values appropriately.
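Hoss's point about the JSON writer can be illustrated with a minimal sketch (`jsonNumber` is a hypothetical helper, not Solr's actual writer code): bare `NaN` is not a legal JSON token, so a writer that wants its output to stay parseable emits it as the quoted string `"NaN"`.

```java
public class NanJson {
  // Hypothetical helper, not Solr's actual JSON writer: render a double as a
  // JSON value. Bare NaN/Infinity are not legal JSON tokens, so quote them.
  static String jsonNumber(double v) {
    if (Double.isNaN(v) || Double.isInfinite(v)) {
      return "\"" + v + "\""; // e.g. "NaN", "Infinity"
    }
    return Double.toString(v);
  }

  public static void main(String[] args) {
    System.out.println(jsonNumber(Double.NaN)); // "NaN"
    System.out.println(jsonNumber(1.5));        // 1.5
  }
}
```

A client parsing the metrics response then sees an ordinary string value, which is unambiguous (unlike -1 or 0) but does require the client to expect a string where a number usually appears.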
[jira] [Comment Edited] (SOLR-14683) Review the metrics API to ensure consistent placeholders for missing values
[ https://issues.apache.org/jira/browse/SOLR-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226897#comment-17226897 ] Chris M. Hostetter edited comment on SOLR-14683 at 11/5/20, 5:47 PM: - Where it's in the spec or not, Solr's JSON Response writer already has long standing support to output {{Float.NaN}} as a quoted string {{"NaN"}}. If the metrics libraries recommend using NaN as a standard approach to this then i think that makes sense -- anyone consuming Solr's metrics API using the JSON response writer should be able deal with the quoted string -- just like if they found it in any other solr JSON response. (...or switch to using javabin/XML response writer to get a type specific {{NaN}}, or if NaN is really enough of a problem for JSON then we can consdier adding a {{json.nan}} param making the behavior configurable .. but these are all orthoginal to the metrics API doing "the right thing") was (Author: hossman): Where it's in the spec or not, Solr's JSON Response writer already has long standing support to output {{Float.NaN}} as a quoted string {{NaN}}. If the metrics libraries recommend using NaN as a standard approach to this then i think that makes sense -- anyone consuming Solr's metrics API using the JSON response writer should be able deal with the quoted string -- just like if they found it in any other solr JSON response. (...or switch to using javabin/XML response writer to get a type specific {{NaN}}, or if NaN is really enough of a problem for JSON then we can consdier adding a {{json.nan}} param making the behavior configurable .. 
but these are all orthoginal to the metrics API doing "the right thing") > Review the metrics API to ensure consistent placeholders for missing values > --- > > Key: SOLR-14683 > URL: https://issues.apache.org/jira/browse/SOLR-14683 > Project: Solr > Issue Type: Improvement > Components: metrics >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > > Spin-off from SOLR-14657. Some gauges can legitimately be missing or in an > unknown state at some points in time, eg. during SolrCore startup or shutdown. > Currently the API returns placeholders with either impossible values for > numeric gauges (such as index size -1) or empty maps / strings for other > non-numeric gauges. > [~hossman] noticed that the values for these placeholders may be misleading, > depending on how the user treats them - if the client has no special logic to > treat them as "missing values" it may erroneously treat them as valid data. > E.g. numeric values of -1 or 0 may severely skew averages and produce > misleading peaks / valleys in metrics histories. > On the other hand returning a literal {{null}} value instead of the expected > number may also cause unexpected client issues - although in this case it's > clearer that there's actually no data available, so long-term this may be a > better strategy than returning impossible values, even if it means that the > client should learn to handle {{null}} values appropriately. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] gerlowskija commented on pull request #2056: SOLR-14971: Handle atomic-removes on uncommitted docs
gerlowskija commented on pull request #2056: URL: https://github.com/apache/lucene-solr/pull/2056#issuecomment-722539440 I thought of doing a `toNativeType` conversion on the "original" Collection, but didn't like how it changed SolrInputDocument values for field-vals supposedly unaffected by the operation (e.g. an atomic remove of 1 of 5 values would change all 5 to native-types). I was leery of what problems that might cause downstream. But if there are input edge cases I'm missing in the current iteration, maybe it's worth revisiting that approach. I'll take a look at that and some of the other things you mentioned (esp. add-distinct reproduction). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
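The alternative Jason weighs can be sketched as follows (a hypothetical helper, not Solr's actual atomic-update code): normalize values only for the comparison, so the surviving field values keep their original types and objects.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AtomicRemoveSketch {
  // Hypothetical helper, not Solr's update path: drop the values named in an
  // atomic remove while leaving the surviving stored objects exactly as they
  // were. Normalization (here: String.valueOf) happens only inside the
  // comparison, so unaffected values are never converted to a different type.
  static List<Object> atomicRemove(List<Object> current, List<Object> toRemove) {
    Set<String> remove = new HashSet<>();
    for (Object o : toRemove) {
      remove.add(String.valueOf(o));
    }
    List<Object> kept = new ArrayList<>();
    for (Object o : current) {
      if (!remove.contains(String.valueOf(o))) {
        kept.add(o); // original object, untouched
      }
    }
    return kept;
  }
}
```

For example, removing the string "2" from stored Integer values 1, 2, 3 drops the 2 but returns the original Integer objects 1 and 3 rather than converted copies.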
[GitHub] [lucene-solr] alessandrobenedetti commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
alessandrobenedetti commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r518269182 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/LTRRescorer.java ## @@ -166,64 +186,77 @@ public void scoreFeatures(IndexSearcher indexSearcher, TopDocs firstPassTopDocs, docBase = readerContext.docBase; scorer = modelWeight.scorer(readerContext); } - // Scorer for a LTRScoringQuery.ModelWeight should never be null since we always have to - // call score - // even if no feature scorers match, since a model might use that info to - // return a - // non-zero score. Same applies for the case of advancing a LTRScoringQuery.ModelWeight.ModelScorer - // past the target - // doc since the model algorithm still needs to compute a potentially - // non-zero score from blank features. - assert (scorer != null); - final int targetDoc = docID - docBase; - scorer.docID(); - scorer.iterator().advance(targetDoc); - - scorer.getDocInfo().setOriginalDocScore(hit.score); - hit.score = scorer.score(); - if (hitUpto < topN) { -reranked[hitUpto] = hit; -// if the heap is not full, maybe I want to log the features for this -// document + scoreSingleHit(indexSearcher, topN, modelWeight, docBase, hitUpto, hit, docID, scoringQuery, scorer, reranked); Review comment: Brilliant observation! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-8673) o.a.s.search.facet classes not public/extendable
[ https://issues.apache.org/jira/browse/SOLR-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226927#comment-17226927 ] Mikhail Khludnev commented on SOLR-8673: I think it's good. However, it's worth to add a dummy descendant class in tests, just to ensure that extension from other package is possible. > o.a.s.search.facet classes not public/extendable > > > Key: SOLR-8673 > URL: https://issues.apache.org/jira/browse/SOLR-8673 > Project: Solr > Issue Type: Improvement > Components: Facet Module >Affects Versions: 5.4.1 >Reporter: Markus Jelsma >Priority: Major > Fix For: 6.2, 7.0 > > Attachments: SOLR-8673.patch > > > It is not easy to create a custom JSON facet function. A simple function > based on AvgAgg quickly results in the following compilation failures: > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.3:compile (default-compile) > on project openindex-solr: Compilation failure: Compilation failure: > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[22,36] > org.apache.solr.search.facet.FacetContext is not public in > org.apache.solr.search.facet; cannot be accessed from outside package > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[23,36] > org.apache.solr.search.facet.FacetDoubleMerger is not public in > org.apache.solr.search.facet; cannot be accessed from outside package > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[40,32] > cannot find symbol > [ERROR] symbol: class FacetContext > [ERROR] location: class i.o.s.search.facet.CustomAvgAgg > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[49,39] > cannot find symbol > [ERROR] symbol: class FacetDoubleMerger > [ERROR] location: class i.o.s.search.facet.CustomAvgAgg > [ERROR] > 
/home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[54,43] > cannot find symbol > [ERROR] symbol: class Context > [ERROR] location: class i.o.s.search.facet.CustomAvgAgg.Merger > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[41,16] > cannot find symbol > [ERROR] symbol: class AvgSlotAcc > [ERROR] location: class i.o.s.search.facet.CustomAvgAgg > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[46,12] > incompatible types: i.o.s.search.facet.CustomAvgAgg.Merger cannot be > converted to org.apache.solr.search.facet.FacetMerger > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[53,5] > method does not override or implement a method from a supertype > [ERROR] > /home/markus/projects/openindex/solr/trunk/src/main/java/i.o.s.search/facet/CustomAvgAgg.java:[60,5] > method does not override or implement a method from a supertype > {code} > It seems lots of classes are tucked away in FacetModule, which we can't reach > from outside. > Originates from this thread: > http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201602.mbox/%3ccab_8yd9ldbg_0zxm_h1igkfm6bqeypd5ilyy7tty8cztscv...@mail.gmail.com%3E > ( also available at > https://lists.apache.org/thread.html/9fddcad3136ec908ce1c57881f8d3069e5d153f08b71f80f3e18d995%401455019826%40%3Csolr-user.lucene.apache.org%3E > ) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
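Mikhail's dummy-descendant idea could be complemented by a cheap reflective guard in the same spirit (a sketch only; `extendable` is a hypothetical helper, and the JDK classes below merely stand in for the facet classes under discussion): assert that a class meant for extension is public and non-final, so an accidental visibility regression fails the test immediately.

```java
import java.lang.reflect.Modifier;

public class ExtensibilityCheck {
  // Hypothetical guard: a class is subclassable from another package only if
  // it is public and not final (constructor access matters too, but the
  // modifier check already catches the regression reported in this issue).
  static boolean extendable(Class<?> cls) {
    int m = cls.getModifiers();
    return Modifier.isPublic(m) && !Modifier.isFinal(m);
  }

  public static void main(String[] args) {
    System.out.println(extendable(java.util.AbstractList.class)); // true
    System.out.println(extendable(String.class));                 // false
  }
}
```

In a real test the actual facet classes (e.g. the merger base class) would be passed in, alongside the dummy subclass that proves extension compiles.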
[jira] [Commented] (SOLR-8330) Restrict logger visibility throughout the codebase to private so that only the file that declares it can use it
[ https://issues.apache.org/jira/browse/SOLR-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17226930#comment-17226930 ] Erick Erickson commented on SOLR-8330: -- Without thinking about it too much, I'm in favor of as detailed, relevant information as possible. Having the abstract class name instead of the concrete class seems wrong. I'd be in favor of there being no loggers in abstract classes at all, although I could be talked out of that. The one in RequestHandlerBase is particularly strange. The only usage is in handleRequest, where it's passed as a parameter: {code} SolrException.log(log, e); {code} So it's not even like a subclass is using it except if the subclass calls super.handleRequest(). > Restrict logger visibility throughout the codebase to private so that only > the file that declares it can use it > --- > > Key: SOLR-8330 > URL: https://issues.apache.org/jira/browse/SOLR-8330 > Project: Solr > Issue Type: Sub-task >Affects Versions: 6.0 >Reporter: Jason Gerlowski >Assignee: Anshum Gupta >Priority: Major > Labels: logging > Fix For: 5.4, 6.0 > > Attachments: SOLR-8330-combined.patch, SOLR-8330-detector.patch, > SOLR-8330-detector.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, > SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch, SOLR-8330.patch > > > As Mike Drob pointed out in Solr-8324, many loggers in Solr are > unintentionally shared between classes. Many instances of this are caused by > overzealous copy-paste. This can make debugging tougher, as messages appear > to come from an incorrect location. > As discussed in the comments on SOLR-8324, there also might be legitimate > reasons for sharing loggers between classes. Where any ambiguity exists, > these instances shouldn't be touched. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] alessandrobenedetti commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
alessandrobenedetti commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r518290312 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/search/LTRQParserPlugin.java ## @@ -146,93 +149,114 @@ public LTRQParser(String qstr, SolrParams localParams, SolrParams params, @Override public Query parse() throws SyntaxError { // ReRanking Model - final String modelName = localParams.get(LTRQParserPlugin.MODEL); - if ((modelName == null) || modelName.isEmpty()) { + final String[] modelNames = localParams.getParams(LTRQParserPlugin.MODEL); + if ((modelNames == null) || modelNames.length==0 || modelNames[0].isEmpty()) { throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Must provide model in the request"); } - - final LTRScoringModel ltrScoringModel = mr.getModel(modelName); - if (ltrScoringModel == null) { -throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, -"cannot find " + LTRQParserPlugin.MODEL + " " + modelName); - } - - final String modelFeatureStoreName = ltrScoringModel.getFeatureStoreName(); - final boolean extractFeatures = SolrQueryRequestContextUtils.isExtractingFeatures(req); - final String fvStoreName = SolrQueryRequestContextUtils.getFvStoreName(req); - // Check if features are requested and if the model feature store and feature-transform feature store are the same - final boolean featuresRequestedFromSameStore = (modelFeatureStoreName.equals(fvStoreName) || fvStoreName == null) ? 
extractFeatures:false; - if (threadManager != null) { - threadManager.setExecutor(req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor()); - } - final LTRScoringQuery scoringQuery = new LTRScoringQuery(ltrScoringModel, - extractEFIParams(localParams), - featuresRequestedFromSameStore, threadManager); - - // Enable the feature vector caching if we are extracting features, and the features - // we requested are the same ones we are reranking with - if (featuresRequestedFromSameStore) { -scoringQuery.setFeatureLogger( SolrQueryRequestContextUtils.getFeatureLogger(req) ); + + LTRScoringQuery[] rerankingQueries = new LTRScoringQuery[modelNames.length]; + for (int i = 0; i < modelNames.length; i++) { +final LTRScoringQuery rerankingQuery; +if (!ORIGINAL_RANKING.equals(modelNames[i])) { + final LTRScoringModel ltrScoringModel = mr.getModel(modelNames[i]); + if (ltrScoringModel == null) { +throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, +"cannot find " + LTRQParserPlugin.MODEL + " " + modelNames[i]); + } + final String modelFeatureStoreName = ltrScoringModel.getFeatureStoreName(); + final boolean extractFeatures = SolrQueryRequestContextUtils.isExtractingFeatures(req); + final String fvStoreName = SolrQueryRequestContextUtils.getFvStoreName(req);// Check if features are requested and if the model feature store and feature-transform feature store are the same + final boolean featuresRequestedFromSameStore = (modelFeatureStoreName.equals(fvStoreName) || fvStoreName == null) ? 
extractFeatures : false; + if (threadManager != null) { + threadManager.setExecutor(req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor()); + } + rerankingQuery = new LTRScoringQuery(ltrScoringModel, + extractEFIParams(localParams), + featuresRequestedFromSameStore, threadManager); + + // Enable the feature vector caching if we are extracting features, and the features + // we requested are the same ones we are reranking with + if (featuresRequestedFromSameStore) { +rerankingQuery.setFeatureLogger( SolrQueryRequestContextUtils.getFeatureLogger(req) ); + } +}else{ + rerankingQuery = new LTRScoringQuery(null); +} + +// External features +rerankingQuery.setRequest(req); +rerankingQueries[i] = rerankingQuery; } - SolrQueryRequestContextUtils.setScoringQuery(req, scoringQuery); + SolrQueryRequestContextUtils.setScoringQuery(req, rerankingQueries); int reRankDocs = localParams.getInt(RERANK_DOCS, DEFAULT_RERANK_DOCS); if (reRankDocs <= 0) { throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, - "Must rerank at least 1 document"); +"Must rerank at least 1 document"); + } + if (rerankingQueries.length == 1) { +return new LTRQuery(rerankingQueries[0], reRankDocs); + } else { +return new LTRQuery(rerankingQueries, reRankDocs); } - - // External features - scoringQuery.setRequest(req); - - return new
[GitHub] [lucene-solr] alessandrobenedetti commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
alessandrobenedetti commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r518291805 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/response/transform/LTRFeatureLoggerTransformerFactory.java ## @@ -271,17 +287,23 @@ public void transform(SolrDocument doc, int docid) private void implTransform(SolrDocument doc, int docid, Float score) throws IOException { - Object fv = featureLogger.getFeatureVector(docid, scoringQuery, searcher); - if (fv == null) { // FV for this document was not in the cache -fv = featureLogger.makeFeatureVector( -LTRRescorer.extractFeaturesInfo( -modelWeight, -docid, -(docsWereNotReranked ? score : null), -leafContexts)); + LTRScoringQuery rerankingQuery = rerankingQueries[0]; + LTRScoringQuery.ModelWeight rerankingModelWeight = modelWeights[0]; + if (rerankingQueries.length > 1 && rerankingQueries[1].getPickedInterleavingDocIds().contains(docid)) { +rerankingQuery = rerankingQueries[1]; Review comment: Yes, I agree, I'll change that and resolve this later This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] alessandrobenedetti commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
alessandrobenedetti commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r518292659 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/LTRInterleavingRescorer.java ## @@ -0,0 +1,158 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.ltr; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.Set; + +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.search.Explanation; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.ScoreMode; +import org.apache.lucene.search.TopDocs; +import org.apache.solr.ltr.interleaving.Interleaving; +import org.apache.solr.ltr.interleaving.InterleavingResult; +import org.apache.solr.ltr.interleaving.TeamDraftInterleaving; + +import static org.apache.solr.ltr.search.LTRQParserPlugin.isOriginalRanking; + +/** + * Implements the rescoring logic. The top documents returned by solr with their + * original scores, will be processed by a {@link LTRScoringQuery} that will assign a + * new score to each document. 
The top documents will be resorted based on the + * new score. + * */ +public class LTRInterleavingRescorer extends LTRRescorer { + + LTRScoringQuery[] rerankingQueries; + Interleaving interleavingAlgorithm = new TeamDraftInterleaving(); + + public LTRInterleavingRescorer(LTRScoringQuery[] rerankingQueries) { +this.rerankingQueries = rerankingQueries; + } + + /** + * rescores the documents: + * + * @param searcher + * current IndexSearcher + * @param firstPassTopDocs + * documents to rerank; + * @param topN + * documents to return; + */ + @Override + public TopDocs rescore(IndexSearcher searcher, TopDocs firstPassTopDocs, + int topN) throws IOException { +if ((topN == 0) || (firstPassTopDocs.scoreDocs.length == 0)) { + return firstPassTopDocs; +} + +int originalRankingIndex = -1; Review comment: Perfectly splendid! I am just waiting to finalise the decision about the subclasses yes/no, and I'll proceed with this commit This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] alessandrobenedetti commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
alessandrobenedetti commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r518293629 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/interleaving/TeamDraftInterleaving.java ## @@ -0,0 +1,87 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.ltr.interleaving; + +import java.util.ArrayList; +import java.util.HashSet; +import java.util.LinkedHashSet; +import java.util.Random; +import java.util.Set; + +import org.apache.lucene.search.ScoreDoc; + +public class TeamDraftInterleaving implements Interleaving{ + public static Random RANDOM; + + static { +// We try to make things reproducible in the context of our tests by initializing the random instance +// based on the current seed +String seed = System.getProperty("tests.seed"); Review comment: I spent quite a lot of time on this, to make it compatible with the tests, if anyone else has some better solution I would be more than happy to change it, I was not super happy of what I ended with, but it was the only solution I found to be working at the time. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
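Team-draft interleaving, which the patch above implements, can be sketched roughly as follows. This is a simplified illustration under my own naming (not the patch's actual code): each round a coin flip breaks ties on whose turn it is, and each team contributes its highest-ranked document not yet picked. Reading the seed from a system property mirrors the `tests.seed` idea discussed in the comment.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class TeamDraftSketch {

    // Interleave two ranked lists of document ids. Each round, the team that
    // has picked fewer documents goes next (coin flip on a tie) and appends
    // its highest-ranked document not yet in the interleaved list.
    static List<String> interleave(List<String> a, List<String> b, Random rnd) {
        Set<String> union = new LinkedHashSet<>(a);
        union.addAll(b);
        List<String> out = new ArrayList<>();
        Set<String> picked = new HashSet<>();
        int countA = 0, countB = 0;
        while (picked.size() < union.size()) {
            boolean aTurn = countA < countB || (countA == countB && rnd.nextBoolean());
            String doc = next(aTurn ? a : b, picked);
            if (doc == null) {           // that team's list is exhausted: pull from the other
                doc = next(aTurn ? b : a, picked);
                aTurn = !aTurn;
            }
            picked.add(doc);
            out.add(doc);
            if (aTurn) countA++; else countB++;
        }
        return out;
    }

    // Highest-ranked document in `ranking` not yet picked, or null if exhausted.
    static String next(List<String> ranking, Set<String> picked) {
        for (String d : ranking) {
            if (!picked.contains(d)) {
                return d;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Seed from a system property for reproducibility, like tests.seed.
        long seed = Long.getLong("tests.seed", 42L);
        System.out.println(interleave(List.of("d1", "d2", "d3"),
                                      List.of("d3", "d4", "d1"),
                                      new Random(seed)));
    }
}
```

Whatever the coin flips, the result is a permutation of the union of the two lists, which is what makes the click attribution of team-draft interleaving unbiased between the two models.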
[GitHub] [lucene-solr] mayya-sharipova commented on a change in pull request #2063: LUCENE-9599 Make comparator aware of index sorting
mayya-sharipova commented on a change in pull request #2063: URL: https://github.com/apache/lucene-solr/pull/2063#discussion_r518316315 ## File path: lucene/core/src/java/org/apache/lucene/search/comparators/TermOrdValComparator.java ## @@ -0,0 +1,324 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.lucene.search.comparators; + +import org.apache.lucene.index.DocValues; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.search.FieldComparator; +import org.apache.lucene.search.LeafFieldComparator; +import org.apache.lucene.search.Scorable; +import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.BytesRefBuilder; + +import java.io.IOException; + +/** + * Comparator that sorts by field's natural Term sort order using ordinals. 
+ * This is functionally equivalent to + * {@link org.apache.lucene.search.comparators.TermValComparator}, + * but it first resolves the string to their relative ordinal positions + * (using the index returned by + * {@link org.apache.lucene.index.LeafReader#getSortedDocValues(String)}), + * and does most comparisons using the ordinals. + * For medium to large results, this comparator will be much faster + * than {@link org.apache.lucene.search.comparators.TermValComparator}. + * For very small result sets it may be slower. + * + * The comparator provides an iterator that can efficiently skip + * documents when search sort is done according to the index sort. + */ +public class TermOrdValComparator extends FieldComparator { + private final String field; + private final boolean reverse; + private final int[] ords; // ordinals for each slot + private final BytesRef[] values; // values for each slot + private final BytesRefBuilder[] tempBRs; + /* Which reader last copied a value into the slot. 
When + we compare two slots, we just compare-by-ord if the + readerGen is the same; else we must compare the + values (slower).*/ + private final int[] readers; + private int currentReader = -1; // index of the current reader we are on + private final int missingSortCmp; // -1 – if missing values are sorted first, 1 – if sorted last + private final int missingOrd; // which ordinal to use for a missing value + + private BytesRef topValue; + private boolean topSameReader; + private int topOrd; + + private BytesRef bottomValue; + boolean bottomSameReader; // true if current bottom slot matches the current reader + int bottomSlot = -1; // bottom slot, or -1 if queue isn't full yet + int bottomOrd; // bottom ord (same as ords[bottomSlot] once bottomSlot is set), cached for faster comparison + + protected boolean hitsThresholdReached; + + public TermOrdValComparator(int numHits, String field, boolean sortMissingLast, boolean reverse) { +this.field = field; +this.reverse = reverse; +this.ords = new int[numHits]; +this.values = new BytesRef[numHits]; +tempBRs = new BytesRefBuilder[numHits]; +readers = new int[numHits]; +if (sortMissingLast) { + missingSortCmp = 1; + missingOrd = Integer.MAX_VALUE; +} else { + missingSortCmp = -1; + missingOrd = -1; +} + } + + @Override + public int compare(int slot1, int slot2) { +if (readers[slot1] == readers[slot2]) { + return ords[slot1] - ords[slot2]; +} +final BytesRef val1 = values[slot1]; +final BytesRef val2 = values[slot2]; +if (val1 == null) { + if (val2 == null) { +return 0; + } + return missingSortCmp; +} else if (val2 == null) { + return -missingSortCmp; +} +return val1.compareTo(val2); + } + + @Override + public void setTopValue(BytesRef value) { +// null is accepted, this means the last doc of the prior search was missing this value +topValue = value; + } + + @Override + public BytesRef value(int slot) { +return values[slot]; + } + + @Override + public LeafFieldComparator getLeafComparator(LeafReaderContext context) throws 
IOException { +return new TermOrdValLeafComparator(context); + } + + @Override + public int compareValues(BytesRef
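The ord-based fast path in the `compare()` method above works because ordinals are only comparable within a single segment's term dictionary. A toy illustration of the idea (my own simplified code, not Lucene's): each segment assigns ordinals by position in its sorted dictionary, so two slots filled from the same segment compare by cheap integer math, while slots from different segments must fall back to comparing the byte values themselves.

```java
import java.util.Collections;
import java.util.List;

public class OrdCompareSketch {

    // A segment's sorted term dictionary: the ordinal of a term is simply its
    // position in sorted order within that segment.
    static int ordOf(List<String> sortedTerms, String term) {
        return Collections.binarySearch(sortedTerms, term);
    }

    // Compare two hits: by ordinal when both came from the same segment
    // (cheap int comparison), otherwise by the values themselves (slower),
    // since ordinals from different segments are not comparable.
    static int compareHits(int seg1, int ord1, String val1,
                           int seg2, int ord2, String val2) {
        if (seg1 == seg2) {
            return Integer.compare(ord1, ord2); // fast path
        }
        return val1.compareTo(val2);            // cross-segment fallback
    }
}
```

This is why the real comparator tracks which reader filled each slot (`readers[]`) and keeps the materialized `values[]` around for the cross-segment case.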
[GitHub] [lucene-solr] mayya-sharipova commented on a change in pull request #2063: LUCENE-9599 Make comparator aware of index sorting
mayya-sharipova commented on a change in pull request #2063: URL: https://github.com/apache/lucene-solr/pull/2063#discussion_r518316774 ## File path: lucene/core/src/test/org/apache/lucene/search/TestFieldSortOptimizationSkipping.java ## @@ -485,4 +491,186 @@ public void testDocSort() throws IOException { dir.close(); } + public void testNumericSortOptimizationIndexSort() throws IOException { Review comment: @msokolov Thanks for the feedback. Addressed in d2909aa
[GitHub] [lucene-solr] janhoy commented on a change in pull request #2062: LUCENE-9589 Swedish minimal stemmer
janhoy commented on a change in pull request #2062: URL: https://github.com/apache/lucene-solr/pull/2062#discussion_r518323263 ## File path: lucene/analysis/common/src/java/org/apache/lucene/analysis/sv/SwedishMinimalStemFilterFactory.java ## @@ -0,0 +1,60 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.lucene.analysis.sv; + + +import org.apache.lucene.analysis.TokenFilterFactory; +import org.apache.lucene.analysis.TokenStream; + +import java.util.Map; + +/** + * Factory for {@link SwedishMinimalStemFilter}. + * Review comment: Perhaps, but all the other zillion factories use pre, and btw we're going to get rid of the Solr XML syntax anyway and replace it with some other way of documenting options. Let's defer all of that to LUCENE-7964. PS. I found a typo in the field-type name in line 28, which I changed from `name="text_svlgtstem"` to `name="text_svminstem"`. I'll also add `@since` tags to the other classes.
[GitHub] [lucene-solr] dweiss commented on a change in pull request #2062: LUCENE-9589 Swedish minimal stemmer
dweiss commented on a change in pull request #2062: URL: https://github.com/apache/lucene-solr/pull/2062#discussion_r518352675 ## File path: lucene/analysis/common/src/java/org/apache/lucene/analysis/sv/SwedishMinimalStemFilterFactory.java ## @@ -0,0 +1,60 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.lucene.analysis.sv; + + +import org.apache.lucene.analysis.TokenFilterFactory; +import org.apache.lucene.analysis.TokenStream; + +import java.util.Map; + +/** + * Factory for {@link SwedishMinimalStemFilter}. + * Review comment: You have to use both, actually. The code tag only helps you to dodge HTML entity escaping (<>) and makes the code easier on the eyes, that's all.
[GitHub] [lucene-solr] muse-dev[bot] commented on a change in pull request #2010: SOLR-12182: Don't persist base_url in ZK as the scheme is variable, compute from node_name instead
muse-dev[bot] commented on a change in pull request #2010: URL: https://github.com/apache/lucene-solr/pull/2010#discussion_r518357932 ## File path: solr/core/src/java/org/apache/solr/cloud/ZkController.java ## @@ -1401,8 +1420,7 @@ public ZkCoreNodeProps getLeaderProps(final String collection, byte[] data = zkClient.getData( ZkStateReader.getShardLeadersPath(collection, slice), null, null, true); -ZkCoreNodeProps leaderProps = new ZkCoreNodeProps( -ZkNodeProps.load(data)); +ZkCoreNodeProps leaderProps = new ZkCoreNodeProps(ZkNodeProps.load(data)); Review comment: *THREAD_SAFETY_VIOLATION:* Unprotected write. Non-private method `ZkController.getLeaderProps(...)` indirectly writes to field `noggit.JSONParser.devNull.buf` outside of synchronization. Reporting because another access to the same memory occurs on a background thread, although this access may not.
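The bot's warning above concerns a mutable static field shared across threads (noggit's `JSONParser.devNull` scratch buffer). One common remedy for a shared scratch buffer, sketched here in isolation (this is not noggit's actual code, just an illustration of the pattern the analyzer is hinting at), is to make the buffer per-thread with `ThreadLocal`, so concurrent parsers never touch the same array:

```java
public class ScratchBufferSketch {
    // Unsafe variant: one static buffer shared by all threads. Two parsers
    // running concurrently would race on its contents -- this is the shape
    // of the THREAD_SAFETY_VIOLATION being reported.
    // static char[] devNull = new char[64];

    // Safer variant: each thread lazily gets its own scratch buffer, so
    // writes are confined to the owning thread and need no synchronization.
    private static final ThreadLocal<char[]> DEV_NULL =
            ThreadLocal.withInitial(() -> new char[64]);

    static char[] scratch() {
        return DEV_NULL.get();
    }
}
```

The trade-off is one buffer per live thread instead of one per process, which is usually acceptable for small scratch arrays like this.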
[GitHub] [lucene-solr] thelabdude commented on pull request #2010: SOLR-12182: Don't persist base_url in ZK as the scheme is variable, compute from node_name instead
thelabdude commented on pull request #2010: URL: https://github.com/apache/lucene-solr/pull/2010#issuecomment-722639840 I think this is ready for review again ... still have to create CHANGES.txt entry and update the ref guide but let's get through another round of reviews, esp. around the approach I'm taking here. Once we're in agreement on the overall design, I'll update the docs.
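SOLR-12182's idea of computing `base_url` from `node_name` relies on Solr's node-name convention: a node name looks like `host:port_context`, with `/` in the servlet context path encoded as `_` (e.g. `localhost:8983_solr`). A rough sketch of the derivation under that assumption follows; the helper name and TLS handling here are mine, and the PR's actual implementation may differ:

```java
public class BaseUrlSketch {
    // node_name convention assumed: "host:port_context", where "_" in the
    // suffix stands for "/" in the servlet context path,
    // e.g. "localhost:8983_solr" -> context "/solr".
    static String baseUrlFromNodeName(String nodeName, boolean useTls) {
        int idx = nodeName.indexOf('_');
        String hostPort = idx < 0 ? nodeName : nodeName.substring(0, idx);
        String context = idx < 0 ? ""
                : "/" + nodeName.substring(idx + 1).replace('_', '/');
        // The scheme is the variable part: it comes from cluster config
        // (urlScheme) at read time rather than being persisted in ZK.
        return (useTls ? "https://" : "http://") + hostPort + context;
    }
}
```

The point of the change is visible in the signature: because the scheme is supplied when the URL is computed, switching a cluster between http and https no longer requires rewriting state stored in ZooKeeper.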
[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #1571: SOLR-14560: Interleaving for Learning To Rank
cpoerschke commented on a change in pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r518364159 ## File path: solr/contrib/ltr/src/java/org/apache/solr/ltr/search/LTRQParserPlugin.java ## @@ -146,93 +149,114 @@ public LTRQParser(String qstr, SolrParams localParams, SolrParams params, @Override public Query parse() throws SyntaxError { // ReRanking Model - final String modelName = localParams.get(LTRQParserPlugin.MODEL); - if ((modelName == null) || modelName.isEmpty()) { + final String[] modelNames = localParams.getParams(LTRQParserPlugin.MODEL); + if ((modelNames == null) || modelNames.length==0 || modelNames[0].isEmpty()) { throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Must provide model in the request"); } - - final LTRScoringModel ltrScoringModel = mr.getModel(modelName); - if (ltrScoringModel == null) { -throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, -"cannot find " + LTRQParserPlugin.MODEL + " " + modelName); - } - - final String modelFeatureStoreName = ltrScoringModel.getFeatureStoreName(); - final boolean extractFeatures = SolrQueryRequestContextUtils.isExtractingFeatures(req); - final String fvStoreName = SolrQueryRequestContextUtils.getFvStoreName(req); - // Check if features are requested and if the model feature store and feature-transform feature store are the same - final boolean featuresRequestedFromSameStore = (modelFeatureStoreName.equals(fvStoreName) || fvStoreName == null) ? 
extractFeatures:false; - if (threadManager != null) { - threadManager.setExecutor(req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor()); - } - final LTRScoringQuery scoringQuery = new LTRScoringQuery(ltrScoringModel, - extractEFIParams(localParams), - featuresRequestedFromSameStore, threadManager); - - // Enable the feature vector caching if we are extracting features, and the features - // we requested are the same ones we are reranking with - if (featuresRequestedFromSameStore) { -scoringQuery.setFeatureLogger( SolrQueryRequestContextUtils.getFeatureLogger(req) ); + + LTRScoringQuery[] rerankingQueries = new LTRScoringQuery[modelNames.length]; + for (int i = 0; i < modelNames.length; i++) { +final LTRScoringQuery rerankingQuery; +if (!ORIGINAL_RANKING.equals(modelNames[i])) { + final LTRScoringModel ltrScoringModel = mr.getModel(modelNames[i]); + if (ltrScoringModel == null) { +throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, +"cannot find " + LTRQParserPlugin.MODEL + " " + modelNames[i]); + } + final String modelFeatureStoreName = ltrScoringModel.getFeatureStoreName(); + final boolean extractFeatures = SolrQueryRequestContextUtils.isExtractingFeatures(req); + final String fvStoreName = SolrQueryRequestContextUtils.getFvStoreName(req);// Check if features are requested and if the model feature store and feature-transform feature store are the same + final boolean featuresRequestedFromSameStore = (modelFeatureStoreName.equals(fvStoreName) || fvStoreName == null) ? 
extractFeatures : false; + if (threadManager != null) { + threadManager.setExecutor(req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor()); + } + rerankingQuery = new LTRScoringQuery(ltrScoringModel, + extractEFIParams(localParams), + featuresRequestedFromSameStore, threadManager); + + // Enable the feature vector caching if we are extracting features, and the features + // we requested are the same ones we are reranking with + if (featuresRequestedFromSameStore) { +rerankingQuery.setFeatureLogger( SolrQueryRequestContextUtils.getFeatureLogger(req) ); + } +}else{ + rerankingQuery = new LTRScoringQuery(null); +} + +// External features +rerankingQuery.setRequest(req); +rerankingQueries[i] = rerankingQuery; } - SolrQueryRequestContextUtils.setScoringQuery(req, scoringQuery); + SolrQueryRequestContextUtils.setScoringQuery(req, rerankingQueries); int reRankDocs = localParams.getInt(RERANK_DOCS, DEFAULT_RERANK_DOCS); if (reRankDocs <= 0) { throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, - "Must rerank at least 1 document"); +"Must rerank at least 1 document"); + } + if (rerankingQueries.length == 1) { +return new LTRQuery(rerankingQueries[0], reRankDocs); + } else { +return new LTRQuery(rerankingQueries, reRankDocs); } - - // External features - scoringQuery.setRequest(req); - - return new LTRQuery
[GitHub] [lucene-solr] msokolov commented on a change in pull request #2063: LUCENE-9599 Make comparator aware of index sorting
msokolov commented on a change in pull request #2063: URL: https://github.com/apache/lucene-solr/pull/2063#discussion_r518351139 ## File path: lucene/core/src/java/org/apache/lucene/search/TopFieldCollector.java ## @@ -139,6 +140,7 @@ void collectAnyHit(int doc, int hitsCollected) throws IOException { @Override public void setScorer(Scorable scorer) throws IOException { + if (canEarlyTerminate) comparator.usesIndexSort(); Review comment: Let's use curly braces for every if statement; I think that is the prevailing style here at least ## File path: lucene/core/src/java/org/apache/lucene/search/TopFieldCollector.java ## @@ -100,6 +100,7 @@ boolean thresholdCheck(int doc) throws IOException { // since docs are visited in doc Id order, if compare is 0, it means // this document is largest than anything else in the queue, and // therefore not competitive. +// TODO: remove early termination in TopFieldCollector, as this should be managed by comparators Review comment: so, with this change does this essentially become a no-op? Because we will already early-terminate due to the comparator? 
## File path: lucene/core/src/test/org/apache/lucene/search/TestFieldSortOptimizationSkipping.java ## @@ -485,4 +491,249 @@ public void testDocSort() throws IOException { dir.close(); } + public void testNumericSortOptimizationIndexSort() throws IOException { +final Directory dir = newDirectory(); +IndexWriterConfig iwc = new IndexWriterConfig(new MockAnalyzer(random())); +boolean reverseSort = randomBoolean(); +final SortField sortField = new SortField("field1", SortField.Type.LONG, reverseSort); +Sort indexSort = new Sort(sortField); +iwc.setIndexSort(indexSort); +final IndexWriter writer = new IndexWriter(dir, iwc); + +final int numDocs = atLeast(50); +int[] sortedValues = initializeNumericValues(numDocs, reverseSort, 0); +int[] randomIdxs = randomIdxs(numDocs); + +for (int i = 0; i < numDocs; i++) { + final Document doc = new Document(); + if (sortedValues[randomIdxs[i]] > 0) { +doc.add(new NumericDocValuesField("field1", sortedValues[randomIdxs[i]])); + } + writer.addDocument(doc); + if (i == 30) { +writer.flush(); + } +} +final IndexReader reader = DirectoryReader.open(writer); +writer.close(); + +IndexSearcher searcher = newSearcher(reader); +final int numHits = randomIntBetween(1, numDocs - 10); +final int totalHitsThreshold = randomIntBetween(1, numDocs - 10); +{ Review comment: why the extra block? ## File path: lucene/core/src/test/org/apache/lucene/search/TestFieldSortOptimizationSkipping.java ## @@ -485,4 +491,186 @@ public void testDocSort() throws IOException { dir.close(); } + public void testNumericSortOptimizationIndexSort() throws IOException { Review comment: thanks! 
## File path: lucene/core/src/test/org/apache/lucene/search/TestFieldSortOptimizationSkipping.java ## @@ -485,4 +491,249 @@ public void testDocSort() throws IOException { dir.close(); } + public void testNumericSortOptimizationIndexSort() throws IOException { +final Directory dir = newDirectory(); +IndexWriterConfig iwc = new IndexWriterConfig(new MockAnalyzer(random())); +boolean reverseSort = randomBoolean(); +final SortField sortField = new SortField("field1", SortField.Type.LONG, reverseSort); +Sort indexSort = new Sort(sortField); +iwc.setIndexSort(indexSort); +final IndexWriter writer = new IndexWriter(dir, iwc); + +final int numDocs = atLeast(50); +int[] sortedValues = initializeNumericValues(numDocs, reverseSort, 0); +int[] randomIdxs = randomIdxs(numDocs); + +for (int i = 0; i < numDocs; i++) { + final Document doc = new Document(); + if (sortedValues[randomIdxs[i]] > 0) { +doc.add(new NumericDocValuesField("field1", sortedValues[randomIdxs[i]])); + } + writer.addDocument(doc); + if (i == 30) { +writer.flush(); + } +} +final IndexReader reader = DirectoryReader.open(writer); +writer.close(); Review comment: If we use try-with-resources, we won't need the close() and we will safely close in case of unexpected failures ## File path: lucene/core/src/java/org/apache/lucene/search/FieldComparator.java ## @@ -387,125 +346,225 @@ protected SortedDocValues getSortedDocValues(LeafReaderContext context, String f @Override public LeafFieldComparator getLeafComparator(LeafReaderContext context) throws IOException { - termsIndex = getSortedDocValues(context, field); - currentReaderGen++; + return new TermOrdValLeafComparator(context); +} - if (topValue != null) { -// Recompute topOrd/SameReader -int ord = termsIndex.lookupTerm(topValue);
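The try-with-resources suggestion in the review above is the standard pattern for `Closeable` Lucene objects such as `Directory` and `IndexReader`. A self-contained sketch with a stand-in resource (so it runs without Lucene on the classpath) shows the behavior the reviewer wants: resources declared in the header are closed automatically, in reverse order, even if the body throws.

```java
public class TryWithResourcesSketch {
    static final StringBuilder LOG = new StringBuilder();

    // Stand-in for a Closeable like Directory or IndexReader.
    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) {
            this.name = name;
            LOG.append("open:").append(name).append(' ');
        }
        @Override public void close() {
            LOG.append("close:").append(name).append(' ');
        }
    }

    // No explicit close() calls: both resources are released automatically
    // when the block exits, including on an unexpected exception -- which is
    // exactly why the reviewer prefers it over a trailing writer.close().
    static void run() {
        try (Resource dir = new Resource("dir");
             Resource reader = new Resource("reader")) {
            LOG.append("work ");
        }
    }
}
```

In the test being reviewed, the same shape would wrap `newDirectory()`, the `IndexWriter`, and the `DirectoryReader`, removing the manual `writer.close()`/`reader.close()`/`dir.close()` calls.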
[jira] [Commented] (SOLR-9188) BlockUnknown property makes inter-node communication impossible
[ https://issues.apache.org/jira/browse/SOLR-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17227004#comment-17227004 ] Jan Høydahl commented on SOLR-9188: --- If you believe you have found a similar bug in 8.6, then please first discuss it on the solr-user@ list, and if it looks like a new bug or a regression, then we'll file a *new* Jira issue for it, with proper documentation, stack traces and reproduction steps. > BlockUnknown property makes inter-node communication impossible > --- > > Key: SOLR-9188 > URL: https://issues.apache.org/jira/browse/SOLR-9188 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 6.0 >Reporter: Piotr Tempes >Assignee: Noble Paul >Priority: Critical > Labels: BasicAuth, Security > Fix For: 6.2.1, 6.3, 7.0 > > Attachments: SOLR-9188.patch, solr9188-errorlog.txt > > > When I setup my solr cloud without blockUnknown property it works as > expected. When I want to block non authenticated requests I get following > stacktrace during startup (see attached file). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14928) Remove Overseer ClusterStateUpdater
[ https://issues.apache.org/jira/browse/SOLR-14928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17227087#comment-17227087 ] Ilan Ginzburg commented on SOLR-14928: -- I pushed a first [commit|https://github.com/murblanc/lucene-solr/commit/18339ce837a5bef0aec780842dee84257cc6d713] to [https://github.com/murblanc/lucene-solr/tree/SOLR-14928_ClusterStateUpdater] to share the general approach I'm taking here (this is really really far from a PR at this stage). I use the [implementation notes section|https://docs.google.com/document/d/1u4QHsIHuIxlglIW6hekYlXGNOP0HjLGVX5N6inkj6Ok/edit#heading=h.33zr975cdvb4] of the removing overseer doc to explicit (essentially to myself) what I'm doing. Might help for looking at the code. As usual, things are more complex than anticipated. [~gezapeti] FYI since you expressed interest. > Remove Overseer ClusterStateUpdater > --- > > Key: SOLR-14928 > URL: https://issues.apache.org/jira/browse/SOLR-14928 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Ilan Ginzburg >Assignee: Ilan Ginzburg >Priority: Major > Labels: cluster, collection-api, overseer > > Remove the Overseer {{ClusterStateUpdater}} thread and associated Zookeeper > queue at {{<_chroot_>/overseer/queue}}. > Change cluster state updates so that each (Collection API) command execution > does the update directly in Zookeeper using optimistic locking (Compare and > Swap on the {{state.json}} Zookeeper files). > Following this change cluster state updates would still be happening only > from the Overseer node (that's where Collection API commands are executing), > but the code will be ready for distribution once such commands can be > executed by any node (other work done in the context of parent task > SOLR-14927). 
> See the [Cluster State > Updater|https://docs.google.com/document/d/1u4QHsIHuIxlglIW6hekYlXGNOP0HjLGVX5N6inkj6Ok/edit#heading=h.ymtfm3p518c] > section in the Removing Overseer doc.
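The optimistic locking described in SOLR-14928 (compare-and-swap on the `state.json` version) can be illustrated with an in-memory stand-in for ZooKeeper's versioned `setData`. With real ZooKeeper the loop would read the data with its `Stat` version, pass that version to `setData`, and retry on `BadVersionException`; this sketch simulates that contract, it is not the actual patch:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

public class CasStateUpdateSketch {
    // Versioned value, like a znode's (data, version) pair.
    record Versioned(String data, int version) {}

    static final AtomicReference<Versioned> STATE =
            new AtomicReference<>(new Versioned("{}", 0));

    // Conditional write: succeeds only if the caller read the latest version,
    // mimicking ZooKeeper's setData(path, data, expectedVersion).
    static boolean setData(String newData, int expectedVersion) {
        Versioned cur = STATE.get();
        if (cur.version() != expectedVersion) {
            return false; // analogous to BadVersionException
        }
        return STATE.compareAndSet(cur, new Versioned(newData, cur.version() + 1));
    }

    // Read-modify-write with retry: the shape of a cluster-state update once
    // each command writes state.json directly instead of queueing to Overseer.
    static void update(UnaryOperator<String> mutation) {
        while (true) {
            Versioned cur = STATE.get();               // read state.json + version
            String next = mutation.apply(cur.data());  // apply the command locally
            if (setData(next, cur.version())) {
                return;                                // CAS succeeded
            }
            // else: someone else updated state.json first -- re-read and retry
        }
    }
}
```

The retry loop is what makes the update safe without a single serializing Overseer thread: concurrent writers can collide, but only one CAS wins per version, and losers recompute against the fresh state.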
[GitHub] [lucene-solr] noblepaul opened a new pull request #2065: SOLR-14977 : ContainerPlugins should be configurable
noblepaul opened a new pull request #2065: URL: https://github.com/apache/lucene-solr/pull/2065