[GitHub] [lucene-solr] iverase commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
iverase commented on pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-732735040 @jpountz @dweiss I tried again to wrap the IndexOutput / IndexInput, unsuccessfully. I think the reason is that in some cases we use temporary buffers via `ByteBuffersDataOutput#newResettableInstance`, which I cannot wrap as the class is final. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
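Since `ByteBuffersDataOutput` is final and cannot be subclassed, the alternative discussed in this thread is to reverse the byte order at a single wrapper or call-site boundary. A minimal, self-contained sketch of that idea using plain JDK types (illustrative only, not the actual Lucene `DataOutput`/`IndexInput` API):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianBoundarySketch {
    // Write an int in little-endian order at the boundary; callers keep
    // passing plain ints and never see the byte swap.
    static void writeIntLE(ByteArrayOutputStream sink, int v) {
        byte[] bytes = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(v).array();
        sink.write(bytes, 0, 4);
    }

    // Matching read: decode the same 4 bytes as little-endian.
    static int readIntLE(byte[] src, int off) {
        return ByteBuffer.wrap(src, off, 4).order(ByteOrder.LITTLE_ENDIAN).getInt();
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeIntLE(out, 0x01020304);
        byte[] b = out.toByteArray();
        // little-endian layout: least significant byte first
        if (b[0] != 0x04 || b[3] != 0x01) throw new AssertionError();
        if (readIntLE(b, 0) != 0x01020304) throw new AssertionError();
    }
}
```

The point of the wrapper approach is that the swap happens in exactly one place; the difficulty reported above is that a final class blocks inserting such a boundary by subclassing.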
[GitHub] [lucene-solr] dweiss commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
dweiss commented on pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-732744204 Thanks @iverase ! I admit I grew up with big endian (assembly on M68k) and little endian always confused the hell out of me looking at memory hex dumps... but even my personal bias aside, the patch you created scatters those reversing static method calls all over the place - this seems counterintuitive and against what the goal of having little endian was (which is less code, fewer ops on LE architectures)? I may be naive, but maybe if all DataOutput classes accepted a ByteOrder (including ByteBuffersDataOutput), then the rest of the code could live with either byte order transparently on those paired writeInts/readInts, etc.? If these methods are not properly paired (like you pointed out is the case), then I'd try to fix those unpaired optimizations first... Otherwise it'll be mind-bending to follow what the code actually does.
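The suggestion above — letting the output classes take a `ByteOrder` so paired reads and writes stay transparent — can be sketched with plain JDK types (names here are illustrative, not a proposed Lucene API):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class OrderAwareCodec {
    // Encode a long using whichever byte order the output was configured with.
    static byte[] encodeLong(long v, ByteOrder order) {
        return ByteBuffer.allocate(8).order(order).putLong(v).array();
    }

    // Paired decode: as long as both sides agree on the order, callers
    // never need scattered reverse calls.
    static long decodeLong(byte[] b, ByteOrder order) {
        return ByteBuffer.wrap(b).order(order).getLong();
    }

    public static void main(String[] args) {
        long v = 0x0102030405060708L;
        for (ByteOrder order : new ByteOrder[]{ByteOrder.BIG_ENDIAN, ByteOrder.LITTLE_ENDIAN}) {
            // Round trip is order-agnostic as long as the order is paired.
            if (decodeLong(encodeLong(v, order), order) != v) throw new AssertionError();
        }
    }
}
```

The design point being argued: a configured-order codec keeps the swap invisible, whereas unpaired reads and writes force explicit reversal at each mismatched call site.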
[GitHub] [lucene-solr] iverase commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
iverase commented on pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-732765973 @dweiss This should be only a temporary situation. The idea is to create new codecs that would not wrap those calls. The current codecs will be moved to backward codecs and only be used for indexes created before 9.0.0.
[GitHub] [lucene-solr] alessandrobenedetti merged pull request #2096: SOLR-15015: added support to parametric Interleaving algorithm
alessandrobenedetti merged pull request #2096: URL: https://github.com/apache/lucene-solr/pull/2096
[GitHub] [lucene-solr] dweiss commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
dweiss commented on pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-732797761 But these methods are all over classes that are reused across codecs - not just codec-specific ones (CodecUtil, etc.)... once you commit this in, something tells me they'll remain in the code for a long time...
[jira] [Commented] (SOLR-15015) Add support for Interleaving Algorithm parameter in Learning To Rank
[ https://issues.apache.org/jira/browse/SOLR-15015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17238013#comment-17238013 ] ASF subversion and git services commented on SOLR-15015: Commit 4d05e72eba3cc83717754e6ddaec014a7782fb20 in lucene-solr's branch refs/heads/master from Alessandro Benedetti [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4d05e72 ] [SOLR-15015] added support to parametric Interleaving algorithm + tests (#2096) > Add support for Interleaving Algorithm parameter in Learning To Rank > Key: SOLR-15015 > URL: https://issues.apache.org/jira/browse/SOLR-15015 > Project: Solr > Issue Type: Task > Security Level: Public (Default Security Level. Issues are Public) > Components: contrib - LTR > Reporter: Alessandro Benedetti > Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > Interleaving was contributed with SOLR-14560 and it currently supports just one algorithm (Team Draft). > To facilitate contributions of new algorithms, the scope of this issue is to support a new parameter: 'interleavingAlgorithm' (tentative). > The default value will be Team Draft interleaving. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-15015) Add support for Interleaving Algorithm parameter in Learning To Rank
[ https://issues.apache.org/jira/browse/SOLR-15015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17238017#comment-17238017 ] ASF subversion and git services commented on SOLR-15015: Commit ca040402d9470969d7f8fe81c5bf4125e9344cde in lucene-solr's branch refs/heads/master from Alessandro Benedetti [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ca04040 ] SOLR-15015: added support to parametric Interleaving algorithm (#2096)
[jira] [Commented] (SOLR-15015) Add support for Interleaving Algorithm parameter in Learning To Rank
[ https://issues.apache.org/jira/browse/SOLR-15015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17238023#comment-17238023 ] ASF subversion and git services commented on SOLR-15015: Commit 6dbb4d5a6413f7cf0b81af3a988699a1d4a369ef in lucene-solr's branch refs/heads/branch_8x from Alessandro Benedetti [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6dbb4d5 ] SOLR-15015: added support to parametric Interleaving algorithm (#2096) (cherry picked from commit ca040402d9470969d7f8fe81c5bf4125e9344cde)
[jira] [Updated] (SOLR-15015) Add support for Interleaving Algorithm parameter in Learning To Rank
[ https://issues.apache.org/jira/browse/SOLR-15015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alessandro Benedetti updated SOLR-15015: Fix Version/s: 8.8, master (9.0)
[jira] [Resolved] (SOLR-15015) Add support for Interleaving Algorithm parameter in Learning To Rank
[ https://issues.apache.org/jira/browse/SOLR-15015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alessandro Benedetti resolved SOLR-15015. - Resolution: Fixed Merged in master / 8.x branch
[GitHub] [lucene-solr] jpountz commented on a change in pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
jpountz commented on a change in pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#discussion_r529369260 ## File path: lucene/core/src/java/org/apache/lucene/codecs/CodecUtil.java ## @@ -570,6 +571,23 @@ static void writeCRC(IndexOutput output) throws IOException { if ((value & 0xL) != 0) { throw new IllegalStateException("Illegal CRC-32 checksum: " + value + " (resource=" + output + ")"); } -output.writeLong(value); +writeChecksum(output, value); } + + private static int readVersion(DataInput in) throws IOException { +return EndiannessReverserUtil.readInt(in); + } + + private static void writeVersion(DataOutput out, int version) throws IOException { +EndiannessReverserUtil.writeInt(out, version); + } + + private static long readChecksum(DataInput in) throws IOException { +return EndiannessReverserUtil.readLong(in); + } + + private static void writeChecksum(DataOutput out, long checksum) throws IOException { +EndiannessReverserUtil.writeLong(out, checksum); + } Review comment: My mental model for `EndiannessReverserUtil` is that after this PR is merged, we should look into no longer using it in current codecs. So maybe for CodecUtil we should avoid using it and directly do e.g. `long foo = Long.reverseBytes(in.readLong())`. ## File path: lucene/core/src/java/org/apache/lucene/search/SortedNumericSortField.java ## @@ -106,20 +107,21 @@ public Provider() { @Override public SortField readSortField(DataInput in) throws IOException { Review comment: likewise here ## File path: lucene/core/src/java/org/apache/lucene/index/SegmentInfos.java ## @@ -303,34 +306,37 @@ public static final SegmentInfos readCommit(Directory directory, String segmentF } /** Read the commit from the provided {@link ChecksumIndexInput}. 
*/ - public static final SegmentInfos readCommit(Directory directory, ChecksumIndexInput input, long generation) throws IOException { + public static final SegmentInfos readCommit(Directory directory, ChecksumIndexInput inputCodec, long generation) throws IOException { Throwable priorE = null; int format = -1; try { // NOTE: as long as we want to throw indexformattooold (vs corruptindexexception), we need // to read the magic ourselves. - int magic = input.readInt(); + int magic = inputCodec.readInt(); if (magic != CodecUtil.CODEC_MAGIC) { -throw new IndexFormatTooOldException(input, magic, CodecUtil.CODEC_MAGIC, CodecUtil.CODEC_MAGIC); +throw new IndexFormatTooOldException(inputCodec, magic, CodecUtil.CODEC_MAGIC, CodecUtil.CODEC_MAGIC); } - format = CodecUtil.checkHeaderNoMagic(input, "segments", VERSION_70, VERSION_CURRENT); + format = CodecUtil.checkHeaderNoMagic(inputCodec, "segments", VERSION_70, VERSION_CURRENT); byte id[] = new byte[StringHelper.ID_LENGTH]; - input.readBytes(id, 0, id.length); - CodecUtil.checkIndexHeaderSuffix(input, Long.toString(generation, Character.MAX_RADIX)); + inputCodec.readBytes(id, 0, id.length); + CodecUtil.checkIndexHeaderSuffix(inputCodec, Long.toString(generation, Character.MAX_RADIX)); - Version luceneVersion = Version.fromBits(input.readVInt(), input.readVInt(), input.readVInt()); - int indexCreatedVersion = input.readVInt(); + Version luceneVersion = Version.fromBits(inputCodec.readVInt(), inputCodec.readVInt(), inputCodec.readVInt()); + int indexCreatedVersion = inputCodec.readVInt(); if (luceneVersion.major < indexCreatedVersion) { throw new CorruptIndexException("Creation version [" + indexCreatedVersion -+ ".x] can't be greater than the version that wrote the segment infos: [" + luceneVersion + "]" , input); ++ ".x] can't be greater than the version that wrote the segment infos: [" + luceneVersion + "]" , inputCodec); } if (indexCreatedVersion < Version.LATEST.major - 1) { -throw new IndexFormatTooOldException(input, 
"This index was initially created with Lucene " +throw new IndexFormatTooOldException(inputCodec, "This index was initially created with Lucene " + indexCreatedVersion + ".x while the current version is " + Version.LATEST + " and Lucene only supports reading the current and previous major versions."); } + // Wrap IndexInput for Big endian indexes + final DataInput input = format < VERSION_90 ? new EndiannessReverserIndexInput(inputCodec) : inputCodec; Review comment: maybe move the below logic to a method so that we could do something like `return parseSegmentInfos( format < VERSION_90 ? new EndiannessReverserIndexInput(input) : input);` and avoid renaming `input` to `inputCodec` above. It would also reduce chances of mistakes as `input` and `inputCodec` would never be available at the
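The `Long.reverseBytes(in.readLong())` idiom suggested in the review above can be demonstrated with a plain big-endian `DataInputStream` standing in for a pre-9.0 input (illustrative stand-in only, not Lucene's `DataInput`):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class ReverseAtCallSite {
    // Read a little-endian long from a big-endian reader by reversing
    // the bytes once, directly at the call site.
    static long readLongLE(DataInputStream in) throws IOException {
        return Long.reverseBytes(in.readLong());
    }

    public static void main(String[] args) throws IOException {
        // Little-endian encoding of 0x0102030405060708L: LSB (0x08) first.
        byte[] le = {0x08, 0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01};
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(le));
        if (readLongLE(in) != 0x0102030405060708L) throw new AssertionError();
    }
}
```

The trade-off under discussion: this keeps the reversal visible and local in codec-utility code, at the cost of one explicit call per mismatched read, instead of hiding it behind a wrapper utility.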
[GitHub] [lucene-solr] jpountz commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
jpountz commented on pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-732827520 > But these methods are all over classes that are reused across codecs - not just codec-specific ones (CodecUtil, etc.)... once you commit this in something tells me they'll remain in the code for a long time... We will need to retain this until Lucene 11 indeed, though hopefully we should be able to move the vast majority of the code that swaps the byte order to `lucene/backward-codecs` in the near future as we migrate our file formats to the little endian byte order?
[GitHub] [lucene-solr] dweiss commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
dweiss commented on pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-732846144 Sure, Adrien. Guys, don't get me wrong - I'm not vetoing the change, I'm just saying it looks terrible with all those calls all over the place. :) But I also can't sit on this now as my backlog is only increasing and Ignacio worked on a large patch to get this done so my complaints can be safely ignored... I'll just live with it.
[GitHub] [lucene-solr] s1monw merged pull request #2085: LUCENE-9508: Fix DocumentsWriter to block threads until unstalled
s1monw merged pull request #2085: URL: https://github.com/apache/lucene-solr/pull/2085
[jira] [Commented] (LUCENE-9508) DocumentsWriter doesn't check for BlockedFlushes in stall mode``
[ https://issues.apache.org/jira/browse/LUCENE-9508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17238072#comment-17238072 ] ASF subversion and git services commented on LUCENE-9508: - Commit c71f119e9ac0a179b0f2d1741306bb0046e12dac in lucene-solr's branch refs/heads/master from Simon Willnauer [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c71f119 ] LUCENE-9508: Fix DocumentsWriter to block threads until unstalled (#2085) DWStallControl expects the caller to loop on top of the wait call to make progress with flushing if the DW is stalled. This logic wasn't applied, such that DW only stalled for one second and then released the indexing thread. This can cause OOM if, for instance, during a full flush one DWPT gets stuck and other threads keep on indexing. > DocumentsWriter doesn't check for BlockedFlushes in stall mode`` > Key: LUCENE-9508 > URL: https://issues.apache.org/jira/browse/LUCENE-9508 > Project: Lucene - Core > Issue Type: Bug > Components: core/index > Affects Versions: 8.5.1 > Reporter: Sorabh Hamirwasia > Priority: Major > Labels: IndexWriter > Time Spent: 1h > Remaining Estimate: 0h > > Hi, > I was investigating an issue where the memory usage by a single Lucene IndexWriter went up to ~23GB. Lucene has a concept of stalling in case the memory used by an index breaches the 2 x ramBuffer limit (10% of JVM heap, in this case ~3GB), so ideally memory usage should not go above that limit. I looked into the heap dump and found that when the fullFlush thread enters the *markForFullFlush* method, it tries to take the lock on the ThreadStates of all the DWPT threads sequentially. If the lock on one of the ThreadStates is blocked, it will block indefinitely. This is what happened in my case, where one of the DWPT threads was stuck in the indexing process. Because of this, the fullFlush thread was unable to populate the flush queue even though the stall mode was detected. This caused the new indexing request that came in on an indexing thread to continue after sleeping for a second, and carry on with indexing. The **preUpdate()** method looks for the stalled case and checks whether there are any pending flushes (based on the flush queue); if not, it sleeps and continues. > Questions: > 1) Should **preUpdate** look at the blocked-flushes information as well, instead of just the flush queue? > 2) Should the fullFlush thread wait indefinitely for the lock on ThreadStates? A single blocked writing thread can block the full flush here.
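The fix described in the commit message above — looping around the timed wait instead of waiting once and proceeding — can be sketched as a simplified stall control (hypothetical names, not the actual `DocumentsWriterStallControl`):

```java
public class StallControlSketch {
    private boolean stalled;

    synchronized void updateStalled(boolean s) {
        stalled = s;
        notifyAll();
    }

    // The bug was effectively a single timed wait followed by a return:
    // the indexing thread resumed after one second even while still stalled.
    // The fix re-checks the condition in a loop until the stall is released.
    synchronized void waitIfStalled() throws InterruptedException {
        while (stalled) {
            wait(1000); // timed wait, but looped: spurious/timeout wakeups re-check
        }
    }

    public static void main(String[] args) throws InterruptedException {
        StallControlSketch sc = new StallControlSketch();
        sc.updateStalled(true);
        Thread releaser = new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            sc.updateStalled(false);
        });
        releaser.start();
        sc.waitIfStalled(); // returns only once actually unstalled
        releaser.join();
    }
}
```

This is the standard condition-wait pattern: the caller owns the loop, so a timeout or spurious wakeup never lets a still-stalled thread slip back into indexing.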
[jira] [Resolved] (LUCENE-9508) DocumentsWriter doesn't check for BlockedFlushes in stall mode``
[ https://issues.apache.org/jira/browse/LUCENE-9508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Willnauer resolved LUCENE-9508. - Fix Version/s: 8.8, master (9.0) Lucene Fields: New, Patch Available (was: New) Resolution: Fixed
[jira] [Commented] (LUCENE-9508) DocumentsWriter doesn't check for BlockedFlushes in stall mode``
[ https://issues.apache.org/jira/browse/LUCENE-9508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17238077#comment-17238077 ] ASF subversion and git services commented on LUCENE-9508: - Commit 9057bb927bef9b11ab70c612b3efa60fd4a0477d in lucene-solr's branch refs/heads/branch_8x from Simon Willnauer [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9057bb9 ] LUCENE-9508: Fix DocumentsWriter to block threads until unstalled (#2085) DWStallControl expects the caller to loop on top of the wait call to make progress with flushing if the DW is stalled. This logic wasn't applied, such that DW only stalled for one second and then released the indexing thread. This can cause OOM if, for instance, during a full flush one DWPT gets stuck and other threads keep on indexing.
[GitHub] [lucene-solr] iverase commented on a change in pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
iverase commented on a change in pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#discussion_r529536245 ## File path: lucene/core/src/java/org/apache/lucene/codecs/CodecUtil.java ## @@ -570,6 +571,23 @@ static void writeCRC(IndexOutput output) throws IOException { if ((value & 0xL) != 0) { throw new IllegalStateException("Illegal CRC-32 checksum: " + value + " (resource=" + output + ")"); } -output.writeLong(value); +writeChecksum(output, value); } + + private static int readVersion(DataInput in) throws IOException { +return EndiannessReverserUtil.readInt(in); + } + + private static void writeVersion(DataOutput out, int version) throws IOException { +EndiannessReverserUtil.writeInt(out, version); + } + + private static long readChecksum(DataInput in) throws IOException { +return EndiannessReverserUtil.readLong(in); + } + + private static void writeChecksum(DataOutput out, long checksum) throws IOException { +EndiannessReverserUtil.writeLong(out, checksum); + } Review comment: yes, that makes sense.
[GitHub] [lucene-solr] iverase commented on a change in pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
iverase commented on a change in pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#discussion_r529536955 ## File path: lucene/core/src/java/org/apache/lucene/search/SortField.java ## @@ -148,12 +149,12 @@ public Provider() { @Override public SortField readSortField(DataInput in) throws IOException { Review comment: I introduced a wrapper for the DataOutput as well, and now these classes remain unchanged.
[GitHub] [lucene-solr] iverase commented on a change in pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
iverase commented on a change in pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#discussion_r529537082 ## File path: lucene/core/src/java/org/apache/lucene/search/SortedNumericSortField.java ## @@ -106,20 +107,21 @@ public Provider() { @Override public SortField readSortField(DataInput in) throws IOException { Review comment: Same as above
[GitHub] [lucene-solr] iverase commented on a change in pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
iverase commented on a change in pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#discussion_r529537298 ## File path: lucene/core/src/java/org/apache/lucene/store/DataInput.java ## @@ -155,26 +160,25 @@ public int readZInt() throws IOException { * @see DataOutput#writeLong(long) */ public long readLong() throws IOException { -return (((long)readInt()) << 32) | (readInt() & 0xL); +final byte b1 = readByte(); +final byte b2 = readByte(); +final byte b3 = readByte(); +final byte b4 = readByte(); +final byte b5 = readByte(); +final byte b6 = readByte(); +final byte b7 = readByte(); +final byte b8 = readByte(); +return ((b8 & 0xFFL) << 56) | (b7 & 0xFFL) << 48 | (b6 & 0xFFL) << 40 | (b5 & 0xFFL) << 32 +| (b4 & 0xFFL) << 24 | (b3 & 0xFFL) << 16 | (b2 & 0xFFL) << 8 | (b1 & 0xFFL); } /** - * Read a specified number of longs with the little endian byte order. - * This method can be used to read longs whose bytes have been - * {@link Long#reverseBytes reversed} at write time: - * - * for (long l : longs) { - * output.writeLong(Long.reverseBytes(l)); - * } - * - * @lucene.experimental Review comment: yes
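The byte-by-byte assembly in the new `readLong` above (first byte least significant) is equivalent to a little-endian `ByteBuffer` decode. A standalone check of the same shift-and-or expression:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ReadLongLECheck {
    // Same assembly as the diff: b[0] is the least significant byte,
    // b[7] the most significant.
    static long readLongLE(byte[] b) {
        return ((b[7] & 0xFFL) << 56) | (b[6] & 0xFFL) << 48 | (b[5] & 0xFFL) << 40
            | (b[4] & 0xFFL) << 32 | (b[3] & 0xFFL) << 24 | (b[2] & 0xFFL) << 16
            | (b[1] & 0xFFL) << 8 | (b[0] & 0xFFL);
    }

    public static void main(String[] args) {
        byte[] b = {1, 2, 3, 4, 5, 6, 7, 8};
        long viaBuffer = ByteBuffer.wrap(b).order(ByteOrder.LITTLE_ENDIAN).getLong();
        if (readLongLE(b) != viaBuffer) throw new AssertionError();
        if (readLongLE(b) != 0x0807060504030201L) throw new AssertionError();
    }
}
```

The `& 0xFFL` masks matter: without them, sign extension of negative bytes would corrupt the high bits of the result.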
[GitHub] [lucene-solr] iverase commented on a change in pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
iverase commented on a change in pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#discussion_r529539404 ## File path: lucene/replicator/src/java/org/apache/lucene/replicator/nrt/CopyOneFile.java ## @@ -96,11 +96,12 @@ public boolean visit() throws IOException { // Paranoia: make sure the primary node is not smoking crack, by somehow sending us an already corrupted file whose checksum (in its // footer) disagrees with reality: long actualChecksumIn = in.readLong(); -if (actualChecksumIn != checksum) { +// CheckSum is written in Big Endian so we need to reverse bytes +if (actualChecksumIn != Long.reverseBytes(checksum)) { Review comment: yes, we just need to make sure we reverse the bytes when writing the checksum again.
[GitHub] [lucene-solr] iverase commented on a change in pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
iverase commented on a change in pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#discussion_r529539703

File path: lucene/core/src/java/org/apache/lucene/util/bkd/OfflinePointWriter.java

@@ -54,7 +54,10 @@ public void append(byte[] packedValue, int docID) throws IOException {
   assert closed == false : "Point writer is already closed";
   assert packedValue.length == config.packedBytesLength : "[packedValue] must have length [" + config.packedBytesLength + "] but was [" + packedValue.length + "]";
   out.writeBytes(packedValue, 0, packedValue.length);
-  out.writeInt(docID);
+  out.writeByte((byte) (docID >> 24));
+  out.writeByte((byte) (docID >> 16));
+  out.writeByte((byte) (docID >> 8));
+  out.writeByte((byte) (docID >> 0));

Review comment: done
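The four `writeByte` calls above keep the docID in big-endian order (most significant byte first) regardless of what byte order the output's own `writeInt` uses. A standalone sketch of the same trick over a plain `ByteArrayOutputStream` (hypothetical names, not the Lucene classes):

```java
import java.io.ByteArrayOutputStream;

public class BigEndianIntBytes {
  // Emit the most significant byte first, i.e. big-endian, independent of
  // whatever byte order the sink's bulk writeInt would use.
  static void writeIntBE(ByteArrayOutputStream out, int i) {
    out.write((byte) (i >> 24));
    out.write((byte) (i >> 16));
    out.write((byte) (i >> 8));
    out.write((byte) i);
  }

  public static void main(String[] args) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    writeIntBE(out, 0x0A0B0C0D);
    byte[] b = out.toByteArray();
    // Bytes land most-significant first.
    if (b[0] != 0x0A || b[1] != 0x0B || b[2] != 0x0C || b[3] != 0x0D) {
      throw new AssertionError();
    }
    System.out.println("big endian ok");
  }
}
```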
[GitHub] [lucene-solr] iverase commented on a change in pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
iverase commented on a change in pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#discussion_r529540139

File path: lucene/core/src/java/org/apache/lucene/store/DataOutput.java

@@ -210,8 +210,14 @@ public final void writeZInt(int i) throws IOException {
   * @see DataInput#readLong()
   */
  public void writeLong(long i) throws IOException {
-  writeInt((int) (i >> 32));
-  writeInt((int) i);
+  writeByte((byte) i);
+  writeByte((byte) (i >> 8));
+  writeByte((byte) (i >> 16));
+  writeByte((byte) (i >> 24));
+  writeByte((byte) (i >> 32));
+  writeByte((byte) (i >> 40));
+  writeByte((byte) (i >> 48));
+  writeByte((byte) (i >> 56));

Review comment: Just to keep my sanity :) done
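The patched `writeLong` emits the least significant byte first. A self-contained round-trip sketch against the JDK's little-endian decoder (plain JDK types standing in for Lucene's `DataOutput`):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class LittleEndianWriteLong {
  // Hypothetical stand-in for DataOutput#writeLong: least significant byte first.
  static void writeLongLE(ByteArrayOutputStream out, long i) {
    for (int shift = 0; shift < 64; shift += 8) {
      out.write((byte) (i >> shift));
    }
  }

  public static void main(String[] args) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    long value = -2L; // all high bits set, exercises sign handling
    writeLongLE(out, value);
    long back = ByteBuffer.wrap(out.toByteArray()).order(ByteOrder.LITTLE_ENDIAN).getLong();
    if (back != value) throw new AssertionError();
    System.out.println("write/read agree");
  }
}
```

The loop is just a compact spelling of the eight unrolled `writeByte` calls in the diff; both produce identical bytes.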
[GitHub] [lucene-solr] jpountz commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
jpountz commented on pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-732970250 @dweiss I don't think I got you wrong; I wanted to make sure I had understood what you meant and that I was not missing a simplification for the backward-compatibility logic. This will certainly be a bit challenging to maintain, similar to the surrogate dance we did when changing the order of the terms dictionary years ago. But I think it's still worth doing.
[GitHub] [lucene-solr] iverase commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
iverase commented on pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-732975608 I addressed Adrien's comments and introduced an `EndiannessReverserIndexOutput`. This allows removing all the byte-reversing dance from the `SortFields`, so those classes remain unchanged.
[GitHub] [lucene-solr] iverase commented on a change in pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian
iverase commented on a change in pull request #2094: URL: https://github.com/apache/lucene-solr/pull/2094#discussion_r529553632

File path: lucene/core/src/java/org/apache/lucene/store/DataInput.java

@@ -92,15 +92,20 @@ public void readBytes(byte[] b, int offset, int len, boolean useBuffer)
   * @see DataOutput#writeByte(byte)
   */
  public short readShort() throws IOException {
-  return (short) (((readByte() & 0xFF) << 8) | (readByte() & 0xFF));
+  final byte b1 = readByte();
+  final byte b2 = readByte();
+  return (short) ((b2 << 8) | (b1 & 0xFF));

Review comment: done
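The two-byte case follows the same pattern as the long version: low byte first, and only the low byte needs masking, because any sign-extension from the high byte's shift is discarded by the final cast to `short`. A standalone sketch (hypothetical helper, not Lucene's `DataInput`):

```java
public class LittleEndianReadShort {
  // Sketch of the patched readShort: low byte first; the high byte may
  // sign-extend when shifted, which the final cast to short discards.
  static short readShortLE(byte b1, byte b2) {
    return (short) ((b2 << 8) | (b1 & 0xFF));
  }

  public static void main(String[] args) {
    if (readShortLE((byte) 0xCD, (byte) 0xAB) != (short) 0xABCD) throw new AssertionError();
    if (readShortLE((byte) 0xFF, (byte) 0xFF) != -1) throw new AssertionError();
    System.out.println("short decode ok");
  }
}
```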
[jira] [Commented] (LUCENE-9614) Implement KNN Query
[ https://issues.apache.org/jira/browse/LUCENE-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17238133#comment-17238133 ] Michael Sokolov commented on LUCENE-9614: - > I wonder if the Query could be just a map from N doc IDs to scores, and the > KNN search would actually be run to construct the Query, not as part of > running the Query I think that captures basically how such a Query would work - one pass over the index to gather documents, then participate in "normal" query processing. I did write such a GlobalDocIdQuery to support this at one point during experimentation. One drawback there is you need a special query-formation process; it wouldn't be enough to run a query parser over an incoming serialized query. We could also consider performing the knnSearch during Query.rewrite() somewhat analogous to how MultiTermQueries work. I guess I'm not sure why we need to be concerned about the issue that the Query's matches are dependent on the index content. Is that an invariant that we rely on anywhere? Caching maybe? > Implement KNN Query > --- > > Key: LUCENE-9614 > URL: https://issues.apache.org/jira/browse/LUCENE-9614 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Michael Sokolov >Priority: Major > > Now we have a vector index format, and one vector indexing/KNN search > implementation, but the interface is low-level: you can search across a > single segment only. We would like to expose a Query implementation. > Initially, we want to support a usage where the KnnVectorQuery selects the > k-nearest neighbors without regard to any other constraints, and these can > then be filtered as part of an enclosing Boolean or other query. > Later we will want to explore some kind of filtering *while* performing > vector search, or a re-entrant search process that can yield further results. 
> Because of the nature of knn search (all documents having any vector value > match), it is more like a ranking than a filtering operation, and it doesn't > really make sense to provide an iterator interface that can be merged in the > usual way, in docid order, skipping ahead. It's not yet clear how to satisfy > a query that is "k nearest neighbors satisfying some arbitrary Query", at > least not without realizing a complete bitset for the Query. But this is for > a later issue; *this* issue is just about performing the knn search in > isolation, computing a set of (some given) K nearest neighbors, and providing > an iterator over those. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9614) Implement KNN Query
[ https://issues.apache.org/jira/browse/LUCENE-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17238144#comment-17238144 ] Adrien Grand commented on LUCENE-9614: Yes, caching relies on it, though we always use rewritten queries as cache keys, so I might be over-zealous; it doesn't look like what you are considering would be worse than all the rewrite methods we have that select terms to keep based on global index statistics.
[jira] [Resolved] (SOLR-14990) Reproducing test failure for TestRandomDVFaceting, trunk and 8x
[ https://issues.apache.org/jira/browse/SOLR-14990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-14990. Resolution: Cannot Reproduce Apparently fixed by LUCENE-9536 > Reproducing test failure for TestRandomDVFaceting, trunk and 8x > Key: SOLR-14990 > URL: https://issues.apache.org/jira/browse/SOLR-14990 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Reporter: Erick Erickson > Priority: Major > > gradlew test --tests TestRandomDVFaceting.testRandomFaceting > -Dtests.seed=9F7CC98B1FF87CC4 -Dtests.multiplier=2 -Dtests.slow=true > -Dtests.locale=sr-Cyrl-XK -Dtests.timezone=Africa/Lubumbashi > -Dtests.asserts=true -Dtests.file.encoding=UTF-8 > ant test -Dtestcase=TestRandomDVFaceting -Dtests.method=testRandomFaceting > -Dtests.seed=CE974B5B87D84785 -Dtests.multiplier=2 -Dtests.slow=true > -Dtests.locale=ja -Dtests.timezone=UCT -Dtests.asserts=true > -Dtests.file.encoding=ISO-8859-1 > > I haven't investigated this at all...
[jira] [Commented] (SOLR-15010) Missing jstack warning is alarming, when using bin/solr as client interface to solr
[ https://issues.apache.org/jira/browse/SOLR-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17238332#comment-17238332 ] David Smiley commented on SOLR-15010: The reason this is happening is very likely because you are running in Solr's docker image, which only ships a JRE (no jstack etc. utilities). [~janhoy] made this change and he supplied the small "jattach" utility as a substitute. Instead of making this warning go away, I think we should add the equivalent functionality via jattach. The equivalent syntax is "jattach threaddump". We can't do this via an alias, but maybe a bash function? > Missing jstack warning is alarming, when using bin/solr as client interface > to solr > Key: SOLR-15010 > URL: https://issues.apache.org/jira/browse/SOLR-15010 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Affects Versions: 8.7 > Reporter: David Eric Pugh > Priority: Minor > > In SOLR-14442 we added a warning if jstack wasn't found. I notice that I > use the bin/solr command a lot as a client, so bin solr zk or bin solr > healthcheck. > For example: > {{docker exec solr1 solr zk cp /security.json zk:security.json -z zoo1:2181}} > All of these emit the message: > The currently defined JAVA_HOME (/usr/local/openjdk-11) refers to a location > where java was found but jstack was not found. Continuing. > This is somewhat alarming, and then becomes annoying. Thoughts on maybe > only conducting this check if you are running {{bin/solr start}} or one of > the other commands that is actually starting Solr as a process?
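One hedged sketch of the bash-function idea from the comment above; the warning text and function body here are illustrative, not the actual bin/solr script (jattach's thread-dump invocation takes the pid followed by the command name):

```shell
# Sketch: make `jstack <pid>` work in JRE-only images by delegating to jattach.
# An alias can't forward the pid into the middle of a command, but a function can.
jstack() {
  if command -v jattach >/dev/null 2>&1; then
    jattach "$1" threaddump
  else
    echo "warning: neither jstack nor jattach is available" >&2
    return 1
  fi
}
```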
[jira] [Commented] (SOLR-15010) Missing jstack warning is alarming, when using bin/solr as client interface to solr
[ https://issues.apache.org/jira/browse/SOLR-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17238340#comment-17238340 ] David Eric Pugh commented on SOLR-15010: This would solve my frustration with seeing this message!
[GitHub] [lucene-solr] iverase merged pull request #2093: LUCENE-9606: Wrap boolean queries generated by shape fields with a Constant score query
iverase merged pull request #2093: URL: https://github.com/apache/lucene-solr/pull/2093
[jira] [Commented] (LUCENE-9606) Wrap boolean queries generated by shape fields with a Constant score query
[ https://issues.apache.org/jira/browse/LUCENE-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17238581#comment-17238581 ] ASF subversion and git services commented on LUCENE-9606: Commit b63c37d1e80afe3ce40df54e1cbba03100ae3144 in lucene-solr's branch refs/heads/master from Ignacio Vera [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b63c37d ] LUCENE-9606: Wrap boolean queries generated by shape fields with a Constant score query (#2093) > Wrap boolean queries generated by shape fields with a Constant score query > Key: LUCENE-9606 > URL: https://issues.apache.org/jira/browse/LUCENE-9606 > Project: Lucene - Core > Issue Type: Bug > Reporter: Ignacio Vera > Priority: Major > > When querying a shape field with a Geometry collection and a CONTAINS spatial > relationship, the query is rewritten as a boolean query. We should wrap the > resulting query with a ConstantScoreQuery.
[jira] [Commented] (LUCENE-9606) Wrap boolean queries generated by shape fields with a Constant score query
[ https://issues.apache.org/jira/browse/LUCENE-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17238582#comment-17238582 ] ASF subversion and git services commented on LUCENE-9606: Commit b8239d2f799a668548d10a75ce8be3e652fef804 in lucene-solr's branch refs/heads/branch_8x from Ignacio Vera [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b8239d2 ] LUCENE-9606: Wrap boolean queries generated by shape fields with a Constant score query (#2093)
[GitHub] [lucene-solr] iverase commented on pull request #2093: LUCENE-9606: Wrap boolean queries generated by shape fields with a Constant score query
iverase commented on pull request #2093: URL: https://github.com/apache/lucene-solr/pull/2093#issuecomment-733529325 Thanks @dsmiley!
[jira] [Resolved] (LUCENE-9606) Wrap boolean queries generated by shape fields with a Constant score query
[ https://issues.apache.org/jira/browse/LUCENE-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ignacio Vera resolved LUCENE-9606. Fix Version/s: 8.8 Assignee: Ignacio Vera Resolution: Fixed