[GitHub] [lucene-solr] mocobeta commented on a change in pull request #1685: LUCENE-9433: Remove Ant support from trunk
mocobeta commented on a change in pull request #1685: URL: https://github.com/apache/lucene-solr/pull/1685#discussion_r458618400 ## File path: lucene/BUILD.md ## @@ -86,7 +80,4 @@ Please join the Lucene-User mailing list by visiting this site: Please post suggestions, questions, corrections or additions to this document to the lucene-user mailing list. -This file was originally written by Steven J. Owens . -This file was modified by Jon S. Stevens . - Copyright (c) 2001-2020 The Apache Software Foundation. All rights reserved. Review comment: Do we need copyright notice here? (I recently modified this line at https://github.com/apache/lucene-solr/pull/1449/files for the first time since 2005.) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] mocobeta commented on a change in pull request #1685: LUCENE-9433: Remove Ant support from trunk
mocobeta commented on a change in pull request #1685: URL: https://github.com/apache/lucene-solr/pull/1685#discussion_r458628616 ## File path: lucene/BUILD.md ## @@ -2,78 +2,72 @@ ## Basic steps: - 0. Install OpenJDK 11 (or greater), Ant 1.8.2+, Ivy 2.2.0 - 1. Download Lucene from Apache and unpack it - 2. Connect to the top-level of your Lucene installation + 0. Install OpenJDK 11 (or greater), Gradle 6.4.1 Review comment: We have the gradle wrapper in the repo; is installing Gradle itself still required? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14676) solr-test-framework update/remove old commons-collections dependency
[ https://issues.apache.org/jira/browse/SOLR-14676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bernd Wahlen updated SOLR-14676: Summary: solr-test-framework update/remove old commons-collections dependency (was: solr-test-framework update/remove commons-collections dependency) > solr-test-framework update/remove old commons-collections dependency > > > Key: SOLR-14676 > URL: https://issues.apache.org/jira/browse/SOLR-14676 > Project: Solr > Issue Type: Wish > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java >Affects Versions: 8.6 >Reporter: Bernd Wahlen >Priority: Minor > > solr-test-framework has dependency to commons-collections:commons-collections > which moved to org.apache.commons:commons-collections4 some years ago. > It would be nice to replace or remove this old dependency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (SOLR-14676) solr-test-framework update/remove commons-collections dependency
Bernd Wahlen created SOLR-14676: --- Summary: solr-test-framework update/remove commons-collections dependency Key: SOLR-14676 URL: https://issues.apache.org/jira/browse/SOLR-14676 Project: Solr Issue Type: Wish Security Level: Public (Default Security Level. Issues are Public) Components: clients - java Affects Versions: 8.6 Reporter: Bernd Wahlen solr-test-framework has a dependency on commons-collections:commons-collections, which moved to org.apache.commons:commons-collections4 some years ago. It would be nice to replace or remove this old dependency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
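For context on the package move described in SOLR-14676, here is a minimal illustration (hypothetical caller code, not taken from solr-test-framework): the 4.x artifact changed both its Maven coordinates and its Java package, so swapping the dependency also means updating imports.

{code:java}
// Hypothetical illustration of the migration target (not solr-test-framework code).
// Old coordinates: commons-collections:commons-collections  (package org.apache.commons.collections)
// New coordinates: org.apache.commons:commons-collections4  (package org.apache.commons.collections4)
import org.apache.commons.collections4.CollectionUtils;

import java.util.List;

public class Collections4Example {
  // containsAny(...) exists in both the 3.x and 4.x CollectionUtils,
  // so call sites like this only need the import/package change.
  static boolean anyOverlap(List<String> a, List<String> b) {
    return CollectionUtils.containsAny(a, b);
  }
}
{code}

Because the 3.x and 4.x lines live in different Java packages, both can coexist on the classpath while call sites are migrated incrementally.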
[GitHub] [lucene-solr] dweiss commented on a change in pull request #1685: LUCENE-9433: Remove Ant support from trunk
dweiss commented on a change in pull request #1685: URL: https://github.com/apache/lucene-solr/pull/1685#discussion_r458636209 ## File path: lucene/BUILD.md ## @@ -2,78 +2,72 @@ ## Basic steps: - 0. Install OpenJDK 11 (or greater), Ant 1.8.2+, Ivy 2.2.0 - 1. Download Lucene from Apache and unpack it - 2. Connect to the top-level of your Lucene installation + 0. Install OpenJDK 11 (or greater), Gradle 6.4.1 Review comment: No, it isn't. The Gradle wrapper will download itself. We should *not* mention any external installation requirements because different branches will be on different gradle versions (and may not be interoperable). The wrapper handles this transparently. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9437) Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly accessible
[ https://issues.apache.org/jira/browse/LUCENE-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162614#comment-17162614 ] Michael McCandless commented on LUCENE-9437: Thanks [~goankur], it looks great, and we do not need a new test case (as the qabot is insisting) just for improving javadocs. I will push soon! > Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly > accessible > - > > Key: LUCENE-9437 > URL: https://issues.apache.org/jira/browse/LUCENE-9437 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 8.6 >Reporter: Ankur >Priority: Trivial > Attachments: LUCENE-9437.patch > > > Visibility of _DocValuesOrdinalsReader.decode(BytesRef, IntsRef)_ method is > set to 'protected'. This prevents the method from being used outside this > class in a setting where BinaryDocValues reader is instantiated outside the > class and binary payload containing ordinals still needs to be decoded. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9437) Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly accessible
[ https://issues.apache.org/jira/browse/LUCENE-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162615#comment-17162615 ] Michael McCandless commented on LUCENE-9437: {quote}just for improving javadocs {quote} Woops, I mean just for making this method public *and* improving its javadocs ;) It is early here! > Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly > accessible > - > > Key: LUCENE-9437 > URL: https://issues.apache.org/jira/browse/LUCENE-9437 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 8.6 >Reporter: Ankur >Priority: Trivial > Attachments: LUCENE-9437.patch > > > Visibility of _DocValuesOrdinalsReader.decode(BytesRef, IntsRef)_ method is > set to 'protected'. This prevents the method from being used outside this > class in a setting where BinaryDocValues reader is instantiated outside the > class and binary payload containing ordinals still needs to be decoded. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9437) Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly accessible
[ https://issues.apache.org/jira/browse/LUCENE-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162619#comment-17162619 ] Michael McCandless commented on LUCENE-9437: {{./gradlew test}} was angry about the {{{@link org.apache.lucene.facet.taxonomy.OrdinalsSegmentReader}}} I inserted the missing {{.OrdinalsReader}} and it looks happy ... I'll do that before pushing. > Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly > accessible > - > > Key: LUCENE-9437 > URL: https://issues.apache.org/jira/browse/LUCENE-9437 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 8.6 >Reporter: Ankur >Priority: Trivial > Attachments: LUCENE-9437.patch > > > Visibility of _DocValuesOrdinalsReader.decode(BytesRef, IntsRef)_ method is > set to 'protected'. This prevents the method from being used outside this > class in a setting where BinaryDocValues reader is instantiated outside the > class and binary payload containing ordinals still needs to be decoded. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
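For readers following LUCENE-9437, a minimal usage sketch of why the visibility change matters (assuming a leaf reader, a document id, and the default "$facets" binary doc-values field; this is not code from the patch): with decode public, a caller that obtains the BinaryDocValues on its own can still turn the stored binary payload into ordinals.

{code:java}
import java.io.IOException;

import org.apache.lucene.facet.taxonomy.DocValuesOrdinalsReader;
import org.apache.lucene.index.BinaryDocValues;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.IntsRef;

public class DecodeOrdinalsExample {
  // Decode the facet ordinals stored for one document, using the now-public decode method.
  static IntsRef ordinalsForDoc(LeafReader leafReader, int docId) throws IOException {
    DocValuesOrdinalsReader ordsReader = new DocValuesOrdinalsReader("$facets");
    // BinaryDocValues instantiated outside DocValuesOrdinalsReader, as described in the issue
    BinaryDocValues dv = DocValues.getBinary(leafReader, "$facets");
    IntsRef ords = new IntsRef(32);
    if (dv.advanceExact(docId)) {
      BytesRef payload = dv.binaryValue();
      ordsReader.decode(payload, ords);  // public as of LUCENE-9437; fills ords with taxonomy ordinals
    }
    return ords;
  }
}
{code}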
[GitHub] [lucene-solr] murblanc commented on a change in pull request #1684: SOLR-14613: strongly typed initial proposal for plugin interface
murblanc commented on a change in pull request #1684: URL: https://github.com/apache/lucene-solr/pull/1684#discussion_r458733624 ## File path: solr/core/src/java/org/apache/solr/cloud/gumi/CreateCollectionRequest.java ## @@ -0,0 +1,35 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.solr.cloud.gumi; + +/** + * Request for creating a new collection with a given number of shards and replication factor for various replica types. + * + * Note there is no need at this stage to allow the request to convey each shard hash range for example, this can be handled + * by the Solr side implementation without needing the plugin to worry about it. + * + */ +public interface CreateCollectionRequest extends Request { + String getCollectionName(); + + int getShardCount(); Review comment: Shard index can likely go away altogether and we only keep shard names (+ matching routing info). I'm trying to understand if naming the shards should be the responsibility of the Autoscaling plugin or rather of Solr (would prefer the plugin to always deal with preexisting shard names, makes things simpler). If it's the responsibility of Solr, when the API collection create call does not specify names, Solr first creates the corresponding names and then calls the plugin. I'm interested in comments for or against this approach. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
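To make the trade-off in the comment above concrete, here is a rough sketch of the request shape if Solr (rather than the plugin) owns shard naming; the type and method names below are illustrative only and are not part of PR #1684.

{code:java}
// Illustrative sketch only -- not the actual interface in PR #1684.
// If Solr computes shard names before invoking the placement plugin,
// the plugin always works with preexisting names and a bare shard count/index
// becomes redundant.
import java.util.Set;

public interface CreateCollectionRequestSketch {
  String getCollectionName();

  /** Shard names decided by Solr up front; replaces getShardCount(). */
  Set<String> getShardNames();

  /** Replicas requested per shard for a given replica type. */
  int getReplicationFactor(ReplicaTypeSketch type);

  /** Placeholder enum standing in for Solr's replica types. */
  enum ReplicaTypeSketch { NRT, TLOG, PULL }
}
{code}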
[jira] [Commented] (SOLR-14671) NumberFormatException when accessing ZK Status page in 8.6.0
[ https://issues.apache.org/jira/browse/SOLR-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162778#comment-17162778 ] Aliaksei Zhynhiarouski commented on SOLR-14671: --- Hi, I am seeing the same issue, running ZK 3.5.7 and Solr 8.6.0. Everything works fine but ZK status returns null. Importantly, I upgraded ZK on the same servers from 3.4.8 to 3.5.7; /zookeeper/config exists but is empty.
{noformat}
java.lang.NumberFormatException: null
    at java.lang.Integer.parseInt(Integer.java:542)
    at java.lang.Integer.parseInt(Integer.java:615)
    at org.apache.solr.common.cloud.ZkDynamicConfig$Server.parseLine(ZkDynamicConfig.java:142)
    at org.apache.solr.common.cloud.ZkDynamicConfig.lambda$parseLines$0(ZkDynamicConfig.java:58)
    at java.util.Iterator.forEachRemaining(Iterator.java:116)
    at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
    at org.apache.solr.common.cloud.ZkDynamicConfig.parseLines(ZkDynamicConfig.java:53)
    at org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:83)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
    at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:854)
    at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:818)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
    at org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:500)
    at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
    at java.lang.Thread.run(Thread.java:748)
{noformat}
> NumberFormatException when accessing ZK Status page in 8.6.0 > >
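The trace comes from parsing the server lines found in /zookeeper/config. A defensive parse along the lines below would avoid handing Integer.parseInt a null client port; this is only a sketch of the idea, not the actual ZkDynamicConfig code or the eventual fix.

{code:java}
// Standalone sketch: tolerate server lines that carry no client port, as happens
// when /zookeeper/config is empty or only partially populated after an in-place
// upgrade from ZooKeeper 3.4.x. Not the real ZkDynamicConfig implementation.
public final class DynamicConfigPortParser {
  // A dynamic-config server line typically looks like:
  //   server.1=zk1:2888:3888:participant;0.0.0.0:2181
  // Everything after the ';' (if present) describes the client address/port.
  static Integer parseClientPort(String serverLine) {
    int semi = serverLine.indexOf(';');
    if (semi < 0 || semi == serverLine.length() - 1) {
      return null;  // no client port published; caller falls back to the static zkHost string
    }
    String clientPart = serverLine.substring(semi + 1);
    int colon = clientPart.lastIndexOf(':');
    String port = colon >= 0 ? clientPart.substring(colon + 1) : clientPart;
    try {
      return Integer.valueOf(port.trim());
    } catch (NumberFormatException e) {
      return null;  // malformed port: report "unknown" rather than failing the whole status page
    }
  }
}
{code}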
[jira] [Commented] (SOLR-14669) Solr 8.6 / ZK 3.6.1 / Admin UI - ZK Status Null
[ https://issues.apache.org/jira/browse/SOLR-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162779#comment-17162779 ] Aliaksei Zhynhiarouski commented on SOLR-14669: --- Hi, I am seeing the same issue, running ZK 3.5.7 and Solr 8.6.0. Everything works fine but ZK status returns null. Importantly, I upgraded ZK on the same servers from 3.4.8 to 3.5.7; /zookeeper/config exists but is empty.
{code:java}
java.lang.NumberFormatException: null
java.lang.NumberFormatException: null
    at java.lang.Integer.parseInt(Integer.java:542)
    at java.lang.Integer.parseInt(Integer.java:615)
    at org.apache.solr.common.cloud.ZkDynamicConfig$Server.parseLine(ZkDynamicConfig.java:142)
    at org.apache.solr.common.cloud.ZkDynamicConfig.lambda$parseLines$0(ZkDynamicConfig.java:58)
    at java.util.Iterator.forEachRemaining(Iterator.java:116)
    at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
    at org.apache.solr.common.cloud.ZkDynamicConfig.parseLines(ZkDynamicConfig.java:53)
    at org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:83)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
    at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:854)
    at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:818)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
    at org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:500)
    at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
    at java.lang.Thread.run(Thread.java:748)
{code}
> Solr 8.6 / ZK 3.6.1 / Admin UI - ZK Status Null > ---
[jira] [Comment Edited] (SOLR-14671) NumberFormatException when accessing ZK Status page in 8.6.0
[ https://issues.apache.org/jira/browse/SOLR-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162778#comment-17162778 ] Aliaksei Zhynhiarouski edited comment on SOLR-14671 at 7/22/20, 1:03 PM: - Hi, I am seeing the same issue, running ZK 3.5.7 and Solr 8.6.0. Everything works fine but ZK status returns null. Importantly, I upgraded ZK on the same servers from 3.4.8 to 3.5.7; /zookeeper/config exists but is empty.
{code:java}
java.lang.NumberFormatException: null
java.lang.NumberFormatException: null
    at java.lang.Integer.parseInt(Integer.java:542)
    at java.lang.Integer.parseInt(Integer.java:615)
    at org.apache.solr.common.cloud.ZkDynamicConfig$Server.parseLine(ZkDynamicConfig.java:142)
    at org.apache.solr.common.cloud.ZkDynamicConfig.lambda$parseLines$0(ZkDynamicConfig.java:58)
    at java.util.Iterator.forEachRemaining(Iterator.java:116)
    at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
    at org.apache.solr.common.cloud.ZkDynamicConfig.parseLines(ZkDynamicConfig.java:53)
    at org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:83)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
    at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:854)
    at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:818)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
    at org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:500)
    at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
    at java.lang.Thread.run(Thread.java:748)
{code}
was (Author: aliaksei): Hi, seeing the
[jira] [Created] (LUCENE-9439) Matches API should enumerate hit fields that have no positions (no iterator)
Dawid Weiss created LUCENE-9439: --- Summary: Matches API should enumerate hit fields that have no positions (no iterator) Key: LUCENE-9439 URL: https://issues.apache.org/jira/browse/LUCENE-9439 Project: Lucene - Core Issue Type: Improvement Reporter: Dawid Weiss Assignee: Dawid Weiss I have been fiddling with Matches API and it's great. There is one corner case that doesn't work for me though -- queries that affect fields without positions return {{MatchesUtil.MATCH_WITH_NO_TERMS}} but this constant is problematic as it doesn't carry the field name that caused it (returns null). The associated fromSubMatches combines all these constants into one (or swallows them) which is another problem. I think it would be more consistent if MATCH_WITH_NO_TERMS was replaced with a true match (carrying field name) returning an empty iterator (or a constant "empty" iterator NO_TERMS). I have a very compelling use case: I wrote an "auto-highlighter" that runs on top of Matches API and automatically picks up query-relevant fields and snippets. Everything works beautifully except for cases where fields are searchable but don't have any positions (token-like fields). I can work on a patch but wanted to reach out first - [~romseygeek]? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9439) Matches API should enumerate hit fields that have no positions (no iterator)
[ https://issues.apache.org/jira/browse/LUCENE-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162790#comment-17162790 ] Alan Woodward commented on LUCENE-9439: --- +1 - I did try something like this originally, but added some complexity and I didn't think there was a use case for it. If it turns out there is a use case then it's definitely worth it. > Matches API should enumerate hit fields that have no positions (no iterator) > > > Key: LUCENE-9439 > URL: https://issues.apache.org/jira/browse/LUCENE-9439 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > > I have been fiddling with Matches API and it's great. There is one corner > case that doesn't work for me though -- queries that affect fields without > positions return {{MatchesUtil.MATCH_WITH_NO_TERMS}} but this constant is > problematic as it doesn't carry the field name that caused it (returns null). > The associated fromSubMatches combines all these constants into one (or > swallows them) which is another problem. > I think it would be more consistent if MATCH_WITH_NO_TERMS was replaced with > a true match (carrying field name) returning an empty iterator (or a constant > "empty" iterator NO_TERMS). > I have a very compelling use case: I wrote an "auto-highlighter" that runs on > top of Matches API and automatically picks up query-relevant fields and > snippets. Everything works beautifully except for cases where fields are > searchable but don't have any positions (token-like fields). > I can work on a patch but wanted to reach out first - [~romseygeek]? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
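For context, the consumer pattern behind the "auto-highlighter" use case looks roughly like the sketch below (a hedged example, not code from the issue): it walks the Matches for one document to collect the contributing fields, which is exactly where a field-less MATCH_WITH_NO_TERMS gets lost today.

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Matches;
import org.apache.lucene.search.MatchesIterator;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.Weight;

public class MatchedFieldsExample {
  // Collect the names of fields that contributed to a hit, using the Matches API.
  static List<String> matchedFields(IndexSearcher searcher, Query query,
                                    LeafReaderContext leaf, int docInLeaf) throws IOException {
    Weight weight = searcher.createWeight(searcher.rewrite(query), ScoreMode.COMPLETE_NO_SCORES, 1f);
    Matches matches = weight.matches(leaf, docInLeaf);
    List<String> fields = new ArrayList<>();
    if (matches == null) {
      return fields;  // the document did not match at all
    }
    for (String field : matches) {  // Matches is Iterable over the names of matching fields
      MatchesIterator positions = matches.getMatches(field);  // position details, unused here
      fields.add(field);  // fields matched only without positions never show up in this loop today
    }
    return fields;
  }
}
{code}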
[jira] [Commented] (SOLR-14066) Deprecate DIH and migrate to a community supported package
[ https://issues.apache.org/jira/browse/SOLR-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162788#comment-17162788 ] Erich Siffert commented on SOLR-14066: -- Ishan ChattopadhyayaI just made a quick & dirty test: I cloned [https://github.com/rohitbemax/dataimporthandler] and was building the _data-import-handler-8.6.0.jar_. I then removed _solr-dataimporthandler-8.6.0.jar_ and _solr-dataimporthandler-extras-8.6.0.jar_ from my Solr 8.6.0 installation and added the _data-import-handler-8.6.0.jar_ which was build from the dataimporthandler project. I was then running a dataimport from a postgresql db into Solr – without a glitch! :) I did not change anything in the dataimport.xml configuration, everything worked fine so far, more or less out of the box! > Deprecate DIH and migrate to a community supported package > -- > > Key: SOLR-14066 > URL: https://issues.apache.org/jira/browse/SOLR-14066 > Project: Solr > Issue Type: Improvement > Components: contrib - DataImportHandler >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya >Priority: Blocker > Fix For: 8.6 > > Attachments: image-2019-12-14-19-58-39-314.png > > Time Spent: 50m > Remaining Estimate: 0h > > DIH doesn't need to remain inside Solr anymore. Plan is to deprecate DIH in > 8.6, remove from 9.0. A community supported version of DIH (which can be used > with Solr's package manager) can be found here > https://github.com/rohitbemax/dataimporthandler. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9439) Matches API should enumerate hit fields that have no positions (no iterator)
[ https://issues.apache.org/jira/browse/LUCENE-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162797#comment-17162797 ] Dawid Weiss commented on LUCENE-9439: - Thanks Alan. Let me see if I can make it work. > Matches API should enumerate hit fields that have no positions (no iterator) > > > Key: LUCENE-9439 > URL: https://issues.apache.org/jira/browse/LUCENE-9439 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > > I have been fiddling with Matches API and it's great. There is one corner > case that doesn't work for me though -- queries that affect fields without > positions return {{MatchesUtil.MATCH_WITH_NO_TERMS}} but this constant is > problematic as it doesn't carry the field name that caused it (returns null). > The associated fromSubMatches combines all these constants into one (or > swallows them) which is another problem. > I think it would be more consistent if MATCH_WITH_NO_TERMS was replaced with > a true match (carrying field name) returning an empty iterator (or a constant > "empty" iterator NO_TERMS). > I have a very compelling use case: I wrote an "auto-highlighter" that runs on > top of Matches API and automatically picks up query-relevant fields and > snippets. Everything works beautifully except for cases where fields are > searchable but don't have any positions (token-like fields). > I can work on a patch but wanted to reach out first - [~romseygeek]? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-14066) Deprecate DIH and migrate to a community supported package
[ https://issues.apache.org/jira/browse/SOLR-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162788#comment-17162788 ] Erich Siffert edited comment on SOLR-14066 at 7/22/20, 1:54 PM: Ishan ChattopadhyayaI: I just made a quick & dirty test: I cloned [https://github.com/rohitbemax/dataimporthandler] and was building the _data-import-handler-8.6.0.jar_. I then removed _solr-dataimporthandler-8.6.0.jar_ and _solr-dataimporthandler-extras-8.6.0.jar_ from my Solr 8.6.0 installation and added the _data-import-handler-8.6.0.jar_ which was build from the dataimporthandler project. I was then running a dataimport from a postgresql db into Solr – without a glitch! :) I did not change anything in the dataimport.xml configuration, everything worked fine so far, more or less out of the box! was (Author: sniff): Ishan ChattopadhyayaI just made a quick & dirty test: I cloned [https://github.com/rohitbemax/dataimporthandler] and was building the _data-import-handler-8.6.0.jar_. I then removed _solr-dataimporthandler-8.6.0.jar_ and _solr-dataimporthandler-extras-8.6.0.jar_ from my Solr 8.6.0 installation and added the _data-import-handler-8.6.0.jar_ which was build from the dataimporthandler project. I was then running a dataimport from a postgresql db into Solr – without a glitch! :) I did not change anything in the dataimport.xml configuration, everything worked fine so far, more or less out of the box! > Deprecate DIH and migrate to a community supported package > -- > > Key: SOLR-14066 > URL: https://issues.apache.org/jira/browse/SOLR-14066 > Project: Solr > Issue Type: Improvement > Components: contrib - DataImportHandler >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya >Priority: Blocker > Fix For: 8.6 > > Attachments: image-2019-12-14-19-58-39-314.png > > Time Spent: 50m > Remaining Estimate: 0h > > DIH doesn't need to remain inside Solr anymore. Plan is to deprecate DIH in > 8.6, remove from 9.0. A community supported version of DIH (which can be used > with Solr's package manager) can be found here > https://github.com/rohitbemax/dataimporthandler. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-9437) Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly accessible
[ https://issues.apache.org/jira/browse/LUCENE-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-9437: --- Fix Version/s: 8.7 Resolution: Fixed Status: Resolved (was: Patch Available) Thanks [~goankur]! > Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly > accessible > - > > Key: LUCENE-9437 > URL: https://issues.apache.org/jira/browse/LUCENE-9437 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 8.6 >Reporter: Ankur >Priority: Trivial > Fix For: 8.7 > > Attachments: LUCENE-9437.patch > > > Visibility of _DocValuesOrdinalsReader.decode(BytesRef, IntsRef)_ method is > set to 'protected'. This prevents the method from being used outside this > class in a setting where BinaryDocValues reader is instantiated outside the > class and binary payload containing ordinals still needs to be decoded. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9437) Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly accessible
[ https://issues.apache.org/jira/browse/LUCENE-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162833#comment-17162833 ] ASF subversion and git services commented on LUCENE-9437: - Commit 03a03b34a468f8095c7f0b87ceeaf4ba0d4aeaec in lucene-solr's branch refs/heads/master from Michael McCandless [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=03a03b3 ] LUCENE-9437: make DocValuesOrdinalsReader.decode public > Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly > accessible > - > > Key: LUCENE-9437 > URL: https://issues.apache.org/jira/browse/LUCENE-9437 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 8.6 >Reporter: Ankur >Priority: Trivial > Attachments: LUCENE-9437.patch > > > Visibility of _DocValuesOrdinalsReader.decode(BytesRef, IntsRef)_ method is > set to 'protected'. This prevents the method from being used outside this > class in a setting where BinaryDocValues reader is instantiated outside the > class and binary payload containing ordinals still needs to be decoded. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9437) Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly accessible
[ https://issues.apache.org/jira/browse/LUCENE-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162836#comment-17162836 ] ASF subversion and git services commented on LUCENE-9437: - Commit 17041d4028180c48237694637c9ec1796315e7dc in lucene-solr's branch refs/heads/branch_8x from Michael McCandless [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=17041d4 ] LUCENE-9437: make DocValuesOrdinalsReader.decode public > Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly > accessible > - > > Key: LUCENE-9437 > URL: https://issues.apache.org/jira/browse/LUCENE-9437 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 8.6 >Reporter: Ankur >Priority: Trivial > Attachments: LUCENE-9437.patch > > > Visibility of _DocValuesOrdinalsReader.decode(BytesRef, IntsRef)_ method is > set to 'protected'. This prevents the method from being used outside this > class in a setting where BinaryDocValues reader is instantiated outside the > class and binary payload containing ordinals still needs to be decoded. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (SOLR-14677) DIH doesnt close JdbcDataSource when import encounters errors
Jason Gerlowski created SOLR-14677: -- Summary: DIH doesnt close JdbcDataSource when import encounters errors Key: SOLR-14677 URL: https://issues.apache.org/jira/browse/SOLR-14677 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: contrib - DataImportHandler Affects Versions: 7.5, master (9.0) Reporter: Jason Gerlowski DIH imports don't close DataSource's (which can hold db connections, etc.) in all cases. Specifically, if an import runs into an unexpected error forwarding processed docs to other nodes, it will neglect to close the DataSource's when it finishes. This problem goes back to at least 7.5, but is partially mitigated in older versions by a "finalize" hook in JdbcDataSource, which ensures that any db-connections are closed when the DataSource object is garbage-collected. In practice, this means that connections might be held open longer than necessary but will be closed within a few seconds or minutes by GC. In master/9.0, which requires a minimum of Java 11 and doesn't have the finalize-hook, the connections are never cleaned up when an error is encountered during DIH. DIH will likely be removed for the 9.0 release, but if it isn't this bug should be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14677) DIH doesnt close JdbcDataSource when import encounters errors
[ https://issues.apache.org/jira/browse/SOLR-14677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski updated SOLR-14677: --- Description: DIH imports don't close DataSource's (which can hold db connections, etc.) in all cases. Specifically, if an import runs into an unexpected error forwarding processed docs to other nodes, it will neglect to close the DataSource's when it finishes. This problem goes back to at least 7.5. This is partially mitigated in older versions of some DataSource implementations (e.g. JdbcDataSource) by means of a "finalize" hook which invokes "close()" when the DataSource object is garbage-collected. In practice, this means that resources might be held open longer than necessary but will be closed within a few seconds or minutes by GC. This only helps JdbcDataSource though - all other DataSource impl's risk leaking resources. In master/9.0, which requires a minimum of Java 11 and doesn't have the finalize-hook, the connections are never cleaned up when an error is encountered during DIH. DIH will likely be removed for the 9.0 release, but if it isn't this bug should be fixed. was: DIH imports don't close DataSource's (which can hold db connections, etc.) in all cases. Specifically, if an import runs into an unexpected error forwarding processed docs to other nodes, it will neglect to close the DataSource's when it finishes. This problem goes back to at least 7.5, but is partially mitigated in older versions by a "finalize" hook in JdbcDataSource, which ensures that any db-connections are closed when the DataSource object is garbage-collected. In practice, this means that connections might be held open longer than necessary but will be closed within a few seconds or minutes by GC. In master/9.0, which requires a minimum of Java 11 and doesn't have the finalize-hook, the connections are never cleaned up when an error is encountered during DIH. DIH will likely be removed for the 9.0 release, but if it isn't this bug should be fixed. > DIH doesnt close JdbcDataSource when import encounters errors > - > > Key: SOLR-14677 > URL: https://issues.apache.org/jira/browse/SOLR-14677 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.5, master (9.0) >Reporter: Jason Gerlowski >Priority: Minor > > DIH imports don't close DataSource's (which can hold db connections, etc.) in > all cases. Specifically, if an import runs into an unexpected error > forwarding processed docs to other nodes, it will neglect to close the > DataSource's when it finishes. > This problem goes back to at least 7.5. This is partially mitigated in older > versions of some DataSource implementations (e.g. JdbcDataSource) by means of > a "finalize" hook which invokes "close()" when the DataSource object is > garbage-collected. In practice, this means that resources might be held open > longer than necessary but will be closed within a few seconds or minutes by > GC. This only helps JdbcDataSource though - all other DataSource impl's risk > leaking resources. > In master/9.0, which requires a minimum of Java 11 and doesn't have the > finalize-hook, the connections are never cleaned up when an error is > encountered during DIH. DIH will likely be removed for the 9.0 release, but > if it isn't this bug should be fixed. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
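The general shape of a fix is the usual close-in-finally pattern sketched below; the names (DataSourceLike, runImport, openSources) are stand-ins, not actual DIH code. Whatever happens while documents are forwarded downstream, every opened DataSource gets closed.

{code:java}
import java.util.List;

public class CloseDataSourcesSketch {
  // Stand-in for DIH's DataSource hierarchy (JdbcDataSource etc.); only close() matters here.
  interface DataSourceLike extends AutoCloseable {}

  // Hypothetical import driver: sources opened during the run are tracked and
  // always closed, even if forwarding documents downstream throws.
  static void runImportAndAlwaysClose(List<DataSourceLike> openSources, Runnable runImport) {
    try {
      runImport.run();  // may throw, e.g. on an IOException while forwarding docs to other nodes
    } finally {
      for (DataSourceLike ds : openSources) {
        try {
          ds.close();  // for JDBC this is where the underlying connection is released
        } catch (Exception e) {
          // log and keep closing the rest; one failing source should not leak the others
        }
      }
    }
  }
}
{code}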
[jira] [Commented] (SOLR-14677) DIH doesnt close JdbcDataSource when import encounters errors
[ https://issues.apache.org/jira/browse/SOLR-14677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162853#comment-17162853 ] Jason Gerlowski commented on SOLR-14677: I have a branch [here|https://github.com/gerlowskija/lucene-solr-1/tree/jdbcdatasource_connection_leak] with a few edits that help reproduce the problem. What that branch has is: * logging in JdbcDataSource.close() to record if/when it is called. * a script "solr/reproduce-dih-jdbc-leak.sh", which can be run from in the solr directory to reproduce the problem. * some changes to ConcurrentUpdateHttp2SolrClient which optionally fake an IOException to trigger the DIH issue downstream. To reproduce the issue, run the included script from the "solr" directory. It spins up two solr nodes based out of a "cloud-dih" example directory, and kicks off a DIH run. When the DIH run finishes, the logs (in {{solr/example/cloud-dih/node1/logs/solr.log}}) will be missing the line that gets printed when JdbcDataSource closes: {{In JdbcDataSource.close}}. To see the "normal" behavior, edit the "reproduce-dih-jdbc-leak.sh" script, changing the {{CAUSE_DIH_IO_ERROR}} variable from "true" to "false". Following a DIH run, the logs will show that JdbcDataSource was closed correctly. > DIH doesnt close JdbcDataSource when import encounters errors > - > > Key: SOLR-14677 > URL: https://issues.apache.org/jira/browse/SOLR-14677 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.5, master (9.0) >Reporter: Jason Gerlowski >Priority: Minor > > DIH imports don't close DataSource's (which can hold db connections, etc.) in > all cases. Specifically, if an import runs into an unexpected error > forwarding processed docs to other nodes, it will neglect to close the > DataSource's when it finishes. > This problem goes back to at least 7.5. This is partially mitigated in older > versions of some DataSource implementations (e.g. JdbcDataSource) by means of > a "finalize" hook which invokes "close()" when the DataSource object is > garbage-collected. In practice, this means that resources might be held open > longer than necessary but will be closed within a few seconds or minutes by > GC. This only helps JdbcDataSource though - all other DataSource impl's risk > leaking resources. > In master/9.0, which requires a minimum of Java 11 and doesn't have the > finalize-hook, the connections are never cleaned up when an error is > encountered during DIH. DIH will likely be removed for the 9.0 release, but > if it isn't this bug should be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14677) DIH doesnt close JdbcDataSource when import encounters errors
[ https://issues.apache.org/jira/browse/SOLR-14677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski updated SOLR-14677: --- Attachment: no-error-solr.log error-solr.log > DIH doesnt close JdbcDataSource when import encounters errors > - > > Key: SOLR-14677 > URL: https://issues.apache.org/jira/browse/SOLR-14677 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.5, master (9.0) >Reporter: Jason Gerlowski >Priority: Minor > Attachments: error-solr.log, no-error-solr.log > > > DIH imports don't close DataSource's (which can hold db connections, etc.) in > all cases. Specifically, if an import runs into an unexpected error > forwarding processed docs to other nodes, it will neglect to close the > DataSource's when it finishes. > This problem goes back to at least 7.5. This is partially mitigated in older > versions of some DataSource implementations (e.g. JdbcDataSource) by means of > a "finalize" hook which invokes "close()" when the DataSource object is > garbage-collected. In practice, this means that resources might be held open > longer than necessary but will be closed within a few seconds or minutes by > GC. This only helps JdbcDataSource though - all other DataSource impl's risk > leaking resources. > In master/9.0, which requires a minimum of Java 11 and doesn't have the > finalize-hook, the connections are never cleaned up when an error is > encountered during DIH. DIH will likely be removed for the 9.0 release, but > if it isn't this bug should be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-14677) DIH doesnt close JdbcDataSource when import encounters errors
[ https://issues.apache.org/jira/browse/SOLR-14677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162853#comment-17162853 ] Jason Gerlowski edited comment on SOLR-14677 at 7/22/20, 2:42 PM: -- I have a branch [here|https://github.com/gerlowskija/lucene-solr-1/tree/jdbcdatasource_connection_leak] with a few edits that help reproduce the problem. What that branch has is: * logging in JdbcDataSource.close() to record if/when it is called. * a script "solr/reproduce-dih-jdbc-leak.sh", which can be run from in the solr directory to reproduce the problem. * some changes to ConcurrentUpdateHttp2SolrClient which optionally fake an IOException to trigger the DIH issue downstream. To reproduce the issue, run the included script from the "solr" directory. It spins up two solr nodes based out of a "cloud-dih" example directory, and kicks off a DIH run. When the DIH run finishes, the logs (in {{solr/example/cloud-dih/node1/logs/solr.log}}) will be missing the line that gets printed when JdbcDataSource closes: {{In JdbcDataSource.close}}. To see the "normal" behavior, edit the "reproduce-dih-jdbc-leak.sh" script, changing the {{CAUSE_DIH_IO_ERROR}} variable from "true" to "false". Following a DIH run, the logs will show that JdbcDataSource was closed correctly. For the lazy, I've attached the log file from the DIH node from two runs: one that shows the normal good behavior and one that shows the leak when an IOError pops up during the import. was (Author: gerlowskija): I have a branch [here|https://github.com/gerlowskija/lucene-solr-1/tree/jdbcdatasource_connection_leak] with a few edits that help reproduce the problem. What that branch has is: * logging in JdbcDataSource.close() to record if/when it is called. * a script "solr/reproduce-dih-jdbc-leak.sh", which can be run from in the solr directory to reproduce the problem. * some changes to ConcurrentUpdateHttp2SolrClient which optionally fake an IOException to trigger the DIH issue downstream. To reproduce the issue, run the included script from the "solr" directory. It spins up two solr nodes based out of a "cloud-dih" example directory, and kicks off a DIH run. When the DIH run finishes, the logs (in {{solr/example/cloud-dih/node1/logs/solr.log}}) will be missing the line that gets printed when JdbcDataSource closes: {{In JdbcDataSource.close}}. To see the "normal" behavior, edit the "reproduce-dih-jdbc-leak.sh" script, changing the {{CAUSE_DIH_IO_ERROR}} variable from "true" to "false". Following a DIH run, the logs will show that JdbcDataSource was closed correctly. > DIH doesnt close JdbcDataSource when import encounters errors > - > > Key: SOLR-14677 > URL: https://issues.apache.org/jira/browse/SOLR-14677 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.5, master (9.0) >Reporter: Jason Gerlowski >Priority: Minor > Attachments: error-solr.log, no-error-solr.log > > > DIH imports don't close DataSource's (which can hold db connections, etc.) in > all cases. Specifically, if an import runs into an unexpected error > forwarding processed docs to other nodes, it will neglect to close the > DataSource's when it finishes. > This problem goes back to at least 7.5. This is partially mitigated in older > versions of some DataSource implementations (e.g. JdbcDataSource) by means of > a "finalize" hook which invokes "close()" when the DataSource object is > garbage-collected. 
In practice, this means that resources might be held open > longer than necessary but will be closed within a few seconds or minutes by > GC. This only helps JdbcDataSource though - all other DataSource impl's risk > leaking resources. > In master/9.0, which requires a minimum of Java 11 and doesn't have the > finalize-hook, the connections are never cleaned up when an error is > encountered during DIH. DIH will likely be removed for the 9.0 release, but > if it isn't this bug should be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14677) DIH doesnt close DataSource when import encounters errors
[ https://issues.apache.org/jira/browse/SOLR-14677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski updated SOLR-14677: --- Summary: DIH doesnt close DataSource when import encounters errors (was: DIH doesnt close JdbcDataSource when import encounters errors) > DIH doesnt close DataSource when import encounters errors > - > > Key: SOLR-14677 > URL: https://issues.apache.org/jira/browse/SOLR-14677 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.5, master (9.0) >Reporter: Jason Gerlowski >Priority: Minor > Attachments: error-solr.log, no-error-solr.log > > > DIH imports don't close DataSource's (which can hold db connections, etc.) in > all cases. Specifically, if an import runs into an unexpected error > forwarding processed docs to other nodes, it will neglect to close the > DataSource's when it finishes. > This problem goes back to at least 7.5. This is partially mitigated in older > versions of some DataSource implementations (e.g. JdbcDataSource) by means of > a "finalize" hook which invokes "close()" when the DataSource object is > garbage-collected. In practice, this means that resources might be held open > longer than necessary but will be closed within a few seconds or minutes by > GC. This only helps JdbcDataSource though - all other DataSource impl's risk > leaking resources. > In master/9.0, which requires a minimum of Java 11 and doesn't have the > finalize-hook, the connections are never cleaned up when an error is > encountered during DIH. DIH will likely be removed for the 9.0 release, but > if it isn't this bug should be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Assigned] (SOLR-14613) Provide a clean API for pluggable replica assignment implementations
[ https://issues.apache.org/jira/browse/SOLR-14613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilan Ginzburg reassigned SOLR-14613: Assignee: Ilan Ginzburg > Provide a clean API for pluggable replica assignment implementations > > > Key: SOLR-14613 > URL: https://issues.apache.org/jira/browse/SOLR-14613 > Project: Solr > Issue Type: Improvement > Components: AutoScaling >Reporter: Andrzej Bialecki >Assignee: Ilan Ginzburg >Priority: Major > Time Spent: 7h 40m > Remaining Estimate: 0h > > As described in SIP-8 the current autoscaling Policy implementation has > several limitations that make it difficult to use for very large clusters and > very large collections. SIP-8 also mentions the possible migration path by > providing alternative implementations of the placement strategies that are > less complex but more efficient in these very large environments. > We should review the existing APIs that the current autoscaling engine uses > ({{SolrCloudManager}} , {{AssignStrategy}} , {{Suggester}} and related > interfaces) to see if they provide a sufficient and minimal API for plugging > in alternative autoscaling placement strategies, and if necessary refactor > the existing APIs. > Since these APIs are internal it should be possible to do this without > breaking back-compat. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14613) Provide a clean API for pluggable replica assignment implementations
[ https://issues.apache.org/jira/browse/SOLR-14613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162864#comment-17162864 ] Ilan Ginzburg commented on SOLR-14613: -- BTW, as part of that clean API for plugins, in my opinion we need to provide a couple of functional, useful production quality plugins implementations that perform placement. The initial onboarding cost to this new "autoscaling" framework can't include having to write a functional plugin, or nobody will onboard. > Provide a clean API for pluggable replica assignment implementations > > > Key: SOLR-14613 > URL: https://issues.apache.org/jira/browse/SOLR-14613 > Project: Solr > Issue Type: Improvement > Components: AutoScaling >Reporter: Andrzej Bialecki >Assignee: Ilan Ginzburg >Priority: Major > Time Spent: 7h 40m > Remaining Estimate: 0h > > As described in SIP-8 the current autoscaling Policy implementation has > several limitations that make it difficult to use for very large clusters and > very large collections. SIP-8 also mentions the possible migration path by > providing alternative implementations of the placement strategies that are > less complex but more efficient in these very large environments. > We should review the existing APIs that the current autoscaling engine uses > ({{SolrCloudManager}} , {{AssignStrategy}} , {{Suggester}} and related > interfaces) to see if they provide a sufficient and minimal API for plugging > in alternative autoscaling placement strategies, and if necessary refactor > the existing APIs. > Since these APIs are internal it should be possible to do this without > breaking back-compat. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
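To make the "ship a couple of useful plugins" point concrete, below is a deliberately simplified sketch of what the most basic bundled placement strategy could look like. The PlacementPlugin interface here is hypothetical — designing the real plugin API is exactly what SOLR-14613 is about — so this only shows the shape of the idea, not actual Solr classes.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical placement-plugin shape; not the actual SOLR-14613 API.
interface PlacementPlugin {
    /** Choose a node (by name) for each of the requested replicas. */
    List<String> placeReplicas(List<String> liveNodes, int numReplicas);
}

/** Simplest bundled strategy: spread replicas round-robin over the live nodes. */
class RoundRobinPlacementPlugin implements PlacementPlugin {
    @Override
    public List<String> placeReplicas(List<String> liveNodes, int numReplicas) {
        if (liveNodes.isEmpty()) {
            throw new IllegalArgumentException("No live nodes to place replicas on");
        }
        List<String> assignments = new ArrayList<>(numReplicas);
        for (int i = 0; i < numReplicas; i++) {
            assignments.add(liveNodes.get(i % liveNodes.size()));
        }
        return assignments;
    }
}
{code}

Even a strategy this small would be enough to onboard with; more realistic bundled plugins would also weigh free disk, per-node replica counts and availability zones.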
[jira] [Commented] (LUCENE-9439) Matches API should enumerate hit fields that have no positions (no iterator)
[ https://issues.apache.org/jira/browse/LUCENE-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162863#comment-17162863 ] Dawid Weiss commented on LUCENE-9439: - Ok, I have a better understanding of the problems here. Changing anything in Weight affects lots of code. I did a dirty trick to see if it'd work if I remove the hardcoded constant and it *almost* works... assertions trigger in TestMatchesIterator. Will have to return to it tomorrow though. https://github.com/apache/lucene-solr/compare/master...dweiss:LUCENE-9439?expand=1 > Matches API should enumerate hit fields that have no positions (no iterator) > > > Key: LUCENE-9439 > URL: https://issues.apache.org/jira/browse/LUCENE-9439 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > > I have been fiddling with Matches API and it's great. There is one corner > case that doesn't work for me though -- queries that affect fields without > positions return {{MatchesUtil.MATCH_WITH_NO_TERMS}} but this constant is > problematic as it doesn't carry the field name that caused it (returns null). > The associated fromSubMatches combines all these constants into one (or > swallows them) which is another problem. > I think it would be more consistent if MATCH_WITH_NO_TERMS was replaced with > a true match (carrying field name) returning an empty iterator (or a constant > "empty" iterator NO_TERMS). > I have a very compelling use case: I wrote an "auto-highlighter" that runs on > top of Matches API and automatically picks up query-relevant fields and > snippets. Everything works beautifully except for cases where fields are > searchable but don't have any positions (token-like fields). > I can work on a patch but wanted to reach out first - [~romseygeek]? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
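As a rough model of the proposal (a per-field match that simply has nothing to iterate, instead of the shared MATCH_WITH_NO_TERMS constant with a null field), here is a simplified sketch. It is not Lucene's actual Matches/MatchesIterator API; the FieldMatch class and its methods are invented purely for illustration.

{code:java}
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

// Simplified stand-in; Lucene's real Matches/MatchesIterator interfaces differ.
final class FieldMatch {
    final String field;          // the field that matched
    final List<int[]> positions; // empty for fields indexed without positions

    private FieldMatch(String field, List<int[]> positions) {
        this.field = field;
        this.positions = positions;
    }

    /** A true match on {@code field} that simply has no positions to iterate. */
    static FieldMatch matchWithNoTerms(String field) {
        return new FieldMatch(field, Collections.emptyList());
    }

    Iterator<int[]> iterator() {
        return positions.iterator(); // empty iterator rather than a shared field-less constant
    }
}
{code}

The point is that a consumer such as an auto-highlighter can still learn that a token-like field participated in the hit, even though there is nothing to iterate.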
[jira] [Created] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
Yauheni Putsykovich created LUCENE-9440: --- Summary: FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read Key: LUCENE-9440 URL: https://issues.apache.org/jira/browse/LUCENE-9440 Project: Lucene - Core Issue Type: Improvement Components: core/index Affects Versions: master (9.0) Reporter: Yauheni Putsykovich Reviewing code I noticed that we do call _infos[i].checkConsistency();_ method twice: first time inside the _FiledInfo_'s constructor and a second one just after we've created an object. org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 {code:java} infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, false); infos[i].checkConsistency(); {code} _FileInfo_'s constructor(notice the last line) {code:java} public FieldInfo(String name, int number, boolean storeTermVector, boolean omitNorms, boolean storePayloads, IndexOptions indexOptions, DocValuesType docValues, long dvGen, Map attributes, int pointDimensionCount, int pointIndexDimensionCount, int pointNumBytes, boolean softDeletesField) { this.name = Objects.requireNonNull(name); this.number = number; this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must not be null (field: \"" + name + "\")"); this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must not be null (field: \"" + name + "\")"); if (indexOptions != IndexOptions.NONE) { this.storeTermVector = storeTermVector; this.storePayloads = storePayloads; this.omitNorms = omitNorms; } else { // for non-indexed fields, leave defaults this.storeTermVector = false; this.storePayloads = false; this.omitNorms = false; } this.dvGen = dvGen; this.attributes = Objects.requireNonNull(attributes); this.pointDimensionCount = pointDimensionCount; this.pointIndexDimensionCount = pointIndexDimensionCount; this.pointNumBytes = pointNumBytes; this.softDeletesField = softDeletesField; assert checkConsistency(); } {code} By this patch, I will remove the second call and leave only one in the constructor. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yauheni Putsykovich updated LUCENE-9440: Status: Patch Available (was: Open) > FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: LUCENE-9440 > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice the last line) > {code:java} > public FieldInfo(String name, int number, boolean storeTermVector, boolean > omitNorms, boolean storePayloads, > IndexOptions indexOptions, DocValuesType docValues, long > dvGen, Map attributes, > int pointDimensionCount, int pointIndexDimensionCount, int > pointNumBytes, boolean softDeletesField) { > this.name = Objects.requireNonNull(name); > this.number = number; > this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must > not be null (field: \"" + name + "\")"); > this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must > not be null (field: \"" + name + "\")"); > if (indexOptions != IndexOptions.NONE) { > this.storeTermVector = storeTermVector; > this.storePayloads = storePayloads; > this.omitNorms = omitNorms; > } else { // for non-indexed fields, leave defaults > this.storeTermVector = false; > this.storePayloads = false; > this.omitNorms = false; > } > this.dvGen = dvGen; > this.attributes = Objects.requireNonNull(attributes); > this.pointDimensionCount = pointDimensionCount; > this.pointIndexDimensionCount = pointIndexDimensionCount; > this.pointNumBytes = pointNumBytes; > this.softDeletesField = softDeletesField; > assert checkConsistency(); > } > {code} > > By this patch, I will remove the second call and leave only one in the > constructor. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yauheni Putsykovich updated LUCENE-9440: Attachment: LUCENE-9440 > FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: LUCENE-9440 > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice the last line) > {code:java} > public FieldInfo(String name, int number, boolean storeTermVector, boolean > omitNorms, boolean storePayloads, > IndexOptions indexOptions, DocValuesType docValues, long > dvGen, Map attributes, > int pointDimensionCount, int pointIndexDimensionCount, int > pointNumBytes, boolean softDeletesField) { > this.name = Objects.requireNonNull(name); > this.number = number; > this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must > not be null (field: \"" + name + "\")"); > this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must > not be null (field: \"" + name + "\")"); > if (indexOptions != IndexOptions.NONE) { > this.storeTermVector = storeTermVector; > this.storePayloads = storePayloads; > this.omitNorms = omitNorms; > } else { // for non-indexed fields, leave defaults > this.storeTermVector = false; > this.storePayloads = false; > this.omitNorms = false; > } > this.dvGen = dvGen; > this.attributes = Objects.requireNonNull(attributes); > this.pointDimensionCount = pointDimensionCount; > this.pointIndexDimensionCount = pointIndexDimensionCount; > this.pointNumBytes = pointNumBytes; > this.softDeletesField = softDeletesField; > assert checkConsistency(); > } > {code} > > By this patch, I will remove the second call and leave only one in the > constructor. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yauheni Putsykovich updated LUCENE-9440: Description: Reviewing code I noticed that we do call _infos[i].checkConsistency();_ method twice: first time inside the _FiledInfo_'s constructor and a second one just after we've created an object. org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 {code:java} infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, false); infos[i].checkConsistency(); {code} _FileInfo_'s constructor(notice the last line) {code:java} public FieldInfo(String name, int number, boolean storeTermVector, boolean omitNorms, boolean storePayloads, IndexOptions indexOptions, DocValuesType docValues, long dvGen, Map attributes, int pointDimensionCount, int pointIndexDimensionCount, int pointNumBytes, boolean softDeletesField) { this.name = Objects.requireNonNull(name); this.number = number; this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must not be null (field: \"" + name + "\")"); this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must not be null (field: \"" + name + "\")"); if (indexOptions != IndexOptions.NONE) { this.storeTermVector = storeTermVector; this.storePayloads = storePayloads; this.omitNorms = omitNorms; } else { // for non-indexed fields, leave defaults this.storeTermVector = false; this.storePayloads = false; this.omitNorms = false; } this.dvGen = dvGen; this.attributes = Objects.requireNonNull(attributes); this.pointDimensionCount = pointDimensionCount; this.pointIndexDimensionCount = pointIndexDimensionCount; this.pointNumBytes = pointNumBytes; this.softDeletesField = softDeletesField; assert checkConsistency(); } {code} By this patch, I will remove the second call and leave only one in the constructor. was: Reviewing code I noticed that we do call _infos[i].checkConsistency();_ method twice: first time inside the _FiledInfo_'s constructor and a second one just after we've created an object. 
org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 {code:java} infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, false); infos[i].checkConsistency(); {code} _FileInfo_'s constructor(notice the last line) {code:java} public FieldInfo(String name, int number, boolean storeTermVector, boolean omitNorms, boolean storePayloads, IndexOptions indexOptions, DocValuesType docValues, long dvGen, Map attributes, int pointDimensionCount, int pointIndexDimensionCount, int pointNumBytes, boolean softDeletesField) { this.name = Objects.requireNonNull(name); this.number = number; this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must not be null (field: \"" + name + "\")"); this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must not be null (field: \"" + name + "\")"); if (indexOptions != IndexOptions.NONE) { this.storeTermVector = storeTermVector; this.storePayloads = storePayloads; this.omitNorms = omitNorms; } else { // for non-indexed fields, leave defaults this.storeTermVector = false; this.storePayloads = false; this.omitNorms = false; } this.dvGen = dvGen; this.attributes = Objects.requireNonNull(attributes); this.pointDimensionCount = pointDimensionCount; this.pointIndexDimensionCount = pointIndexDimensionCount; this.pointNumBytes = pointNumBytes; this.softDeletesField = softDeletesField; assert checkConsistency(); } {code} By this patch, I will remove the second call and leave only one in the constructor. > FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: LUCENE-9440 > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); > infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice
[jira] [Updated] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yauheni Putsykovich updated LUCENE-9440: Lucene Fields: New,Patch Available (was: New) > FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: LUCENE-9440 > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); > infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice the last line) > {code:java} > public FieldInfo(String name, int number, boolean storeTermVector, boolean > omitNorms, boolean storePayloads, > IndexOptions indexOptions, DocValuesType docValues, long > dvGen, Map attributes, > int pointDimensionCount, int pointIndexDimensionCount, int > pointNumBytes, boolean softDeletesField) { > this.name = Objects.requireNonNull(name); > this.number = number; > this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must > not be null (field: \"" + name + "\")"); > this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must > not be null (field: \"" + name + "\")"); > if (indexOptions != IndexOptions.NONE) { > this.storeTermVector = storeTermVector; > this.storePayloads = storePayloads; > this.omitNorms = omitNorms; > } else { // for non-indexed fields, leave defaults > this.storeTermVector = false; > this.storePayloads = false; > this.omitNorms = false; > } > this.dvGen = dvGen; > this.attributes = Objects.requireNonNull(attributes); > this.pointDimensionCount = pointDimensionCount; > this.pointIndexDimensionCount = pointIndexDimensionCount; > this.pointNumBytes = pointNumBytes; > this.softDeletesField = softDeletesField; > assert checkConsistency(); > } > {code} > > By this patch, I will remove the second call and leave only one in the > constructor. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (SOLR-14678) [child] DocTransformer doesn't work unless (request) fl includes '*'
Chris M. Hostetter created SOLR-14678: - Summary: [child] DocTransformer doesn't work unless (request) fl inlcudes '*' Key: SOLR-14678 URL: https://issues.apache.org/jira/browse/SOLR-14678 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Chris M. Hostetter Discovered this while working on SOLR-14383, affects at least 8.6, not sure how far back. Whatever the problem is, it seems specific to {{[child]}} transformer, doesn't affect things like {{[value]}} Here's some quick example queries showing the discrepancy in behavior... {noformat} hossman@slate:~$ curl 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=color_s:RED' { "response":{"numFound":2,"start":0,"maxScore":0.31506687,"numFoundExact":true,"docs":[ { "id":"P11!S21", "color_s":"RED", "price_i":42, "_version_":1672933421950697472}, { "id":"P22!S22", "color_s":"RED", "price_i":89, "_version_":1672933422124761088}] }} hossman@slate:~$ curl --globoff 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=color_s:RED&fl=id,price_i,[value+v=777]' { "response":{"numFound":2,"start":0,"maxScore":0.31506687,"numFoundExact":true,"docs":[ { "id":"P11!S21", "price_i":42, "[value]":"777"}, { "id":"P22!S22", "price_i":89, "[value]":"777"}] }} hossman@slate:~$ curl --globoff 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=color_s:RED&fl=id,price_i,[value+v=777],[child]' { "response":{"numFound":2,"start":0,"maxScore":0.31506687,"numFoundExact":true,"docs":[ { "id":"P11!S21", "price_i":42, "[value]":"777"}, { "id":"P22!S22", "price_i":89, "[value]":"777"}] }} hossman@slate:~$ curl --globoff 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=color_s:RED&fl=*,[value+v=777],[child]' { "response":{"numFound":2,"start":0,"maxScore":0.31506687,"numFoundExact":true,"docs":[ { "id":"P11!S21", "color_s":"RED", "price_i":42, "_version_":1672933421950697472, "[value]":"777", "docs":[ { "id":"P11!D41", "name_s":"Red Swingline Brochure", "content_t":"...", "_version_":1672933421950697472, "[value]":"777"}]}, { "id":"P22!S22", "color_s":"RED", "price_i":89, "_version_":1672933422124761088, "[value]":"777", "docs":[ { "id":"P21!D41", "name_s":"Red Mont Blanc Brochure", "content_t":"...", "_version_":1672933422124761088, "[value]":"777"}]}] }} {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14678) [child] DocTransformer doesn't work unless (request) fl includes '*'
[ https://issues.apache.org/jira/browse/SOLR-14678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162939#comment-17162939 ] Chris M. Hostetter commented on SOLR-14678: --- Source data for the sample queries to reproduce... {noformat} bin/solr -e cloud -noprompt curl -sS 'http://localhost:8983/solr/gettingstarted/update' -H 'Content-Type: application/json' --data-binary '[ { "id": "P11!prod", "name_s": "Swingline Stapler", "description_t": "The Cadillac of office staplers ...", "skus": [ { "id": "P11!S21", "color_s": "RED", "price_i": 42, "docs": [ { "id": "P11!D41", "name_s": "Red Swingline Brochure", "content_t": "...", } ], }, { "id": "P11!S31", "color_s": "BLACK", "price_i": 3, } ], "docs": [ { "id": "P11!D51", "name_s": "Quick Reference Guide", "content_t": "How to use your stapler ...", }, { "id": "P11!D61", "name_s": "Warranty Details", "content_t": "... lifetime garuntee ...", } ] }, { "id": "P22!prod", "name_s": "Mont Blanc Fountain Pen", "description_t": "A Premium Writing Instrument ...", "skus": [ { "id": "P22!S22", "color_s": "RED", "price_i": 89, "docs": [ { "id": "P21!D41", "name_s": "Red Mont Blanc Brochure", "content_t": "...", } ], }, { "id": "P22!S32", "color_s": "BLACK", "price_i": 67, } ], "docs": [ { "id": "P22!D52", "name_s": "How To Use A Pen", "content_t": "Start by removing the cap ...", } ] } ]' {noformat} > [child] DocTransformer doesn't work unless (request) fl inlcudes '*' > > > Key: SOLR-14678 > URL: https://issues.apache.org/jira/browse/SOLR-14678 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Chris M. Hostetter >Priority: Major > > Discovered this while working on SOLR-14383, affects at least 8.6, not sure > how far back. > Whatever the problem is, it seems specific to {{[child]}} transformer, > doesn't affect things like {{[value]}} > Here's some quick example queries showing the discrepancy in behavior... 
> {noformat} > hossman@slate:~$ curl > 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=color_s:RED' > { > > "response":{"numFound":2,"start":0,"maxScore":0.31506687,"numFoundExact":true,"docs":[ > { > "id":"P11!S21", > "color_s":"RED", > "price_i":42, > "_version_":1672933421950697472}, > { > "id":"P22!S22", > "color_s":"RED", > "price_i":89, > "_version_":1672933422124761088}] > }} > hossman@slate:~$ curl --globoff > 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=color_s:RED&fl=id,price_i,[value+v=777]' > { > > "response":{"numFound":2,"start":0,"maxScore":0.31506687,"numFoundExact":true,"docs":[ > { > "id":"P11!S21", > "price_i":42, > "[value]":"777"}, > { > "id":"P22!S22", > "price_i":89, > "[value]":"777"}] > }} > hossman@slate:~$ curl --globoff > 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=color_s:RED&fl=id,price_i,[value+v=777],[child]' > { > > "response":{"numFound":2,"start":0,"maxScore":0.31506687,"numFoundExact":true,"docs":[ > { > "id":"P11!S21", > "price_i":42, > "[value]":"777"}, > { > "id":"P22!S22", > "price_i":89, > "[value]":"777"}] > }} > hossman@slate:~$ curl --globoff > 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=color_s:RED&fl=*,[value+v=777],[child]' > { > > "response":{"numFound":2,"start":0,"maxScore":0.31506687,"numFoundExact":true,"docs":[ > { > "id":"P11!S21", > "color_s":"RED", > "price_i":42, > "_version_":1672933421950697472, > "[value]":"777", > "docs":[ > { > "id":"P11!D41", > "name_s":"Red Swingline Brochure", > "content_t":"...", > "_version_":1672933421950697472, > "[value]":"777"}]}, > { > "id":"P22!S22", > "color_s":"RED", > "price_i":89, > "_version_":1672933422124761088, > "[value]":"777", > "docs":[ > { > "id":"P21!D41", > "name_s":"Red Mont Blanc Brochure", > "content_t":"...", > "_
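For anyone reproducing the same discrepancy from SolrJ instead of curl, a minimal sketch is below (after indexing the sample data above). It assumes the standard SolrJ 8.x HttpSolrClient/SolrQuery API and the gettingstarted collection from the example.

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class ChildFlRepro {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            SolrQuery q = new SolrQuery("color_s:RED");
            // Explicit field list without '*': [child] currently returns no child docs.
            q.setFields("id", "price_i", "[child]");
            QueryResponse rsp = client.query("gettingstarted", q);
            for (SolrDocument doc : rsp.getResults()) {
                System.out.println(doc.getFieldValue("id") + " children=" + doc.getChildDocuments());
            }
        }
    }
}
{code}

On 8.6 this prints no child documents; changing setFields to include "*" makes the nested "docs" appear, matching the curl output above.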
[GitHub] [lucene-solr] alessandrobenedetti commented on pull request #1571: SOLR-14560: Interleaving for Learning To Rank
alessandrobenedetti commented on pull request #1571: URL: https://github.com/apache/lucene-solr/pull/1571#issuecomment-662577644 The Pull Request is in a good status now, reviews can start. I will keep adding some tests and responding to review feedbacks. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] andyvuong opened a new pull request #1687: SOLR-14658: Fix colstatus solrj interfaces to return one or all collections
andyvuong opened a new pull request #1687: URL: https://github.com/apache/lucene-solr/pull/1687 # Description This fixes a bug in the solrj interface for the [COLSTATUS collection api](https://lucene.apache.org/solr/guide/8_6/collection-management.html#colstatus) as detailed in the [JIRA](https://issues.apache.org/jira/browse/SOLR-14658) such that using the class CollectionAdminRequest.collectionStatus(collectionName) to issue colstatus commands would return all collections vs just the one specified. This is an issue as the operation sends a request to all collections' shard leaders. # Solution CollectionAdminRequest.collectionStatus(collectionName) will now only return one collection. To support getting the status for all collections, I've added CollectionAdminRequest.collectionStatuses(). The documented API still works exactly as it is detailed. # Tests I've added a new test case to test these new interfaces. I've also provided a snippet of code and steps for manual validation in the JIRA (to repro the bug) but i can be used to test the fix as well: ``` String host = "http://localhost:8983/solr";; HttpSolrClient.Builder builder = new HttpSolrClient.Builder(host); HttpSolrClient solrClient = builder.build(); String collection = "tes"; final NamedList response = //solrClient.request(CollectionAdminRequest.collectionStatus(collection)); solrClient.request(CollectionAdminRequest.collectionStatuses()); response._forEachEntry((k,v) -> { System.out.println(k); }); System.out.println(response); ``` Using just the API and not solrj for verification: - http://localhost:8983/solr/admin/collections?action=COLSTATUS&collection=tester1 - return one collection - http://localhost:8983/solr/admin/collections?action=COLSTATUS - return all collections # Checklist Please review the following and check all that apply: - [x] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [x] I have created a Jira issue and added the issue ID to my pull request title. - [x] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended) - [x] I have developed this patch against the `master` branch. - [x] I have run `ant precommit` and the appropriate test suite. - [x] I have added tests for my changes. - [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
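The verification snippet above picked up some mail-archive escaping (the doubled semicolon, the commented-out line); a cleaned-up, self-contained version is sketched below. collectionStatuses() is the method proposed by this PR, everything else is existing SolrJ API, and the iteration uses NamedList's standard Map.Entry view instead of _forEachEntry. The collection name is just an example.

```java
import java.util.Map;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.common.util.NamedList;

public class ColStatusCheck {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            // Status of a single collection (should no longer return every collection).
            NamedList<Object> one =
                client.request(CollectionAdminRequest.collectionStatus("tester1"));
            // Status of all collections, via the method added in this PR.
            NamedList<Object> all =
                client.request(CollectionAdminRequest.collectionStatuses());

            for (Map.Entry<String, Object> entry : all) {
                System.out.println(entry.getKey());
            }
            System.out.println(one);
        }
    }
}
```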
[GitHub] [lucene-solr] andyvuong commented on a change in pull request #1687: SOLR-14658: Fix colstatus solrj interfaces to return one or all collections
andyvuong commented on a change in pull request #1687: URL: https://github.com/apache/lucene-solr/pull/1687#discussion_r459012987 ## File path: solr/solrj/src/java/org/apache/solr/client/solrj/request/CollectionAdminRequest.java ## @@ -863,13 +863,23 @@ public SolrParams getParams() { } /** - * Return a SolrRequest for low-level detailed status of the collection. + * Return a SolrRequest for low-level detailed status of the specified collection. + * @param collection the collection to get the status of. */ public static ColStatus collectionStatus(String collection) { +checkNotNull(CoreAdminParams.COLLECTION, collection); return new ColStatus(collection); } + + /** + * Return a SolrRequest for low-level detailed status of all collections on the cluster. + */ + public static ColStatus collectionStatuses() { +return new ColStatus(); + } - public static class ColStatus extends AsyncCollectionSpecificAdminRequest { Review comment: AsyncCollectionSpecificAdminRequest enforces a non-null collection name, which doesn't work if the SolrJ colstatus interface is meant to return either a single collection or all collections, as documented in the Collections API doc. If no collection is passed (i.e. it is null/empty), CollectionsHandler treats the request as returning all collections, staying consistent with the documented API. ## File path: solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java ## @@ -517,10 +517,7 @@ private static void addStatusToResponse(NamedList results, RequestStatus ColStatus.RAW_SIZE_DETAILS_PROP, ColStatus.RAW_SIZE_SAMPLING_PERCENT_PROP, ColStatus.SIZE_INFO_PROP); - // make sure we can get the name if there's "name" but not "collection" - if (props.containsKey(CoreAdminParams.NAME) && !props.containsKey(COLLECTION_PROP)) { Review comment: Removed this because it isn't documented in the API and CoreAdminParams.NAME isn't copied anyway. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163025#comment-17163025 ] Robert Muir commented on LUCENE-9440: - I don't think its a simple case of duplicate call. The one in the constructor is only called with "assert", so only when assertions are enabled. It happens for any fieldinfo creation. I don't remember the reason why its only called under assert, presumably to avoid some performance hit when fieldinfos are created by indexwriter. On the other hand the second call (after deserializing bytes from the index) is called even in "production". So if we remove it, we remove the sanity checks alltogether for production code. > FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: LUCENE-9440 > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); > infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice the last line) > {code:java} > public FieldInfo(String name, int number, boolean storeTermVector, boolean > omitNorms, boolean storePayloads, > IndexOptions indexOptions, DocValuesType docValues, long > dvGen, Map attributes, > int pointDimensionCount, int pointIndexDimensionCount, int > pointNumBytes, boolean softDeletesField) { > this.name = Objects.requireNonNull(name); > this.number = number; > this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must > not be null (field: \"" + name + "\")"); > this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must > not be null (field: \"" + name + "\")"); > if (indexOptions != IndexOptions.NONE) { > this.storeTermVector = storeTermVector; > this.storePayloads = storePayloads; > this.omitNorms = omitNorms; > } else { // for non-indexed fields, leave defaults > this.storeTermVector = false; > this.storePayloads = false; > this.omitNorms = false; > } > this.dvGen = dvGen; > this.attributes = Objects.requireNonNull(attributes); > this.pointDimensionCount = pointDimensionCount; > this.pointIndexDimensionCount = pointIndexDimensionCount; > this.pointNumBytes = pointNumBytes; > this.softDeletesField = softDeletesField; > assert checkConsistency(); > } > {code} > > By this patch, I will remove the second call and leave only one in the > constructor. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163029#comment-17163029 ] Yauheni Putsykovich commented on LUCENE-9440: - It makes sense to me, thanks for your reply. Anyway, it seems a bit odd to me as I have to remember to additionally _checkConsistency_ every time when I create _FieldInfo_, because assertion could be turned off, isn't it > FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: LUCENE-9440 > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); > infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice the last line) > {code:java} > public FieldInfo(String name, int number, boolean storeTermVector, boolean > omitNorms, boolean storePayloads, > IndexOptions indexOptions, DocValuesType docValues, long > dvGen, Map attributes, > int pointDimensionCount, int pointIndexDimensionCount, int > pointNumBytes, boolean softDeletesField) { > this.name = Objects.requireNonNull(name); > this.number = number; > this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must > not be null (field: \"" + name + "\")"); > this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must > not be null (field: \"" + name + "\")"); > if (indexOptions != IndexOptions.NONE) { > this.storeTermVector = storeTermVector; > this.storePayloads = storePayloads; > this.omitNorms = omitNorms; > } else { // for non-indexed fields, leave defaults > this.storeTermVector = false; > this.storePayloads = false; > this.omitNorms = false; > } > this.dvGen = dvGen; > this.attributes = Objects.requireNonNull(attributes); > this.pointDimensionCount = pointDimensionCount; > this.pointIndexDimensionCount = pointIndexDimensionCount; > this.pointNumBytes = pointNumBytes; > this.softDeletesField = softDeletesField; > assert checkConsistency(); > } > {code} > > By this patch, I will remove the second call and leave only one in the > constructor. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
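A tiny self-contained illustration of the point above: a check invoked under assert runs only when the JVM is started with -ea, so relying solely on the constructor's assert checkConsistency() would silently drop the validation in a default production JVM, which is why the format keeps the explicit call after deserialization. The Info class below is a simplified stand-in, not the real FieldInfo.

{code:java}
// A simplified stand-in for the FieldInfo pattern: checkConsistency() throws on
// bad state and returns true, so it can be used both under `assert` and directly.
public class AssertDemo {

    static final class Info {
        final int count;

        Info(int count) {
            this.count = count;
            // Only evaluated when the JVM runs with -ea; a no-op otherwise.
            assert checkConsistency();
        }

        boolean checkConsistency() {
            if (count < 0) {
                throw new IllegalStateException("count must be >= 0 but was " + count);
            }
            return true;
        }
    }

    public static void main(String[] args) {
        // With -ea: the constructor itself fails.
        // Without -ea: construction "succeeds" and only the explicit call below catches it.
        Info info = new Info(-1);
        info.checkConsistency();
    }
}
{code}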
[jira] [Updated] (SOLR-9492) Request status API returns a completed status even if the collection API call failed
[ https://issues.apache.org/jira/browse/SOLR-9492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett updated SOLR-9492: Fix Version/s: (was: 6.7) (was: 7.0) > Request status API returns a completed status even if the collection API call > failed > > > Key: SOLR-9492 > URL: https://issues.apache.org/jira/browse/SOLR-9492 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 5.5.2, 6.2 >Reporter: Shalin Shekhar Mangar >Priority: Major > Labels: difficulty-medium, impact-high > > A failed split shard response is: > {code} > {success={127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=2}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:43245_hfnp%2Fbq={responseHeader={status=0,QTime=0}},127.0.0.1:50948_hfnp%2Fbq={responseHeader={status=0,QTime=0}}},c32001ed-3bca-4ae0-baae-25a3c99e35e65883644576126044={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId: > c32001ed-3bca-4ae0-baae-25a3c99e35e65883644576126044 webapp=null > path=/admin/cores > params={async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883644576126044&qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_0_replica1&action=CREATE&collection=collection1&shard=shard1_0&wt=javabin&version=2} > status=0 > QTime=2},c32001ed-3bca-4ae0-baae-25a3c99e35e65883647597130004={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId: > c32001ed-3bca-4ae0-baae-25a3c99e35e65883647597130004 webapp=null > path=/admin/cores > params={async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883647597130004&qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_1_replica1&action=CREATE&collection=collection1&shard=shard1_1&wt=javabin&version=2} > status=0 > QTime=0},c32001ed-3bca-4ae0-baae-25a3c99e35e65883649607943904={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId: > c32001ed-3bca-4ae0-baae-25a3c99e35e65883649607943904 webapp=null > path=/admin/cores > params={nodeName=127.0.0.1:43245_hfnp%252Fbq&core=collection1_shard1_1_replica1&async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883649607943904&qt=/admin/cores&coreNodeName=core_node6&action=PREPRECOVERY&checkLive=true&state=active&onlyIfLeader=true&wt=javabin&version=2} > status=0 > QTime=0},c32001ed-3bca-4ae0-baae-25a3c99e35e65883649612565003={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId: > c32001ed-3bca-4ae0-baae-25a3c99e35e65883649612565003 webapp=null > path=/admin/cores > params={core=collection1&async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883649612565003&qt=/admin/cores&action=SPLIT&targetCore=collection1_shard1_0_replica1&targetCore=collection1_shard1_1_replica1&wt=javabin&version=2} > status=0 > QTime=0},c32001ed-3bca-4ae0-baae-25a3c99e35e65883650618358632={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId: > c32001ed-3bca-4ae0-baae-25a3c99e35e65883650618358632 webapp=null > path=/admin/cores > params={async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883650618358632&qt=/admin/cores&name=collection1_shard1_1_replica1&action=REQUESTAPPLYUPDATES&wt=javabin&version=2} > status=0 > QTime=0},c32001ed-3bca-4ae0-baae-25a3c99e35e65883650636428900={responseHeader={status=0,QTime=0},STATUS=completed,Response=TaskId: > c32001ed-3bca-4ae0-baae-25a3c99e35e65883650636428900 webapp=null > 
path=/admin/cores > params={async=c32001ed-3bca-4ae0-baae-25a3c99e35e65883650636428900&qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_0_replica0&action=CREATE&collection=collection1&shard=shard1_0&wt=javabin&version=2} > status=0 > QTime=0},failure={127.0.0.1:43245_hfnp%2Fbq=org.apache.solr.client.solrj.SolrServerException:IOException > occured when talking to server at: http://127.0.0.1:43245/hfnp/bq},Operation > splitshard caused exception:=org.apache.solr.common.SolrException: ADDREPLICA > failed to create replica,exception={msg=ADDREPLICA failed to create > replica,rspCode=500}} > {code} > Note the "failure" bit. The split shard couldn't add a replica. But when you > use the request status API, it returns a "completed" status. > Apparently, completed doesn't mean it was successful! In any case, it is very > misleading and makes it very hard to properly use the Collection APIs. We > need more investigation to figure out what other Collection APIs might be > affected. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe,
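A hedged SolrJ sketch of the defensive pattern this issue implies: after an async collection operation, don't trust the COMPLETED state alone; also inspect the stored response for a failure section like the one quoted above. The requestStatus helpers are standard SolrJ; the string check at the end is deliberately crude and only for illustration, since the nesting of the "failure" key varies by operation.

{code:java}
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.RequestStatusState;
import org.apache.solr.common.util.NamedList;

public class AsyncStatusCheck {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            String asyncId = "split-1"; // the id previously passed as async= on e.g. SPLITSHARD

            RequestStatusState state =
                CollectionAdminRequest.requestStatus(asyncId).process(client).getRequestStatus();
            System.out.println("reported state: " + state);

            // COMPLETED does not (yet) guarantee success, so also look at the stored
            // response for a "failure" section (crude string check, illustration only).
            NamedList<Object> stored =
                client.request(CollectionAdminRequest.requestStatus(asyncId));
            if (stored.toString().contains("failure")) {
                System.out.println("operation reported per-node failures despite state " + state);
            }
        }
    }
}
{code}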
[jira] [Commented] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163088#comment-17163088 ] Lucene/Solr QA commented on LUCENE-9440: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 3s{color} | {color:blue} The patch file was not named according to LUCENE's naming conventions. Please see https://wiki.apache.org/lucene-java/HowToContribute#Contributing_your_work for instructions. {color} | || || || || {color:brown} Prechecks {color} || | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 5s{color} | {color:green} core in the patch passed. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 5m 25s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | LUCENE-9440 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13008180/LUCENE-9440 | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene1-us-west 4.15.0-108-generic #109-Ubuntu SMP Fri Jun 19 11:33:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / 03a03b34a46 | | ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 | | Default Java | LTS | | Test Results | https://builds.apache.org/job/PreCommit-LUCENE-Build/287/testReport/ | | modules | C: lucene lucene/core U: lucene | | Console output | https://builds.apache.org/job/PreCommit-LUCENE-Build/287/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. 
> FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: LUCENE-9440 > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); > infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice the last line) > {code:java} > public FieldInfo(String name, int number, boolean storeTermVector, boolean > omitNorms, boolean storePayloads, > IndexOptions indexOptions, DocValuesType docValues, long > dvGen, Map attributes, > int pointDimensionCount, int pointIndexDimensionCount, int > pointNumBytes, boolean softDeletesField) { > this.name = Objects.requireNonNull(name); > this.number = number; > this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must > not be null (field: \"" + name + "\")"); > this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must > not be null (field: \"" + name + "\")"); > if (indexOptions != In
[jira] [Updated] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yauheni Putsykovich updated LUCENE-9440: Attachment: SOLR-9440 > FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: SOLR-9440 > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); > infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice the last line) > {code:java} > public FieldInfo(String name, int number, boolean storeTermVector, boolean > omitNorms, boolean storePayloads, > IndexOptions indexOptions, DocValuesType docValues, long > dvGen, Map attributes, > int pointDimensionCount, int pointIndexDimensionCount, int > pointNumBytes, boolean softDeletesField) { > this.name = Objects.requireNonNull(name); > this.number = number; > this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must > not be null (field: \"" + name + "\")"); > this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must > not be null (field: \"" + name + "\")"); > if (indexOptions != IndexOptions.NONE) { > this.storeTermVector = storeTermVector; > this.storePayloads = storePayloads; > this.omitNorms = omitNorms; > } else { // for non-indexed fields, leave defaults > this.storeTermVector = false; > this.storePayloads = false; > this.omitNorms = false; > } > this.dvGen = dvGen; > this.attributes = Objects.requireNonNull(attributes); > this.pointDimensionCount = pointDimensionCount; > this.pointIndexDimensionCount = pointIndexDimensionCount; > this.pointNumBytes = pointNumBytes; > this.softDeletesField = softDeletesField; > assert checkConsistency(); > } > {code} > > By this patch, I will remove the second call and leave only one in the > constructor. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yauheni Putsykovich updated LUCENE-9440: Attachment: (was: LUCENE-9440) > FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: SOLR-9440 > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); > infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice the last line) > {code:java} > public FieldInfo(String name, int number, boolean storeTermVector, boolean > omitNorms, boolean storePayloads, > IndexOptions indexOptions, DocValuesType docValues, long > dvGen, Map attributes, > int pointDimensionCount, int pointIndexDimensionCount, int > pointNumBytes, boolean softDeletesField) { > this.name = Objects.requireNonNull(name); > this.number = number; > this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must > not be null (field: \"" + name + "\")"); > this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must > not be null (field: \"" + name + "\")"); > if (indexOptions != IndexOptions.NONE) { > this.storeTermVector = storeTermVector; > this.storePayloads = storePayloads; > this.omitNorms = omitNorms; > } else { // for non-indexed fields, leave defaults > this.storeTermVector = false; > this.storePayloads = false; > this.omitNorms = false; > } > this.dvGen = dvGen; > this.attributes = Objects.requireNonNull(attributes); > this.pointDimensionCount = pointDimensionCount; > this.pointIndexDimensionCount = pointIndexDimensionCount; > this.pointNumBytes = pointNumBytes; > this.softDeletesField = softDeletesField; > assert checkConsistency(); > } > {code} > > By this patch, I will remove the second call and leave only one in the > constructor. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163103#comment-17163103 ] Yauheni Putsykovich commented on LUCENE-9440: - I've checked the code and found a lot of places where we create _FieldInfo_ and do not _checkConsistency_ which is quite weird > FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: SOLR-9440 > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); > infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice the last line) > {code:java} > public FieldInfo(String name, int number, boolean storeTermVector, boolean > omitNorms, boolean storePayloads, > IndexOptions indexOptions, DocValuesType docValues, long > dvGen, Map attributes, > int pointDimensionCount, int pointIndexDimensionCount, int > pointNumBytes, boolean softDeletesField) { > this.name = Objects.requireNonNull(name); > this.number = number; > this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must > not be null (field: \"" + name + "\")"); > this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must > not be null (field: \"" + name + "\")"); > if (indexOptions != IndexOptions.NONE) { > this.storeTermVector = storeTermVector; > this.storePayloads = storePayloads; > this.omitNorms = omitNorms; > } else { // for non-indexed fields, leave defaults > this.storeTermVector = false; > this.storePayloads = false; > this.omitNorms = false; > } > this.dvGen = dvGen; > this.attributes = Objects.requireNonNull(attributes); > this.pointDimensionCount = pointDimensionCount; > this.pointIndexDimensionCount = pointIndexDimensionCount; > this.pointNumBytes = pointNumBytes; > this.softDeletesField = softDeletesField; > assert checkConsistency(); > } > {code} > > By this patch, I will remove the second call and leave only one in the > constructor. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14560) Learning To Rank Interleaving
[ https://issues.apache.org/jira/browse/SOLR-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163109#comment-17163109 ] Alessandro Benedetti commented on SOLR-14560: -- The review is now open. The pull request is in a much better state and it is ready to be reviewed. I will add a few more tests and respond to review feedback. Then, once we have an acceptable patch I will proceed with the commit. > Learning To Rank Interleaving > - > > Key: SOLR-14560 > URL: https://issues.apache.org/jira/browse/SOLR-14560 > Project: Solr > Issue Type: New Feature > Components: contrib - LTR >Affects Versions: 8.5.2 >Reporter: Alessandro Benedetti >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Interleaving is an approach to Online Search Quality evaluation that can be > very useful for Learning To Rank models: > [https://sease.io/2020/05/online-testing-for-learning-to-rank-interleaving.html|https://sease.io/2020/05/online-testing-for-learning-to-rank-interleaving.html] > Scope of this issue is to introduce the ability to the LTR query parser of > accepting multiple models (2 to start with). > If one model is passed, normal reranking happens. > If two models are passed, reranking happens for both models and the final > reranked list is the interleaved sequence of results coming from the two > models lists. > As a first step it is going to be implemented through: > TeamDraft Interleaving with two models in input. > In the future, we can expand the functionality adding the interleaving > algorithm as a parameter. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-14560) Learning To Rank Interleaving
[ https://issues.apache.org/jira/browse/SOLR-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163109#comment-17163109 ] Alessandro Benedetti edited comment on SOLR-14560 at 7/22/20, 10:46 PM: The review is now open. The pull request is in a much better state and it is ready to be reviewed. I will add a few more tests and respond to review feedback. Then, once we have an acceptable patch, I will proceed with the commit. *Known Limitations*: no support in sharded mode yet was (Author: alessandro.benedetti): The review is now open. The pull request is in a much better state and it is ready to be reviewed. I will add a few more tests and respond to review feedback. Then, once we have an acceptable patch, I will proceed with the commit. > Learning To Rank Interleaving > - > > Key: SOLR-14560 > URL: https://issues.apache.org/jira/browse/SOLR-14560 > Project: Solr > Issue Type: New Feature > Components: contrib - LTR >Affects Versions: 8.5.2 >Reporter: Alessandro Benedetti >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Interleaving is an approach to Online Search Quality evaluation that can be > very useful for Learning To Rank models: > [https://sease.io/2020/05/online-testing-for-learning-to-rank-interleaving.html|https://sease.io/2020/05/online-testing-for-learning-to-rank-interleaving.html] > The scope of this issue is to give the LTR query parser the ability to > accept multiple models (two to start with). > If one model is passed, normal reranking happens. > If two models are passed, reranking happens for both models and the final > reranked list is the interleaved sequence of results coming from the two > models' lists. > As a first step it is going to be implemented through > TeamDraft Interleaving with two models as input. > In the future, we can expand the functionality by adding the interleaving > algorithm as a parameter. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
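To make the interleaving idea concrete, here is a minimal, self-contained sketch of Team-Draft interleaving over two ranked document-id lists. It is purely illustrative and is not the code from the SOLR-14560 pull request; class and variable names are invented for the example.
{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

final class TeamDraftSketch {
  /** Interleaves the reranked lists of two models into a single result list. */
  static List<String> interleave(List<String> modelA, List<String> modelB, Random random) {
    List<String> interleaved = new ArrayList<>();
    Set<String> seen = new HashSet<>();
    int picksA = 0, picksB = 0;   // how many documents each "team" has contributed
    int ia = 0, ib = 0;           // cursors into the two rankings
    while (true) {
      // Skip documents that are already in the interleaved result.
      while (ia < modelA.size() && seen.contains(modelA.get(ia))) ia++;
      while (ib < modelB.size() && seen.contains(modelB.get(ib))) ib++;
      boolean aHas = ia < modelA.size();
      boolean bHas = ib < modelB.size();
      if (!aHas && !bHas) break;
      // The team with fewer picks goes next; ties are broken by a coin flip.
      boolean pickA = aHas && (!bHas || picksA < picksB
          || (picksA == picksB && random.nextBoolean()));
      if (pickA) {
        seen.add(modelA.get(ia));
        interleaved.add(modelA.get(ia++));
        picksA++;
      } else {
        seen.add(modelB.get(ib));
        interleaved.add(modelB.get(ib++));
        picksB++;
      }
    }
    return interleaved;
  }
}
{code}
In an online test, the per-document team assignment (not shown here) is what lets clicks be credited to one model or the other.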
[GitHub] [lucene-solr] rishisankar opened a new pull request #1688: SOLR-14675: CloudHttp2SolrClient async request method
rishisankar opened a new pull request #1688: URL: https://github.com/apache/lucene-solr/pull/1688 # Description As of now, the [Http2SolrClient](https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Http2SolrClient.java) and the [LBHttp2SolrClient](https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttp2SolrClient.java) both have support for making async requests, but there isn't a way to use these methods from a [CloudHttp2SolrClient](https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudHttp2SolrClient.java). My change introduces an async request method to the CloudHttp2SolrClient which internally calls the LBHttp2SolrClient's async request method, and returns a Cancellable. # Solution I created a new method `CloudHttp2SolrClient#asyncRequest(SolrRequest request, String collection, AsyncListener asyncListener)` which makes an async request. In BaseCloudSolrClient, I modified `requestWithRetryOnStaleState` and `sendRequest` to take in an AsyncListener parameter, and they check if the listener is null or not to determine whether or not to make an async request. To return a Cancellable in the asyncRequest without changing the return type of the BaseCloudSolrClient request methods, the `BaseCloudSolrClient#makeRequest` method returns a NamedList containing the Cancellable object to be returned by the CloudHttp2SolrClient's asyncRequest method. # Tests I have added a test to [CloudHttp2SolrClientTest.java](https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/test/org/apache/solr/client/solrj/impl/CloudHttp2SolrClientTest.java) that confirms that successful async requests result in an `onSuccess()` callback method being run. This test only serves to confirm that async requests properly branch to the `LBHttp2SolrClient.asyncReq()` method; it doesn't check the correctness of the response (i.e. it assumes that `LBHttp2SolrClient.asyncReq()` works correctly). # Checklist Please review the following and check all that apply: - [x] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [x] I have created a Jira issue and added the issue ID to my pull request title. - [x] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended) - [x] I have developed this patch against the `master` branch. - [x] I have run `ant precommit` and the appropriate test suite. - [x] I have added tests for my changes. - [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
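For reviewers skimming the PR, a rough usage sketch of the new entry point described above. It is based only on the signature quoted in the description; the exact generic types, the packages of `AsyncListener`/`Cancellable` (taken from the existing SolrJ async client code), and the listener callbacks shown here are assumptions, not the final committed API.
```java
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudHttp2SolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.util.NamedList;
// AsyncListener and Cancellable are assumed to be the same types already used by
// Http2SolrClient/LBHttp2SolrClient for their async request methods.

public class AsyncQueryExample {
  public static void main(String[] args) throws Exception {
    try (CloudHttp2SolrClient client = new CloudHttp2SolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {

      QueryRequest req = new QueryRequest(new SolrQuery("*:*"));

      // Fire the request without blocking; callbacks run when the response arrives.
      Cancellable call = client.asyncRequest(req, "myCollection",
          new AsyncListener<NamedList<Object>>() {
            @Override
            public void onSuccess(NamedList<Object> response) {
              System.out.println("async response: " + response);
            }

            @Override
            public void onFailure(Throwable t) {
              t.printStackTrace();
            }
          });

      // The returned handle can abort the in-flight request if needed:
      // call.cancel();
      Thread.sleep(2000); // crude wait so the callback has a chance to run in this demo
    }
  }
}
```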
[GitHub] [lucene-solr] mocobeta commented on a change in pull request #1685: LUCENE-9433: Remove Ant support from trunk
mocobeta commented on a change in pull request #1685: URL: https://github.com/apache/lucene-solr/pull/1685#discussion_r459201621 ## File path: lucene/BUILD.md ## @@ -2,78 +2,72 @@ ## Basic steps: - 0. Install OpenJDK 11 (or greater), Ant 1.8.2+, Ivy 2.2.0 - 1. Download Lucene from Apache and unpack it - 2. Connect to the top-level of your Lucene installation + 0. Install OpenJDK 11 (or greater), Gradle 6.4.1 Review comment: @ErickErickson I opened a PR on your repository to refine lucene/BUILD.md: https://github.com/ErickErickson/lucene-solr/pull/1 Could you check it? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (LUCENE-9441) Fix ant-specific relative links in Javadocs in accordance with Gradle build
Tomoko Uchida created LUCENE-9441: - Summary: Fix ant-specific relative links in Javadocs in accordance with Gradle build Key: LUCENE-9441 URL: https://issues.apache.org/jira/browse/LUCENE-9441 Project: Lucene - Core Issue Type: Task Components: general/javadocs Reporter: Tomoko Uchida Assignee: Tomoko Uchida -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-9441) Fix ant-specific relative links in Javadocs in accordance with Gradle build
[ https://issues.apache.org/jira/browse/LUCENE-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomoko Uchida updated LUCENE-9441: -- Description: Some relative links in javadocs which contain ant module names need to be fixed, since they will be broken with the gradle build. > Fix ant-specific relative links in Javadocs in accordance with Gradle build > --- > > Key: LUCENE-9441 > URL: https://issues.apache.org/jira/browse/LUCENE-9441 > Project: Lucene - Core > Issue Type: Task > Components: general/javadocs >Reporter: Tomoko Uchida >Assignee: Tomoko Uchida >Priority: Minor > > Some relative links in javadocs which contain ant module names need to be > fixed, since they will be broken with the gradle build. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163221#comment-17163221 ] Lucene/Solr QA commented on LUCENE-9440: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 5s{color} | {color:blue} The patch file was not named according to LUCENE's naming conventions. Please see https://wiki.apache.org/lucene-java/HowToContribute#Contributing_your_work for instructions. {color} | || || || || {color:brown} Prechecks {color} || | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 5s{color} | {color:green} core in the patch passed. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 5m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | LUCENE-9440 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13008202/SOLR-9440 | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene2-us-west.apache.org 4.4.0-170-generic #199-Ubuntu SMP Thu Nov 14 01:45:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / 03a03b3 | | ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 | | Default Java | LTS | | Test Results | https://builds.apache.org/job/PreCommit-LUCENE-Build/288/testReport/ | | modules | C: lucene lucene/core U: lucene | | Console output | https://builds.apache.org/job/PreCommit-LUCENE-Build/288/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. 
> FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: SOLR-9440 > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); > infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice the last line) > {code:java} > public FieldInfo(String name, int number, boolean storeTermVector, boolean > omitNorms, boolean storePayloads, > IndexOptions indexOptions, DocValuesType docValues, long > dvGen, Map attributes, > int pointDimensionCount, int pointIndexDimensionCount, int > pointNumBytes, boolean softDeletesField) { > this.name = Objects.requireNonNull(name); > this.number = number; > this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must > not be null (field: \"" + name + "\")"); > this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must > not be null (field: \"" + name + "\")"); > if (indexOptions != In
[jira] [Commented] (SOLR-14383) Fix indexing-nested-documents.adoc XML/JSON examples to be accurate, consistent, and clear
[ https://issues.apache.org/jira/browse/SOLR-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163222#comment-17163222 ] David Smiley commented on SOLR-14383: - Hoss, I really appreciate you working on making the documentation here better! I took a look at the patch, applied it in IntelliJ and looked at the diff. Overall I have no major concerns. You have some nocommits that I should respond to and maybe some other things. I'm technically on vacation this week so maybe next week I can give more input. > Fix indexing-nested-documents.adoc XML/JSON examples to be accurate, > consistent, and clear > -- > > Key: SOLR-14383 > URL: https://issues.apache.org/jira/browse/SOLR-14383 > Project: Solr > Issue Type: Sub-task >Reporter: Chris M. Hostetter >Assignee: Chris M. Hostetter >Priority: Major > Attachments: SOLR-14383.patch > > > As reported on solr-user@lucene by Peter Pimley... > {noformat} > The page "Indexing Nested Documents" has an XML example showing two > different ways of adding nested documents: > https://lucene.apache.org/solr/guide/8_5/indexing-nested-documents.html#xml-examples > The text says: > "It illustrates two styles of adding child documents: the first is > associated via a field "comment" (preferred), and the second is done > in the classic way now referred to as an "anonymous" or "unlabelled" > child document." > However in the XML directly below there is no field named "comment". > There is one named "content" and another named "comments" (plural), > but no field named "comment". In fact, looking at the Json example > immediately below, I wonder if the XML element currently named > "content" should be named "comments", and what is currently marked > "comments" should be "content"? > Secondly, in the Json example it says: > "The labelled relationship here is one child document but could have > been wrapped in array brackets." > However in the actual Json, the parent document (ID=1) with a labelled > relationship has two child documents (IDs 2 and 3), and they are > already in array brackets. > {noformat} > * The 2 examples (XML and JSON) should be updated to contain *structurally* > identical content (ie: same number of documents, with same field values, and > same hierarchical relationships) to focus on demonstrating the syntax > differences (ie: things like the special {{\_childDocuments\_}} key in json) > * The paragraphs describing the examples should be updated to: > ** refer to the correct field names -- since both "comments" and "contents" > fields exist in the examples, it's impossible for novice users to even > understand where the "typo" might be in the descriptions (I'm pretty > knowledgeable about Solr and even I'm second-guessing myself as to what the > intent of these paragraphs is) > ** refer to documents by {{"id"}} value, not just descriptors like "first" > and "second" > * it might be worth considering rewriting this section to use "callouts": > https://asciidoctor.org/docs/user-manual/#callouts -- similar to how we use > them in other sections like this: > https://lucene.apache.org/solr/guide/8_5/uploading-data-with-index-handlers.html#sending-json-update-commands -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
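Since the issue contrasts the "labelled" (preferred) and "anonymous" styles of nested child documents, a small SolrJ-flavoured sketch of the two styles may help readers who are not looking at the adoc file. The field names and values below are illustrative, loosely modelled on the ref-guide example, and assume a schema that accepts them; this is not the content of the patch itself.
{code:java}
import java.util.Arrays;
import org.apache.solr.common.SolrInputDocument;

public class NestedDocStyles {
  public static void main(String[] args) {
    // Labelled relationship (preferred): children live under a named field ("comments").
    SolrInputDocument parent = new SolrInputDocument();
    parent.setField("id", "1");
    parent.setField("title_t", "Solr adds block join support");

    SolrInputDocument comment1 = new SolrInputDocument();
    comment1.setField("id", "2");
    comment1.setField("comment_t", "SolrCloud supports it too!");

    SolrInputDocument comment2 = new SolrInputDocument();
    comment2.setField("id", "3");
    comment2.setField("comment_t", "New filter syntax");

    parent.setField("comments", Arrays.asList(comment1, comment2));

    // Anonymous ("unlabelled") child: attached without a field name.
    SolrInputDocument parent2 = new SolrInputDocument();
    parent2.setField("id", "4");
    SolrInputDocument anonChild = new SolrInputDocument();
    anonChild.setField("id", "5");
    parent2.addChildDocument(anonChild);
  }
}
{code}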
[GitHub] [lucene-solr] noblepaul commented on pull request #1684: SOLR-14613: strongly typed initial proposal for plugin interface
noblepaul commented on pull request #1684: URL: https://github.com/apache/lucene-solr/pull/1684#issuecomment-662837142 I don't believe we should have a set of interfaces that duplicate existing classes just for this functionality. This is a common mistake that we all make. When we design a feature, we think it is the most important thing. We end up over-designing and over-engineering things. This feature will remain a tiny part of Solr. Anyone who wishes to implement this should not be required to learn a lot before even getting started. Let's try to have a minimal set of interfaces so that people who try to implement them do not face a huge learning curve. Let's try to understand the requirements: * Solr wants a set of positions at which to place a few replicas * The implementation wants to know the current state of the cluster so that it can make those decisions 24 interfaces to do this is definitely over-engineering This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
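To make the two requirements listed in the comment above concrete, here is one way a deliberately minimal contract could look. Every name in this sketch is hypothetical; it is neither the API proposed in PR #1684 nor an existing Solr interface, just an illustration of the "minimal set of interfaces" argument.
```java
import java.util.List;

// Hypothetical read-only view of the cluster the plugin is allowed to inspect.
interface ClusterView {
  List<String> liveNodes();
  // ... accessors for collections, replicas, and node metrics would go here ...
}

// Hypothetical description of what Solr is asking for.
interface PlacementRequest {
  String collection();
  List<String> shardNames();
  int replicasPerShard();
}

// Hypothetical answer: which node gets which replica.
final class NodePlacement {
  final String node;
  final String shard;

  NodePlacement(String node, String shard) {
    this.node = node;
    this.shard = shard;
  }
}

// The single entry point: Solr asks for positions, the implementation inspects
// the cluster view and answers.
interface PlacementPlugin {
  List<NodePlacement> computePlacements(ClusterView cluster, PlacementRequest request)
      throws Exception;
}
```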
[jira] [Updated] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yauheni Putsykovich updated LUCENE-9440: Attachment: LUCENE-9440.patch > FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: LUCENE-9440.patch > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); > infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice the last line) > {code:java} > public FieldInfo(String name, int number, boolean storeTermVector, boolean > omitNorms, boolean storePayloads, > IndexOptions indexOptions, DocValuesType docValues, long > dvGen, Map attributes, > int pointDimensionCount, int pointIndexDimensionCount, int > pointNumBytes, boolean softDeletesField) { > this.name = Objects.requireNonNull(name); > this.number = number; > this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must > not be null (field: \"" + name + "\")"); > this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must > not be null (field: \"" + name + "\")"); > if (indexOptions != IndexOptions.NONE) { > this.storeTermVector = storeTermVector; > this.storePayloads = storePayloads; > this.omitNorms = omitNorms; > } else { // for non-indexed fields, leave defaults > this.storeTermVector = false; > this.storePayloads = false; > this.omitNorms = false; > } > this.dvGen = dvGen; > this.attributes = Objects.requireNonNull(attributes); > this.pointDimensionCount = pointDimensionCount; > this.pointIndexDimensionCount = pointIndexDimensionCount; > this.pointNumBytes = pointNumBytes; > this.softDeletesField = softDeletesField; > assert checkConsistency(); > } > {code} > > By this patch, I will remove the second call and leave only one in the > constructor. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-9440) FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read
[ https://issues.apache.org/jira/browse/LUCENE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yauheni Putsykovich updated LUCENE-9440: Attachment: (was: SOLR-9440) > FieldInfo#checkConsistency called twice from Lucene50FieldInfosFormat#read > -- > > Key: LUCENE-9440 > URL: https://issues.apache.org/jira/browse/LUCENE-9440 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Affects Versions: master (9.0) >Reporter: Yauheni Putsykovich >Priority: Trivial > Attachments: LUCENE-9440.patch > > > Reviewing code I noticed that we do call _infos[i].checkConsistency();_ > method twice: first time inside the _FiledInfo_'s constructor and a second > one just after we've created an object. > org/apache/lucene/codecs/lucene50/Lucene50FieldInfosFormat.java:150 > {code:java} > infos[i] = new FieldInfo(name, fieldNumber, storeTermVector, omitNorms, > storePayloads, indexOptions, docValuesType, dvGen, attributes, 0, 0, 0, > false); > infos[i].checkConsistency(); > {code} > _FileInfo_'s constructor(notice the last line) > {code:java} > public FieldInfo(String name, int number, boolean storeTermVector, boolean > omitNorms, boolean storePayloads, > IndexOptions indexOptions, DocValuesType docValues, long > dvGen, Map attributes, > int pointDimensionCount, int pointIndexDimensionCount, int > pointNumBytes, boolean softDeletesField) { > this.name = Objects.requireNonNull(name); > this.number = number; > this.docValuesType = Objects.requireNonNull(docValues, "DocValuesType must > not be null (field: \"" + name + "\")"); > this.indexOptions = Objects.requireNonNull(indexOptions, "IndexOptions must > not be null (field: \"" + name + "\")"); > if (indexOptions != IndexOptions.NONE) { > this.storeTermVector = storeTermVector; > this.storePayloads = storePayloads; > this.omitNorms = omitNorms; > } else { // for non-indexed fields, leave defaults > this.storeTermVector = false; > this.storePayloads = false; > this.omitNorms = false; > } > this.dvGen = dvGen; > this.attributes = Objects.requireNonNull(attributes); > this.pointDimensionCount = pointDimensionCount; > this.pointIndexDimensionCount = pointIndexDimensionCount; > this.pointNumBytes = pointNumBytes; > this.softDeletesField = softDeletesField; > assert checkConsistency(); > } > {code} > > By this patch, I will remove the second call and leave only one in the > constructor. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org