[GitHub] [lucene-solr] mocobeta opened a new pull request #1388: Use -linkoffline instead of relative paths to make links to other projects
mocobeta opened a new pull request #1388: Use -linkoffline instead of relative paths to make links to other projects URL: https://github.com/apache/lucene-solr/pull/1388 For the Gradle build, use absolute URLs instead of relative paths to generate inter-project links, so that the javadoc destination directory can be moved under `_project_/build/docs`. Also, this calls the javadoc tool directly rather than the Ant javadoc task. The Ant task doesn't recognize `element-list`, the successor to `package-list`, so the `offline=true` option doesn't work correctly with JDK 10+ (https://issues.apache.org/jira/browse/SOLR-14352). See also https://issues.apache.org/jira/browse/LUCENE-9278 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14367) Upgrade Tika to 1.24
[ https://issues.apache.org/jira/browse/SOLR-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070323#comment-17070323 ] ASF subversion and git services commented on SOLR-14367: Commit 5c2011a6fb7d3c8331dc261e505f058ed64016bc in lucene-solr's branch refs/heads/master from Erick Erickson [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5c2011a ] SOLR-14367: Upgrade Tika to 1.24 > Upgrade Tika to 1.24 > > > Key: SOLR-14367 > URL: https://issues.apache.org/jira/browse/SOLR-14367 > Project: Solr > Issue Type: Task > Security Level: Public (Default Security Level. Issues are Public) > Affects Versions: 8.5 > Reporter: mibo > Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Upgrade Apache Tika to the newly released 1.24 to handle > [CVE-2020-1950|https://nvd.nist.gov/vuln/detail/CVE-2020-1950]. > Created [PR #1383|https://github.com/apache/lucene-solr/pull/1383], but > afterwards I found https://issues.apache.org/jira/browse/SOLR-14054 and it > looks like an update is much more complicated. > If someone supports me, I will update my contribution. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Assigned] (SOLR-14367) Upgrade Tika to 1.24
[ https://issues.apache.org/jira/browse/SOLR-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson reassigned SOLR-14367: - Assignee: Erick Erickson > Upgrade Tika to 1.24 > > > Key: SOLR-14367 > URL: https://issues.apache.org/jira/browse/SOLR-14367 > Project: Solr > Issue Type: Task > Security Level: Public (Default Security Level. Issues are Public) > Affects Versions: 8.5 > Reporter: mibo > Assignee: Erick Erickson > Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Upgrade Apache Tika to the newly released 1.24 to handle > [CVE-2020-1950|https://nvd.nist.gov/vuln/detail/CVE-2020-1950]. > Created [PR #1383|https://github.com/apache/lucene-solr/pull/1383], but > afterwards I found https://issues.apache.org/jira/browse/SOLR-14054 and it > looks like an update is much more complicated. > If someone supports me, I will update my contribution.
[jira] [Commented] (LUCENE-9278) Make javadoc folder structure follow Gradle project path
[ https://issues.apache.org/jira/browse/LUCENE-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070336#comment-17070336 ] Tomoko Uchida commented on LUCENE-9278: --- I opened another PR: [https://github.com/apache/lucene-solr/pull/1388]. This replaces all relative paths with absolute URLs via the {{-linkoffline}} option, so that the javadoc destination directory is moved under {{_project_/build/}}:
- docs of ":lucene:core" will go to {{lucene/core/build/docs/javadoc/}}
- docs of ":lucene:analysis:common" will go to {{lucene/analysis/common/build/docs/javadoc/}}
- docs of ":solr:core" will go to {{solr/core/build/docs/javadoc/}}
- ... and so on
Also, this directly calls the [javadoc tool|https://docs.oracle.com/en/java/javase/11/tools/javadoc.html] rather than the Ant javadoc task - the Ant javadoc task doesn't recognize {{element-list}}, the successor to the {{package-list}} used up until Java 8, so offline links ({{offline=true}}) no longer work correctly with JDK 11 (SOLR-14352). All generated docs passed the "checkJavaDocs.py" check; in other words, there are no missing package summaries (we had a missing package summary problem with the gradle default javadoc task). [~dweiss] I think our custom javadoc task (for the gradle build) is almost complete with these changes. Would you review it again? > Make javadoc folder structure follow Gradle project path > > > Key: LUCENE-9278 > URL: https://issues.apache.org/jira/browse/LUCENE-9278 > Project: Lucene - Core > Issue Type: Task > Components: general/build > Reporter: Tomoko Uchida > Priority: Major > Time Spent: 1h 50m > Remaining Estimate: 0h > > The current javadoc folder structure is derived from the Ant project name, e.g.: > [https://lucene.apache.org/core/8_4_1/analyzers-icu/index.html] > [https://lucene.apache.org/solr/8_4_1/solr-solrj/index.html] > For the Gradle build, it should instead follow the gradle project structure (path), to keep things simple to manage [1]. Hence, it will look like this: > [https://lucene.apache.org/core/9_0_0/analysis/icu/index.html] > [https://lucene.apache.org/solr/9_0_0/solr/solrj/index.html] > [1] The change was suggested in a conversation between Dawid Weiss and me on a github pr: [https://github.com/apache/lucene-solr/pull/1304]
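The path mapping described in the comment above (":lucene:analysis:common" going under "analysis/common") can be sketched in plain Java. This is a translation of the Gradle `pathToDocdir` closure from PR #1388; the class name here is illustrative:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class PathToDocdir {
    // Drop the leading empty segment and the top-level project name
    // ("lucene" or "solr"), then join the rest with '/'.
    static String pathToDocdir(String projectPath) {
        String[] parts = projectPath.split(":"); // ":lucene:analysis:common" -> ["", "lucene", "analysis", "common"]
        return Arrays.stream(parts).skip(2).collect(Collectors.joining("/"));
    }

    public static void main(String[] args) {
        System.out.println(pathToDocdir(":lucene:analysis:common")); // analysis/common
        System.out.println(pathToDocdir(":solr:solrj"));             // solrj
    }
}
```

So the docs for ":lucene:analysis:common" land under lucene/analysis/common/build/docs/javadoc/, matching the layout in the comment.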
[jira] [Commented] (SOLR-14367) Upgrade Tika to 1.24
[ https://issues.apache.org/jira/browse/SOLR-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070345#comment-17070345 ] ASF subversion and git services commented on SOLR-14367: Commit e4b3fae75b846aa465813bff2bbdb732746ec104 in lucene-solr's branch refs/heads/branch_8x from Erick Erickson [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e4b3fae7 ] SOLR-14367: Upgrade Tika to 1.24 (cherry picked from commit 5c2011a6fb7d3c8331dc261e505f058ed64016bc) > Upgrade Tika to 1.24 > > > Key: SOLR-14367 > URL: https://issues.apache.org/jira/browse/SOLR-14367 > Project: Solr > Issue Type: Task > Security Level: Public (Default Security Level. Issues are Public) > Affects Versions: 8.5 > Reporter: mibo > Assignee: Erick Erickson > Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Upgrade Apache Tika to the newly released 1.24 to handle > [CVE-2020-1950|https://nvd.nist.gov/vuln/detail/CVE-2020-1950]. > Created [PR #1383|https://github.com/apache/lucene-solr/pull/1383], but > afterwards I found https://issues.apache.org/jira/browse/SOLR-14054 and it > looks like an update is much more complicated. > If someone supports me, I will update my contribution.
[jira] [Resolved] (SOLR-14367) Upgrade Tika to 1.24
[ https://issues.apache.org/jira/browse/SOLR-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-14367. --- Fix Version/s: 8.6 Resolution: Fixed > Upgrade Tika to 1.24 > > > Key: SOLR-14367 > URL: https://issues.apache.org/jira/browse/SOLR-14367 > Project: Solr > Issue Type: Task > Security Level: Public (Default Security Level. Issues are Public) > Affects Versions: 8.5 > Reporter: mibo > Assignee: Erick Erickson > Priority: Minor > Fix For: 8.6 > > Time Spent: 10m > Remaining Estimate: 0h > > Upgrade Apache Tika to the newly released 1.24 to handle > [CVE-2020-1950|https://nvd.nist.gov/vuln/detail/CVE-2020-1950]. > Created [PR #1383|https://github.com/apache/lucene-solr/pull/1383], but > afterwards I found https://issues.apache.org/jira/browse/SOLR-14054 and it > looks like an update is much more complicated. > If someone supports me, I will update my contribution.
[GitHub] [lucene-solr] mocobeta commented on a change in pull request #1388: LUCENE-9278: Use -linkoffline instead of relative paths to make links to other projects
mocobeta commented on a change in pull request #1388: LUCENE-9278: Use -linkoffline instead of relative paths to make links to other projects
URL: https://github.com/apache/lucene-solr/pull/1388#discussion_r399800512

## File path: gradle/render-javadoc.gradle ##

@@ -15,93 +15,105 @@
  * limitations under the License.
  */

-// generate javadocs by using Ant javadoc task
+// generate javadocs by calling javadoc tool
+// see https://docs.oracle.com/en/java/javase/11/tools/javadoc.html
+
+// utility function to convert project path to document output dir
+// e.g.: ':lucene:analysis:common' => 'analysis/common'
+def pathToDocdir = { path -> path.split(':').drop(2).join('/') }

 allprojects {
   plugins.withType(JavaPlugin) {
-    ext {
-      javadocRoot = project.path.startsWith(':lucene') ? project(':lucene').file("build/docs") : project(':solr').file("build/docs")
-      javadocDestDir = "${javadocRoot}/${project.name}"
-    }
-
     task renderJavadoc {
-      description "Generates Javadoc API documentation for the main source code. This invokes Ant Javadoc Task."
+      description "Generates Javadoc API documentation for the main source code. This directly invokes javadoc tool."
       group "documentation"

       ext {
-        linksource = "no"
+        linksource = false
         linkJUnit = false
-        linkHref = []
+        linkLuceneProjects = []
+        linkSorlProjects = []
       }

       dependsOn sourceSets.main.compileClasspath

       inputs.files { sourceSets.main.java.asFileTree }
-      outputs.dir project.javadocRoot
+      outputs.dir project.javadoc.destinationDir

       def libName = project.path.startsWith(":lucene") ? "Lucene" : "Solr"
       def title = "${libName} ${project.version} ${project.name} API".toString()

+      // absolute urls for "-linkoffline" option
+      def javaSEDocUrl = "https://docs.oracle.com/en/java/javase/11/docs/api/"
+      def junitDocUrl = "https://junit.org/junit4/javadoc/4.12/"
+      def luceneDocUrl = "https://lucene.apache.org/core/${project.version.replace(".", "_")}".toString()
+      def solrDocUrl = "https://lucene.apache.org/solr/${project.version.replace(".", "_")}".toString()
+
+      def javadocCmd = org.gradle.internal.jvm.Jvm.current().getJavadocExecutable()
+
       doFirst {
         def srcDirs = sourceSets.main.java.srcDirs.findAll { dir -> dir.exists() }
-        ant.javadoc(
-            overview: file("src/java/overview.html"),
-            packagenames: "org.apache.lucene.*,org.apache.solr.*",
-            destDir: javadocDestDir,
-            access: "protected",
-            encoding: "UTF-8",
-            charset: "UTF-8",
-            docencoding: "UTF-8",
-            noindex: "true",
-            includenosourcepackages: "true",
-            author: "true",
-            version: "true",
-            linksource: linksource,
-            use: "true",
-            failonerror: "true",
-            locale: "en_US",
-            windowtitle: title,
-            doctitle: title,
-            maxmemory: "512m",
-            classpath: sourceSets.main.compileClasspath.asPath,
-            bottom: "Copyright © 2000-${buildYear} Apache Software Foundation. All Rights Reserved."
-        ) {
-          srcDirs.collect { srcDir ->
-            packageset(dir: srcDir)
+        project.exec {
+          executable javadocCmd
+
+          args += [ "-overview", file("src/java/overview.html").toString() ]
+          args += [ "-sourcepath", srcDirs.join(" ") ]
+          args += [ "-subpackages", project.path.startsWith(":lucene") ? "org.apache.lucene" : "org.apache.solr" ]
+          args += [ "-d", project.javadoc.destinationDir.toString() ]

Review comment: I reused the output destination dir for the gradle default "javadoc" task here. This can be moved to another location if needed.
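As a rough illustration of the approach discussed in this review, the `-linkoffline` option pairs an absolute URL (where links should point) with a local directory holding the `element-list`/`package-list` read at build time. This is a hypothetical helper, not the PR's actual code; the paths and URL layout mirror those in the thread:

```java
import java.util.ArrayList;
import java.util.List;

public class JavadocArgs {
    // Build a javadoc argument list that links to another project's docs via
    // absolute URLs while resolving package names from a local directory.
    static List<String> linkofflineArgs(String version) {
        String versionPath = version.replace(".", "_"); // e.g. "9.0.0" -> "9_0_0"
        String luceneDocUrl = "https://lucene.apache.org/core/" + versionPath;
        List<String> args = new ArrayList<>();
        // -linkoffline <extdocURL> <packagelistLoc>: generated links point at
        // extdocURL, but element-list/package-list is read from packagelistLoc.
        args.add("-linkoffline");
        args.add(luceneDocUrl + "/core");
        args.add("lucene/core/build/docs/javadoc"); // local dir containing element-list
        return args;
    }

    public static void main(String[] args) {
        System.out.println(linkofflineArgs("9.0.0"));
    }
}
```

Because the URL is absolute, the output tree can be relocated under `_project_/build/docs` without breaking inter-project links.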
[jira] [Created] (SOLR-14372) Required operator (+) is being ignored when using default conjunction operator AND
Eran Buchnick created SOLR-14372: Summary: Required operator (+) is being ignored when using default conjunction operator AND Key: SOLR-14372 URL: https://issues.apache.org/jira/browse/SOLR-14372 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: query parsers Affects Versions: 8.3.1 Environment: rel 7.4 cluster mode Reporter: Eran Buchnick Using Solr 8.3.0, it seems the required operator isn't functioning properly when the default conjunction operator is AND. Steps to reproduce:
* 20 docs
* all have a text field
* 17 have the value A
* 13 have the value B
* 10 have both A and B (the intersection)
* default operator is set to AND
* my query is: {code:java} +A OR B{code}
* the result is all 20 docs (as if I had searched {code:java} A OR B{code} )
* when I change my query to {code:java} {!q.op=OR} +A OR B{code} I get my expected result, "A is required, B is optional" (which in this case happened to be all 17 docs that include the value A; some of them also include the value B, but none of them lack A!)
[GitHub] [lucene-solr] msokolov commented on a change in pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents
msokolov commented on a change in pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents
URL: https://github.com/apache/lucene-solr/pull/1351#discussion_r399829998

## File path: lucene/core/src/java/org/apache/lucene/search/LongDocValuesPointComparator.java ##

@@ -0,0 +1,210 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import org.apache.lucene.document.LongPoint;
+import org.apache.lucene.index.DocValues;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.index.NumericDocValues;
+import org.apache.lucene.index.PointValues;
+import org.apache.lucene.util.DocIdSetBuilder;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import static org.apache.lucene.search.FieldComparator.IterableComparator;
+
+/**
+ * Expert: a FieldComparator class for long types corresponding to
+ * {@link LongDocValuesPointSortField}.
+ * This comparator provides {@code iterator} over competitive documents,
+ * that are stronger than the current {@code bottom} value.
+ */
+public class LongDocValuesPointComparator extends IterableComparator {
+    private final String field;
+    private final boolean reverse;
+    private final long missingValue;
+    private final long[] values;
+    private long bottom;
+    private long topValue;
+    protected NumericDocValues docValues;
+    private DocIdSetIterator iterator;
+    private PointValues pointValues;
+    private int maxDoc;
+    private int maxDocVisited;
+    private int updateCounter = 0;
+    private byte[] cmaxValueAsBytes = null;

Review comment: Can these be final, and allocated only in the constructor? I think it might be clearer to add a boolean "hasTopValues" and set that in setTopValue, rather than using the existence of these byte[]. Then you could make these final and eliminate the local variables where they get copied below.
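The reviewer's suggestion can be sketched as follows. The class and field names here are illustrative, not the actual Lucene comparator: the encoded-value buffer becomes final and is allocated once in the constructor, and an explicit boolean tracks whether a top value was set, instead of relying on the array being null:

```java
public class ComparatorSketch {
    // buffers are final and allocated exactly once (8 bytes for a long)
    private final byte[] maxValueAsBytes;
    private boolean hasTopValue = false; // explicit flag, per the review comment
    private long topValue;

    ComparatorSketch() {
        this.maxValueAsBytes = new byte[Long.BYTES];
    }

    void setTopValue(long value) {
        this.topValue = value;
        this.hasTopValue = true; // set the flag instead of lazily allocating the array
    }

    boolean hasTopValue() {
        return hasTopValue;
    }

    public static void main(String[] args) {
        ComparatorSketch c = new ComparatorSketch();
        System.out.println(c.hasTopValue()); // false
        c.setTopValue(42L);
        System.out.println(c.hasTopValue()); // true
    }
}
```

With the flag, the competitive-iterator update code can test `hasTopValue` directly rather than null-checking the buffers, which is the clarity win the reviewer is after.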
[jira] [Resolved] (SOLR-14372) Required operator (+) is being ignored when using default conjunction operator AND
[ https://issues.apache.org/jira/browse/SOLR-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-14372. --- Resolution: Invalid First, please raise questions like this on the user's list; we try to reserve JIRAs for known bugs/enhancements rather than usage questions. See http://lucene.apache.org/solr/community.html#mailing-lists-irc - there are links to both the Lucene and Solr mailing lists there. A _lot_ more people will see your question on that list and may be able to help more quickly. If it's determined that this really is a code issue or enhancement to Lucene or Solr and not a configuration/usage problem, we can raise a new JIRA or reopen this one. When you do post to the user's list, you should include the results of adding &debug=query to the post. I suspect that what you're hitting here is URL escaping; the "+" in URL-escape land is a space, which is really confusing when you use a browser vs. SolrJ, for instance. Try %2BA OR B; the URL encoding for a + is %2B. > Required operator (+) is being ignored when using default conjunction > operator AND > -- > > Key: SOLR-14372 > URL: https://issues.apache.org/jira/browse/SOLR-14372 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Components: query parsers > Affects Versions: 8.3.1 > Environment: rel 7.4 > cluster mode > Reporter: Eran Buchnick > Priority: Major > > Using Solr 8.3.0, it seems the required operator isn't functioning properly when the default conjunction operator is AND. > Steps to reproduce: > * 20 docs > * all have a text field > * 17 have the value A > * 13 have the value B > * 10 have both A and B (the intersection) > * default operator is set to AND > * my query is: > {code:java} > +A OR B{code} > * the result is all 20 docs (as if I had searched > {code:java} > A OR B{code} > ) > * when I change my query to > {code:java} > {!q.op=OR} +A OR B{code} > I get my expected result, "A is required, B is optional" (which in this case happened to be all 17 docs that include the value A; some of them also include the value B, but none of them lack A!)
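The URL-escaping point can be demonstrated with standard Java (the class name is illustrative; this mirrors what form-urlencoded decoding does to a query string): a literal '+' must be sent as %2B, because '+' itself decodes to a space.

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PlusEncoding {
    // Encode a query for use in a URL: '+' becomes %2B, spaces become '+'.
    static String encodeQuery(String q) {
        return URLEncoder.encode(q, StandardCharsets.UTF_8);
    }

    // What the server-side decoder does to form-encoded data: '+' becomes a space.
    static String decodeForm(String s) {
        return URLDecoder.decode(s, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(encodeQuery("+A OR B"));   // %2BA+OR+B
        System.out.println(decodeForm("%2BA+OR+B"));  // +A OR B
    }
}
```

So a query typed as `+A OR B` directly into a browser URL arrives at Solr with the required operator decoded away as a space, which matches the behavior reported in this issue.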
[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain (or req params?) used for updates
[ https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070498#comment-17070498 ] Eugene Tenkaev commented on SOLR-8030: -- We need to operate on the fully constructed document, to remove a set of dynamic fields that was replaced by a new set of dynamic fields with different field names. So we came up with a post-processor, which we put in the default chain. We get the *SolrQueryRequest* from the *AddUpdateCommand*: {code} @Override protected void process(AddUpdateCommand cmd, SolrQueryRequest req, SolrQueryResponse rsp) { String value = cmd.getReq().getParams().get(NAME + ".xxx"); // ... } {code} and remove the old set of dynamic fields from the full document according to the parameters in the *SolrQueryRequest*, ignoring newly added fields. Is there a possibility that this code will not work during replay, so we lose the behavior this processor adds? On the idea of [~elyograg]: is it possible to move the routing code out of *DistributedUpdateProcessor*, so that all processors that come after the routing processor are executed on the proper node? If so, we could move Atomic Update processing out of *DistributedUpdateProcessor*, and it would be executed on the node that has the proper data. > Transaction log does not store the update chain (or req params?) used for > updates > - > > Key: SOLR-8030 > URL: https://issues.apache.org/jira/browse/SOLR-8030 > Project: Solr > Issue Type: Bug > Components: SolrCloud > Affects Versions: 5.3 > Reporter: Ludovic Boutros > Priority: Major > Attachments: SOLR-8030.patch > > > The transaction log does not store the update chain, or any other details from the original update request such as the request params, used during updates. Therefore the tlog uses the default update chain, and a synthetic request, during log replay. > If we implement custom update logic with multiple distinct update chains that use custom processors after DistributedUpdateProcessor, or if the default chain uses processors whose behavior depends on other request params, then log replay may be incorrect. > Potentially problematic scenarios (need test cases): > * DBQ where the main query string uses local param variables that refer to other request params > * custom update chain set as {{default="true"}} using something like StatelessScriptUpdateProcessorFactory after DUP, where the script depends on request params > * multiple named update chains with different processors configured after DUP and specific requests sent to different chains - e.g. ParseDateProcessor w/ custom formats configured after DUP in some special chains, but not in the default chain
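The parameter-lookup pattern in the comment above can be sketched without Solr classes (all names here are illustrative; a plain Map stands in for SolrParams). The point of the issue is that these request params are not persisted to the transaction log, so a lookup like this returns null during replay:

```java
import java.util.HashMap;
import java.util.Map;

public class ParamLookup {
    static final String NAME = "removeOldDynamicFields"; // hypothetical processor name

    // Stand-in for cmd.getReq().getParams().get(NAME + ".xxx"):
    // a processor reads its own namespaced parameter from the original request.
    static String readProcessorParam(Map<String, String> requestParams, String suffix) {
        return requestParams.get(NAME + "." + suffix);
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put(NAME + ".xxx", "field_a,field_b");
        System.out.println(readProcessorParam(params, "xxx")); // field_a,field_b

        // During tlog replay a synthetic request is used, so the param is absent:
        System.out.println(readProcessorParam(new HashMap<>(), "xxx")); // null
    }
}
```

The second lookup illustrates the replay hazard the commenter is asking about: with a synthetic request, the processor silently loses the configuration it was given on the original update.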
[jira] [Commented] (SOLR-14367) Upgrade Tika to 1.24
[ https://issues.apache.org/jira/browse/SOLR-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070511#comment-17070511 ] mibo commented on SOLR-14367: - Hi [~erickerickson] and [~tallison], Wow, that was a quick response and a fast fix - so fast that I could not answer your questions ;o) Thanks a lot for this. > ...avoid using the Tika integration with Solr. Will definitely do so, and in addition let my colleagues know. Wish you a nice time, mibo > Upgrade Tika to 1.24 > > > Key: SOLR-14367 > URL: https://issues.apache.org/jira/browse/SOLR-14367 > Project: Solr > Issue Type: Task > Security Level: Public (Default Security Level. Issues are Public) > Affects Versions: 8.5 > Reporter: mibo > Assignee: Erick Erickson > Priority: Minor > Fix For: 8.6 > > Time Spent: 10m > Remaining Estimate: 0h > > Upgrade Apache Tika to the newly released 1.24 to handle > [CVE-2020-1950|https://nvd.nist.gov/vuln/detail/CVE-2020-1950]. > Created [PR #1383|https://github.com/apache/lucene-solr/pull/1383], but > afterwards I found https://issues.apache.org/jira/browse/SOLR-14054 and it > looks like an update is much more complicated. > If someone supports me, I will update my contribution.
[jira] [Comment Edited] (SOLR-8030) Transaction log does not store the update chain (or req params?) used for updates
[ https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070498#comment-17070498 ] Eugene Tenkaev edited comment on SOLR-8030 at 3/29/20, 8:25 PM:

We need to operate on the fully constructed document in order to remove a set of dynamic fields that has been replaced by a new set of dynamic fields with different field names. For this we came up with a post-processor, placed in the *default chain*. We obtain the *SolrQueryRequest* from the *AddUpdateCommand*:

{code}
@Override
protected void process(AddUpdateCommand cmd, SolrQueryRequest req, SolrQueryResponse rsp) {
  // read this processor's parameter from the original update request
  String value = cmd.getReq().getParams().get(NAME + ".xxx");
  // ... remove the old dynamic fields from the document accordingly ...
}
{code}

and remove the old set of dynamic fields from the full document according to the parameters in the *SolrQueryRequest*, ignoring the newly added fields.

Is there a possibility that this code will not work during log replay, so that we lose the behavior this processor adds?

Following the idea of [~elyograg]: is it possible to move the routing code out of *DistributedUpdateProcessor*, so that all processors that come after the routing processor are executed on the proper node? If so, Atomic Update processing could also be moved out of *DistributedUpdateProcessor* and be executed on the node that has the proper data.

> Transaction log does not store the update chain (or req params?) used for updates
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
> Issue Type: Bug
> Components: SolrCloud
> Affects Versions: 5.3
> Reporter: Ludovic Boutros
> Priority: Major
> Attachments: SOLR-8030.patch
>
> Transaction Log does not store the update chain, or any other details from the original update request such as the request params, used during updates. Therefore tLog uses the default update chain, and a synthetic request, during log replay.
> If we implement custom update logic with multiple distinct update chains that use custom processors after DistributedUpdateProcessor, or if the default chain uses processors whose behavior depends on other request params, then log replay may be incorrect.
> Potentially problematic scenarios (need test cases):
> * DBQ where the main query string uses local param variables that refer to other request params
> * custom Update chain set as {{default="true"}} using something like StatelessScriptUpdateProcessorFactory after DUP where the script depends on request params.
> * multiple named update chains with diff processors configured after DUP and specific requests sent to diff chains -- ex: ParseDateProcessor w/ custom formats configured after DUP in some special chains, but not in the default chain

--
This message was sent by Atlassian Jira (v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org
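The param-driven field removal described in the comment above can be sketched outside of Solr. This is a simplified model, not Solr's actual API: a plain Map stands in for SolrInputDocument, and the prefix parameter name is hypothetical; in the real processor the prefix would come from the request params, which is exactly the information the transaction log does not preserve.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the "remove replaced dynamic fields" post-processor.
// The Map-based document and the prefix parameter are illustrative stand-ins.
public class DynamicFieldCleaner {

    /** Removes every field whose name starts with the given prefix. */
    public static void removeOldDynamicFields(Map<String, Object> doc, String removePrefix) {
        doc.keySet().removeIf(name -> name.startsWith(removePrefix));
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("id", "1");
        doc.put("attr_old_color", "red");  // old dynamic field set (to be removed)
        doc.put("attr_old_size", "XL");
        doc.put("feat_new_color", "red");  // new dynamic field set (kept)
        // In the real processor the prefix would be read from the request params.
        removeOldDynamicFields(doc, "attr_old_");
        System.out.println(doc.containsKey("attr_old_color")); // false
        System.out.println(doc.containsKey("feat_new_color")); // true
    }
}
```

During tlog replay a synthetic request is used, so any prefix passed only as a request param would be absent and this removal would silently not happen, which is the risk the comment asks about.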
[jira] [Comment Edited] (SOLR-14210) Introduce Node-level status handler for replicas
[ https://issues.apache.org/jira/browse/SOLR-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070166#comment-17070166 ] Jan Høydahl edited comment on SOLR-14210 at 3/29/20, 8:31 PM:
--

See https://github.com/apache/lucene-solr/pull/1387 for a first attempt at this. If the param {{&failWhenRecovering=true}} is passed to {{/api/node/health}}, it will return 503 if one or more replicas on the node are in state {{DOWN}} or {{RECOVERING}}.

was (Author: janhoy): See https://github.com/apache/lucene-solr/pull/1387 for a first attempt at this. If the param {{&failWhenRecovering=true}} is passed to {{/api/node/health}}, it will return 503 if one or more cores on the node are in states {{RECOVERY}} or {{CONSTRUCTION}}.

> Introduce Node-level status handler for replicas
>
> Key: SOLR-14210
> URL: https://issues.apache.org/jira/browse/SOLR-14210
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: master (9.0), 8.5
> Reporter: Houston Putman
> Priority: Major
> Time Spent: 10m
> Remaining Estimate: 0h
>
> h2. Background
> As was brought up in SOLR-13055, in order to run Solr in a more cloud-native way, we need some additional features around node-level healthchecks.
> {quote}Like in Kubernetes we need 'liveness' and 'readiness' probes, explained in [https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/n], to determine if a node is live and ready to serve live traffic.
> {quote}
> However, there are issues around Kubernetes managing its own rolling restarts. With the current healthcheck setup, it's easy to envision a scenario in which Solr reports itself as "healthy" when all of its replicas are actually recovering. Kubernetes, seeing a healthy pod, would then go and restart the next Solr node. This can happen until all replicas are "recovering" and none are healthy (maybe the last one restarted will be "down", but still there are no "active" replicas).
> h2. Proposal
> I propose we make an additional healthcheck handler that returns whether all replicas hosted by that Solr node are healthy and "active". That way we will be able to use the [default kubernetes rolling restart logic|https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies] with Solr.
> To add on to [Jan's point here|https://issues.apache.org/jira/browse/SOLR-13055?focusedCommentId=16716559&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16716559], this handler should be more friendly for other Content-Types and should use better HTTP response statuses.
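The decision logic proposed for the health endpoint can be sketched in a few lines. This is an illustration of the behavior described in the comments, not Solr's actual implementation; the enum, method name, and states are assumptions based on the discussion above.

```java
import java.util.List;

// Sketch: with the "fail when recovering" flag set, the node health endpoint
// answers 503 if any replica hosted on the node is DOWN or RECOVERING,
// otherwise 200. Names are illustrative, not Solr's real API.
public class NodeHealthSketch {

    enum ReplicaState { ACTIVE, DOWN, RECOVERING }

    static int healthStatus(List<ReplicaState> replicas, boolean failWhenRecovering) {
        if (failWhenRecovering) {
            for (ReplicaState s : replicas) {
                if (s == ReplicaState.DOWN || s == ReplicaState.RECOVERING) {
                    return 503; // SERVICE_UNAVAILABLE: node not ready for traffic
                }
            }
        }
        return 200; // OK: liveness-style check, or all replicas active
    }

    public static void main(String[] args) {
        System.out.println(healthStatus(
                List.of(ReplicaState.ACTIVE, ReplicaState.RECOVERING), true));  // 503
        System.out.println(healthStatus(
                List.of(ReplicaState.ACTIVE, ReplicaState.RECOVERING), false)); // 200
    }
}
```

Returning 503 while recovering is what lets a Kubernetes readiness probe (and the default StatefulSet rolling-update logic) hold off restarting the next node.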
[jira] [Comment Edited] (SOLR-8030) Transaction log does not store the update chain (or req params?) used for updates
[ https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070498#comment-17070498 ] Eugene Tenkaev edited comment on SOLR-8030 at 3/29/20, 9:13 PM:

We need to operate on the fully constructed document in order to remove a set of dynamic fields that has been replaced by a new set of dynamic fields with different field names. For this we came up with a post-processor, placed in the *default chain*. We obtain the *SolrQueryRequest* from the *AddUpdateCommand*:

{code}
@Override
protected void process(AddUpdateCommand cmd, SolrQueryRequest req, SolrQueryResponse rsp) {
  // read this processor's parameter from the original update request
  String value = cmd.getReq().getParams().get(NAME + ".xxx");
  // ... remove the old dynamic fields from the document accordingly ...
}
{code}

and remove the old set of dynamic fields from the full document according to the parameters in the *SolrQueryRequest*, ignoring the newly added fields.

Is there a possibility that this code will not work during log replay, so that we lose the behavior this processor adds?

If so, we could introduce a workaround: add a special technical field to the schema that contains the command for removing the old set of dynamic fields, but do not index this technical field. Our post-processor would then work only with data from the *SolrInputDocument*. Would this workaround handle the current situation around the replaying of updates, or are there cases when all post-processors are completely ignored, even in the default chain?

Following the idea of [~elyograg]: is it possible to move the routing code out of *DistributedUpdateProcessor*, so that all processors that come after the routing processor are executed on the proper node? If so, Atomic Update processing could also be moved out of *DistributedUpdateProcessor* and be executed on the node that has the proper data.
[jira] [Comment Edited] (SOLR-8030) Transaction log does not store the update chain (or req params?) used for updates
[ https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070498#comment-17070498 ] Eugene Tenkaev edited comment on SOLR-8030 at 3/29/20, 9:14 PM:

We need to operate on the fully constructed document in order to remove a set of dynamic fields that has been replaced by a new set of dynamic fields with different field names. For this we came up with a post-processor, placed in the *default chain*. We obtain the *SolrQueryRequest* from the *AddUpdateCommand*:

{code}
@Override
protected void process(AddUpdateCommand cmd, SolrQueryRequest req, SolrQueryResponse rsp) {
  // read this processor's parameter from the original update request
  String value = cmd.getReq().getParams().get(NAME + ".xxx");
  // ... remove the old dynamic fields from the document accordingly ...
}
{code}

and remove the old set of dynamic fields from the full document according to the parameters in the *SolrQueryRequest*, ignoring the newly added fields.

Is there a possibility that this code will not work during log replay, so that we lose the behavior this processor adds?

h4. Possible workaround for our case:
If so, we could introduce a workaround: add a special technical field to the schema that contains the command for removing the old set of dynamic fields, but do not index this technical field. Our post-processor would then work only with data from the *SolrInputDocument*. Would this workaround handle the current situation around the replaying of updates, or are there cases when all post-processors are completely ignored, even in the default chain?

h3. Additionally
Following the idea of [~elyograg]: is it possible to move the routing code out of *DistributedUpdateProcessor*, so that all processors that come after the routing processor are executed on the proper node? If so, Atomic Update processing could also be moved out of *DistributedUpdateProcessor* and be executed on the node that has the proper data.
[jira] [Comment Edited] (SOLR-8030) Transaction log does not store the update chain (or req params?) used for updates
[ https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070498#comment-17070498 ] Eugene Tenkaev edited comment on SOLR-8030 at 3/29/20, 9:15 PM:

We need to operate on the fully constructed document in order to remove a set of dynamic fields that has been replaced by a new set of dynamic fields with different field names. For this we came up with a post-processor, placed in the *default chain*. We obtain the *SolrQueryRequest* from the *AddUpdateCommand*:

{code}
@Override
protected void process(AddUpdateCommand cmd, SolrQueryRequest req, SolrQueryResponse rsp) {
  // read this processor's parameter from the original update request
  String value = cmd.getReq().getParams().get(NAME + ".xxx");
  // ... remove the old dynamic fields from the document accordingly ...
}
{code}

and remove the old set of dynamic fields from the full document according to the parameters in the *SolrQueryRequest*, ignoring the newly added fields.

Is there a possibility that this code will not work during log replay, so that we lose the behavior this processor adds?

h4. Possible workaround for our case:
We could introduce a workaround: add a special technical field to the schema that contains the command for removing the old set of dynamic fields, but do not index this technical field. Our post-processor would then work only with data from the *SolrInputDocument* and this technical field. Would this workaround handle the current situation around the replaying of updates, or are there cases when all post-processors are completely ignored, even in the default chain?

h3. Additionally
Following the idea of [~elyograg]: is it possible to move the routing code out of *DistributedUpdateProcessor*, so that all processors that come after the routing processor are executed on the proper node? If so, Atomic Update processing could also be moved out of *DistributedUpdateProcessor* and be executed on the node that has the proper data.
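The "technical field" workaround proposed in the comment above can be sketched as follows. The point of carrying the removal command inside the document itself is that, unlike request params, the document *is* stored in the transaction log and so survives replay. The field name {{__remove_prefix}} and the Map-based document are hypothetical stand-ins, not Solr API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed workaround: the removal command travels in a
// non-indexed technical field inside the document (so it survives tlog
// replay), instead of in request params (which are not logged).
// All names here are illustrative.
public class TechnicalFieldCleaner {

    static final String TECHNICAL_FIELD = "__remove_prefix";

    /** Reads the removal command from the document and strips matching fields. */
    static void process(Map<String, Object> doc) {
        // Remove the technical field itself so it is never indexed.
        Object prefix = doc.remove(TECHNICAL_FIELD);
        if (prefix != null) {
            String p = prefix.toString();
            doc.keySet().removeIf(name -> name.startsWith(p));
        }
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("id", "1");
        doc.put("__remove_prefix", "attr_old_"); // command carried in the doc
        doc.put("attr_old_color", "red");
        doc.put("feat_new_color", "red");
        process(doc);
        System.out.println(doc.keySet()); // no attr_old_* and no __remove_prefix
    }
}
```

Contrast with the earlier param-driven variant: here replay re-runs the same logic because its only input is the logged document.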
[jira] [Commented] (SOLR-14210) Introduce Node-level status handler for replicas
[ https://issues.apache.org/jira/browse/SOLR-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070548#comment-17070548 ] Jan Høydahl commented on SOLR-14210:

Renamed the param to {{requireHealthyCores=true}}. Added a unit test and a CHANGES entry. Still no test for the request param parsing.
[jira] [Commented] (SOLR-14362) Tests no longer run with whitespace in Solr's checkout directory
[ https://issues.apache.org/jira/browse/SOLR-14362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070566#comment-17070566 ] Uwe Schindler commented on SOLR-14362:
--

The problem in Log4j2 will be fixed in version 2.13.2. I will try the fix with a local snapshot of this version tomorrow. Once the 2.13.2 release is out, we should be able to simply move to that version. The Log4j2 version in Solr is already quite outdated (not only in branch_8x, but also in master).

> Tests no longer run with whitespace in Solr's checkout directory
>
> Key: SOLR-14362
> URL: https://issues.apache.org/jira/browse/SOLR-14362
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Tests
> Affects Versions: master (9.0), 8.5
> Reporter: Uwe Schindler
> Assignee: Uwe Schindler
> Priority: Major
>
> When trying to run the test suite from a directory with white-space in the path name, SolrTestCase does not load at all:
> {noformat}
> [junit4] ERROR 0.00s J3 | SearchHandlerTest (suite) <<<
> [junit4] > Throwable #1: java.security.AccessControlException: access denied ("java.io.FilePermission" "C:\Users\Uwe%20Schindler\Projects\lucene\trunk-lusolr1\solr\core\src\test-files\log4j2.xml" "read")
> [junit4] > at java.base/java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
> [junit4] > at java.base/java.security.AccessController.checkPermission(AccessController.java:895)
> [junit4] > at java.base/java.lang.SecurityManager.checkPermission(SecurityManager.java:322)
> [junit4] > at java.base/java.lang.SecurityManager.checkRead(SecurityManager.java:661)
> [junit4] > at java.base/java.io.File.exists(File.java:815)
> [junit4] > at org.apache.logging.log4j.core.util.FileUtils.fileFromUri(FileUtils.java:88)
> [junit4] > at org.apache.logging.log4j.core.config.ConfigurationSource.fromResource(ConfigurationSource.java:281)
> [junit4] > at org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:449)
> [junit4] > at org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:386)
> [junit4] > at org.apache.logging.log4j.core.config.ConfigurationFactory.getConfiguration(ConfigurationFactory.java:261)
> [junit4] > at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:616)
> [junit4] > at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:637)
> [junit4] > at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231)
> [junit4] > at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
> [junit4] > at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
> [junit4] > at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
> [junit4] > at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:121)
> [junit4] > at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
> [junit4] > at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:46)
> [junit4] > at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
> [junit4] > at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:358)
> [junit4] > at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
> [junit4] > at org.apache.solr.SolrTestCase.<clinit>(SolrTestCase.java:66)
> [junit4] > at java.base/java.lang.Class.forName0(Native Method)
> [junit4] > at java.base/java.lang.Class.forName(Class.java:398)
> [junit4] Completed [1/907 (1!)] on J3 in 1.73s, 0 tests, 1 error <<< FAILURES!
> {noformat}
> This is a new issue and seems to have been introduced not long ago. The last time I ran tests, it worked. Does anybody know what changed? To me it looks like there is some wrong encoding involved.
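The {{%20}} in the denied file path points at a file URI being used without percent-decoding somewhere on the Log4j configuration path. A minimal illustration of the difference (the paths are examples, not Log4j's actual internals):

```java
import java.io.File;
import java.net.URI;

// Demonstrates the suspected encoding mix-up: a file URI with a space in it.
// Converting via java.io.File decodes the %20 back to a space; naively
// stripping the "file://" scheme keeps the literal "%20", which produces a
// path that does not exist on disk (hence the FilePermission denial above).
public class UriPathDemo {
    public static void main(String[] args) throws Exception {
        URI uri = new URI("file:///tmp/Uwe%20Schindler/log4j2.xml");
        String decoded = new File(uri).getPath();                 // contains a real space
        String raw = uri.toString().replaceFirst("^file://", ""); // keeps the "%20"
        System.out.println(decoded);
        System.out.println(raw);
    }
}
```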
[jira] [Comment Edited] (SOLR-8030) Transaction log does not store the update chain (or req params?) used for updates
[ https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070498#comment-17070498 ] Eugene Tenkaev edited comment on SOLR-8030 at 3/29/20, 11:13 PM:

We need to operate on the fully constructed document in order to remove a set of dynamic fields that has been replaced by a new set of dynamic fields with different field names. For this we came up with a post-processor, placed in the *default chain*. We obtain the *SolrQueryRequest* from the *AddUpdateCommand*:

{code}
@Override
protected void process(AddUpdateCommand cmd, SolrQueryRequest req, SolrQueryResponse rsp) {
  // read this processor's parameter from the original update request
  String value = cmd.getReq().getParams().get(NAME + ".xxx");
  // ... remove the old dynamic fields from the document accordingly ...
}
{code}

and remove the old set of dynamic fields from the full document according to the parameters in the *SolrQueryRequest*, ignoring the newly added fields.

Is there a possibility that this code will not work during log replay, so that we lose the behavior this processor adds?

h4. Possible workaround for our case:
We could introduce a workaround: add a special technical field to the schema that contains the command for removing the old set of dynamic fields, but do not index this technical field. Our post-processor would then work only with data from the *SolrInputDocument* and this technical field. Would this workaround handle the current situation around the replaying of updates, or are there cases when all post-processors are completely ignored, even in the default chain?

h3. Additionally
Following the idea of [~elyograg]: is it possible to move the routing code out of *DistributedUpdateProcessor*, so that all processors that come after the routing processor are executed on the proper node? If so, then we can move Atomic Update processing out of *DistributedUpdateProcessor* so that it is executed on the node that has the proper data.
[jira] [Commented] (LUCENE-9278) Make javadoc folder structure follow Gradle project path
[ https://issues.apache.org/jira/browse/LUCENE-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070574#comment-17070574 ] Tomoko Uchida commented on LUCENE-9278:
---

{quote}This replaces all relative paths with absolute urls
{quote}
To be accurate, this replaces all relative paths generated by the javadoc tool's "-link" option with absolute URLs ("-linkoffline" & an element-list file). There still remain some hand-written relative paths in the lucene-core Javadocs; we must not forget to rewrite them after completely switching to Gradle (as I noted in previous comments)...

> Make javadoc folder structure follow Gradle project path
>
> Key: LUCENE-9278
> URL: https://issues.apache.org/jira/browse/LUCENE-9278
> Project: Lucene - Core
> Issue Type: Task
> Components: general/build
> Reporter: Tomoko Uchida
> Priority: Major
> Time Spent: 2h
> Remaining Estimate: 0h
>
> Current javadoc folder structure is derived from the Ant project name, e.g.:
> [https://lucene.apache.org/core/8_4_1/analyzers-icu/index.html]
> [https://lucene.apache.org/solr/8_4_1/solr-solrj/index.html]
> For the Gradle build, it should instead follow the Gradle project structure (path), to keep things simple to manage [1]. Hence, it will look like this:
> [https://lucene.apache.org/core/9_0_0/analysis/icu/index.html]
> [https://lucene.apache.org/solr/9_0_0/solr/solrj/index.html]
> [1] The change was suggested in the conversation between Dawid Weiss and me on a GitHub PR: [https://github.com/apache/lucene-solr/pull/1304]
[jira] [Comment Edited] (LUCENE-9278) Make javadoc folder structure follow Gradle project path
[ https://issues.apache.org/jira/browse/LUCENE-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070336#comment-17070336 ] Tomoko Uchida edited comment on LUCENE-9278 at 3/29/20, 11:18 PM: -- I opened another PR: [https://github.com/apache/lucene-solr/pull/1388]. This replaces all relative paths with absolute urls via the {{-linkoffline}} option, so that the javadoc destination directory is moved under {{_project_/build/}}. - docs of ":lucene:core" prj will go to {{lucene/core/build/docs/javadoc/}} - docs of ":lucene:analysis:common" will go to {{lucene/analysis/common/build/docs/javadoc/}} - docs of ":solr:core" prj will go to {{solr/core/build/docs/javadoc/}} - ... and so on. Also, this directly calls the [javadoc tool|https://docs.oracle.com/en/java/javase/11/tools/javadoc.html] rather than the Ant javadoc task - the Ant javadoc task doesn't recognize {{element-list}}, the successor to {{package-list}} that had been used up until Java 8, so {{offline=true}} no longer works correctly with JDK11 (SOLR-14352). All generated docs passed the "checkJavaDocs.py" check. In other words, there's no missing package summary (we had some missing package summary problems with the gradle default javadoc task). [~dweiss] I think our custom javadoc task (for the gradle build) is almost complete with these changes. Would you review it again? was (Author: tomoko uchida): I opened another PR: [https://github.com/apache/lucene-solr/pull/1388]. This replaces all relative paths with absolute urls via the {{-linkoffline}} option, so that the javadoc destination directory is moved under {{_project_/build/}}. - docs of ":lucene:core" prj will go to {{lucene/core/build/docs/javadoc/}} - docs of ":lucene:analysis:common" will go to {{lucene/analysis/common/build/docs/javadoc/}} - docs of ":solr:core" prj will go to {{solr/core/build/docs/javadoc/}} - ...
and so on. Also, this directly calls the [javadoc tool|https://docs.oracle.com/en/java/javase/11/tools/javadoc.html] rather than the Ant javadoc task - the Ant javadoc task doesn't recognize {{element-list}}, the successor to {{package-list}} used up until Java 8, so {{offline=true}} no longer works correctly with JDK11 (SOLR-14352). All generated docs passed the "checkJavaDocs.py" check. In other words, there's no missing package summary (we had some missing package summary problems with the gradle default javadoc task). [~dweiss] I think our custom javadoc task (for the gradle build) is almost complete with these changes. Would you review it again? > Make javadoc folder structure follow Gradle project path > > > Key: LUCENE-9278 > URL: https://issues.apache.org/jira/browse/LUCENE-9278 > Project: Lucene - Core > Issue Type: Task > Components: general/build >Reporter: Tomoko Uchida >Priority: Major > Time Spent: 2h > Remaining Estimate: 0h > > The current javadoc folder structure is derived from the Ant project name, e.g.: > [https://lucene.apache.org/core/8_4_1/analyzers-icu/index.html] > [https://lucene.apache.org/solr/8_4_1/solr-solrj/index.html] > For the Gradle build, it should instead follow the gradle project structure (path), to keep things simple to manage [1]. Hence, it will look > like this: > [https://lucene.apache.org/core/9_0_0/analysis/icu/index.html] > [https://lucene.apache.org/solr/9_0_0/solr/solrj/index.html] > [1] The change was suggested in the conversation between Dawid Weiss and me on > a github pr: [https://github.com/apache/lucene-solr/pull/1304] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
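As background for the element-list point above: JDK 10+ javadoc publishes an `element-list` file describing external packages, while JDK 8/9 published `package-list`, and a tool that only probes for `package-list` (as the Ant javadoc task does) cannot link offline against JDK 10+ documentation. A minimal, hypothetical sketch of the fallback logic in plain Java (not the actual Lucene build code; names are illustrative):

```java
import java.nio.file.Path;
import java.util.function.Predicate;

// Illustrative sketch: pick the package-listing file to use for -linkoffline,
// preferring the newer element-list (JDK 10+) and falling back to the older
// package-list (JDK 8/9). The `exists` predicate stands in for a filesystem
// check so the logic is testable without real javadoc output.
public class ExternalLinkLists {
    static Path resolveListFile(Path docDir, Predicate<Path> exists) {
        Path elementList = docDir.resolve("element-list");
        if (exists.test(elementList)) {
            return elementList;              // modern javadoc output wins
        }
        Path packageList = docDir.resolve("package-list");
        if (exists.test(packageList)) {
            return packageList;              // legacy fallback
        }
        throw new IllegalStateException("no element-list or package-list in " + docDir);
    }

    public static void main(String[] args) {
        Path docs = Path.of("lucene", "core", "build", "docs", "javadoc");
        // Pretend only the modern element-list exists, as with JDK 11 javadoc output.
        Path chosen = resolveListFile(docs, p -> p.endsWith("element-list"));
        System.out.println(chosen.getFileName()); // element-list
    }
}
```

Calling the javadoc tool directly from Gradle lets the build hand whichever file is found to `-linkoffline`, sidestepping the Ant limitation described above.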
[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain (or req params?) used for updates
[ https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070623#comment-17070623 ] David Smiley commented on SOLR-8030: bq. Is there a possibility that somehow the code here will not work during replay and we lose the behavior that adds this processor? Yes, sadly, because the original request params are not present in the update log during replay. bq. (Workaround:) add a special technical field in the schema Yes, you could do something like that. I actually wouldn't touch the schema; instead your URP will remove this meta field after it applies the logic to the document. bq. is it possible to move the routing code out from DistributedUpdateProcessor I'd argue routing is very much the job of this URP considering its name :-) Instead, as indicated in your last sentence, I think Atomic update processing could be separated out. That'd be wonderful for maintainability as well, I think; DURP is a complex beast. However, Atomic update processing would logically be done after the DURP and thus would be subject to the same conundrum of this issue -- no access to the original request params. So this doesn't solve your problem. > Transaction log does not store the update chain (or req params?) used for > updates > - > > Key: SOLR-8030 > URL: https://issues.apache.org/jira/browse/SOLR-8030 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 5.3 >Reporter: Ludovic Boutros >Priority: Major > Attachments: SOLR-8030.patch > > > Transaction Log does not store the update chain, or any other details from > the original update request such as the request params, used during updates. > Therefore the tLog uses the default update chain, and a synthetic request, during > log replay. 
> If we implement custom update logic with multiple distinct update chains that > use custom processors after DistributedUpdateProcessor, or if the default > chain uses processors whose behavior depends on other request params, then > log replay may be incorrect. > Potentially problematic scenarios (need test cases): > * DBQ where the main query string uses local param variables that refer to > other request params > * custom Update chain set as {{default="true"}} using something like > StatelessScriptUpdateProcessorFactory after DUP where the script depends on > request params. > * multiple named update chains with diff processors configured after DUP and > specific requests sent to diff chains -- ex: ParseDateProcessor w/ custom > formats configured after DUP in some special chains, but not in the default > chain -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
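The meta-field workaround David describes — stash the needed request param on the document before DUP so it survives into the transaction log, then have the post-DUP processor read it back and strip it — can be sketched with plain Maps. All names here are hypothetical; Solr's real SolrInputDocument and UpdateRequestProcessor classes are deliberately not used:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the workaround discussed above, outside Solr. The request
// param that a post-DUP processor depends on is copied into a "technical"
// field so it gets written to the tlog; the processor reads it back (which
// works during log replay too, when the original params are gone) and
// removes it before the document is indexed.
public class MetaFieldWorkaround {
    static final String META_FIELD = "_my_chain_param_"; // hypothetical field name

    // Runs before DistributedUpdateProcessor: stash the param on the doc.
    static void stashParam(Map<String, Object> doc, String paramValue) {
        doc.put(META_FIELD, paramValue);
    }

    // Runs after DUP (and during replay): apply the custom logic driven by
    // the stashed value, then strip the field so it never reaches the index.
    static String applyAndStrip(Map<String, Object> doc) {
        String value = (String) doc.remove(META_FIELD);
        // ... custom processing driven by `value` would happen here ...
        return value;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("id", "1");
        stashParam(doc, "chain-A");
        String v = applyAndStrip(doc); // still available when params are absent on replay
        System.out.println(v + " / doc=" + doc);
    }
}
```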
[jira] [Updated] (SOLR-14356) PeerSync with hanging nodes
[ https://issues.apache.org/jira/browse/SOLR-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-14356: Status: Patch Available (was: Open) > PeerSync with hanging nodes > --- > > Key: SOLR-14356 > URL: https://issues.apache.org/jira/browse/SOLR-14356 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Priority: Major > Attachments: SOLR-14356.patch, SOLR-14356.patch > > > Right now in {{PeerSync}} (during leader election), in case of an exception when > requesting versions from a node, we will skip that node if the exception is one of the > following types: > * ConnectTimeoutException > * NoHttpResponseException > * SocketException > Sometimes the other node basically hangs but still accepts connections. In that > case a SocketTimeoutException is thrown and we consider the {{PeerSync}} > process as failed, and the whole shard is basically leaderless forever (as > long as the hanging node is still there). > We can't just blindly add {{SocketTimeoutException}} to the above list, since > [~shalin] mentioned that sometimes a timeout can happen for genuine > reasons too, e.g. a temporary GC pause. > I think the general idea here is that we obey the {{leaderVoteWait}} restriction and > retry the sync with others in case a connection/timeout exception happens. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
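The retry idea in the last paragraph — obey {{leaderVoteWait}} and keep retrying on connection/timeout exceptions instead of failing the whole PeerSync — could look roughly like the following sketch. The names are illustrative, not Solr's actual PeerSync code:

```java
import java.net.SocketTimeoutException;

// Hedged sketch: retry a sync attempt until it succeeds or the
// leaderVoteWait budget is exhausted. A SocketTimeoutException (hanging
// peer) triggers a retry rather than an immediate PeerSync failure.
public class RetryingSync {
    interface SyncAttempt {
        boolean run() throws Exception; // true = versions fetched successfully
    }

    static boolean syncWithRetry(SyncAttempt attempt, long leaderVoteWaitMs) {
        long deadline = System.nanoTime() + leaderVoteWaitMs * 1_000_000L;
        while (System.nanoTime() < deadline) {
            try {
                if (attempt.run()) {
                    return true;
                }
            } catch (SocketTimeoutException e) {
                // peer may just be hanging: retry within the budget
            } catch (Exception e) {
                return false; // other failures are not retried in this sketch
            }
            try {
                Thread.sleep(10); // small backoff before the next attempt
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // budget exhausted; caller falls back (e.g. full replication)
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated peer: the first two attempts time out, the third succeeds.
        boolean ok = syncWithRetry(() -> {
            if (++calls[0] < 3) throw new SocketTimeoutException("peer hung");
            return true;
        }, 5_000);
        System.out.println(ok + " after " + calls[0] + " attempts");
    }
}
```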
[jira] [Updated] (SOLR-14356) PeerSync with hanging nodes
[ https://issues.apache.org/jira/browse/SOLR-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-14356: Attachment: SOLR-14356.patch > PeerSync with hanging nodes > --- > > Key: SOLR-14356 > URL: https://issues.apache.org/jira/browse/SOLR-14356 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Priority: Major > Attachments: SOLR-14356.patch, SOLR-14356.patch > > > Right now in {{PeerSync}} (during leader election), in case of an exception when > requesting versions from a node, we will skip that node if the exception is one of the > following types: > * ConnectTimeoutException > * NoHttpResponseException > * SocketException > Sometimes the other node basically hangs but still accepts connections. In that > case a SocketTimeoutException is thrown and we consider the {{PeerSync}} > process as failed, and the whole shard is basically leaderless forever (as > long as the hanging node is still there). > We can't just blindly add {{SocketTimeoutException}} to the above list, since > [~shalin] mentioned that sometimes a timeout can happen for genuine > reasons too, e.g. a temporary GC pause. > I think the general idea here is that we obey the {{leaderVoteWait}} restriction and > retry the sync with others in case a connection/timeout exception happens. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14356) PeerSync with hanging nodes
[ https://issues.apache.org/jira/browse/SOLR-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-14356: Attachment: SOLR-14356.patch > PeerSync with hanging nodes > --- > > Key: SOLR-14356 > URL: https://issues.apache.org/jira/browse/SOLR-14356 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Priority: Major > Attachments: SOLR-14356.patch, SOLR-14356.patch > > > Right now in {{PeerSync}} (during leader election), in case of an exception when > requesting versions from a node, we will skip that node if the exception is one of the > following types: > * ConnectTimeoutException > * NoHttpResponseException > * SocketException > Sometimes the other node basically hangs but still accepts connections. In that > case a SocketTimeoutException is thrown and we consider the {{PeerSync}} > process as failed, and the whole shard is basically leaderless forever (as > long as the hanging node is still there). > We can't just blindly add {{SocketTimeoutException}} to the above list, since > [~shalin] mentioned that sometimes a timeout can happen for genuine > reasons too, e.g. a temporary GC pause. > I think the general idea here is that we obey the {{leaderVoteWait}} restriction and > retry the sync with others in case a connection/timeout exception happens. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14356) PeerSync with hanging nodes
[ https://issues.apache.org/jira/browse/SOLR-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-14356: Status: Patch Available (was: Open) > PeerSync with hanging nodes > --- > > Key: SOLR-14356 > URL: https://issues.apache.org/jira/browse/SOLR-14356 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Priority: Major > Attachments: SOLR-14356.patch, SOLR-14356.patch > > > Right now in {{PeerSync}} (during leader election), in case of an exception when > requesting versions from a node, we will skip that node if the exception is one of the > following types: > * ConnectTimeoutException > * NoHttpResponseException > * SocketException > Sometimes the other node basically hangs but still accepts connections. In that > case a SocketTimeoutException is thrown and we consider the {{PeerSync}} > process as failed, and the whole shard is basically leaderless forever (as > long as the hanging node is still there). > We can't just blindly add {{SocketTimeoutException}} to the above list, since > [~shalin] mentioned that sometimes a timeout can happen for genuine > reasons too, e.g. a temporary GC pause. > I think the general idea here is that we obey the {{leaderVoteWait}} restriction and > retry the sync with others in case a connection/timeout exception happens. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14356) PeerSync with hanging nodes
[ https://issues.apache.org/jira/browse/SOLR-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-14356: Status: Open (was: Patch Available) > PeerSync with hanging nodes > --- > > Key: SOLR-14356 > URL: https://issues.apache.org/jira/browse/SOLR-14356 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Priority: Major > Attachments: SOLR-14356.patch, SOLR-14356.patch > > > Right now in {{PeerSync}} (during leader election), in case of an exception when > requesting versions from a node, we will skip that node if the exception is one of the > following types: > * ConnectTimeoutException > * NoHttpResponseException > * SocketException > Sometimes the other node basically hangs but still accepts connections. In that > case a SocketTimeoutException is thrown and we consider the {{PeerSync}} > process as failed, and the whole shard is basically leaderless forever (as > long as the hanging node is still there). > We can't just blindly add {{SocketTimeoutException}} to the above list, since > [~shalin] mentioned that sometimes a timeout can happen for genuine > reasons too, e.g. a temporary GC pause. > I think the general idea here is that we obey the {{leaderVoteWait}} restriction and > retry the sync with others in case a connection/timeout exception happens. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-14356) PeerSync with hanging nodes
[ https://issues.apache.org/jira/browse/SOLR-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-14356: Attachment: (was: SOLR-14356.patch) > PeerSync with hanging nodes > --- > > Key: SOLR-14356 > URL: https://issues.apache.org/jira/browse/SOLR-14356 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Priority: Major > Attachments: SOLR-14356.patch, SOLR-14356.patch > > > Right now in {{PeerSync}} (during leader election), in case of an exception when > requesting versions from a node, we will skip that node if the exception is one of the > following types: > * ConnectTimeoutException > * NoHttpResponseException > * SocketException > Sometimes the other node basically hangs but still accepts connections. In that > case a SocketTimeoutException is thrown and we consider the {{PeerSync}} > process as failed, and the whole shard is basically leaderless forever (as > long as the hanging node is still there). > We can't just blindly add {{SocketTimeoutException}} to the above list, since > [~shalin] mentioned that sometimes a timeout can happen for genuine > reasons too, e.g. a temporary GC pause. > I think the general idea here is that we obey the {{leaderVoteWait}} restriction and > retry the sync with others in case a connection/timeout exception happens. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14367) Upgrade Tika to 1.24
[ https://issues.apache.org/jira/browse/SOLR-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070625#comment-17070625 ] Erick Erickson commented on SOLR-14367: --- Somehow I pushed everything _except_ the change to 8x ivy-versions.properties, pushing momentarily... > Upgrade Tika to 1.24 > > > Key: SOLR-14367 > URL: https://issues.apache.org/jira/browse/SOLR-14367 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.5 >Reporter: mibo >Assignee: Erick Erickson >Priority: Minor > Fix For: 8.6 > > Time Spent: 10m > Remaining Estimate: 0h > > Upgrade Apache Tika to the newly released 1.24 to handle > [CVE-2020-1950|https://nvd.nist.gov/vuln/detail/CVE-2020-1950]. > Created [PR #1383|https://github.com/apache/lucene-solr/pull/1383] but > afterwards I found https://issues.apache.org/jira/browse/SOLR-14054 and it > looks like an update is much more complicated. > If someone supports me I will update my contribution. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14367) Upgrade Tika to 1.24
[ https://issues.apache.org/jira/browse/SOLR-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070626#comment-17070626 ] ASF subversion and git services commented on SOLR-14367: Commit 70d084c0348eb31e12d45ca74d833a08394f5f44 in lucene-solr's branch refs/heads/branch_8x from Erick Erickson [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=70d084c ] SOLR-14367: Upgrade Tika to 1.24. Somehow omitted ivy-versions.properties > Upgrade Tika to 1.24 > > > Key: SOLR-14367 > URL: https://issues.apache.org/jira/browse/SOLR-14367 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.5 >Reporter: mibo >Assignee: Erick Erickson >Priority: Minor > Fix For: 8.6 > > Time Spent: 10m > Remaining Estimate: 0h > > Upgrade Apache Tika to the newly released 1.24 to handle > [CVE-2020-1950|https://nvd.nist.gov/vuln/detail/CVE-2020-1950]. > Created [PR #1383|https://github.com/apache/lucene-solr/pull/1383] but > afterwards I found https://issues.apache.org/jira/browse/SOLR-14054 and it > looks like an update is much more complicated. > If someone supports me I will update my contribution. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Resolved] (SOLR-13774) Add Lucene/Solr OpenJDK Compatibility Matrix to Ref Guide
[ https://issues.apache.org/jira/browse/SOLR-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-13774. --- Resolution: Won't Fix I've been thinking about this, and the more I think about it the less I think we should include it even in the Wiki. I know Nick put a lot of work into it, and sorry for being so slow. I think the caveats already in the ref guide are sufficient. I also don't want to put anything out that could be read as an indication that we're going to support any version of Java that doesn't have LTS. That is, 9, 10, 12, and 13 are all short-term releases. If someone comes in with "I have this problem under Java 10 and Solr 8.4, but Java 11 works" for instance, I doubt anyone will actually try to address it, and I would rather not set any expectations in that direction. If someone else wants to take it over, please feel free to re-open. > Add Lucene/Solr OpenJDK Compatibility Matrix to Ref Guide > - > > Key: SOLR-13774 > URL: https://issues.apache.org/jira/browse/SOLR-13774 > Project: Solr > Issue Type: Task > Components: documentation >Affects Versions: 8.1.1 > Environment: EC2 t2.2xlarge > Ubuntu 16.04.2 LTS > Solr source downloaded from: [https://archive.apache.org/dist/lucene/solr] > OpenJDK binaries downloaded from: [https://jdk.java.net|https://jdk.java.net/] > OpenJDK version information is included in the documentation. > >Reporter: Nick >Assignee: Erick Erickson >Priority: Minor > Time Spent: 1h 10m > Remaining Estimate: 0h > > Create a reusable build system to run the Lucene/Solr ant test suite > against different versions of OpenJDK binaries. 
Generate a table with the > results of BUILD SUCCESSFUL or BUILD FAILED and incorporate the output into > the Ref Guide here: solr/solr-ref-guide/src/solr-system-requirements.adoc -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] bringyou commented on a change in pull request #513: LUCENE-8590: Optimize DocValues update datastructures
bringyou commented on a change in pull request #513: LUCENE-8590: Optimize DocValues update datastructures URL: https://github.com/apache/lucene-solr/pull/513#discussion_r399898066 ## File path: lucene/core/src/java/org/apache/lucene/index/BufferedUpdates.java ## @@ -288,15 +184,24 @@ void clear() { deleteTerms.clear(); deleteQueries.clear(); deleteDocIDs.clear(); -numericUpdates.clear(); -binaryUpdates.clear(); numTermDeletes.set(0); -numNumericUpdates.set(0); -numBinaryUpdates.set(0); -bytesUsed.set(0); +numFieldUpdates.set(0); +fieldUpdates.clear(); +bytesUsed.addAndGet(-bytesUsed.get()); +fieldUpdatesBytesUsed.addAndGet(-fieldUpdatesBytesUsed.get()); } boolean any() { -return deleteTerms.size() > 0 || deleteDocIDs.size() > 0 || deleteQueries.size() > 0 || numericUpdates.size() > 0 || binaryUpdates.size() > 0; +return deleteTerms.size() > 0 || deleteDocIDs.size() > 0 || deleteQueries.size() > 0 || numFieldUpdates.get() > 0; + } + + @Override + public long ramBytesUsed() { +return bytesUsed.get() + fieldUpdatesBytesUsed.get(); + } + + void clearDeletedDocIds() { +deleteDocIDs.clear(); +bytesUsed.addAndGet(-deleteDocIDs.size() * BufferedUpdates.BYTES_PER_DEL_DOCID); } Review comment: I'm glad to do so😊 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13492) Disallow explicit GC by default during Solr startup
[ https://issues.apache.org/jira/browse/SOLR-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070658#comment-17070658 ] David Smiley commented on SOLR-13492: - I'm -0 to this. I think it'll be more of an annoyance than a safety net for a very hypothetical situation that isn't even in the code we ship. Using jconsole & jvisualvm to trigger GC is very common. > Disallow explicit GC by default during Solr startup > --- > > Key: SOLR-13492 > URL: https://issues.apache.org/jira/browse/SOLR-13492 > Project: Solr > Issue Type: Improvement > Components: scripts and tools >Reporter: Shawn Heisey >Assignee: Shawn Heisey >Priority: Major > Attachments: SOLR-13492.patch > > Time Spent: 20m > Remaining Estimate: 0h > > Solr should use the -XX:+DisableExplicitGC option as part of its default GC > tuning. > None of Solr's stock code uses explicit GCs, so that option will have no > effect on most installs. The effective result of this is that if somebody > adds custom code to Solr and THAT code does an explicit GC, it won't be > allowed to function. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (LUCENE-9298) Incorrect clear order in BufferedUpdates
yubinglei created LUCENE-9298: - Summary: Incorrect clear order in BufferedUpdates Key: LUCENE-9298 URL: https://issues.apache.org/jira/browse/LUCENE-9298 Project: Lucene - Core Issue Type: Bug Affects Versions: 8.5 Reporter: yubinglei As mentioned in this https://github.com/apache/lucene-solr/pull/513#discussion_r399131681 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] bringyou opened a new pull request #1389: LUCENE-9298: fix clearDeletedDocIds in BufferedUpdates
bringyou opened a new pull request #1389: LUCENE-9298: fix clearDeletedDocIds in BufferedUpdates URL: https://github.com/apache/lucene-solr/pull/1389 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9298) Incorrect clear order in BufferedUpdates
[ https://issues.apache.org/jira/browse/LUCENE-9298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070670#comment-17070670 ] yubinglei commented on LUCENE-9298: --- I have created a PR in Github https://github.com/apache/lucene-solr/pull/1389 > Incorrect clear order in BufferedUpdates > > > Key: LUCENE-9298 > URL: https://issues.apache.org/jira/browse/LUCENE-9298 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 8.5 >Reporter: yubinglei >Priority: Minor > Labels: easyfix, newbie, ready-to-commit > Time Spent: 10m > Remaining Estimate: 0h > > As mentioned in this > https://github.com/apache/lucene-solr/pull/513#discussion_r399131681 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] bringyou commented on a change in pull request #513: LUCENE-8590: Optimize DocValues update datastructures
bringyou commented on a change in pull request #513: LUCENE-8590: Optimize DocValues update datastructures URL: https://github.com/apache/lucene-solr/pull/513#discussion_r399898066 ## File path: lucene/core/src/java/org/apache/lucene/index/BufferedUpdates.java ## @@ -288,15 +184,24 @@ void clear() { deleteTerms.clear(); deleteQueries.clear(); deleteDocIDs.clear(); -numericUpdates.clear(); -binaryUpdates.clear(); numTermDeletes.set(0); -numNumericUpdates.set(0); -numBinaryUpdates.set(0); -bytesUsed.set(0); +numFieldUpdates.set(0); +fieldUpdates.clear(); +bytesUsed.addAndGet(-bytesUsed.get()); +fieldUpdatesBytesUsed.addAndGet(-fieldUpdatesBytesUsed.get()); } boolean any() { -return deleteTerms.size() > 0 || deleteDocIDs.size() > 0 || deleteQueries.size() > 0 || numericUpdates.size() > 0 || binaryUpdates.size() > 0; +return deleteTerms.size() > 0 || deleteDocIDs.size() > 0 || deleteQueries.size() > 0 || numFieldUpdates.get() > 0; + } + + @Override + public long ramBytesUsed() { +return bytesUsed.get() + fieldUpdatesBytesUsed.get(); + } + + void clearDeletedDocIds() { +deleteDocIDs.clear(); +bytesUsed.addAndGet(-deleteDocIDs.size() * BufferedUpdates.BYTES_PER_DEL_DOCID); } Review comment: I'm glad to do so😊. create a jira ticket: [9298](https://issues.apache.org/jira/browse/LUCENE-9298) and a [PR](https://github.com/apache/lucene-solr/pull/1389), please take a look This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-9298) Incorrect clear order in BufferedUpdates
[ https://issues.apache.org/jira/browse/LUCENE-9298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] yubinglei updated LUCENE-9298: -- Labels: easyfix newbie pull-request-available (was: easyfix newbie ready-to-commit) > Incorrect clear order in BufferedUpdates > > > Key: LUCENE-9298 > URL: https://issues.apache.org/jira/browse/LUCENE-9298 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 8.5 >Reporter: yubinglei >Priority: Minor > Labels: easyfix, newbie, pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > As mentioned in this > https://github.com/apache/lucene-solr/pull/513#discussion_r399131681 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13492) Disallow explicit GC by default during Solr startup
[ https://issues.apache.org/jira/browse/SOLR-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070674#comment-17070674 ] Munendra S N commented on SOLR-13492: - Thanks David for the feedback. With the current defaults with which we ship Solr, jconsole and jvisualvm don't work as JMX is disabled by default. Also, both pieces of feedback were against adding the flag. So, considering this, I suggest going with -XX:+ExplicitGCInvokesConcurrent so that explicit GCs run concurrently. I will ask [~kgsdora] to make the changes once you guys confirm > Disallow explicit GC by default during Solr startup > --- > > Key: SOLR-13492 > URL: https://issues.apache.org/jira/browse/SOLR-13492 > Project: Solr > Issue Type: Improvement > Components: scripts and tools >Reporter: Shawn Heisey >Assignee: Shawn Heisey >Priority: Major > Attachments: SOLR-13492.patch > > Time Spent: 20m > Remaining Estimate: 0h > > Solr should use the -XX:+DisableExplicitGC option as part of its default GC > tuning. > None of Solr's stock code uses explicit GCs, so that option will have no > effect on most installs. The effective result of this is that if somebody > adds custom code to Solr and THAT code does an explicit GC, it won't be > allowed to function. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-9298) Incorrect clear order in BufferedUpdates
[ https://issues.apache.org/jira/browse/LUCENE-9298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] yubinglei updated LUCENE-9298: -- Description: As mentioned in this [https://github.com/apache/lucene-solr/pull/513#discussion_r399131681] the method clearDeletedDocIds in BufferedUpdates.java has a bug: it can't reset bytesUsed correctly {code:java} void clearDeletedDocIds() { deleteDocIDs.clear(); bytesUsed.addAndGet(-deleteDocIDs.size() * BufferedUpdates.BYTES_PER_DEL_DOCID); } {code} was: As mentioned in this https://github.com/apache/lucene-solr/pull/513#discussion_r399131681 > Incorrect clear order in BufferedUpdates > > > Key: LUCENE-9298 > URL: https://issues.apache.org/jira/browse/LUCENE-9298 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 8.5 >Reporter: yubinglei >Priority: Minor > Labels: easyfix, newbie, pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > As mentioned in this > [https://github.com/apache/lucene-solr/pull/513#discussion_r399131681] > the method clearDeletedDocIds in BufferedUpdates.java has a bug: it can't > reset bytesUsed correctly > {code:java} > void clearDeletedDocIds() { > deleteDocIDs.clear(); > bytesUsed.addAndGet(-deleteDocIDs.size() * > BufferedUpdates.BYTES_PER_DEL_DOCID); > } > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
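The bug in the snippet above is an ordering problem: {{deleteDocIDs.clear()}} runs before {{deleteDocIDs.size()}} is read, so the subtraction is always zero and {{bytesUsed}} never decreases. A standalone reproduction outside Lucene (placeholder constant and list type; this is not the real BufferedUpdates class):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Minimal reproduction of the clear-order bug: reading size() after clear()
// subtracts -0 * BYTES_PER_DEL_DOCID, leaving the byte accounting stale.
public class ClearOrderDemo {
    static final long BYTES_PER_DEL_DOCID = 32; // placeholder value

    final List<Integer> deleteDocIDs = new ArrayList<>();
    final AtomicLong bytesUsed = new AtomicLong();

    void addDocId(int docId) {
        deleteDocIDs.add(docId);
        bytesUsed.addAndGet(BYTES_PER_DEL_DOCID);
    }

    // Buggy order, as in the snippet above: size() is read after clear().
    void clearBuggy() {
        deleteDocIDs.clear();
        bytesUsed.addAndGet(-deleteDocIDs.size() * BYTES_PER_DEL_DOCID);
    }

    // Fixed order: subtract using the size captured before clearing.
    void clearFixed() {
        bytesUsed.addAndGet(-deleteDocIDs.size() * BYTES_PER_DEL_DOCID);
        deleteDocIDs.clear();
    }

    public static void main(String[] args) {
        ClearOrderDemo buggy = new ClearOrderDemo();
        buggy.addDocId(1);
        buggy.addDocId(2);
        buggy.clearBuggy();
        System.out.println("buggy bytesUsed after clear: " + buggy.bytesUsed.get()); // 64, never released

        ClearOrderDemo fixed = new ClearOrderDemo();
        fixed.addDocId(1);
        fixed.addDocId(2);
        fixed.clearFixed();
        System.out.println("fixed bytesUsed after clear: " + fixed.bytesUsed.get()); // 0
    }
}
```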
[jira] [Updated] (SOLR-14317) HttpClusterStateProvider throws exception when only one node down
[ https://issues.apache.org/jira/browse/SOLR-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lyle updated SOLR-14317: Attachment: SOLR-14317.patch > HttpClusterStateProvider throws exception when only one node down > - > > Key: SOLR-14317 > URL: https://issues.apache.org/jira/browse/SOLR-14317 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.7.1, 7.7.2 >Reporter: Lyle >Assignee: Ishan Chattopadhyaya >Priority: Major > Fix For: 8.6 > > Attachments: SOLR-14317.patch, SOLR-14317.patch > > Time Spent: 20m > Remaining Estimate: 0h > > When creating a CloudSolrClient with solrUrls, if the first url in the solrUrls > list is invalid or its server is down, it throws an exception directly rather > than trying the remaining urls. > In > [https://github.com/apache/lucene-solr/blob/branch_7_7/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClusterStateProvider.java#L65], > if fetchLiveNodes(initialClient) hits any IOException, then in > [https://github.com/apache/lucene-solr/blob/branch_7_7/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java#L648], > the exception is caught and a SolrServerException is thrown to the upper caller, > while no IOException will be caught in > HttpClusterStateProvider.fetchLiveNodes(HttpClusterStateProvider.java:200). > The SolrServerException should be caught as well in > [https://github.com/apache/lucene-solr/blob/branch_7_7/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClusterStateProvider.java#L69], > so that if the first node provided in solrUrls is down, we can try the second > to fetch live nodes. > > --
[jira] [Commented] (SOLR-14317) HttpClusterStateProvider throws exception when only one node down
[ https://issues.apache.org/jira/browse/SOLR-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070713#comment-17070713 ] Lyle commented on SOLR-14317: - Attached a patch for branch_7_7. [~ichattopadhyaya], [~noble], may I know the release plan for 7.7.3? > HttpClusterStateProvider throws exception when only one node down > - > > Key: SOLR-14317 > URL: https://issues.apache.org/jira/browse/SOLR-14317 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.7.1, 7.7.2 >Reporter: Lyle >Assignee: Ishan Chattopadhyaya >Priority: Major > Fix For: 8.6 > > Attachments: SOLR-14317.patch, SOLR-14317.patch > > Time Spent: 20m > Remaining Estimate: 0h > > When creating a CloudSolrClient with solrUrls, if the first url in the solrUrls > list is invalid or its server is down, it throws an exception directly rather > than trying the remaining urls. > In > [https://github.com/apache/lucene-solr/blob/branch_7_7/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClusterStateProvider.java#L65], > if fetchLiveNodes(initialClient) hits any IOException, then in > [https://github.com/apache/lucene-solr/blob/branch_7_7/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java#L648], > the exception is caught and a SolrServerException is thrown to the upper caller, > while no IOException will be caught in > HttpClusterStateProvider.fetchLiveNodes(HttpClusterStateProvider.java:200). > The SolrServerException should be caught as well in > [https://github.com/apache/lucene-solr/blob/branch_7_7/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClusterStateProvider.java#L69], > so that if the first node provided in solrUrls is down, we can try the second > to fetch live nodes. > > --
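The failover behaviour the report asks for can be sketched in isolation. This is a hypothetical standalone version, not the real SolrJ code: the SolrNode interface, class name, and method names are stand-ins for HttpClusterStateProvider's loop over solrUrls, and the broad catch plays the role of catching SolrServerException in addition to IOException:

```java
import java.util.List;

// Hypothetical sketch of the fix described above: try each configured base
// URL in turn, and catch the wrapped exception (SolrServerException in the
// real code, not just IOException) so a dead first node does not abort
// initialization before the remaining URLs are tried.
public class LiveNodesFetcher {
    /** Stand-in for one Solr base URL; the real code issues an HTTP request. */
    interface SolrNode {
        List<String> fetchLiveNodes() throws Exception;
    }

    static List<String> fetchFromAny(List<SolrNode> nodes) {
        for (SolrNode node : nodes) {
            try {
                return node.fetchLiveNodes();
            } catch (Exception e) {
                // Log and fall through to the next URL instead of
                // propagating the failure to the caller.
            }
        }
        throw new RuntimeException("Could not fetch live nodes from any of the provided URLs");
    }

    public static void main(String[] args) {
        List<SolrNode> nodes = List.of(
            () -> { throw new Exception("first node down"); },
            () -> List.of("127.0.0.1:8983_solr"));
        System.out.println(fetchFromAny(nodes));
    }
}
```

With the pre-fix behaviour, the first lambda's failure would propagate immediately; here the loop simply moves on and returns the live nodes reported by the second URL.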
[jira] [Comment Edited] (SOLR-14317) HttpClusterStateProvider throws exception when only one node down
[ https://issues.apache.org/jira/browse/SOLR-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070713#comment-17070713 ] Lyle edited comment on SOLR-14317 at 3/30/20, 6:57 AM: --- Attached a patch for branch_7_7. [~ichattopadhyaya], [~noble], may I know the release plan for version 7.7.3? was (Author: lyle_wang): Attached patch for branch_7_7. [~ichattopadhyaya], [~noble], May I know the release plan 7.7.3 ? > HttpClusterStateProvider throws exception when only one node down > - > > Key: SOLR-14317 > URL: https://issues.apache.org/jira/browse/SOLR-14317 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.7.1, 7.7.2 >Reporter: Lyle >Assignee: Ishan Chattopadhyaya >Priority: Major > Fix For: 8.6 > > Attachments: SOLR-14317.patch, SOLR-14317.patch > > Time Spent: 20m > Remaining Estimate: 0h > > When creating a CloudSolrClient with solrUrls, if the first url in the solrUrls > list is invalid or its server is down, it throws an exception directly rather > than trying the remaining urls. > In > [https://github.com/apache/lucene-solr/blob/branch_7_7/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClusterStateProvider.java#L65], > if fetchLiveNodes(initialClient) hits any IOException, then in > [https://github.com/apache/lucene-solr/blob/branch_7_7/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java#L648], > the exception is caught and a SolrServerException is thrown to the upper caller, > while no IOException will be caught in > HttpClusterStateProvider.fetchLiveNodes(HttpClusterStateProvider.java:200). > The SolrServerException should be caught as well in > [https://github.com/apache/lucene-solr/blob/branch_7_7/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClusterStateProvider.java#L69], > so that if the first node provided in solrUrls is down, we can try the second > to fetch live nodes. 