[jira] [Created] (SOLR-15058) Solr can not get current hostname when hostname contain '_'

2020-12-23 Thread Zhaolong Li (Jira)
Zhaolong Li created SOLR-15058:
--

 Summary: Solr can not get current hostname when hostname contain 
'_'
 Key: SOLR-15058
 URL: https://issues.apache.org/jira/browse/SOLR-15058
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Zhaolong Li


Utils.getBaseUrlForNodeName splits the node name at the first '_'.

In ZooKeeper, Solr appends '_solr' to the end of the hostname.


The split should therefore happen at the last '_' instead.
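To illustrate, a minimal sketch of the proposed behavior (a hypothetical helper mirroring `Utils.getBaseUrlForNodeName`; the real method's signature and escaping rules may differ):

```java
public class BaseUrlSplit {
  // Hypothetical helper mirroring the proposed fix: split at the LAST '_',
  // since Solr appends "_solr" to host:port when registering the node in ZK.
  public static String getBaseUrlForNodeName(String nodeName, String urlScheme) {
    int idx = nodeName.lastIndexOf('_');  // was indexOf('_') before the fix
    if (idx < 0) {
      throw new IllegalArgumentException("no '_' in node name: " + nodeName);
    }
    String hostAndPort = nodeName.substring(0, idx);
    String context = nodeName.substring(idx + 1);  // e.g. "solr"
    return urlScheme + "://" + hostAndPort + "/" + context;
  }

  public static void main(String[] args) {
    // With an underscore in the hostname, only a last-'_' split is correct:
    // a first-'_' split would yield host "my" and context "host:8983_solr".
    System.out.println(getBaseUrlForNodeName("my_host:8983_solr", "http"));  // http://my_host:8983/solr
  }
}
```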



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-15058) Solr can not get correct hostname when hostname contain '_'

2020-12-23 Thread Su Sasa (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-15058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Su Sasa updated SOLR-15058:
---
Summary: Solr can not get correct hostname when hostname contain '_'  (was: 
Solr can not get current hostname when hostname contain '_')

> Solr can not get correct hostname when hostname contain '_'
> ---
>
> Key: SOLR-15058
> URL: https://issues.apache.org/jira/browse/SOLR-15058
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Su Sasa
>Priority: Major
>
> Utils.getBaseUrlForNodeName splits the node name at the first '_'.
> In ZooKeeper, Solr appends '_solr' to the end of the hostname.
> The split should therefore happen at the last '_' instead.






[jira] [Updated] (SOLR-15058) Solr can not get correct hostname when hostname contain '_'

2020-12-23 Thread Su Sasa (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-15058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Su Sasa updated SOLR-15058:
---
Attachment: 0001-SOLR-15058-get-the-correct-hostname-when-hostname-co.patch

> Solr can not get correct hostname when hostname contain '_'
> ---
>
> Key: SOLR-15058
> URL: https://issues.apache.org/jira/browse/SOLR-15058
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Su Sasa
>Priority: Major
> Attachments: 
> 0001-SOLR-15058-get-the-correct-hostname-when-hostname-co.patch
>
>
> Utils.getBaseUrlForNodeName splits the node name at the first '_'.
> In ZooKeeper, Solr appends '_solr' to the end of the hostname.
> The split should therefore happen at the last '_' instead.






[jira] [Updated] (SOLR-15058) Solr can not get correct hostname when hostname contain '_'

2020-12-23 Thread Su Sasa (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-15058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Su Sasa updated SOLR-15058:
---
Priority: Minor  (was: Major)

> Solr can not get correct hostname when hostname contain '_'
> ---
>
> Key: SOLR-15058
> URL: https://issues.apache.org/jira/browse/SOLR-15058
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Su Sasa
>Priority: Minor
> Attachments: 
> 0001-SOLR-15058-get-the-correct-hostname-when-hostname-co.patch
>
>
> Utils.getBaseUrlForNodeName splits the node name at the first '_'.
> In ZooKeeper, Solr appends '_solr' to the end of the hostname.
> The split should therefore happen at the last '_' instead.






[GitHub] [lucene-solr] Sasasu opened a new pull request #2164: SOLR-15058: get the correct hostname when hostname contain '_'

2020-12-23 Thread GitBox


Sasasu opened a new pull request #2164:
URL: https://github.com/apache/lucene-solr/pull/2164


   
   
   
   # Description
   
   
   `Utils.getBaseUrlForNodeName` splits the node name at the first '_'.
   
   In ZK, Solr appends '_solr' to the end of the hostname.
   
   
   # Solution
   
   Split the node name at the last '_' instead.
   
   # Tests
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [x] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [x] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [x] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [x] I have developed this patch against the `master` branch.
   - [ ] I have run `./gradlew check`.
   - [x] I have added tests for my changes.
   - [ ] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Updated] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-9570:

Description: 
Review and correct all the javadocs before they're messed up by automatic 
formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
it should be relatively quick.

*Reviewing diffs manually*

* switch to branch jira/LUCENE-9570
* Open gradle/validation/spotless.gradle and locate the project/ package you 
wish to review. Enable it in spotless.gradle by creating a corresponding switch 
case block (refer to existing examples), for example:
{code}
  case ":lucene:highlighter":
target "src/**"
targetExclude "**/resources/**", "**/overview.html"
break
{code}
* Reformat the code:
{code}
gradlew tidy && git diff -w > /tmp/diff.patch && git status
{code}
* Look at what has changed (git status) and review the differences manually 
(/tmp/diff.patch). If everything looks ok, commit it directly to 
jira/LUCENE-9570 or make a PR against that branch.
{code}
git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
{code}

  was:Review and correct all the javadocs before they're messed up by automatic 
formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
it should be relatively quick.


> Review code diffs after automatic formatting and correct problems before it 
> is applied
> --
>
> Key: LUCENE-9570
> URL: https://issues.apache.org/jira/browse/LUCENE-9570
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Review and correct all the javadocs before they're messed up by automatic 
> formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
> it should be relatively quick.
> *Reviewing diffs manually*
> * switch to branch jira/LUCENE-9570
> * Open gradle/validation/spotless.gradle and locate the project/ package you 
> wish to review. Enable it in spotless.gradle by creating a corresponding 
> switch case block (refer to existing examples), for example:
> {code}
>   case ":lucene:highlighter":
> target "src/**"
> targetExclude "**/resources/**", "**/overview.html"
> break
> {code}
> * Reformat the code:
> {code}
> gradlew tidy && git diff -w > /tmp/diff.patch && git status
> {code}
> * Look at what has changed (git status) and review the differences manually 
> (/tmp/diff.patch). If everything looks ok, commit it directly to 
> jira/LUCENE-9570 or make a PR against that branch.
> {code}
> git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
> {code}






[GitHub] [lucene-solr] sigram commented on a change in pull request #2148: SOLR-15052: Per-replica states for reducing overseer bottlenecks

2020-12-23 Thread GitBox


sigram commented on a change in pull request #2148:
URL: https://github.com/apache/lucene-solr/pull/2148#discussion_r547887149



##
File path: solr/core/src/test/org/apache/solr/handler/TestContainerPlugin.java
##
@@ -73,6 +79,29 @@ public void teardown() {
 System.clearProperty("enable.packages");
   }
 
+  public void testZkBehavior() throws Exception {

Review comment:
   This unit test is misplaced; it has nothing to do with this suite. It 
should be moved to `ZkSolrClientTest`.








[GitHub] [lucene-solr] sigram commented on a change in pull request #2148: SOLR-15052: Per-replica states for reducing overseer bottlenecks

2020-12-23 Thread GitBox


sigram commented on a change in pull request #2148:
URL: https://github.com/apache/lucene-solr/pull/2148#discussion_r547890418



##
File path: 
solr/test-framework/src/java/org/apache/solr/cloud/SolrCloudTestCase.java
##
@@ -81,6 +81,7 @@
 public class SolrCloudTestCase extends SolrTestCaseJ4 {
 
   private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+  public static final Boolean USE_PER_REPLICA_STATE = 
Boolean.parseBoolean(System.getProperty("use.per-replica", "false"));

Review comment:
   There's a `defaults` section in our unofficial kitchen-sink ZK config 
file, i.e. `/clusterprops.json`.
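
   For reference, a hypothetical sketch of how such a flag might live under 
the `defaults` section of `/clusterprops.json` instead of a system property 
(the key names here are illustrative, not the actual Solr property names):

```json
{
  "defaults": {
    "cluster": {
      "perReplicaState": false
    }
  }
}
```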








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547915021



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/AttributeFetcher.java
##
@@ -18,93 +18,53 @@
 package org.apache.solr.cluster.placement;
 
 import org.apache.solr.cluster.Node;
+import org.apache.solr.cluster.SolrCollection;
 
 import java.util.Set;
 
 /**
  * Instances of this interface are used to fetch various attributes from 
nodes (and other sources) in the cluster.
  */
 public interface AttributeFetcher {
-  /**
-   * Request the number of cores on each node. To get the value use {@link 
AttributeValues#getCoresCount(Node)}
-   */
-  AttributeFetcher requestNodeCoreCount();
-
-  /**
-   * Request the disk hardware type on each node. To get the value use {@link 
AttributeValues#getDiskType(Node)}
-   */
-  AttributeFetcher requestNodeDiskType();
-
-  /**
-   * Request the free disk size on each node. To get the value use {@link 
AttributeValues#getFreeDisk(Node)}
-   */
-  AttributeFetcher requestNodeFreeDisk();
-
-  /**
-   * Request the total disk size on each node. To get the value use {@link 
AttributeValues#getTotalDisk(Node)}
-   */
-  AttributeFetcher requestNodeTotalDisk();
-
-  /**
-   * Request the heap usage on each node. To get the value use {@link 
AttributeValues#getHeapUsage(Node)}
-   */
-  AttributeFetcher requestNodeHeapUsage();
-
-  /**
-   * Request the system load average on each node. To get the value use {@link 
AttributeValues#getSystemLoadAverage(Node)}
-   */
-  AttributeFetcher requestNodeSystemLoadAverage();
-
   /**
* Request a given system property on each node. To get the value use {@link 
AttributeValues#getSystemProperty(Node, String)}
+   * @param name system property name
*/
   AttributeFetcher requestNodeSystemProperty(String name);
 
   /**
* Request an environment variable on each node. To get the value use {@link 
AttributeValues#getEnvironmentVariable(Node, String)}
+   * @param name environment property name
*/
   AttributeFetcher requestNodeEnvironmentVariable(String name);
 
   /**
-   * Request a node metric from each node. To get the value use {@link 
AttributeValues#getMetric(Node, String, NodeMetricRegistry)}
+   * Request a node metric from each node. To get the value use {@link 
AttributeValues#getNodeMetric(Node, NodeMetric)}
+   * @param metric metric to retrieve (see {@link NodeMetric})
*/
-  AttributeFetcher requestNodeMetric(String metricName, NodeMetricRegistry 
registry);
+  AttributeFetcher requestNodeMetric(NodeMetric metric);
 
+  /**
+   * Request collection-level metrics. To get the values use {@link 
AttributeValues#getCollectionMetrics(String)}.
+   * Note that this request will fetch information from nodes relevant to the 
collection
+   * replicas and not the ones specified in {@link #fetchFrom(Set)} (though 
they may overlap).
+   * @param solrCollection request metrics for this collection
+   * @param metrics metrics to retrieve (see {@link ReplicaMetric})
+   */
+  AttributeFetcher requestCollectionMetrics(SolrCollection solrCollection, 
Set<ReplicaMetric<?>> metrics);
 
   /**
* The set of nodes from which to fetch all node related attributes. Calling 
this method is mandatory if any of the {@code requestNode*}
* methods got called.
+   * @param nodes nodes to fetch from
*/
  AttributeFetcher fetchFrom(Set<Node> nodes);
 
-  /**
-   * Requests a (non node) metric of a given scope and name. To get the value 
use {@link AttributeValues#getMetric(String, String)}
-   */
-  AttributeFetcher requestMetric(String scope, String metricName);
-
   /**
* Fetches all requested node attributes from all nodes passed to {@link 
#fetchFrom(Set)} as well as non node attributes
-   * (those requested for example using {@link #requestMetric(String, String)}.
+   * (those requested for example using {@link #requestNodeMetric(NodeMetric)}.

Review comment:
   Didn't you mean to reference `requestCollectionMetrics` here?








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547915611



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/AttributeValues.java
##
@@ -65,11 +35,10 @@
   /**
* For the given node: metric of specific name and registry

Review comment:
   Javadoc comment needs updating








[jira] [Commented] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254042#comment-17254042
 ] 

ASF subversion and git services commented on LUCENE-9570:
-

Commit 8c234b28791db246068ed3f3f43b7acf83f2c731 in lucene-solr's branch 
refs/heads/master from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8c234b2 ]

LUCENE-9570: code reformatting [record rev].


> Review code diffs after automatic formatting and correct problems before it 
> is applied
> --
>
> Key: LUCENE-9570
> URL: https://issues.apache.org/jira/browse/LUCENE-9570
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Review and correct all the javadocs before they're messed up by automatic 
> formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
> it should be relatively quick.
> *Reviewing diffs manually*
> * switch to branch jira/LUCENE-9570
> * Open gradle/validation/spotless.gradle and locate the project/ package you 
> wish to review. Enable it in spotless.gradle by creating a corresponding 
> switch case block (refer to existing examples), for example:
> {code}
>   case ":lucene:highlighter":
> target "src/**"
> targetExclude "**/resources/**", "**/overview.html"
> break
> {code}
> * Reformat the code:
> {code}
> gradlew tidy && git diff -w > /tmp/diff.patch && git status
> {code}
> * Look at what has changed (git status) and review the differences manually 
> (/tmp/diff.patch). If everything looks ok, commit it directly to 
> jira/LUCENE-9570 or make a PR against that branch.
> {code}
> git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
> {code}






[jira] [Commented] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254041#comment-17254041
 ] 

ASF subversion and git services commented on LUCENE-9570:
-

Commit 2d6ad2fee6dfd96388594f4de9b37c037efe8017 in lucene-solr's branch 
refs/heads/master from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2d6ad2f ]

LUCENE-9570: code reformatting [partial].


> Review code diffs after automatic formatting and correct problems before it 
> is applied
> --
>
> Key: LUCENE-9570
> URL: https://issues.apache.org/jira/browse/LUCENE-9570
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Review and correct all the javadocs before they're messed up by automatic 
> formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
> it should be relatively quick.
> *Reviewing diffs manually*
> * switch to branch jira/LUCENE-9570
> * Open gradle/validation/spotless.gradle and locate the project/ package you 
> wish to review. Enable it in spotless.gradle by creating a corresponding 
> switch case block (refer to existing examples), for example:
> {code}
>   case ":lucene:highlighter":
> target "src/**"
> targetExclude "**/resources/**", "**/overview.html"
> break
> {code}
> * Reformat the code:
> {code}
> gradlew tidy && git diff -w > /tmp/diff.patch && git status
> {code}
> * Look at what has changed (git status) and review the differences manually 
> (/tmp/diff.patch). If everything looks ok, commit it directly to 
> jira/LUCENE-9570 or make a PR against that branch.
> {code}
> git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
> {code}






[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547918055



##
File path: solr/core/src/java/org/apache/solr/cluster/placement/NodeMetric.java
##
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.placement;
+
+/**
+ * Node metric identifier, corresponding
+ * to a node-level metric registry and the internal metric name.
+ * Alternatively this identified may use a fully-qualified metric key,

Review comment:
   typo: identified -> identifier
   
   Also, the second part of the Javadoc comment seems more appropriate for the 
implementation rather than the interface.
   Can we add to the `Registry` enum a `NULL` or `UNSPECIFIED` value so that 
the implementation registry is never `null`? 
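
   A minimal sketch of the reviewer's suggestion (the enum members, field 
names, and `key()` format here are assumptions for illustration, not the 
actual Solr implementation):

```java
public class NodeMetricSketch {
  // Hypothetical Registry enum with an UNSPECIFIED member, so that a
  // fully-qualified metric key never requires a null registry.
  enum Registry { SOLR_NODE, SOLR_JVM, UNSPECIFIED }

  static final class Metric {
    final Registry registry;
    final String name;

    Metric(Registry registry, String name) {
      if (registry == null) {
        throw new IllegalArgumentException("use Registry.UNSPECIFIED instead of null");
      }
      this.registry = registry;
      this.name = name;
    }

    // With UNSPECIFIED, the name is treated as an already fully-qualified key.
    String key() {
      return registry == Registry.UNSPECIFIED ? name : registry + ":" + name;
    }
  }

  public static void main(String[] args) {
    System.out.println(new Metric(Registry.SOLR_NODE, "CONTAINER.fs.totalSpace").key());
    System.out.println(new Metric(Registry.UNSPECIFIED, "solr.jvm:memory.heap.used").key());
  }
}
```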








[jira] [Commented] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254043#comment-17254043
 ] 

ASF subversion and git services commented on LUCENE-9570:
-

Commit 36dd8b03a3d526f882f4caf09945fab6bbefc118 in lucene-solr's branch 
refs/heads/jira/LUCENE-9570 from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=36dd8b0 ]

Merge remote-tracking branch 'origin/master' into jira/LUCENE-9570


> Review code diffs after automatic formatting and correct problems before it 
> is applied
> --
>
> Key: LUCENE-9570
> URL: https://issues.apache.org/jira/browse/LUCENE-9570
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Review and correct all the javadocs before they're messed up by automatic 
> formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
> it should be relatively quick.
> *Reviewing diffs manually*
> * switch to branch jira/LUCENE-9570
> * Open gradle/validation/spotless.gradle and locate the project/ package you 
> wish to review. Enable it in spotless.gradle by creating a corresponding 
> switch case block (refer to existing examples), for example:
> {code}
>   case ":lucene:highlighter":
> target "src/**"
> targetExclude "**/resources/**", "**/overview.html"
> break
> {code}
> * Reformat the code:
> {code}
> gradlew tidy && git diff -w > /tmp/diff.patch && git status
> {code}
> * Look at what has changed (git status) and review the differences manually 
> (/tmp/diff.patch). If everything looks ok, commit it directly to 
> jira/LUCENE-9570 or make a PR against that branch.
> {code}
> git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
> {code}






[jira] [Commented] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254044#comment-17254044
 ] 

ASF subversion and git services commented on LUCENE-9570:
-

Commit f0e722546c78a4891aaed2a843776eb2620acb9f in lucene-solr's branch 
refs/heads/jira/LUCENE-9570 from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f0e7225 ]

Merge branch 'master' into jira/LUCENE-9570


> Review code diffs after automatic formatting and correct problems before it 
> is applied
> --
>
> Key: LUCENE-9570
> URL: https://issues.apache.org/jira/browse/LUCENE-9570
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Review and correct all the javadocs before they're messed up by automatic 
> formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
> it should be relatively quick.
> *Reviewing diffs manually*
> * switch to branch jira/LUCENE-9570
> * Open gradle/validation/spotless.gradle and locate the project/ package you 
> wish to review. Enable it in spotless.gradle by creating a corresponding 
> switch case block (refer to existing examples), for example:
> {code}
>   case ":lucene:highlighter":
> target "src/**"
> targetExclude "**/resources/**", "**/overview.html"
> break
> {code}
> * Reformat the code:
> {code}
> gradlew tidy && git diff -w > /tmp/diff.patch && git status
> {code}
> * Look at what has changed (git status) and review the differences manually 
> (/tmp/diff.patch). If everything looks ok, commit it directly to 
> jira/LUCENE-9570 or make a PR against that branch.
> {code}
> git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
> {code}






[jira] [Commented] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254045#comment-17254045
 ] 

ASF subversion and git services commented on LUCENE-9570:
-

Commit 6af9a8f1227eff4cef25fc5a3d8cae3387567965 in lucene-solr's branch 
refs/heads/jira/LUCENE-9570 from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6af9a8f ]

Merge branch 'master' into jira/LUCENE-9570


> Review code diffs after automatic formatting and correct problems before it 
> is applied
> --
>
> Key: LUCENE-9570
> URL: https://issues.apache.org/jira/browse/LUCENE-9570
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Review and correct all the javadocs before they're messed up by automatic 
> formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
> it should be relatively quick.
> *Reviewing diffs manually*
> * switch to branch jira/LUCENE-9570
> * Open gradle/validation/spotless.gradle and locate the project/ package you 
> wish to review. Enable it in spotless.gradle by creating a corresponding 
> switch case block (refer to existing examples), for example:
> {code}
>   case ":lucene:highlighter":
> target "src/**"
> targetExclude "**/resources/**", "**/overview.html"
> break
> {code}
> * Reformat the code:
> {code}
> gradlew tidy && git diff -w > /tmp/diff.patch && git status
> {code}
> * Look at what has changed (git status) and review the differences manually 
> (/tmp/diff.patch). If everything looks ok, commit it directly to 
> jira/LUCENE-9570 or make a PR against that branch.
> {code}
> git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
> {code}






[jira] [Updated] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-9570:

Description: 
Review and correct all the javadocs before they're messed up by automatic 
formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
it should be relatively quick.

*Reviewing diffs manually*

* switch to branch jira/LUCENE-9570 which the PR is based on:
{code}
git remote add dweiss g...@github.com:dweiss/lucene-solr.git
git fetch dweiss
git checkout jira/LUCENE-9570
{code}
* Open gradle/validation/spotless.gradle and locate the project/ package you 
wish to review. Enable it in spotless.gradle by creating a corresponding switch 
case block (refer to existing examples), for example:
{code}
  case ":lucene:highlighter":
target "src/**"
targetExclude "**/resources/**", "**/overview.html"
break
{code}
* Reformat the code:
{code}
gradlew tidy && git diff -w > /tmp/diff.patch && git status
{code}
* Look at what has changed (git status) and review the differences manually 
(/tmp/diff.patch). If everything looks ok, commit it directly to 
jira/LUCENE-9570 or make a PR against that branch.
{code}
git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
{code}

  was:
Review and correct all the javadocs before they're messed up by automatic 
formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
it should be relatively quick.

*Reviewing diffs manually*

* switch to branch jira/LUCENE-9570
* Open gradle/validation/spotless.gradle and locate the project/ package you 
wish to review. Enable it in spotless.gradle by creating a corresponding switch 
case block (refer to existing examples), for example:
{code}
  case ":lucene:highlighter":
target "src/**"
targetExclude "**/resources/**", "**/overview.html"
break
{code}
* Reformat the code:
{code}
gradlew tidy && git diff -w > /tmp/diff.patch && git status
{code}
* Look at what has changed (git status) and review the differences manually 
(/tmp/diff.patch). If everything looks ok, commit it directly to 
jira/LUCENE-9570 or make a PR against that branch.
{code}
git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
{code}


> Review code diffs after automatic formatting and correct problems before it 
> is applied
> --
>
> Key: LUCENE-9570
> URL: https://issues.apache.org/jira/browse/LUCENE-9570
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Review and correct all the javadocs before they're messed up by automatic 
> formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
> it should be relatively quick.
> *Reviewing diffs manually*
> * switch to branch jira/LUCENE-9570 which the PR is based on:
> {code}
> git remote add dweiss git@github.com:dweiss/lucene-solr.git
> git fetch dweiss
> git checkout jira/LUCENE-9570
> {code}
> * Open gradle/validation/spotless.gradle and locate the project/package you 
> wish to review. Enable it in spotless.gradle by creating a corresponding 
> switch case block (refer to existing examples), for example:
> {code}
>   case ":lucene:highlighter":
> target "src/**"
> targetExclude "**/resources/**", "**/overview.html"
> break
> {code}
> * Reformat the code:
> {code}
> gradlew tidy && git diff -w > /tmp/diff.patch && git status
> {code}
> * Look at what has changed (git status) and review the differences manually 
> (/tmp/diff.patch). If everything looks ok, commit it directly to 
> jira/LUCENE-9570 or make a PR against that branch.
> {code}
> git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscribe@lucene.apache.org
For additional commands, e-mail: issues-help@lucene.apache.org



[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547920372



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/MetricAttribute.java
##
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.cluster.placement;
+
+import java.util.function.Function;
+
+/**
+ * Metric-related attribute of a node or replica. It defines a short symbolic
+ * name of the metric, the corresponding
+ * internal metric name and the desired format/unit conversion. Generic type
+ * defines the type of converted values of this attribute.
+ */
+public interface MetricAttribute<T> {

Review comment:
   I find it a bit weird (naming wise) that interface `MetricAttribute` has 
subinterfaces `NodeMetric` and `ReplicaMetric`.
   Ideally I'd drop the `Attribute` part of the interface name here (and in 
`MetricAttributeImpl`).





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscribe@lucene.apache.org
For additional commands, e-mail: issues-help@lucene.apache.org



[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547921749



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/MetricAttributeImpl.java
##
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.cluster.placement.impl;
+
+import org.apache.solr.cluster.placement.MetricAttribute;
+
+import java.util.Objects;
+import java.util.function.Function;
+
+/**
+ * Base class for {@link MetricAttribute} implementations.
+ */
+public class MetricAttributeImpl<T> implements MetricAttribute<T> {

Review comment:
   Make class `abstract`.








[jira] [Updated] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-9570:

Description: 
Review and correct all the javadocs before they're messed up by automatic 
formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
it should be relatively quick.

*Reviewing diffs manually*

* switch to branch jira/LUCENE-9570 which the PR is based on:
{code}
git remote add dweiss git@github.com:dweiss/lucene-solr.git
git fetch dweiss
git checkout jira/LUCENE-9570
{code}
* Open gradle/validation/spotless.gradle and locate the project/package you 
wish to review. Enable it in spotless.gradle by creating a corresponding switch 
case block (refer to existing examples), for example:
{code}
  case ":lucene:highlighter":
target "src/**"
targetExclude "**/resources/**", "**/overview.html"
break
{code}
* Reformat the code:
{code}
gradlew tidy && git diff -w > /tmp/diff.patch && git status
{code}
* Look at what has changed (git status) and review the differences manually 
(/tmp/diff.patch). If everything looks ok, commit it directly to 
jira/LUCENE-9570 or make a PR against that branch.
{code}
git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
{code}

*Packages remaining* (put your name next to a module you're working on to avoid 
duplication).

*  case ":lucene:analysis:icu":
*  case ":lucene:analysis:kuromoji":
*  case ":lucene:analysis:morfologik":
*  case ":lucene:analysis:nori":
 * case ":lucene:analysis:opennlp":
 * case ":lucene:analysis:phonetic":
 * case ":lucene:analysis:smartcn":
 * case ":lucene:analysis:stempel":
 * case ":lucene:backward-codecs":
 * case ":lucene:benchmark":
 * case ":lucene:classification":
 * case ":lucene:codecs":
 * case ":lucene:demo":
 * case ":lucene:expressions":
 * case ":lucene:facet":
 * case ":lucene:grouping":
 * case ":lucene:join":
 * case ":lucene:luke":
 * case ":lucene:memory":
 * case ":lucene:misc":
 * case ":lucene:monitor":
 * case ":lucene:queries":
 * case ":lucene:queryparser":
 * case ":lucene:replicator":
 * case ":lucene:sandbox":
 * case ":lucene:spatial3d":
 * case ":lucene:spatial-extras":
 * case ":lucene:suggest":
 * case ":lucene:test-framework":


  was:
Review and correct all the javadocs before they're messed up by automatic 
formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
it should be relatively quick.

*Reviewing diffs manually*

* switch to branch jira/LUCENE-9570 which the PR is based on:
{code}
git remote add dweiss git@github.com:dweiss/lucene-solr.git
git fetch dweiss
git checkout jira/LUCENE-9570
{code}
* Open gradle/validation/spotless.gradle and locate the project/package you 
wish to review. Enable it in spotless.gradle by creating a corresponding switch 
case block (refer to existing examples), for example:
{code}
  case ":lucene:highlighter":
target "src/**"
targetExclude "**/resources/**", "**/overview.html"
break
{code}
* Reformat the code:
{code}
gradlew tidy && git diff -w > /tmp/diff.patch && git status
{code}
* Look at what has changed (git status) and review the differences manually 
(/tmp/diff.patch). If everything looks ok, commit it directly to 
jira/LUCENE-9570 or make a PR against that branch.
{code}
git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
{code}


> Review code diffs after automatic formatting and correct problems before it 
> is applied
> --
>
> Key: LUCENE-9570
> URL: https://issues.apache.org/jira/browse/LUCENE-9570
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Review and correct all the javadocs before they're messed up by automatic 
> formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
> it should be relatively quick.
> *Reviewing diffs manually*
> * switch to branch jira/LUCENE-9570 which the PR is based on:
> {code}
> git remote add dweiss git@github.com:dweiss/lucene-solr.git
> git fetch dweiss
> git checkout jira/LUCENE-9570
> {code}
> * Open gradle/validation/spotless.gradle and locate the project/package you 
> wish to review. Enable it in spotless.gradle by creating a corresponding 
> switch case block (refer to existing examples), for example:
> {code}
>   case ":lucene:highlighter":
> target "src/**"
> targetExclude "**/resources/**", "**/overview.html"

[jira] [Updated] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-9570:

Description: 
Review and correct all the javadocs before they're messed up by automatic 
formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
it should be relatively quick.

*Reviewing diffs manually*

* switch to branch jira/LUCENE-9570 which the PR is based on:
{code}
git remote add dweiss git@github.com:dweiss/lucene-solr.git
git fetch dweiss
git checkout jira/LUCENE-9570
{code}
* Open gradle/validation/spotless.gradle and locate the project/package you 
wish to review. Enable it in spotless.gradle by creating a corresponding switch 
case block (refer to existing examples), for example:
{code}
  case ":lucene:highlighter":
target "src/**"
targetExclude "**/resources/**", "**/overview.html"
break
{code}
* Reformat the code:
{code}
gradlew tidy && git diff -w > /tmp/diff.patch && git status
{code}
* Look at what has changed (git status) and review the differences manually 
(/tmp/diff.patch). If everything looks ok, commit it directly to 
jira/LUCENE-9570 or make a PR against that branch.
{code}
git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
{code}

*Packages remaining* (put your name next to a module you're working on to avoid 
duplication).

*  case ":lucene:analysis:common": (partially done, dweiss)
*  case ":lucene:analysis:icu":
*  case ":lucene:analysis:kuromoji":
*  case ":lucene:analysis:morfologik":
*  case ":lucene:analysis:nori":
 * case ":lucene:analysis:opennlp":
 * case ":lucene:analysis:phonetic":
 * case ":lucene:analysis:smartcn":
 * case ":lucene:analysis:stempel":
 * case ":lucene:backward-codecs":
 * case ":lucene:benchmark":
 * case ":lucene:classification":
 * case ":lucene:codecs":
 * case ":lucene:demo":
 * case ":lucene:expressions":
 * case ":lucene:facet":
 * case ":lucene:grouping":
 * case ":lucene:join":
 * case ":lucene:luke":
 * case ":lucene:memory":
 * case ":lucene:misc":
 * case ":lucene:monitor":
 * case ":lucene:queries":
 * case ":lucene:queryparser":
 * case ":lucene:replicator":
 * case ":lucene:sandbox":
 * case ":lucene:spatial3d":
 * case ":lucene:spatial-extras":
 * case ":lucene:suggest":
 * case ":lucene:test-framework":


  was:
Review and correct all the javadocs before they're messed up by automatic 
formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
it should be relatively quick.

*Reviewing diffs manually*

* switch to branch jira/LUCENE-9570 which the PR is based on:
{code}
git remote add dweiss git@github.com:dweiss/lucene-solr.git
git fetch dweiss
git checkout jira/LUCENE-9570
{code}
* Open gradle/validation/spotless.gradle and locate the project/package you 
wish to review. Enable it in spotless.gradle by creating a corresponding switch 
case block (refer to existing examples), for example:
{code}
  case ":lucene:highlighter":
target "src/**"
targetExclude "**/resources/**", "**/overview.html"
break
{code}
* Reformat the code:
{code}
gradlew tidy && git diff -w > /tmp/diff.patch && git status
{code}
* Look at what has changed (git status) and review the differences manually 
(/tmp/diff.patch). If everything looks ok, commit it directly to 
jira/LUCENE-9570 or make a PR against that branch.
{code}
git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
{code}

*Packages remaining* (put your name next to a module you're working on to avoid 
duplication).

*  case ":lucene:analysis:icu":
*  case ":lucene:analysis:kuromoji":
*  case ":lucene:analysis:morfologik":
*  case ":lucene:analysis:nori":
 * case ":lucene:analysis:opennlp":
 * case ":lucene:analysis:phonetic":
 * case ":lucene:analysis:smartcn":
 * case ":lucene:analysis:stempel":
 * case ":lucene:backward-codecs":
 * case ":lucene:benchmark":
 * case ":lucene:classification":
 * case ":lucene:codecs":
 * case ":lucene:demo":
 * case ":lucene:expressions":
 * case ":lucene:facet":
 * case ":lucene:grouping":
 * case ":lucene:join":
 * case ":lucene:luke":
 * case ":lucene:memory":
 * case ":lucene:misc":
 * case ":lucene:monitor":
 * case ":lucene:queries":
 * case ":lucene:queryparser":
 * case ":lucene:replicator":
 * case ":lucene:sandbox":
 * case ":lucene:spatial3d":
 * case ":lucene:spatial-extras":
 * case ":lucene:suggest":
 * case ":lucene:test-framework":

[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547930986



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/MetricAttribute.java
##
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.cluster.placement;
+
+import java.util.function.Function;
+
+/**
+ * Metric-related attribute of a node or replica. It defines a short symbolic
+ * name of the metric, the corresponding
+ * internal metric name and the desired format/unit conversion. Generic type
+ * defines the type of converted values of this attribute.
+ */
+public interface MetricAttribute<T> {
+
+  /**
+   * Return the short-hand name that identifies this attribute.
+   */
+  String getName();
+
+  /**
+   * Return the internal name of a Solr metric associated with this attribute.
+   */
+  String getInternalName();
+
+  /**
+   * Conversion function to convert formats/units of raw values.
+   */
+  Function<Object, T> getConverter();

Review comment:
   I don't see the value in exposing the converter function in the 
interface. `convert` method below is sufficient (and its default implementation 
moved to the abstract implementation `MetricAttributeImpl` (or `MetricImpl` as 
I'd prefer it to be named)).
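
   A minimal sketch of the shape this suggestion points at (all names and the internal metric string here are hypothetical, not the actual Solr API): the interface exposes only `convert`, and the converter function stays private to the abstract base class:
   {code}
import java.util.function.Function;

interface Metric<T> {
    String getName();
    String getInternalName();
    T convert(Object rawValue);   // conversion is exposed, the converter itself is not
}

abstract class MetricImpl<T> implements Metric<T> {
    private final String name;
    private final String internalName;
    private final Function<Object, T> converter;   // hidden implementation detail

    MetricImpl(String name, String internalName, Function<Object, T> converter) {
        this.name = name;
        this.internalName = internalName;
        this.converter = converter;
    }
    @Override public String getName() { return name; }
    @Override public String getInternalName() { return internalName; }
    @Override public T convert(Object rawValue) {
        // the "default implementation" lives here instead of on the interface
        return rawValue == null ? null : converter.apply(rawValue);
    }
}

// Example concrete metric: a raw byte count exposed as gigabytes.
class FreeDiskMetric extends MetricImpl<Double> {
    FreeDiskMetric() {
        super("freeDisk", "hypothetical.internal.freeDisk",
              raw -> ((Number) raw).doubleValue() / (1024 * 1024 * 1024));
    }
}

class MetricSketch {
    public static void main(String[] args) {
        Metric<Double> m = new FreeDiskMetric();
        System.out.println(m.convert(2L * 1024 * 1024 * 1024)); // prints 2.0
    }
}
   {code}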








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547932957



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/ReplicaMetrics.java
##
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.placement;
+
+import java.util.Optional;
+
+/**
+ * Strongly-typed replica-level metrics.
+ */
+public interface ReplicaMetrics {
+
+  <T> Optional<T> getReplicaMetric(ReplicaMetric<T> metric);

Review comment:
   Shouldn't this method be called `getReplicaMetricValue`?








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547934318



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/AttributeFetcherImpl.java
##
@@ -33,18 +37,18 @@
 import java.lang.invoke.MethodHandles;
 import java.util.*;
 import java.util.function.BiConsumer;
+import java.util.stream.Collectors;
 
+/**
+ * Implementation of {@link AttributeFetcher} that uses {@link SolrCloudManager}
+ * to access Solr cluster details.
+ */
 public class AttributeFetcherImpl implements AttributeFetcher {
   private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
-  boolean requestedNodeCoreCount;
-  boolean requestedNodeDiskType;
-  boolean requestedNodeFreeDisk;
-  boolean requestedNodeTotalDisk;
-  boolean requestedNodeHeapUsage;
-  boolean requestedNodeSystemLoadAverage;
-  Set<String> requestedNodeSystemPropertiesSnitchTags = new HashSet<>();
-  Set<String> requestedNodeMetricSnitchTags = new HashSet<>();
+  Set<String> requestedNodeSystemSnitchTags = new HashSet<>();

Review comment:
   I preferred the previous name `requestedNodeSystemPropertiesSnitchTags` 
("system" by itself is too vague).








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547935041



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/AttributeFetcherImpl.java
##
@@ -33,18 +37,18 @@
 import java.lang.invoke.MethodHandles;
 import java.util.*;
 import java.util.function.BiConsumer;
+import java.util.stream.Collectors;
 
+/**
+ * Implementation of {@link AttributeFetcher} that uses {@link SolrCloudManager}
+ * to access Solr cluster details.
+ */
 public class AttributeFetcherImpl implements AttributeFetcher {
   private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
-  boolean requestedNodeCoreCount;
-  boolean requestedNodeDiskType;
-  boolean requestedNodeFreeDisk;
-  boolean requestedNodeTotalDisk;
-  boolean requestedNodeHeapUsage;
-  boolean requestedNodeSystemLoadAverage;
-  Set<String> requestedNodeSystemPropertiesSnitchTags = new HashSet<>();
-  Set<String> requestedNodeMetricSnitchTags = new HashSet<>();
+  Set<String> requestedNodeSystemSnitchTags = new HashSet<>();

Review comment:
   Oh I see it's used for both system properties and environment 
variables... In which case I'd drop system in the name.








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547935638



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/AttributeFetcherImpl.java
##
@@ -113,72 +90,46 @@ public AttributeFetcher fetchFrom(Set nodes) {
 return this;
   }
 
-  @Override
-  public AttributeFetcher requestMetric(String scope, String metricName) {
-throw new UnsupportedOperationException("Not yet implemented...");
-  }
+  private static final double GB = 1024 * 1024 * 1024;
 
   @Override
   public AttributeValues fetchAttributes() {
 // TODO Code here only supports node related attributes for now

Review comment:
   remove comment








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547936763



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/AttributeFetcherImpl.java
##
@@ -113,72 +90,46 @@ public AttributeFetcher fetchFrom(Set nodes) {
 return this;
   }
 
-  @Override
-  public AttributeFetcher requestMetric(String scope, String metricName) {
-throw new UnsupportedOperationException("Not yet implemented...");
-  }
+  private static final double GB = 1024 * 1024 * 1024;
 
   @Override
   public AttributeValues fetchAttributes() {
 // TODO Code here only supports node related attributes for now
 
 // Maps in which attribute values will be added
-Map nodeToCoreCount = Maps.newHashMap();
-Map nodeToDiskType = Maps.newHashMap();
-Map nodeToFreeDisk = Maps.newHashMap();
-Map nodeToTotalDisk = Maps.newHashMap();
-Map nodeToHeapUsage = Maps.newHashMap();
-Map nodeToSystemLoadAverage = Maps.newHashMap();
-Map> syspropSnitchToNodeToValue = 
Maps.newHashMap();
-Map> metricSnitchToNodeToValue = 
Maps.newHashMap();
+Map> systemSnitchToNodeToValue = new HashMap<>();
+Map, Map> metricSnitchToNodeToValue = new 
HashMap<>();
+Map collectionMetricsBuilders = new 
HashMap<>();
+Map> nodeToReplicaInternalTags = new HashMap<>();
+Map>> requestedCollectionNamesMetrics = 
requestedCollectionMetrics.entrySet().stream()
+.collect(Collectors.toMap(e -> e.getKey().getName(), e -> 
e.getValue()));
 
 // In order to match the returned values for the various snitches, we need 
to keep track of where each
 // received value goes. Given the target maps are of different types (the 
maps from Node to whatever defined
 // above) we instead pass a function taking two arguments, the node and 
the (non null) returned value,
 // that will cast the value into the appropriate type for the snitch tag 
and insert it into the appropriate map
 // with the node as the key.
-Map> allSnitchTagsToInsertion = 
Maps.newHashMap();
-if (requestedNodeCoreCount) {
-  allSnitchTagsToInsertion.put(ImplicitSnitch.CORES, (node, value) -> 
nodeToCoreCount.put(node, ((Number) value).intValue()));
-}
-if (requestedNodeDiskType) {
-  allSnitchTagsToInsertion.put(ImplicitSnitch.DISKTYPE, (node, value) -> {
-if ("rotational".equals(value)) {
-  nodeToDiskType.put(node, DiskHardwareType.ROTATIONAL);
-} else if ("ssd".equals(value)) {
-  nodeToDiskType.put(node, DiskHardwareType.SSD);
-}
-// unknown disk type: insert no value, returned optional will be empty
-  });
-}
-if (requestedNodeFreeDisk) {
-  
allSnitchTagsToInsertion.put(SolrClientNodeStateProvider.Variable.FREEDISK.tagName,
-  // Convert from bytes to GB
-  (node, value) -> nodeToFreeDisk.put(node, ((Number) 
value).longValue() / 1024 / 1024 / 1024));
-}
-if (requestedNodeTotalDisk) {
-  
allSnitchTagsToInsertion.put(SolrClientNodeStateProvider.Variable.TOTALDISK.tagName,
-  // Convert from bytes to GB
-  (node, value) -> nodeToTotalDisk.put(node, ((Number) 
value).longValue() / 1024 / 1024 / 1024));
-}
-if (requestedNodeHeapUsage) {
-  allSnitchTagsToInsertion.put(ImplicitSnitch.HEAPUSAGE,
-  (node, value) -> nodeToHeapUsage.put(node, ((Number) 
value).doubleValue()));
-}
-if (requestedNodeSystemLoadAverage) {
-  allSnitchTagsToInsertion.put(ImplicitSnitch.SYSLOADAVG,
-  (node, value) -> nodeToSystemLoadAverage.put(node, ((Number) 
value).doubleValue()));
-}
-for (String sysPropSnitch : requestedNodeSystemPropertiesSnitchTags) {
-  final Map sysPropMap = Maps.newHashMap();
-  syspropSnitchToNodeToValue.put(sysPropSnitch, sysPropMap);
+Map> allSnitchTagsToInsertion = new 
HashMap<>();
+for (String sysPropSnitch : requestedNodeSystemSnitchTags) {
+  final Map sysPropMap = new HashMap<>();
+  systemSnitchToNodeToValue.put(sysPropSnitch, sysPropMap);
   allSnitchTagsToInsertion.put(sysPropSnitch, (node, value) -> 
sysPropMap.put(node, (String) value));
 }
-for (String metricSnitch : requestedNodeMetricSnitchTags) {
-  final Map metricMap = Maps.newHashMap();
-  metricSnitchToNodeToValue.put(metricSnitch, metricMap);
-  allSnitchTagsToInsertion.put(metricSnitch, (node, value) -> 
metricMap.put(node, (Double) value));
+for (NodeMetric metric : requestedNodeMetricSnitchTags) {
+  final Map metricMap = new HashMap<>();
+  metricSnitchToNodeToValue.put(metric, metricMap);
+  String metricSnitch = getMetricSnitchTag(metric);
+  allSnitchTagsToInsertion.put(metricSnitch, (node, value) -> 
metricMap.put(node, metric.convert(value)));
 }
+requestedCollectionMetrics.forEach((collection, tags) -> {
+  Set<String> collectionTags = tags.stream().map(tag -> 
tag.getInternalName()).collect(Collectors.toSet());
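
The insertion-function pattern described in the code comment above (one casting `BiConsumer` per snitch tag, because the target maps have different value types) can be sketched on its own; the tag names and node key here are illustrative, and node keys are plain strings rather than `Node` objects:
{code}
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

class SnitchDispatchSketch {
    // Target maps with different value types:
    final Map<String, Integer> nodeToCoreCount = new HashMap<>();
    final Map<String, Double> nodeToFreeDiskGb = new HashMap<>();
    // One consumer per snitch tag casts the raw value and stores it in the right map.
    final Map<String, BiConsumer<String, Object>> insertion = new HashMap<>();

    SnitchDispatchSketch() {
        insertion.put("cores", (node, v) ->
                nodeToCoreCount.put(node, ((Number) v).intValue()));
        insertion.put("freedisk", (node, v) ->    // convert bytes to GB
                nodeToFreeDiskGb.put(node, ((Number) v).longValue() / 1024.0 / 1024.0 / 1024.0));
    }

    void ingest(String node, Map<String, Object> snitchResponse) {
        snitchResponse.forEach((tag, value) -> insertion.get(tag).accept(node, value));
    }

    public static void main(String[] args) {
        SnitchDispatchSketch sketch = new SnitchDispatchSketch();
        sketch.ingest("node1:8983_solr", Map.of("cores", 8, "freedisk", 3L * 1024 * 1024 * 1024));
        System.out.println(sketch.nodeToCoreCount);  // {node1:8983_solr=8}
        System.out.println(sketch.nodeToFreeDiskGb); // {node1:8983_solr=3.0}
    }
}
{code}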

[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547938685



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/AttributeFetcherImpl.java
##
@@ -197,32 +148,78 @@ public AttributeValues fetchAttributes() {
   }
 }
 
-return new AttributeValuesImpl(nodeToCoreCount,
-nodeToDiskType,
-nodeToFreeDisk,
-nodeToTotalDisk,
-nodeToHeapUsage,
-nodeToSystemLoadAverage,
-syspropSnitchToNodeToValue,
-metricSnitchToNodeToValue);
+for (Node node : nodeToReplicaInternalTags.keySet()) {

Review comment:
   I understand here we might be contacting nodes again to fetch replica 
metrics. I think this is ok but maybe add a comment that there can be an 
optimization where node metrics and replica metrics are fetched with a single 
call to each node?
   Here we might even contact each node multiple times I believe depending on 
the collections we need info about and how their replicas are distributed?








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547939345



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/AttributeFetcherImpl.java
##
@@ -197,32 +148,78 @@ public AttributeValues fetchAttributes() {
   }
 }
 
-return new AttributeValuesImpl(nodeToCoreCount,
-nodeToDiskType,
-nodeToFreeDisk,
-nodeToTotalDisk,
-nodeToHeapUsage,
-nodeToSystemLoadAverage,
-syspropSnitchToNodeToValue,
-metricSnitchToNodeToValue);
+for (Node node : nodeToReplicaInternalTags.keySet()) {
+  Set tags = nodeToReplicaInternalTags.get(node);
+  Map>> infos = 
cloudManager.getNodeStateProvider().getReplicaInfo(node.getName(), tags);
+  infos.entrySet().stream()
+  .filter(entry -> 
requestedCollectionNamesMetrics.containsKey(entry.getKey()))
+  .forEach(entry -> {
+CollectionMetricsBuilder collectionMetricsBuilder = 
collectionMetricsBuilders
+.computeIfAbsent(entry.getKey(), c -> new 
CollectionMetricsBuilder());
+entry.getValue().forEach((shardName, replicas) -> {
+  CollectionMetricsBuilder.ShardMetricsBuilder shardMetricsBuilder 
=
+  collectionMetricsBuilder.getShardMetricsBuilders()
+  .computeIfAbsent(shardName, s -> new 
CollectionMetricsBuilder.ShardMetricsBuilder());
+  replicas.forEach(replica -> {
+CollectionMetricsBuilder.ReplicaMetricsBuilder 
replicaMetricsBuilder =
+shardMetricsBuilder.getReplicaMetricsBuilders()
+.computeIfAbsent(replica.getName(), n -> new 
CollectionMetricsBuilder.ReplicaMetricsBuilder());
+replicaMetricsBuilder.setLeader(replica.isLeader());
+if (replica.isLeader()) {
+  shardMetricsBuilder.setLeaderMetrics(replicaMetricsBuilder);
+}
+Set> requestedMetrics = 
requestedCollectionNamesMetrics.get(replica.getCollection());
+if (requestedMetrics == null) {
+  throw new RuntimeException("impossible error");

Review comment:
   Likely an error log with the value of `replica.getCollection()` (or adding 
the value to the exception text) could help understand how the impossible 
happens. Or just remove this check and let the NPE bubble up?
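A hedged sketch of the first suggestion (the helper name and shape are illustrative, not from the PR): carry the offending collection name in the exception so the "impossible" case is diagnosable from the stack trace alone:

```java
import java.util.Map;
import java.util.Set;

public class RequestedMetricsCheck {
    // Hypothetical helper: fail loudly with the collection name instead of an
    // opaque "impossible error" when a replica's collection was never requested.
    static <M> Set<M> requireRequested(Map<String, Set<M>> requestedByCollection,
                                       String collection) {
        Set<M> metrics = requestedByCollection.get(collection);
        if (metrics == null) {
            throw new IllegalStateException("Replica reported for collection '" + collection
                + "' that was never requested; requested collections: "
                + requestedByCollection.keySet());
        }
        return metrics;
    }
}
```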








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547940278



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/AttributeFetcherImpl.java
##
@@ -197,32 +148,78 @@ public AttributeValues fetchAttributes() {
   }
 }
 
-return new AttributeValuesImpl(nodeToCoreCount,
-nodeToDiskType,
-nodeToFreeDisk,
-nodeToTotalDisk,
-nodeToHeapUsage,
-nodeToSystemLoadAverage,
-syspropSnitchToNodeToValue,
-metricSnitchToNodeToValue);
+for (Node node : nodeToReplicaInternalTags.keySet()) {
+  Set tags = nodeToReplicaInternalTags.get(node);
+  Map>> infos = 
cloudManager.getNodeStateProvider().getReplicaInfo(node.getName(), tags);
+  infos.entrySet().stream()
+  .filter(entry -> 
requestedCollectionNamesMetrics.containsKey(entry.getKey()))
+  .forEach(entry -> {
+CollectionMetricsBuilder collectionMetricsBuilder = 
collectionMetricsBuilders
+.computeIfAbsent(entry.getKey(), c -> new 
CollectionMetricsBuilder());
+entry.getValue().forEach((shardName, replicas) -> {
+  CollectionMetricsBuilder.ShardMetricsBuilder shardMetricsBuilder 
=
+  collectionMetricsBuilder.getShardMetricsBuilders()
+  .computeIfAbsent(shardName, s -> new 
CollectionMetricsBuilder.ShardMetricsBuilder());
+  replicas.forEach(replica -> {
+CollectionMetricsBuilder.ReplicaMetricsBuilder 
replicaMetricsBuilder =
+shardMetricsBuilder.getReplicaMetricsBuilders()
+.computeIfAbsent(replica.getName(), n -> new 
CollectionMetricsBuilder.ReplicaMetricsBuilder());
+replicaMetricsBuilder.setLeader(replica.isLeader());
+if (replica.isLeader()) {
+  shardMetricsBuilder.setLeaderMetrics(replicaMetricsBuilder);
+}
+Set> requestedMetrics = 
requestedCollectionNamesMetrics.get(replica.getCollection());
+if (requestedMetrics == null) {
+  throw new RuntimeException("impossible error");
+}
+requestedMetrics.forEach(metric -> {
+  replicaMetricsBuilder.addMetric(metric, 
replica.get(metric.getInternalName()));
+});
+  });
+});
+  });
+}
+
+
+Map collectionMetrics = new HashMap<>();
+collectionMetricsBuilders.forEach((name, builder) -> 
collectionMetrics.put(name, builder.build()));
+
+return new AttributeValuesImpl(systemSnitchToNodeToValue,
+metricSnitchToNodeToValue, collectionMetrics);
   }
 
-  private static SolrInfoBean.Group 
getGroupFromMetricRegistry(NodeMetricRegistry registry) {
+  private static SolrInfoBean.Group 
getGroupFromMetricRegistry(NodeMetric.Registry registry) {
 switch (registry) {
   case SOLR_JVM:
 return SolrInfoBean.Group.jvm;
   case SOLR_NODE:
 return SolrInfoBean.Group.node;
+  case SOLR_JETTY:
+return SolrInfoBean.Group.jetty;
   default:
 throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, 
"Unsupported registry value " + registry);

Review comment:
   If we do add a `NULL` or `UNDEFINED` `Registry` enum value, it's ok to 
handle it as `default` here and throw an exception (although it getting all the 
way here likely denotes a bug in our code).
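A minimal sketch of the pattern being discussed (the enum and the group-name strings are illustrative placeholders): any value not explicitly handled, including a hypothetical `UNDEFINED`, falls through to `default` and fails fast:

```java
public class RegistryMapping {
    enum Registry { SOLR_JVM, SOLR_NODE, SOLR_JETTY, UNDEFINED }

    // Map known registries to group names; treat anything else as a server-side bug.
    static String groupFor(Registry registry) {
        switch (registry) {
            case SOLR_JVM:   return "jvm";
            case SOLR_NODE:  return "node";
            case SOLR_JETTY: return "jetty";
            default:
                throw new IllegalStateException("Unsupported registry value " + registry);
        }
    }
}
```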








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547943012



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/AttributeFetcherImpl.java
##
@@ -197,32 +148,78 @@ public AttributeValues fetchAttributes() {
   }
 }
 
-return new AttributeValuesImpl(nodeToCoreCount,
-nodeToDiskType,
-nodeToFreeDisk,
-nodeToTotalDisk,
-nodeToHeapUsage,
-nodeToSystemLoadAverage,
-syspropSnitchToNodeToValue,
-metricSnitchToNodeToValue);
+for (Node node : nodeToReplicaInternalTags.keySet()) {
+  Set tags = nodeToReplicaInternalTags.get(node);
+  Map>> infos = 
cloudManager.getNodeStateProvider().getReplicaInfo(node.getName(), tags);
+  infos.entrySet().stream()
+  .filter(entry -> 
requestedCollectionNamesMetrics.containsKey(entry.getKey()))
+  .forEach(entry -> {
+CollectionMetricsBuilder collectionMetricsBuilder = 
collectionMetricsBuilders
+.computeIfAbsent(entry.getKey(), c -> new 
CollectionMetricsBuilder());
+entry.getValue().forEach((shardName, replicas) -> {
+  CollectionMetricsBuilder.ShardMetricsBuilder shardMetricsBuilder 
=
+  collectionMetricsBuilder.getShardMetricsBuilders()
+  .computeIfAbsent(shardName, s -> new 
CollectionMetricsBuilder.ShardMetricsBuilder());
+  replicas.forEach(replica -> {
+CollectionMetricsBuilder.ReplicaMetricsBuilder 
replicaMetricsBuilder =
+shardMetricsBuilder.getReplicaMetricsBuilders()
+.computeIfAbsent(replica.getName(), n -> new 
CollectionMetricsBuilder.ReplicaMetricsBuilder());
+replicaMetricsBuilder.setLeader(replica.isLeader());
+if (replica.isLeader()) {
+  shardMetricsBuilder.setLeaderMetrics(replicaMetricsBuilder);
+}
+Set> requestedMetrics = 
requestedCollectionNamesMetrics.get(replica.getCollection());
+if (requestedMetrics == null) {
+  throw new RuntimeException("impossible error");
+}
+requestedMetrics.forEach(metric -> {
+  replicaMetricsBuilder.addMetric(metric, 
replica.get(metric.getInternalName()));
+});
+  });
+});
+  });
+}
+
+
+Map collectionMetrics = new HashMap<>();
+collectionMetricsBuilders.forEach((name, builder) -> 
collectionMetrics.put(name, builder.build()));
+
+return new AttributeValuesImpl(systemSnitchToNodeToValue,
+metricSnitchToNodeToValue, collectionMetrics);
   }
 
-  private static SolrInfoBean.Group 
getGroupFromMetricRegistry(NodeMetricRegistry registry) {
+  private static SolrInfoBean.Group 
getGroupFromMetricRegistry(NodeMetric.Registry registry) {
 switch (registry) {
   case SOLR_JVM:
 return SolrInfoBean.Group.jvm;
   case SOLR_NODE:
 return SolrInfoBean.Group.node;
+  case SOLR_JETTY:
+return SolrInfoBean.Group.jetty;
   default:
 throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, 
"Unsupported registry value " + registry);
 }
   }
 
-  public static String getMetricSnitchTag(String metricName, 
NodeMetricRegistry registry) {
-return SolrClientNodeStateProvider.METRICS_PREFIX + 
SolrMetricManager.getRegistryName(getGroupFromMetricRegistry(registry), 
metricName);
+  public static String getMetricSnitchTag(NodeMetric metric) {
+if (metric.getRegistry() != null) {
+  // regular registry + metricName
+  return SolrClientNodeStateProvider.METRICS_PREFIX +
+  
SolrMetricManager.getRegistryName(getGroupFromMetricRegistry(metric.getRegistry()))
 + ":" + metric.getInternalName();
+} else if (ImplicitSnitch.tags.contains(metric.getInternalName())) {

Review comment:
   Can we have the metric instance itself know whether it's based on a well-known 
snitch name or a `metrics:` name? That would make the condition tests here 
clearer. Given that different constructors are used, that's easy to track.
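One way to read this suggestion (class, field, and example names are illustrative, not the PR's actual API): record the metric's kind in whichever constructor is used, so fetch-time code branches on a stored flag instead of re-deriving the kind from the internal name:

```java
public class NodeMetricSketch {
    enum Kind { SNITCH_TAG, REGISTRY_METRIC }

    final String internalName;
    final Kind kind;

    // Constructor for well-known snitch tags, e.g. "freedisk".
    NodeMetricSketch(String snitchTag) {
        this.internalName = snitchTag;
        this.kind = Kind.SNITCH_TAG;
    }

    // Constructor for registry-backed metrics addressed by registry + metric key.
    NodeMetricSketch(String registry, String metricName) {
        this.internalName = registry + ":" + metricName;
        this.kind = Kind.REGISTRY_METRIC;
    }

    boolean isSnitchTag() { return kind == Kind.SNITCH_TAG; }
}
```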








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547948300



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/MetricAttributeImpl.java
##
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.cluster.placement.impl;
+
+import org.apache.solr.cluster.placement.MetricAttribute;
+
+import java.util.Objects;
+import java.util.function.Function;
+
+/**
+ * Base class for {@link MetricAttribute} implementations.
+ */
+public class MetricAttributeImpl<T> implements MetricAttribute<T> {
+
+  public static final double GB = 1024 * 1024 * 1024;
+
+  /**
+   * Identity converter. It returns the raw value unchanged IFF
+   * the value's type can be cast to the generic type of this attribute,
+   * otherwise it returns null.
+   */
+  @SuppressWarnings("unchecked")
+  public final Function<Object, T> IDENTITY_CONVERTER = v -> {
+try {
+  return (T) v;
+} catch (ClassCastException cce) {
+  return null;
+}
+  };
+
+  /**
+   * Bytes to gigabytes converter. Supports converting number or string
+   * representations of raw values expressed in bytes.
+   */
+  @SuppressWarnings("unchecked")
+  public static final Function<Object, Double> BYTES_TO_GB_CONVERTER = v -> {
+double sizeInBytes;
+if (!(v instanceof Number)) {
+  if (v == null) {
+return null;
+  }
+  try {
+sizeInBytes = Double.valueOf(String.valueOf(v)).doubleValue();

Review comment:
   `Double.parseDouble(String.valueOf(v))` does the job.
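For reference, a standalone sketch of the conversion under discussion using `Double.parseDouble` (the method name and null-handling here are illustrative):

```java
public class ByteConversion {
    static final double GB = 1024.0 * 1024.0 * 1024.0;

    // Convert a raw metric value (Number or numeric String) from bytes to gigabytes.
    // Double.parseDouble returns a primitive directly, avoiding the boxed
    // Double.valueOf(...).doubleValue() round trip.
    static Double bytesToGb(Object v) {
        if (v == null) return null;
        if (v instanceof Number) return ((Number) v).doubleValue() / GB;
        try {
            return Double.parseDouble(String.valueOf(v)) / GB;
        } catch (NumberFormatException nfe) {
            return null; // non-numeric strings yield no value rather than an exception
        }
    }
}
```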








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547949666



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/MetricAttributeImpl.java
##
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.cluster.placement.impl;
+
+import org.apache.solr.cluster.placement.MetricAttribute;
+
+import java.util.Objects;
+import java.util.function.Function;
+
+/**
+ * Base class for {@link MetricAttribute} implementations.
+ */
+public class MetricAttributeImpl<T> implements MetricAttribute<T> {
+
+  public static final double GB = 1024 * 1024 * 1024;
+
+  /**
+   * Identity converter. It returns the raw value unchanged IFF
+   * the value's type can be cast to the generic type of this attribute,
+   * otherwise it returns null.
+   */
+  @SuppressWarnings("unchecked")
+  public final Function<Object, T> IDENTITY_CONVERTER = v -> {
+try {
+  return (T) v;
+} catch (ClassCastException cce) {
+  return null;
+}
+  };
+
+  /**
+   * Bytes to gigabytes converter. Supports converting number or string
+   * representations of raw values expressed in bytes.
+   */
+  @SuppressWarnings("unchecked")
+  public static final Function<Object, Double> BYTES_TO_GB_CONVERTER = v -> {
+double sizeInBytes;
+if (!(v instanceof Number)) {
+  if (v == null) {
+return null;
+  }
+  try {
+sizeInBytes = Double.valueOf(String.valueOf(v)).doubleValue();
+  } catch (Exception nfe) {
+return null;
+  }
+} else {
+  sizeInBytes = ((Number) v).doubleValue();
+}
+return sizeInBytes / GB;
+  };
+
+  protected final String name;
+  protected final String internalName;
+  protected final Function<Object, T> converter;
+
+  /**
+   * Create a metric attribute.
+   * @param name short-hand name that identifies this attribute.
+   * @param internalName internal name of a Solr metric.
+   */
+  public MetricAttributeImpl(String name, String internalName) {
+this(name, internalName, null);
+  }
+
+  /**
+   * Create a metric attribute.
+   * @param name short-hand name that identifies this attribute.
+   * @param internalName internal name of a Solr metric.
+   * @param converter optional raw value converter. If null then
+   *  {@link #IDENTITY_CONVERTER} will be used.
+   */
+  public MetricAttributeImpl(String name, String internalName, Function<Object, T> converter) {
+Objects.requireNonNull(name);
+Objects.requireNonNull(internalName);
+this.name = name;
+this.internalName = internalName;
+if (converter == null) {
+  this.converter = IDENTITY_CONVERTER;
+} else {
+  this.converter = converter;
+}
+  }
+
+  @Override
+  public String getName() {
+return name;
+  }
+
+  @Override
+  public String getInternalName() {
+return internalName;
+  }
+
+  @Override
+  public Function<Object, T> getConverter() {
+return converter;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+if (this == o) {
+  return true;
+}
+if (o == null || getClass() != o.getClass()) {
+  return false;
+}
+MetricAttribute<?> that = (MetricAttribute<?>) o;
+return name.equals(that.getName()) && 
internalName.equals(that.getInternalName()) && 
converter.equals(that.getConverter());

Review comment:
   Why the inconsistency between direct field access for `this` instance 
and getter for `that`?
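A sketch of the consistent form the reviewer is pointing at (the class is trimmed to the two fields from the quoted diff): use accessors on both `this` and `that`, or fields on both, but not a mix:

```java
import java.util.Objects;

public class AttrSketch {
    private final String name;
    private final String internalName;

    AttrSketch(String name, String internalName) {
        this.name = name;
        this.internalName = internalName;
    }

    String getName() { return name; }
    String getInternalName() { return internalName; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        AttrSketch that = (AttrSketch) o;
        // Same access style on both sides: getters here on this and that alike.
        return Objects.equals(getName(), that.getName())
            && Objects.equals(getInternalName(), that.getInternalName());
    }

    @Override
    public int hashCode() { return Objects.hash(name, internalName); }
}
```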








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547952921



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/plugins/AffinityPlacementFactory.java
##
@@ -194,7 +195,9 @@ public PlacementPlan computePlacement(Cluster cluster, 
PlacementRequest request,
 
   // Request all needed attributes
   
attributeFetcher.requestNodeSystemProperty(AVAILABILITY_ZONE_SYSPROP).requestNodeSystemProperty(REPLICA_TYPE_SYSPROP);
-  attributeFetcher.requestNodeCoreCount().requestNodeFreeDisk();
+  attributeFetcher
+  .requestNodeMetric(NodeMetricImpl.NUM_CORES)

Review comment:
   Leaking implementation details into the plugin code here.
   Something that I think we managed to avoid until now. Shall we continue to 
try? And expose these metric instances through a factory that has an 
interface visible to the client? Or decide it's not worth the trouble?
   
   From `NodeMetricImpl` the dependencies go to `ImplicitSnitch`, then to 
`ZkStateReader`, and from there to basically the whole codebase.








[GitHub] [lucene-solr] murblanc commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


murblanc commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547954091



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/plugins/AffinityPlacementFactory.java
##
@@ -194,7 +195,9 @@ public PlacementPlan computePlacement(Cluster cluster, 
PlacementRequest request,
 
   // Request all needed attributes
   
attributeFetcher.requestNodeSystemProperty(AVAILABILITY_ZONE_SYSPROP).requestNodeSystemProperty(REPLICA_TYPE_SYSPROP);
-  attributeFetcher.requestNodeCoreCount().requestNodeFreeDisk();
+  attributeFetcher
+  .requestNodeMetric(NodeMetricImpl.NUM_CORES)

Review comment:
   One option (simpler than a factory and less verbose) is to define an 
enum of metrics in the `NodeMetric` interface and have the implementation 
convert these to the actual instances (in some ugly `if... else` succession, 
as was done for example with the `Registry` enum to avoid leaking the actual 
registry values).
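A hedged sketch of that option (the enum values and internal names are illustrative placeholders): the public interface exposes only an enum, and the implementation maps each value to its internal representation, so plugins never reference impl classes:

```java
public class MetricEnumMapping {
    // Public-facing enum: plugins reference these symbols only.
    public enum WellKnownMetric { NUM_CORES, FREE_DISK_GB }

    // Implementation-side mapping to internal names; values here are placeholders.
    static String internalNameFor(WellKnownMetric metric) {
        switch (metric) {
            case NUM_CORES:    return "cores";
            case FREE_DISK_GB: return "freedisk";
            default:
                throw new IllegalStateException("Unknown metric " + metric);
        }
    }
}
```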








[GitHub] [lucene-solr] sigram commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


sigram commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547957901



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/AttributeFetcher.java
##
@@ -18,93 +18,53 @@
 package org.apache.solr.cluster.placement;
 
 import org.apache.solr.cluster.Node;
+import org.apache.solr.cluster.SolrCollection;
 
 import java.util.Set;
 
 /**
  * Instances of this interface are used to fetch various attributes from 
nodes (and other sources) in the cluster.
  */
 public interface AttributeFetcher {
-  /**
-   * Request the number of cores on each node. To get the value use {@link 
AttributeValues#getCoresCount(Node)}
-   */
-  AttributeFetcher requestNodeCoreCount();
-
-  /**
-   * Request the disk hardware type on each node. To get the value use {@link 
AttributeValues#getDiskType(Node)}
-   */
-  AttributeFetcher requestNodeDiskType();
-
-  /**
-   * Request the free disk size on each node. To get the value use {@link 
AttributeValues#getFreeDisk(Node)}
-   */
-  AttributeFetcher requestNodeFreeDisk();
-
-  /**
-   * Request the total disk size on each node. To get the value use {@link 
AttributeValues#getTotalDisk(Node)}
-   */
-  AttributeFetcher requestNodeTotalDisk();
-
-  /**
-   * Request the heap usage on each node. To get the value use {@link 
AttributeValues#getHeapUsage(Node)}
-   */
-  AttributeFetcher requestNodeHeapUsage();
-
-  /**
-   * Request the system load average on each node. To get the value use {@link 
AttributeValues#getSystemLoadAverage(Node)}
-   */
-  AttributeFetcher requestNodeSystemLoadAverage();
-
   /**
* Request a given system property on each node. To get the value use {@link 
AttributeValues#getSystemProperty(Node, String)}
+   * @param name system property name
*/
   AttributeFetcher requestNodeSystemProperty(String name);
 
   /**
* Request an environment variable on each node. To get the value use {@link 
AttributeValues#getEnvironmentVariable(Node, String)}
+   * @param name environment property name
*/
   AttributeFetcher requestNodeEnvironmentVariable(String name);
 
   /**
-   * Request a node metric from each node. To get the value use {@link 
AttributeValues#getMetric(Node, String, NodeMetricRegistry)}
+   * Request a node metric from each node. To get the value use {@link 
AttributeValues#getNodeMetric(Node, NodeMetric)}
+   * @param metric metric to retrieve (see {@link NodeMetric})
*/
-  AttributeFetcher requestNodeMetric(String metricName, NodeMetricRegistry 
registry);
-  AttributeFetcher requestNodeMetric(String metricName, NodeMetricRegistry 
registry);
+  AttributeFetcher requestNodeMetric(NodeMetric<?> metric);
 
+  /**
+   * Request collection-level metrics. To get the values use {@link 
AttributeValues#getCollectionMetrics(String)}.
+   * Note that this request will fetch information from nodes relevant to the 
collection
+   * replicas and not the ones specified in {@link #fetchFrom(Set)} (though 
they may overlap).
+   * @param solrCollection request metrics for this collection
+   * @param metrics metrics to retrieve (see {@link ReplicaMetric})
+   */
+  AttributeFetcher requestCollectionMetrics(SolrCollection solrCollection, 
Set<ReplicaMetric<?>> metrics);
 
   /**
* The set of nodes from which to fetch all node related attributes. Calling 
this method is mandatory if any of the {@code requestNode*}
* methods got called.
+   * @param nodes nodes to fetch from
*/
  AttributeFetcher fetchFrom(Set<Node> nodes);
 
-  /**
-   * Requests a (non node) metric of a given scope and name. To get the value 
use {@link AttributeValues#getMetric(String, String)}
-   */
-  AttributeFetcher requestMetric(String scope, String metricName);
-
   /**
* Fetches all requested node attributes from all nodes passed to {@link 
#fetchFrom(Set)} as well as non node attributes
-   * (those requested for example using {@link #requestMetric(String, String)}.
+   * (those requested for example using {@link #requestNodeMetric(NodeMetric)}.

Review comment:
   Ah, this is more complicated and it needs to be documented properly - 
there's a note in `requestCollectionMetrics` about this but I can add it here, 
too. Collection metrics are fetched from whatever nodes host the replicas, and 
`fetchFrom` doesn't affect this.








[GitHub] [lucene-solr] sigram commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


sigram commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547969376



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/MetricAttribute.java
##
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.cluster.placement;
+
+import java.util.function.Function;
+
+/**
+ * Metric-related attribute of a node or replica. It defines a short symbolic 
name of the metric, the corresponding
+ * internal metric name and the desired format/unit conversion. Generic type
+ * defines the type of converted values of this attribute.
+ */
+public interface MetricAttribute<T> {

Review comment:
   There's already a `Metric` class on the classpath; it comes from 
Dropwizard Metrics... I wanted to avoid the naming conflict. Let me think...








[GitHub] [lucene-solr] sigram commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


sigram commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547976517



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/ReplicaMetrics.java
##
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.placement;
+
+import java.util.Optional;
+
+/**
+ * Strongly-typed replica-level metrics.
+ */
+public interface ReplicaMetrics {
+
  <T> Optional<T> getReplicaMetric(ReplicaMetric<T> metric);

Review comment:
   `AttributeValues.getNodeMetric` follows the same pattern... maybe both 
are wrong (or right?).








[GitHub] [lucene-solr] ErickErickson commented on a change in pull request #2164: SOLR-15058: get the correct hostname when hostname contain '_'

2020-12-23 Thread GitBox


ErickErickson commented on a change in pull request #2164:
URL: https://github.com/apache/lucene-solr/pull/2164#discussion_r547980795



##
File path: solr/solrj/src/test/org/apache/solr/common/cloud/UrlSchemeTest.java
##
@@ -35,12 +35,12 @@ public void testApplyUrlScheme() {
 String liveNode1 = "192.168.1.1:8983_solr";
 String liveNode2 = "127.0.0.1:8983_solr";
 String liveNode3 = "127.0.0.1_";
-String liveNode4 = "127.0.0.1:61631_l_%2Fig";
+String liveNode4 = "127.0.0.1:61631_lx%2Fig";

Review comment:
   The changes to this test don't exercise the code change since they only 
have one underscore. It would be better to leave liveNode4 alone, and add, say, 
liveNode5 with a value like:
   "127.0.0.1:61631_l_%2Fig_whatever"
   
   Also see the comment in the JIRA.








[jira] [Commented] (SOLR-15058) Solr can not get correct hostname when hostname contain '_'

2020-12-23 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254095#comment-17254095
 ] 

Erick Erickson commented on SOLR-15058:
---

[~ab] [~noble]  This certainly seems wrong. WDYT, though, about scanning for 
lastIndexOf("_solr") rather than just an underscore? Either way, it should be 
changed to lastIndexOf.

Pinging you two since you were in that code.
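To illustrate the direction of the fix (a standalone sketch, not Solr's actual `Utils.getBaseUrlForNodeName`): splitting on the last underscore keeps hostnames containing '_' intact, because the URL-encoded context suffix such as "_solr" is appended last:

```java
public class NodeNameParsing {
    // Derive a base URL from a node name of the form "host:port_context",
    // where the context path is URL-encoded (e.g. "_solr" for "/solr").
    static String baseUrlForNodeName(String nodeName, String urlScheme) {
        // Split on the LAST '_': a hostname like "my_host:8983_solr" would
        // otherwise be cut at its first underscore.
        int idx = nodeName.lastIndexOf('_');
        if (idx < 0) {
            throw new IllegalArgumentException("no context path in node name: " + nodeName);
        }
        String hostAndPort = nodeName.substring(0, idx);
        String context = nodeName.substring(idx + 1).replace("%2F", "/");
        return urlScheme + "://" + hostAndPort + (context.isEmpty() ? "" : "/" + context);
    }
}
```

As the review comments note, a last-underscore split is still ambiguous for context paths that themselves contain '_', which is why scanning for the literal "_solr" suffix is also being considered.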

> Solr can not get correct hostname when hostname contain '_'
> ---
>
> Key: SOLR-15058
> URL: https://issues.apache.org/jira/browse/SOLR-15058
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Su Sasa
>Priority: Minor
> Attachments: 
> 0001-SOLR-15058-get-the-correct-hostname-when-hostname-co.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Utils.getBaseUrlForNodeName split the nodename by the first '_'
> In ZK, Solr add '_solr' to the end of the hostname
> so there should split by the last '_'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)







[GitHub] [lucene-solr] Sasasu commented on a change in pull request #2164: SOLR-15058: get the correct hostname when hostname contain '_'

2020-12-23 Thread GitBox


Sasasu commented on a change in pull request #2164:
URL: https://github.com/apache/lucene-solr/pull/2164#discussion_r547981945



##
File path: solr/solrj/src/test/org/apache/solr/common/cloud/UrlSchemeTest.java
##
@@ -35,12 +35,12 @@ public void testApplyUrlScheme() {
 String liveNode1 = "192.168.1.1:8983_solr";
 String liveNode2 = "127.0.0.1:8983_solr";
 String liveNode3 = "127.0.0.1_";
-String liveNode4 = "127.0.0.1:61631_l_%2Fig";
+String liveNode4 = "127.0.0.1:61631_lx%2Fig";

Review comment:
   SOLR-15058 will break this UrlSchemeTest
   I will start a discussion in mail list





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] sigram commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


sigram commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r547997354



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/AttributeFetcherImpl.java
##
@@ -33,18 +37,18 @@
 import java.lang.invoke.MethodHandles;
 import java.util.*;
 import java.util.function.BiConsumer;
+import java.util.stream.Collectors;
 
+/**
+ * Implementation of {@link AttributeFetcher} that uses {@link 
SolrCloudManager}
+ * to access Solr cluster details.
+ */
 public class AttributeFetcherImpl implements AttributeFetcher {
   private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
-  boolean requestedNodeCoreCount;
-  boolean requestedNodeDiskType;
-  boolean requestedNodeFreeDisk;
-  boolean requestedNodeTotalDisk;
-  boolean requestedNodeHeapUsage;
-  boolean requestedNodeSystemLoadAverage;
-  Set requestedNodeSystemPropertiesSnitchTags = new HashSet<>();
-  Set requestedNodeMetricSnitchTags = new HashSet<>();
+  Set requestedNodeSystemSnitchTags = new HashSet<>();

Review comment:
   I disagree; having "system" in the name makes it clearer what category of 
metrics we keep here. In any case, this is an internal variable name...





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] sigram commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


sigram commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r548000385



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/AttributeFetcherImpl.java
##
@@ -197,32 +148,78 @@ public AttributeValues fetchAttributes() {
   }
 }
 
-return new AttributeValuesImpl(nodeToCoreCount,
-nodeToDiskType,
-nodeToFreeDisk,
-nodeToTotalDisk,
-nodeToHeapUsage,
-nodeToSystemLoadAverage,
-syspropSnitchToNodeToValue,
-metricSnitchToNodeToValue);
+for (Node node : nodeToReplicaInternalTags.keySet()) {

Review comment:
   Yes, we make two calls per node - one is `getNodeValues` and the other 
is `getReplicaInfos`.
   
   The number of calls doesn't depend on the number of collections - we already 
use a set of nodes that covers all replicas for all interesting collections and 
we only call `getReplicaInfo` once per node (and it returns all replica infos, 
regardless of collection).





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] sigram commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


sigram commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r548001677



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/AttributeFetcherImpl.java
##
@@ -197,32 +148,78 @@ public AttributeValues fetchAttributes() {
   }
 }
 
-return new AttributeValuesImpl(nodeToCoreCount,
-nodeToDiskType,
-nodeToFreeDisk,
-nodeToTotalDisk,
-nodeToHeapUsage,
-nodeToSystemLoadAverage,
-syspropSnitchToNodeToValue,
-metricSnitchToNodeToValue);
+for (Node node : nodeToReplicaInternalTags.keySet()) {
+  Set tags = nodeToReplicaInternalTags.get(node);
+  Map>> infos = 
cloudManager.getNodeStateProvider().getReplicaInfo(node.getName(), tags);
+  infos.entrySet().stream()
+  .filter(entry -> 
requestedCollectionNamesMetrics.containsKey(entry.getKey()))
+  .forEach(entry -> {
+CollectionMetricsBuilder collectionMetricsBuilder = 
collectionMetricsBuilders
+.computeIfAbsent(entry.getKey(), c -> new 
CollectionMetricsBuilder());
+entry.getValue().forEach((shardName, replicas) -> {
+  CollectionMetricsBuilder.ShardMetricsBuilder shardMetricsBuilder 
=
+  collectionMetricsBuilder.getShardMetricsBuilders()
+  .computeIfAbsent(shardName, s -> new 
CollectionMetricsBuilder.ShardMetricsBuilder());
+  replicas.forEach(replica -> {
+CollectionMetricsBuilder.ReplicaMetricsBuilder 
replicaMetricsBuilder =
+shardMetricsBuilder.getReplicaMetricsBuilders()
+.computeIfAbsent(replica.getName(), n -> new 
CollectionMetricsBuilder.ReplicaMetricsBuilder());
+replicaMetricsBuilder.setLeader(replica.isLeader());
+if (replica.isLeader()) {
+  shardMetricsBuilder.setLeaderMetrics(replicaMetricsBuilder);
+}
+Set> requestedMetrics = 
requestedCollectionNamesMetrics.get(replica.getCollection());
+if (requestedMetrics == null) {
+  throw new RuntimeException("impossible error");

Review comment:
   I'll remove this check.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] sigram commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


sigram commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r548003144



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/AttributeFetcherImpl.java
##
@@ -197,32 +148,78 @@ public AttributeValues fetchAttributes() {
   }
 }
 
-return new AttributeValuesImpl(nodeToCoreCount,
-nodeToDiskType,
-nodeToFreeDisk,
-nodeToTotalDisk,
-nodeToHeapUsage,
-nodeToSystemLoadAverage,
-syspropSnitchToNodeToValue,
-metricSnitchToNodeToValue);
+for (Node node : nodeToReplicaInternalTags.keySet()) {
+  Set tags = nodeToReplicaInternalTags.get(node);
+  Map>> infos = 
cloudManager.getNodeStateProvider().getReplicaInfo(node.getName(), tags);
+  infos.entrySet().stream()
+  .filter(entry -> 
requestedCollectionNamesMetrics.containsKey(entry.getKey()))
+  .forEach(entry -> {
+CollectionMetricsBuilder collectionMetricsBuilder = 
collectionMetricsBuilders
+.computeIfAbsent(entry.getKey(), c -> new 
CollectionMetricsBuilder());
+entry.getValue().forEach((shardName, replicas) -> {
+  CollectionMetricsBuilder.ShardMetricsBuilder shardMetricsBuilder 
=
+  collectionMetricsBuilder.getShardMetricsBuilders()
+  .computeIfAbsent(shardName, s -> new 
CollectionMetricsBuilder.ShardMetricsBuilder());
+  replicas.forEach(replica -> {
+CollectionMetricsBuilder.ReplicaMetricsBuilder 
replicaMetricsBuilder =
+shardMetricsBuilder.getReplicaMetricsBuilders()
+.computeIfAbsent(replica.getName(), n -> new 
CollectionMetricsBuilder.ReplicaMetricsBuilder());
+replicaMetricsBuilder.setLeader(replica.isLeader());
+if (replica.isLeader()) {
+  shardMetricsBuilder.setLeaderMetrics(replicaMetricsBuilder);
+}
+Set> requestedMetrics = 
requestedCollectionNamesMetrics.get(replica.getCollection());
+if (requestedMetrics == null) {
+  throw new RuntimeException("impossible error");
+}
+requestedMetrics.forEach(metric -> {
+  replicaMetricsBuilder.addMetric(metric, 
replica.get(metric.getInternalName()));
+});
+  });
+});
+  });
+}
+
+
+Map collectionMetrics = new HashMap<>();
+collectionMetricsBuilders.forEach((name, builder) -> 
collectionMetrics.put(name, builder.build()));
+
+return new AttributeValuesImpl(systemSnitchToNodeToValue,
+metricSnitchToNodeToValue, collectionMetrics);
   }
 
-  private static SolrInfoBean.Group 
getGroupFromMetricRegistry(NodeMetricRegistry registry) {
+  private static SolrInfoBean.Group 
getGroupFromMetricRegistry(NodeMetric.Registry registry) {
 switch (registry) {
   case SOLR_JVM:
 return SolrInfoBean.Group.jvm;
   case SOLR_NODE:
 return SolrInfoBean.Group.node;
+  case SOLR_JETTY:
+return SolrInfoBean.Group.jetty;
   default:
 throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, 
"Unsupported registry value " + registry);
 }
   }
 
-  public static String getMetricSnitchTag(String metricName, 
NodeMetricRegistry registry) {
-return SolrClientNodeStateProvider.METRICS_PREFIX + 
SolrMetricManager.getRegistryName(getGroupFromMetricRegistry(registry), 
metricName);
+  public static String getMetricSnitchTag(NodeMetric metric) {
+if (metric.getRegistry() != null) {
+  // regular registry + metricName
+  return SolrClientNodeStateProvider.METRICS_PREFIX +
+  
SolrMetricManager.getRegistryName(getGroupFromMetricRegistry(metric.getRegistry()))
 + ":" + metric.getInternalName();
+} else if (ImplicitSnitch.tags.contains(metric.getInternalName())) {

Review comment:
   Well-known tags are plain Strings, so they are indistinguishable from 
metric keys (such as `solr.jvm:system.properties:user.name`). We could perhaps 
move this check to `NodeMetricImpl` and add a pseudo-registry of 
`WELL_KNOWN_TAG` ... although this sounds maybe too complicated just to 
prettify these couple of lines of code?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14923) Indexing performance is unacceptable when child documents are involved

2020-12-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254118#comment-17254118
 ] 

Thomas Wöckinger commented on SOLR-14923:
-

I ran the indexing tests five times; one run took about 60 minutes +-2 minutes.

Compared to the first version it is about 3-4 minutes better, which is about 5%.

Lock contention on UpdateLog is also a bit better.

Another very interesting behavior showed up: in 5 minutes, about 12000 
java.io.FileNotFoundException and about 2000 java.nio.file.NoSuchFileException 
were thrown. The first is thrown from RAMDirectory.fileLength, the second from 
FSDirectory.fileLength. They are both used by NRTCachingDirectory.

The implementation differs between master and 8.x, but *createTempOutput* still 
uses the method *slowFileExists*, which relies on exception handling to detect 
whether a file exists; this can be avoided most of the time in the existing 
implementations. I'm not sure whether Solr uses -_XX:-StackTraceInThrowable_, 
but if not, these calls can be a hundred times faster. This seems to be Lucene 
related, though. May I open a separate issue for this?
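The cost difference described above, detecting file existence by throwing and catching an exception versus an explicit check, can be illustrated with a small sketch. This is a generic illustration of the pattern, not Lucene's actual slowFileExists code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileExistsCheck {
    // Exception-driven check: every miss constructs an exception and, unless
    // -XX:-StackTraceInThrowable is set, fills in a full stack trace.
    static boolean existsViaException(Path p) {
        try {
            Files.size(p);  // throws NoSuchFileException (an IOException) on a miss
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    // Explicit check: no exception object, no stack-trace capture on a miss.
    static boolean existsViaCheck(Path p) {
        return Files.exists(p);
    }

    public static void main(String[] args) {
        Path missing = Path.of("definitely-does-not-exist-12345");
        System.out.println(existsViaException(missing)); // prints false
        System.out.println(existsViaCheck(missing));     // prints false
    }
}
```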

[~dsmiley] So from my side this looks very good!

 

> Indexing performance is unacceptable when child documents are involved
> --
>
> Key: SOLR-14923
> URL: https://issues.apache.org/jira/browse/SOLR-14923
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update, UpdateRequestProcessors
>Affects Versions: 8.3, 8.4, 8.5, 8.6, 8.7, master (9.0)
>Reporter: Thomas Wöckinger
>Priority: Critical
>  Labels: performance, pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Parallel indexing does not make sense at the moment when child documents are 
> used.
> The org.apache.solr.update.processor.DistributedUpdateProcessor checks at the 
> end of the method doVersionAdd whether Ulog caches should be refreshed.
> This check returns true if any child document is included in the 
> AddUpdateCommand.
> If so, ulog.openRealtimeSearcher() is called. This call is very expensive, 
> and it is executed in a synchronized block of the UpdateLog instance, so all 
> other operations on the UpdateLog are blocked too.
> Because every important UpdateLog method (add, delete, ...) runs in a 
> synchronized block, almost every operation is blocked.
> This reduces multi-threaded index updates to single-threaded behavior.
> The described behavior does not depend on any option of the UpdateRequest, 
> so it makes no difference whether 'waitFlush', 'waitSearcher' or 
> 'softCommit' is true or false.
> The described behavior makes the usage of ChildDocuments useless, because the 
> performance is unacceptable.
>  
>  
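The contention pattern quoted above can be illustrated with a minimal sketch; this is a generic illustration of expensive work inside a synchronized block, not Solr's UpdateLog:

```java
// When one synchronized method performs expensive work, every other
// synchronized method on the same instance blocks until it finishes,
// serializing what should be concurrent updates.
public class ContendedLog {
    private int adds = 0;

    synchronized void add() {              // cheap, but serialized with refreshSearcher()
        adds++;
    }

    synchronized void refreshSearcher() {  // stands in for the expensive call
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    int adds() {
        return adds;
    }

    public static void main(String[] args) throws InterruptedException {
        ContendedLog log = new ContendedLog();
        long start = System.nanoTime();
        Thread refresher = new Thread(log::refreshSearcher);
        refresher.start();
        Thread.sleep(10);   // give refreshSearcher() a chance to grab the lock first
        log.add();          // typically blocks until the lock is released
        refresher.join();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("adds=" + log.adds());            // prints adds=1
        System.out.println(elapsedMs >= 100);                // prints true
    }
}
```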



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] sigram commented on a change in pull request #2133: SOLR-15019: Replica placement API needs a way to fetch existing replica metrics

2020-12-23 Thread GitBox


sigram commented on a change in pull request #2133:
URL: https://github.com/apache/lucene-solr/pull/2133#discussion_r548004312



##
File path: 
solr/core/src/java/org/apache/solr/cluster/placement/impl/MetricAttributeImpl.java
##
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.cluster.placement.impl;
+
+import org.apache.solr.cluster.placement.MetricAttribute;
+
+import java.util.Objects;
+import java.util.function.Function;
+
+/**
+ * Base class for {@link MetricAttribute} implementations.
+ */
+public class MetricAttributeImpl<T> implements MetricAttribute<T> {
+
+  public static final double GB = 1024 * 1024 * 1024;
+
+  /**
+   * Identity converter. It returns the raw value unchanged IFF
+   * the value's type can be cast to the generic type of this attribute,
+   * otherwise it returns null.
+   */
+  @SuppressWarnings("unchecked")
+  public final Function<Object, T> IDENTITY_CONVERTER = v -> {
+try {
+  return (T) v;
+} catch (ClassCastException cce) {
+  return null;
+}
+  };
+
+  /**
+   * Bytes to gigabytes converter. Supports converting number or string
+   * representations of raw values expressed in bytes.
+   */
+  @SuppressWarnings("unchecked")
+  public static final Function<Object, Double> BYTES_TO_GB_CONVERTER = v -> {
+double sizeInBytes;
+if (!(v instanceof Number)) {
+  if (v == null) {
+return null;
+  }
+  try {
+sizeInBytes = Double.valueOf(String.valueOf(v)).doubleValue();
+  } catch (Exception nfe) {
+return null;
+  }
+} else {
+  sizeInBytes = ((Number) v).doubleValue();
+}
+return sizeInBytes / GB;
+  };
+
+  protected final String name;
+  protected final String internalName;
+  protected final Function<Object, T> converter;
+
+  /**
+   * Create a metric attribute.
+   * @param name short-hand name that identifies this attribute.
+   * @param internalName internal name of a Solr metric.
+   */
+  public MetricAttributeImpl(String name, String internalName) {
+this(name, internalName, null);
+  }
+
+  /**
+   * Create a metric attribute.
+   * @param name short-hand name that identifies this attribute.
+   * @param internalName internal name of a Solr metric.
+   * @param converter optional raw value converter. If null then
+   *  {@link #IDENTITY_CONVERTER} will be used.
+   */
+  public MetricAttributeImpl(String name, String internalName, Function<Object, T> converter) {
+Objects.requireNonNull(name);
+Objects.requireNonNull(internalName);
+this.name = name;
+this.internalName = internalName;
+if (converter == null) {
+  this.converter = IDENTITY_CONVERTER;
+} else {
+  this.converter = converter;
+}
+  }
+
+  @Override
+  public String getName() {
+return name;
+  }
+
+  @Override
+  public String getInternalName() {
+return internalName;
+  }
+
+  @Override
+  public Function<Object, T> getConverter() {
+return converter;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+if (this == o) {
+  return true;
+}
+if (o == null || getClass() != o.getClass()) {
+  return false;
+}
+MetricAttribute that = (MetricAttribute) o;
+return name.equals(that.getName()) && 
internalName.equals(that.getInternalName()) && 
converter.equals(that.getConverter());

Review comment:
   Ah, this was a problem with class hierarchy - it should cast to 
`MetricAttributeImpl`, I'll fix this.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-15057) avoid unnecessary object retention in FacetRangeProcessor

2020-12-23 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254141#comment-17254141
 ] 

Mike Drob commented on SOLR-15057:
--

This looks like a good change - curious how it was found. Is this from 
production profiling or some static analysis tooling?

> avoid unnecessary object retention in FacetRangeProcessor
> -
>
> Key: SOLR-15057
> URL: https://issues.apache.org/jira/browse/SOLR-15057
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  * The (private) {{doSubs}} method is a no-op if there are no sub-facets.
>  * The (private) {{intersections}} and {{filters}} arrays are only used by 
> the {{doSubs}} method.
>  * The (private) {{rangeStats}} method currently always populates the 
> {{intersections}} and {{filters}} arrays, even when nothing actually 
> subsequently uses them.
>  * If {{rangeStats}} only populated the {{intersections}} array when it's 
> actually needed then the {{DocSet intersection}} object would remain local in 
> scope and hence the garbage collector could collect it earlier.
> [https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.7.0/solr/core/src/java/org/apache/solr/search/facet/FacetRangeProcessor.java#L531-L555]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (SOLR-15059) Default Grafana dashboard needs to expose graphs for monitoring query performance

2020-12-23 Thread Timothy Potter (Jira)
Timothy Potter created SOLR-15059:
-

 Summary: Default Grafana dashboard needs to expose graphs for 
monitoring query performance
 Key: SOLR-15059
 URL: https://issues.apache.org/jira/browse/SOLR-15059
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Grafana Dashboard, metrics
Reporter: Timothy Potter
Assignee: Timothy Potter


The default Grafana dashboard doesn't expose graphs for monitoring query 
performance. For instance, if I want to see QPS for a collection, that's not 
shown in the default dashboard. Same for quantiles like p95 query latency.

After some digging, these metrics are available in the output from 
{{/admin/metrics}} but are not exported by the exporter.

This PR proposes to enhance the default dashboard with a new Query Metrics 
section with the following metrics:
* Distributed QPS per Collection (aggregated across all cores)
* Distributed QPS per Solr Node (aggregated across all base_url)
* QPS 1-min rate per core
* QPS 5-min rate per core
* Top-level Query latency p99, p95, p75
* Local (non-distrib) query count per core (this is important for determining 
if there is unbalanced load)
* Local (non-distrib) query rate per core (1-min)
* Local (non-distrib) p95 per core

Also, the {{solr-exporter-config.xml}} uses {{jq}} queries to pull metrics from 
the output of {{/admin/metrics}}. This file is huge and contains a lot of 
{{jq}} boilerplate. Moreover, I'm introducing another 15-20 metrics in this PR, 
which only makes the file more verbose.

Thus, I'm also introducing support for jq templates so as to reduce 
boilerplate, reduce syntax errors, and improve readability. For instance, the 
query metrics I'm adding to the config look like this:
{code}
  <str>
    $jq:core-query(1minRate, endswith(".distrib.requestTimes"))
  </str>
  <str>
    $jq:core-query(5minRate, endswith(".distrib.requestTimes"))
  </str>
{code}
This avoids duplicating the complicated {{jq}} query for each metric. The 
templates are optional and should only be used if a given jq structure is 
repeated 3 or more times; otherwise, inlining the jq query is still supported.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-15059) Default Grafana dashboard needs to expose graphs for monitoring query performance

2020-12-23 Thread Timothy Potter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-15059:
--
Fix Version/s: master (9.0)
   8.8

> Default Grafana dashboard needs to expose graphs for monitoring query 
> performance
> -
>
> Key: SOLR-15059
> URL: https://issues.apache.org/jira/browse/SOLR-15059
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Grafana Dashboard, metrics
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>Priority: Major
> Fix For: 8.8, master (9.0)
>
>
> The default Grafana dashboard doesn't expose graphs for monitoring query 
> performance. For instance, if I want to see QPS for a collection, that's not 
> shown in the default dashboard. Same for quantiles like p95 query latency.
> After some digging, these metrics are available in the output from 
> {{/admin/metrics}} but are not exported by the exporter.
> This PR proposes to enhance the default dashboard with a new Query Metrics 
> section with the following metrics:
> * Distributed QPS per Collection (aggregated across all cores)
> * Distributed QPS per Solr Node (aggregated across all base_url)
> * QPS 1-min rate per core
> * QPS 5-min rate per core
> * Top-level Query latency p99, p95, p75
> * Local (non-distrib) query count per core (this is important for determining 
> if there is unbalanced load)
> * Local (non-distrib) query rate per core (1-min)
> * Local (non-distrib) p95 per core
> Also, the {{solr-exporter-config.xml}} uses {{jq}} queries to pull metrics 
> from the output of {{/admin/metrics}}. This file is huge and contains a 
> lot of {{jq}} boilerplate. Moreover, I'm introducing another 15-20 metrics 
> in this PR, which only makes the file more verbose.
> Thus, I'm also introducing support for jq templates so as to reduce 
> boilerplate, reduce syntax errors, and improve readability. For instance, the 
> query metrics I'm adding to the config look like this:
> {code}
>   <str>
>     $jq:core-query(1minRate, endswith(".distrib.requestTimes"))
>   </str>
>   <str>
>     $jq:core-query(5minRate, endswith(".distrib.requestTimes"))
>   </str>
> {code}
> This avoids duplicating the complicated {{jq}} query for each metric. The 
> templates are optional and should only be used if a given jq structure is 
> repeated 3 or more times; otherwise, inlining the jq query is still supported.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-15059) Default Grafana dashboard needs to expose graphs for monitoring query performance

2020-12-23 Thread Timothy Potter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-15059:
--
Description: 
The default Grafana dashboard doesn't expose graphs for monitoring query 
performance. For instance, if I want to see QPS for a collection, that's not 
shown in the default dashboard. Same for quantiles like p95 query latency.

After some digging, these metrics are available in the output from 
{{/admin/metrics}} but are not exported by the exporter.

This PR proposes to enhance the default dashboard with a new Query Metrics 
section with the following metrics:
* Distributed QPS per Collection (aggregated across all cores)
* Distributed QPS per Solr Node (aggregated across all base_url)
* QPS 1-min rate per core
* QPS 5-min rate per core
* Top-level Query latency p99, p95, p75
* Local (non-distrib) query count per core (this is important for determining 
if there is unbalanced load)
* Local (non-distrib) query rate per core (1-min)
* Local (non-distrib) p95 per core

Also, the {{solr-exporter-config.xml}} uses {{jq}} queries to pull metrics from 
the output of {{/admin/metrics}}. This file is huge and contains a lot of 
{{jq}} boilerplate. Moreover, I'm introducing another 15-20 metrics in this PR, 
which only makes the file more verbose.

Thus, I'm also introducing support for jq templates so as to reduce 
boilerplate, reduce syntax errors, and improve readability. For instance, the 
query metrics I'm adding to the config look like this:
{code}
  <str>
    $jq:core-query(1minRate, endswith(".distrib.requestTimes"))
  </str>
  <str>
    $jq:core-query(5minRate, endswith(".distrib.requestTimes"))
  </str>
{code}
This avoids duplicating the complicated {{jq}} query for each metric. The 
templates are optional and should only be used if a given jq structure is 
repeated 3 or more times; otherwise, inlining the jq query is still supported. 
Here's how the templates work:

{code}
  A regex with named groups is used to match template references to template + 
vars using the basic pattern:

  $jq:<TEMPLATE>( <UNIQUE>, <KEYSELECTOR>, <METRIC>, <TYPE> )

  For instance,

  $jq:core(requests_total, endswith(".requestTimes"), count, COUNTER)

  TEMPLATE = core
  UNIQUE = requests_total (unique suffix for this metric, results in a metric 
named "solr_metrics_core_requests_total")
  KEYSELECTOR = endswith(".requestTimes") (filter to select the specific key 
for this metric)
  METRIC = count
  TYPE = COUNTER

  Some templates may have a default type, so you can omit that from your 
template reference, such as:

  $jq:core(requests_total, endswith(".requestTimes"), count)

  Uses the defaultType=COUNTER as many uses of the core template are counts.

  If a template reference omits the metric, then the unique suffix is used, for 
instance:

  $jq:core-query(1minRate, endswith(".distrib.requestTimes"))

  Creates a GAUGE metric (default type) named 
"solr_metrics_core_query_1minRate" using the 1minRate value from the selected 
JSON object.
{code}
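The matching described above can be sketched with a regex using named groups. The pattern, class name, and group names here are illustrative, not the exporter's actual implementation:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JqTemplateRef {
    // Illustrative pattern for $jq:TEMPLATE( UNIQUE, KEYSELECTOR [, METRIC [, TYPE]] ).
    // The keyselector may not contain commas in this simplified sketch.
    static final Pattern TEMPLATE_REF = Pattern.compile(
        "\\$jq:(?<template>[\\w-]+)\\(\\s*(?<unique>\\w+)\\s*,\\s*(?<keyselector>[^,]+?)" +
        "\\s*(?:,\\s*(?<metric>\\w+))?(?:,\\s*(?<type>GAUGE|COUNTER))?\\s*\\)");

    public static void main(String[] args) {
        // Metric and type omitted: the unique suffix doubles as the metric name.
        Matcher m = TEMPLATE_REF.matcher(
            "$jq:core-query(1minRate, endswith(\".distrib.requestTimes\"))");
        if (m.matches()) {
            System.out.println(m.group("template"));     // prints core-query
            System.out.println(m.group("unique"));       // prints 1minRate
            System.out.println(m.group("keyselector"));  // prints endswith(".distrib.requestTimes")
            System.out.println(m.group("metric"));       // prints null (falls back to unique)
        }

        // All four parts given explicitly.
        Matcher full = TEMPLATE_REF.matcher(
            "$jq:core(requests_total, endswith(\".requestTimes\"), count, COUNTER)");
        if (full.matches()) {
            System.out.println(full.group("metric"));    // prints count
            System.out.println(full.group("type"));      // prints COUNTER
        }
    }
}
```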

  was:
The default Grafana dashboard doesn't expose graphs for monitoring query 
performance. For instance, if I want to see QPS for a collection, that's not 
shown in the default dashboard. Same for quantiles like p95 query latency.

After some digging, these metrics are available in the output from 
{{/admin/metrics}} but are not exported by the exporter.

This PR proposes to enhance the default dashboard with a new Query Metrics 
section with the following metrics:
* Distributed QPS per Collection (aggregated across all cores)
* Distributed QPS per Solr Node (aggregated across all base_url)
* QPS 1-min rate per core
* QPS 5-min rate per core
* Top-level Query latency p99, p95, p75
* Local (non-distrib) query count per core (this is important for determining 
if there is unbalanced load)
* Local (non-distrib) query rate per core (1-min)
* Local (non-distrib) p95 per core

Also, the {{solr-exporter-config.xml}} uses {{jq}} queries to pull metrics from 
the output of {{/admin/metrics}}. This file is huge and contains a lot of 
{{jq}} boilerplate. Moreover, I'm introducing another 15-20 metrics in this PR, 
which only makes the file more verbose.

Thus, I'm also introducing support for jq templates so as to reduce 
boilerplate, reduce syntax errors, and improve readability. For instance, the 
query metrics I'm adding to the config look like this:
{code}
  <str>
    $jq:core-query(1minRate, endswith(".distrib.requestTimes"))
  </str>
  <str>
    $jq:core-query(5minRate, endswith(".distrib.requestTimes"))
  </str>
{code}
This avoids duplicating the complicated {{jq}} query for each metric. The 
templates are optional and should only be used if a given jq structure is 
repeated 3 or more times; otherwise, inlining the jq query is still supported.



> Default Grafana dashboard needs to expose graphs for monitoring query 
> performance
> -

[GitHub] [lucene-solr] thelabdude opened a new pull request #2165: SOLR-15059: Improve query performance monitoring

2020-12-23 Thread GitBox


thelabdude opened a new pull request #2165:
URL: https://github.com/apache/lucene-solr/pull/2165


   # Description
   
   See JIRA: https://issues.apache.org/jira/browse/SOLR-15059
   
   # Solution
   
   Improve the Grafana dashboard to include graphs for monitoring query 
performance.
   
   # Tests
   
   Manual testing of the Grafana dashboard in the browser while running query 
load.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [ ] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [ ] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [ ] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [ ] I have developed this patch against the `master` branch.
   - [ ] I have run `./gradlew check`.
   - [ ] I have added tests for my changes.
   - [ ] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-15059) Default Grafana dashboard needs to expose graphs for monitoring query performance

2020-12-23 Thread Timothy Potter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-15059:
--
Status: Patch Available  (was: Open)

> Default Grafana dashboard needs to expose graphs for monitoring query 
> performance
> -
>
> Key: SOLR-15059
> URL: https://issues.apache.org/jira/browse/SOLR-15059
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Grafana Dashboard, metrics
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>Priority: Major
> Fix For: 8.8, master (9.0)
>
> Attachments: Screen Shot 2020-12-23 at 10.22.43 AM.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The default Grafana dashboard doesn't expose graphs for monitoring query 
> performance. For instance, if I want to see QPS for a collection, that's not 
> shown in the default dashboard. Same for quantiles like p95 query latency.
> After some digging, these metrics are available in the output from 
> {{/admin/metrics}} but are not exported by the exporter.
> This PR proposes to enhance the default dashboard with a new Query Metrics 
> section with the following metrics:
> * Distributed QPS per Collection (aggregated across all cores)
> * Distributed QPS per Solr Node (aggregated across all base_url)
> * QPS 1-min rate per core
> * QPS 5-min rate per core
> * Top-level Query latency p99, p95, p75
> * Local (non-distrib) query count per core (this is important for determining 
> if there is unbalanced load)
> * Local (non-distrib) query rate per core (1-min)
> * Local (non-distrib) p95 per core
> Also, the {{solr-exporter-config.xml}} uses {{jq}} queries to pull metrics 
> from the output of {{/admin/metrics}}. This file is huge and contains a lot 
> of {{jq}} boilerplate. Moreover, I'm introducing another 15-20 metrics in 
> this PR, which only makes the file more verbose.
> Thus, I'm also introducing support for jq templates so as to reduce 
> boilerplate, reduce syntax errors, and improve readability. For instance the 
> query metrics I'm adding to the config look like this:
> {code}
>   
> $jq:core-query(1minRate, endswith(".distrib.requestTimes"))
>   
>   
> $jq:core-query(5minRate, endswith(".distrib.requestTimes"))
>   
> {code}
> This avoids duplicating the complicated {{jq}} query for each metric. The 
> templates are optional and should only be used if a given jq structure is 
> repeated 3 or more times; otherwise, inlining the jq query is still 
> supported. Here's how the templates work:
> {code}
>   A regex with named groups is used to match template references to template 
> + vars using the basic pattern:
>   $jq:<TEMPLATE>( <UNIQUE>, <KEYSELECTOR>, <METRIC>, <TYPE> )
>   For instance,
>   $jq:core(requests_total, endswith(".requestTimes"), count, COUNTER)
>   TEMPLATE = core
>   UNIQUE = requests_total (unique suffix for this metric, results in a metric 
> named "solr_metrics_core_requests_total")
>   KEYSELECTOR = endswith(".requestTimes") (filter to select the specific key 
> for this metric)
>   METRIC = count
>   TYPE = COUNTER
>   Some templates may have a default type, so you can omit that from your 
> template reference, such as:
>   $jq:core(requests_total, endswith(".requestTimes"), count)
>   Uses the defaultType=COUNTER as many uses of the core template are counts.
>   If a template reference omits the metric, then the unique suffix is used, 
> for instance:
>   $jq:core-query(1minRate, endswith(".distrib.requestTimes"))
>   Creates a GAUGE metric (default type) named 
> "solr_metrics_core_query_1minRate" using the 1minRate value from the selected 
> JSON object.
> {code}
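The template-reference grammar above can be exercised as a standalone sketch. The pattern below is reconstructed from the group names described in this issue (TEMPLATE, UNIQUE, KEYSELECTOR, METRIC, TYPE) and may differ from the exact regex in the PR:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JqTemplateRefDemo {
    // Reconstructed pattern: METRIC and TYPE are optional trailing arguments,
    // matching the issue text. Not necessarily the exact regex from the PR.
    static final Pattern REF = Pattern.compile(
        "^\\$jq:(?<TEMPLATE>.*?)\\(\\s?(?<UNIQUE>[^,]*),\\s?(?<KEYSELECTOR>[^,]*)"
            + "(,\\s?(?<METRIC>[^,]*)\\s?)?(,\\s?(?<TYPE>[^,]*)\\s?)?\\)$");

    public static void main(String[] args) {
        Matcher m = REF.matcher(
            "$jq:core(requests_total, endswith(\".requestTimes\"), count, COUNTER)");
        if (m.matches()) {
            // Prints: core / requests_total / endswith(".requestTimes") / count / COUNTER
            System.out.println(m.group("TEMPLATE") + " / " + m.group("UNIQUE") + " / "
                + m.group("KEYSELECTOR") + " / " + m.group("METRIC") + " / "
                + m.group("TYPE"));
        }
    }
}
```

Note how the second worked example from the issue, with METRIC and TYPE omitted, leaves those groups null so the caller can apply the defaults described above.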



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Updated] (SOLR-15059) Default Grafana dashboard needs to expose graphs for monitoring query performance

2020-12-23 Thread Timothy Potter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-15059:
--
Attachment: Screen Shot 2020-12-23 at 10.22.43 AM.png
Status: Open  (was: Open)




[GitHub] [lucene-solr] MarcusSorealheis commented on pull request #2165: SOLR-15059: Improve query performance monitoring

2020-12-23 Thread GitBox


MarcusSorealheis commented on pull request #2165:
URL: https://github.com/apache/lucene-solr/pull/2165#issuecomment-750402943


   Is `stripWs` a term of art? Its meaning is semi-obvious, but it would be 
even more obvious with a new name. The test files will live a long time, and 
this PR is a key addition. 






[jira] [Commented] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254165#comment-17254165
 ] 

Mike Drob commented on LUCENE-9570:
---

I tried applying the git blame ignore-revisions magic, but am getting {{fatal: 
invalid object name: 6faa4f98e04}} now. Anybody else seeing this, or is it 
something in my setup?

> Review code diffs after automatic formatting and correct problems before it 
> is applied
> --
>
> Key: LUCENE-9570
> URL: https://issues.apache.org/jira/browse/LUCENE-9570
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Review and correct all the javadocs before they're messed up by automatic 
> formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
> it should be relatively quick.
> *Reviewing diffs manually*
> * switch to branch jira/LUCENE-9570 which the PR is based on:
> {code}
> git remote add dweiss g...@github.com:dweiss/lucene-solr.git
> git fetch dweiss
> git checkout jira/LUCENE-9570
> {code}
> * Open gradle/validation/spotless.gradle and locate the project/ package you 
> wish to review. Enable it in spotless.gradle by creating a corresponding 
> switch case block (refer to existing examples), for example:
> {code}
>   case ":lucene:highlighter":
> target "src/**"
> targetExclude "**/resources/**", "**/overview.html"
> break
> {code}
> * Reformat the code:
> {code}
> gradlew tidy && git diff -w > /tmp/diff.patch && git status
> {code}
> * Look at what has changed (git status) and review the differences manually 
> (/tmp/diff.patch). If everything looks ok, commit it directly to 
> jira/LUCENE-9570 or make a PR against that branch.
> {code}
> git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
> {code}
> *Packages remaining* (put your name next to a module you're working on to 
> avoid duplication).
> *  case ":lucene:analysis:common": (partially done, dweiss)
> *  case ":lucene:analysis:icu":
> *  case ":lucene:analysis:kuromoji":
> *  case ":lucene:analysis:morfologik":
> *  case ":lucene:analysis:nori":
>  * case ":lucene:analysis:opennlp":
>  * case ":lucene:analysis:phonetic":
>  * case ":lucene:analysis:smartcn":
>  * case ":lucene:analysis:stempel":
>  * case ":lucene:backward-codecs":
>  * case ":lucene:benchmark":
>  * case ":lucene:classification":
>  * case ":lucene:codecs":
>  * case ":lucene:demo":
>  * case ":lucene:expressions":
>  * case ":lucene:facet":
>  * case ":lucene:grouping":
>  * case ":lucene:join":
>  * case ":lucene:luke":
>  * case ":lucene:memory":
>  * case ":lucene:misc":
>  * case ":lucene:monitor":
>  * case ":lucene:queries":
>  * case ":lucene:queryparser":
>  * case ":lucene:replicator":
>  * case ":lucene:sandbox":
>  * case ":lucene:spatial3d":
>  * case ":lucene:spatial-extras":
>  * case ":lucene:suggest":
>  * case ":lucene:test-framework":






[GitHub] [lucene-solr] epugh commented on pull request #2165: SOLR-15059: Improve query performance monitoring

2020-12-23 Thread GitBox


epugh commented on pull request #2165:
URL: https://github.com/apache/lucene-solr/pull/2165#issuecomment-750418268


   How does this compare/overlap with any of the dashboards on Grafana's site?  
https://grafana.com/grafana/dashboards?search=solr
   
   Specifically, the one published by @janhoy at 
https://grafana.com/grafana/dashboards/12456?
   
   It would be nice if we had one widely used and supported Grafana dashboard!






[GitHub] [lucene-solr] thelabdude commented on pull request #2165: SOLR-15059: Improve query performance monitoring

2020-12-23 Thread GitBox


thelabdude commented on pull request #2165:
URL: https://github.com/apache/lucene-solr/pull/2165#issuecomment-750423446


   @epugh If you diff the dashboard you linked to with the version in master 
(https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/conf/grafana-solr-dashboard.json)
 you'll see they are the same.
   
   This PR just enhances that dashboard with query performance metrics and 
changes a few of the defaults in how null / zero series are handled.






[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #2152: SOLR-14034: remove deprecated min_rf references

2020-12-23 Thread GitBox


cpoerschke commented on a change in pull request #2152:
URL: https://github.com/apache/lucene-solr/pull/2152#discussion_r548124061



##
File path: solr/core/src/test/org/apache/solr/cloud/HttpPartitionTest.java
##
@@ -548,9 +548,6 @@ protected int sendDoc(int docId, Integer minRf, SolrClient 
solrClient, String co
 doc.addField("a_t", "hello" + docId);
 
 UpdateRequest up = new UpdateRequest();
-if (minRf != null) {
-  up.setParam(UpdateRequest.MIN_REPFACT, String.valueOf(minRf));
-}

Review comment:
   Thanks for the code pointers and detailed explanation!
   
   How about we scope this pull request to remove `minRf` in almost all code 
paths i.e. `sendDoc` gets its unused argument removed in both the replication 
factor and http partition tests but `sendDocsWithRetry` remains unchanged for 
now since as you say it's used in other test classes too?
   
   It appears that the `minRf` argument in `sendDocsWithRetry` is indeed unused 
already but it might be clearer to tidy that up in a separate follow-up pull 
request rather than mix it in here at this time.

##
File path: solr/core/src/test/org/apache/solr/cloud/ReplicationFactorTest.java
##
@@ -461,38 +447,18 @@ protected void doDelete(UpdateRequest req, String msg, 
int expectedRf, int retri
 
   protected int sendDoc(int docId, int minRf) throws Exception {
 UpdateRequest up = new UpdateRequest();
-boolean minRfExplicit = maybeAddMinRfExplicitly(minRf, up);
 SolrInputDocument doc = new SolrInputDocument();
 doc.addField(id, String.valueOf(docId));
 doc.addField("a_t", "hello" + docId);
 up.add(doc);
-return runAndGetAchievedRf(up, minRfExplicit, minRf);
+return runAndGetAchievedRf(up);

Review comment:
   Alright, so with `maybeAddMinRfExplicitly` gone and the `minRf` argument 
in `runAndGetAchievedRf` gone, the `minRf` argument of `sendDoc` becomes 
unused, right? I wonder how the code would work out if `sendDoc` had its 
`minRf` argument taken away too? It's a bit like peeling an onion here, i.e. 
multiple layers (but hopefully it won't make anyone cry).








[GitHub] [lucene-solr] madrob commented on a change in pull request #2165: SOLR-15059: Improve query performance monitoring

2020-12-23 Thread GitBox


madrob commented on a change in pull request #2165:
URL: https://github.com/apache/lucene-solr/pull/2165#discussion_r548076192



##
File path: 
solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/exporter/MetricsConfiguration.java
##
@@ -105,17 +108,23 @@ public static MetricsConfiguration from(String resource) 
throws Exception {
   public static MetricsConfiguration from(Document config) throws Exception {
 Node settings = getNode(config, "/config/settings");
 
+Map<String, MetricsQueryTemplate> jqTemplatesMap = null;
+NodeList jqTemplates = 
(NodeList)(xpathFactory.newXPath()).evaluate("/config/jq-templates/template", 
config, XPathConstants.NODESET);

Review comment:
   Noble just spent a bunch of effort getting rid of XPath in other places; 
is this a good direction to be going now?

##
File path: 
solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/exporter/MetricsQueryTemplate.java
##
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.prometheus.exporter;
+
+import java.util.Objects;
+import java.util.Optional;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class MetricsQueryTemplate {
+  private static final Pattern matchJqTemplate =
+  
Pattern.compile("^\\$jq:(?<TEMPLATE>.*?)\\(\\s?(?<UNIQUE>[^,]*),\\s?(?<KEYSELECTOR>[^,]*)(,\\s?(?<METRIC>[^,]*)\\s?)?(,\\s?(?<TYPE>[^,]*)\\s?)?\\)$");

Review comment:
   Please add a human readable comment describing the intent

##
File path: 
solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/exporter/MetricsQueryTemplate.java
##
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.prometheus.exporter;
+
+import java.util.Objects;
+import java.util.Optional;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class MetricsQueryTemplate {
+  private static final Pattern matchJqTemplate =
+  
Pattern.compile("^\\$jq:(?<TEMPLATE>.*?)\\(\\s?(?<UNIQUE>[^,]*),\\s?(?<KEYSELECTOR>[^,]*)(,\\s?(?<METRIC>[^,]*)\\s?)?(,\\s?(?<TYPE>[^,]*)\\s?)?\\)$");
+
+  public static Optional<Matcher> matches(String jsonQuery) {
+Optional<Matcher> maybe = Optional.empty();
+if (jsonQuery != null) {
+  String toMatch = jsonQuery.replaceAll("\\s+", " ").trim();
+  Matcher m = matchJqTemplate.matcher(toMatch);
+  if (m.matches()) {
+maybe = Optional.of(m);
+  }
+}
+return maybe;
+  }
+
+  private final String name;
+  private final String defaultType;
+  private final String template;
+
+  public MetricsQueryTemplate(String name, String template, String 
defaultType) {
+Objects.requireNonNull(name, "jq template must have a name");
+Objects.requireNonNull(template, "jq template is required");
+
+this.name = name;
+this.template = template.replaceAll("\\s+", " ").trim();
+if (this.template.isEmpty()) {
+  throw new IllegalArgumentException("jq template must not be empty");
+}
+this.defaultType = defaultType != null ? defaultType : "GAUGE";
+  }
+
+  public String getName() {
+return name;
+  }
+
+  public String getDefaultType() {
+return defaultType;
+  }
+
+  public String getTemplate() {
+return template;
+  }
+
+  public String applyTemplate(final Matcher matched) {
+String keySelector = matched.group("KEYSELECTOR");
+if (keySelector != null) {
+  if (!keySelector.contains("select(") && !keySelector.contains(".key")) {
+if (keySelector.contains("(") && 

[GitHub] [lucene-solr] madrob commented on pull request #2163: LUCENE-9647 Add back github action for Ant

2020-12-23 Thread GitBox


madrob commented on pull request #2163:
URL: https://github.com/apache/lucene-solr/pull/2163#issuecomment-750424612


   It needs to be in the default branch. That we didn't have this for 8.x 
before isn't a reason not to add it now.






[GitHub] [lucene-solr] thelabdude commented on a change in pull request #2165: SOLR-15059: Improve query performance monitoring

2020-12-23 Thread GitBox


thelabdude commented on a change in pull request #2165:
URL: https://github.com/apache/lucene-solr/pull/2165#discussion_r548126909



##
File path: 
solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/exporter/MetricsConfiguration.java
##
@@ -105,17 +108,23 @@ public static MetricsConfiguration from(String resource) 
throws Exception {
   public static MetricsConfiguration from(Document config) throws Exception {
 Node settings = getNode(config, "/config/settings");
 
+Map<String, MetricsQueryTemplate> jqTemplatesMap = null;
+NodeList jqTemplates = 
(NodeList)(xpathFactory.newXPath()).evaluate("/config/jq-templates/template", 
config, XPathConstants.NODESET);

Review comment:
   The code is already using XPath, I'm not introducing it here?








[GitHub] [lucene-solr] thelabdude commented on a change in pull request #2165: SOLR-15059: Improve query performance monitoring

2020-12-23 Thread GitBox


thelabdude commented on a change in pull request #2165:
URL: https://github.com/apache/lucene-solr/pull/2165#discussion_r548128158



##
File path: 
solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/exporter/MetricsQueryTemplate.java
##
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.prometheus.exporter;
+
+import java.util.Objects;
+import java.util.Optional;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class MetricsQueryTemplate {
+  private static final Pattern matchJqTemplate =
+  
Pattern.compile("^\\$jq:(?<TEMPLATE>.*?)\\(\\s?(?<UNIQUE>[^,]*),\\s?(?<KEYSELECTOR>[^,]*)(,\\s?(?<METRIC>[^,]*)\\s?)?(,\\s?(?<TYPE>[^,]*)\\s?)?\\)$");
+
+  public static Optional<Matcher> matches(String jsonQuery) {
+Optional<Matcher> maybe = Optional.empty();
+if (jsonQuery != null) {
+  String toMatch = jsonQuery.replaceAll("\\s+", " ").trim();
+  Matcher m = matchJqTemplate.matcher(toMatch);
+  if (m.matches()) {
+maybe = Optional.of(m);
+  }
+}
+return maybe;
+  }
+
+  private final String name;
+  private final String defaultType;
+  private final String template;
+
+  public MetricsQueryTemplate(String name, String template, String 
defaultType) {
+Objects.requireNonNull(name, "jq template must have a name");
+Objects.requireNonNull(template, "jq template is required");
+
+this.name = name;
+this.template = template.replaceAll("\\s+", " ").trim();
+if (this.template.isEmpty()) {
+  throw new IllegalArgumentException("jq template must not be empty");
+}
+this.defaultType = defaultType != null ? defaultType : "GAUGE";
+  }
+
+  public String getName() {
+return name;
+  }
+
+  public String getDefaultType() {
+return defaultType;
+  }
+
+  public String getTemplate() {
+return template;
+  }
+
+  public String applyTemplate(final Matcher matched) {
+String keySelector = matched.group("KEYSELECTOR");
+if (keySelector != null) {
+  if (!keySelector.contains("select(") && !keySelector.contains(".key")) {
+if (keySelector.contains("(") && keySelector.contains(")")) {
+  // some kind of function here ...
+  keySelector = ".key | " + keySelector.trim();
+} else {
+  keySelector = ".key == " + keySelector.trim();
+}
+  }
+}
+String unique = matched.group("UNIQUE").trim();
+String type = matched.group("TYPE");
+if (type == null) {
+  type = defaultType;
+}
+
+String metric = matched.group("METRIC");
+if (metric == null) {
+  metric = unique;
+}
+
+// could be a simple field name or some kind of function here
+if (!metric.contains("$")) {
+  if ("object.value".equals(metric)) {
+metric = "$object.value"; // don't require the user to supply the $

Review comment:
   just trying to be helpful to keep the template syntax simpler ... e.g. I 
like {{count}} vs. {{$object.value.count}}
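The defaulting discussed in this review thread (an omitted METRIC falls back to the UNIQUE suffix, and a bare field name like {{count}} gets the {{$object.value.}} prefix added for the user) can be illustrated with a small hypothetical helper; this is a sketch of the rules as described, not the actual exporter code:

```java
public class MetricDefaultingSketch {
    // Sketch of the metric-defaulting rules from the review discussion:
    // - metric omitted in the template reference -> reuse the unique suffix
    // - bare "object.value" -> prepend '$' so users needn't supply it
    // - bare field name such as "count" -> expand to "$object.value.count"
    static String resolveMetric(String metric, String unique) {
        if (metric == null) {
            metric = unique; // template reference omitted the metric
        }
        if (!metric.contains("$")) {
            if ("object.value".equals(metric)) {
                metric = "$object.value"; // don't require the user to supply the $
            } else {
                metric = "$object.value." + metric; // e.g. count -> $object.value.count
            }
        }
        return metric;
    }

    public static void main(String[] args) {
        System.out.println(resolveMetric("count", "requests_total")); // $object.value.count
        System.out.println(resolveMetric(null, "1minRate"));          // $object.value.1minRate
    }
}
```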








[jira] [Commented] (SOLR-15057) avoid unnecessary object retention in FacetRangeProcessor

2020-12-23 Thread Christine Poerschke (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254194#comment-17254194
 ] 

Christine Poerschke commented on SOLR-15057:


{quote}... curious how it was found ...
{quote}
Bit of a mix: we did some assessments a while ago re: how filter cache entries 
translated into memory requirements for the JVM, and from that I vaguely 
remembered about {{DocSet}} object sizes. And then this week, whilst 
investigating faceting performance, I saw the _"save for later // TODO: only 
save if number of slots is small enough?"_ comment in the code, which somehow 
stood out (since we don't do sub-facets, i.e. there is no "later").

> avoid unnecessary object retention in FacetRangeProcessor
> -
>
> Key: SOLR-15057
> URL: https://issues.apache.org/jira/browse/SOLR-15057
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  * The (private) {{doSubs}} method is a no-op if there are no sub-facets.
>  * The (private) {{intersections}} and {{filters}} arrays are only used by 
> the {{doSubs}} method.
>  * The (private) {{rangeStats}} method currently always populates the 
> {{intersections}} and {{filters}} arrays, even when nothing actually 
> subsequently uses them.
>  * If {{rangeStats}} only populated the {{intersections}} array when it's 
> actually needed then the {{DocSet intersection}} object would remain local in 
> scope and hence the garbage collector could collect it earlier.
> [https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.7.0/solr/core/src/java/org/apache/solr/search/facet/FacetRangeProcessor.java#L531-L555]
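The proposed change can be sketched like this (illustrative names only, not the actual {{FacetRangeProcessor}} internals; see the linked source for the real code):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (made-up names) of the lazy-retention pattern proposed
// above: retain the per-slot intersection only when sub-facets will use it
// later, so that otherwise it stays a local variable and is collectible by
// the garbage collector as soon as the method returns.
public class RangeStatsSketch {
    private final boolean hasSubFacets;
    private List<Object> intersections; // allocated lazily, only for doSubs

    public RangeStatsSketch(boolean hasSubFacets) {
        this.hasSubFacets = hasSubFacets;
    }

    public void rangeStats(Object intersection) {
        if (hasSubFacets) {
            if (intersections == null) {
                intersections = new ArrayList<>();
            }
            intersections.add(intersection); // only saved when there is a "later"
        }
        // ... compute counts/stats from the intersection here ...
    }

    public boolean retainedAnything() {
        return intersections != null;
    }

    public static void main(String[] args) {
        RangeStatsSketch noSubs = new RangeStatsSketch(false);
        noSubs.rangeStats(new Object());
        System.out.println(noSubs.retainedAnything()); // false: nothing retained
    }
}
```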






[GitHub] [lucene-solr] madrob closed pull request #1436: SOLR-14413: allow timeAllowed and cursorMark parameters

2020-12-23 Thread GitBox


madrob closed pull request #1436:
URL: https://github.com/apache/lucene-solr/pull/1436


   






[jira] [Updated] (SOLR-14413) allow timeAllowed and cursorMark parameters

2020-12-23 Thread Mike Drob (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-14413:
-
Fix Version/s: master (9.0)
   8.8
 Assignee: Mike Drob
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed this to 9.0 and 8.8, thanks for being patient with us and thank you for 
contributing the patch, [~slackhappy]!

> allow timeAllowed and cursorMark parameters
> ---
>
> Key: SOLR-14413
> URL: https://issues.apache.org/jira/browse/SOLR-14413
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: John Gallagher
>Assignee: Mike Drob
>Priority: Minor
> Fix For: 8.8, master (9.0)
>
> Attachments: SOLR-14413-bram.patch, SOLR-14413-jg-update1.patch, 
> SOLR-14413-jg-update2.patch, SOLR-14413-jg-update3.patch, SOLR-14413.patch, 
> Screen Shot 2020-10-23 at 10.08.26 PM.png, Screen Shot 2020-10-23 at 10.09.11 
> PM.png, image-2020-08-18-16-56-41-736.png, image-2020-08-18-16-56-59-178.png, 
> image-2020-08-21-14-18-36-229.png, timeallowed_cursormarks_results.txt
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Ever since cursorMarks were introduced in SOLR-5463 in 2014, cursorMark and 
> timeAllowed parameters were not allowed in combination ("Can not search using 
> both cursorMark and timeAllowed"), from [QueryComponent.java|#L359]:
> {code:java}
> if (null != rb.getCursorMark() && 0 < timeAllowed) {
>   // fundamentally incompatible
>   throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
>       "Can not search using both " + CursorMarkParams.CURSOR_MARK_PARAM
>       + " and " + CommonParams.TIME_ALLOWED);
> }
> {code}
> While theoretically impure to use them in combination, it is often desirable 
> to support cursormarks-style deep paging and attempt to protect Solr nodes 
> from runaway queries using timeAllowed, in the hopes that most of the time, 
> the query completes in the allotted time, and there is no conflict.
>  
> However if the query takes too long, it may be preferable to end the query 
> and protect the Solr node and provide the user with a somewhat inaccurate 
> sorted list. As noted in SOLR-6930, SOLR-5986 and others, timeAllowed is 
> frequently used to prevent runaway load.  In fact, cursorMark and 
> shards.tolerant are allowed in combination, so any argument in favor of 
> purity would be a bit muddied in my opinion.
>  
> This was discussed once in the mailing list that I can find: 
> [https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201506.mbox/%3c5591740b.4080...@elyograg.org%3E]
>  It did not look like there was strong support for preventing the combination.
>  
> I have tested cursorMark and timeAllowed combination together, and even when 
> partial results are returned because the timeAllowed is exceeded, the 
> cursorMark response value is still valid and reasonable.
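The combination described in the issue can be exercised from any client; the sketch below (Python, with the HTTP call faked so the paging loop is visible on its own — none of these names are SolrJ or Solr APIs) shows cursorMark-style deep paging where every request also carries a timeAllowed cap:

```python
# Sketch of cursorMark deep paging with a timeAllowed cap on every request.
# `fetch` stands in for an HTTP call to /select; it is faked below so the
# paging loop can be shown without a live Solr node (all data is hypothetical).

def page_all(fetch, rows=2, time_allowed_ms=500):
    """Collect ids across pages; each request sends both cursorMark and timeAllowed."""
    cursor, ids = "*", []
    while True:
        resp = fetch({"q": "*:*", "sort": "id asc", "rows": rows,
                      "cursorMark": cursor, "timeAllowed": time_allowed_ms})
        ids.extend(doc["id"] for doc in resp["response"]["docs"])
        next_cursor = resp["nextCursorMark"]
        if next_cursor == cursor:  # unchanged cursor => no more results
            return ids
        cursor = next_cursor

def fake_fetch(params):
    # Pretend index of five docs, served in sorted pages of `rows` documents.
    all_ids = ["a", "b", "c", "d", "e"]
    start = 0 if params["cursorMark"] == "*" else int(params["cursorMark"])
    page = all_ids[start:start + params["rows"]]
    nxt = str(start + len(page)) if page else params["cursorMark"]
    return {"response": {"docs": [{"id": i} for i in page]}, "nextCursorMark": nxt}

print(page_all(fake_fetch))  # -> ['a', 'b', 'c', 'd', 'e']
```

Even when timeAllowed expires and a page comes back partial, the nextCursorMark returned with that page remains a valid resume point, which is what the testing described above relies on.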



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14413) allow timeAllowed and cursorMark parameters

2020-12-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254199#comment-17254199
 ] 

ASF subversion and git services commented on SOLR-14413:


Commit b13011e97aeddf7c8af1e0bd851c167665187f36 in lucene-solr's branch 
refs/heads/branch_8x from John Gallagher
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b13011e ]

SOLR-14413 allow timeAllowed and cursorMark parameters

closes #1436


> allow timeAllowed and cursorMark parameters
> ---
>
> Key: SOLR-14413
> URL: https://issues.apache.org/jira/browse/SOLR-14413
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: John Gallagher
>Priority: Minor
> Attachments: SOLR-14413-bram.patch, SOLR-14413-jg-update1.patch, 
> SOLR-14413-jg-update2.patch, SOLR-14413-jg-update3.patch, SOLR-14413.patch, 
> Screen Shot 2020-10-23 at 10.08.26 PM.png, Screen Shot 2020-10-23 at 10.09.11 
> PM.png, image-2020-08-18-16-56-41-736.png, image-2020-08-18-16-56-59-178.png, 
> image-2020-08-21-14-18-36-229.png, timeallowed_cursormarks_results.txt
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Ever since cursorMarks were introduced in SOLR-5463 in 2014, cursorMark and 
> timeAllowed parameters were not allowed in combination ("Can not search using 
> both cursorMark and timeAllowed"), from [QueryComponent.java|#L359]:
> {code:java}
> if (null != rb.getCursorMark() && 0 < timeAllowed) {
>   // fundamentally incompatible
>   throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
>       "Can not search using both " + CursorMarkParams.CURSOR_MARK_PARAM
>       + " and " + CommonParams.TIME_ALLOWED);
> }
> {code}
> While theoretically impure to use them in combination, it is often desirable 
> to support cursormarks-style deep paging and attempt to protect Solr nodes 
> from runaway queries using timeAllowed, in the hopes that most of the time, 
> the query completes in the allotted time, and there is no conflict.
>  
> However if the query takes too long, it may be preferable to end the query 
> and protect the Solr node and provide the user with a somewhat inaccurate 
> sorted list. As noted in SOLR-6930, SOLR-5986 and others, timeAllowed is 
> frequently used to prevent runaway load.  In fact, cursorMark and 
> shards.tolerant are allowed in combination, so any argument in favor of 
> purity would be a bit muddied in my opinion.
>  
> This was discussed once in the mailing list that I can find: 
> [https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201506.mbox/%3c5591740b.4080...@elyograg.org%3E]
>  It did not look like there was strong support for preventing the combination.
>  
> I have tested cursorMark and timeAllowed combination together, and even when 
> partial results are returned because the timeAllowed is exceeded, the 
> cursorMark response value is still valid and reasonable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] thelabdude commented on a change in pull request #2165: SOLR-15059: Improve query performance monitoring

2020-12-23 Thread GitBox


thelabdude commented on a change in pull request #2165:
URL: https://github.com/apache/lucene-solr/pull/2165#discussion_r548158086



##
File path: 
solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/exporter/MetricsConfiguration.java
##
@@ -105,17 +108,23 @@ public static MetricsConfiguration from(String resource) 
throws Exception {
   public static MetricsConfiguration from(Document config) throws Exception {
 Node settings = getNode(config, "/config/settings");
 
+Map jqTemplatesMap = null;
+NodeList jqTemplates = 
(NodeList)(xpathFactory.newXPath()).evaluate("/config/jq-templates/template", 
config, XPathConstants.NODESET);

Review comment:
   Also, this happens once during initialization of the Prom exporter, so 
efficiency of XPath isn't so much of a concern. A few extra millis (if that 
much) during init doesn't seem like a problem to me, esp. if it makes the code 
cleaner.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14413) allow timeAllowed and cursorMark parameters

2020-12-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254200#comment-17254200
 ] 

ASF subversion and git services commented on SOLR-14413:


Commit 70f461ee453afd59f971a3ec5c431181aa1edd10 in lucene-solr's branch 
refs/heads/master from John Gallagher
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=70f461e ]

SOLR-14413 allow timeAllowed and cursorMark parameters

closes #1436


> allow timeAllowed and cursorMark parameters
> ---
>
> Key: SOLR-14413
> URL: https://issues.apache.org/jira/browse/SOLR-14413
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: John Gallagher
>Priority: Minor
> Attachments: SOLR-14413-bram.patch, SOLR-14413-jg-update1.patch, 
> SOLR-14413-jg-update2.patch, SOLR-14413-jg-update3.patch, SOLR-14413.patch, 
> Screen Shot 2020-10-23 at 10.08.26 PM.png, Screen Shot 2020-10-23 at 10.09.11 
> PM.png, image-2020-08-18-16-56-41-736.png, image-2020-08-18-16-56-59-178.png, 
> image-2020-08-21-14-18-36-229.png, timeallowed_cursormarks_results.txt
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Ever since cursorMarks were introduced in SOLR-5463 in 2014, cursorMark and 
> timeAllowed parameters were not allowed in combination ("Can not search using 
> both cursorMark and timeAllowed"), from [QueryComponent.java|#L359]:
> {code:java}
> if (null != rb.getCursorMark() && 0 < timeAllowed) {
>   // fundamentally incompatible
>   throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
>       "Can not search using both " + CursorMarkParams.CURSOR_MARK_PARAM
>       + " and " + CommonParams.TIME_ALLOWED);
> }
> {code}
> While theoretically impure to use them in combination, it is often desirable 
> to support cursormarks-style deep paging and attempt to protect Solr nodes 
> from runaway queries using timeAllowed, in the hopes that most of the time, 
> the query completes in the allotted time, and there is no conflict.
>  
> However if the query takes too long, it may be preferable to end the query 
> and protect the Solr node and provide the user with a somewhat inaccurate 
> sorted list. As noted in SOLR-6930, SOLR-5986 and others, timeAllowed is 
> frequently used to prevent runaway load.  In fact, cursorMark and 
> shards.tolerant are allowed in combination, so any argument in favor of 
> purity would be a bit muddied in my opinion.
>  
> This was discussed once in the mailing list that I can find: 
> [https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201506.mbox/%3c5591740b.4080...@elyograg.org%3E]
>  It did not look like there was strong support for preventing the combination.
>  
> I have tested cursorMark and timeAllowed combination together, and even when 
> partial results are returned because the timeAllowed is exceeded, the 
> cursorMark response value is still valid and reasonable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] thelabdude commented on pull request #2165: SOLR-15059: Improve query performance monitoring

2020-12-23 Thread GitBox


thelabdude commented on pull request #2165:
URL: https://github.com/apache/lucene-solr/pull/2165#issuecomment-750440460


   > Is `stripWs` a term of art? Its definition is semi-obvious but will be 
even more obvious with a new name. The test files will live a long time and 
this PR is a key addition.
   
   @MarcusSorealheis Updated the name ;-)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-15059) Default Grafana dashboard needs to expose graphs for monitoring query performance

2020-12-23 Thread Timothy Potter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-15059:
--
Description: 
The default Grafana dashboard doesn't expose graphs for monitoring query 
performance. For instance, if I want to see QPS for a collection, that's not 
shown in the default dashboard. Same for quantiles like p95 query latency.

After some digging, these metrics are available in the output from 
{{/admin/metrics}} but are not exported by the exporter.

This PR proposes to enhance the default dashboard with a new Query Metrics 
section with the following metrics:
* Distributed QPS per Collection (aggregated across all cores)
* Distributed QPS per Solr Node (aggregated across all base_url)
* QPS 1-min rate per core
* QPS 5-min rate per core
* Top-level Query latency p99, p95, p75
* Local (non-distrib) query count per core (this is important for determining 
if there is unbalanced load)
* Local (non-distrib) query rate per core (1-min)
* Local (non-distrib) p95 per core

Also, the {{solr-exporter-config.xml}} uses {{jq}} queries to pull metrics from 
the output from {{/admin/metrics}}. This file is huge and contains a bunch of 
{{jq}} boilerplate. Moreover, I'm introducing another 15-20 metrics in this PR, 
it only makes the file more verbose.

Thus, I'm also introducing support for jq templates so as to reduce 
boilerplate, reduce syntax errors, and improve readability. For instance the 
query metrics I'm adding to the config look like this:
{code}
  
$jq:core-query(1minRate, endswith(".distrib.requestTimes"))
  
  
$jq:core-query(5minRate, endswith(".distrib.requestTimes"))
  
{code}
Instead of duplicating the complicated {{jq}} query for each metric. The 
templates are optional and only should be used if a given jq structure is 
repeated 3 or more times. Otherwise, inlining the jq query is still supported. 
Here's how the templates work:

{code}
  A regex with named groups is used to match template references to template + 
vars using the basic pattern:

  $jq:TEMPLATE( UNIQUE, KEYSELECTOR, METRIC, TYPE )

  For instance,

  $jq:core(requests_total, endswith(".requestTimes"), count, COUNTER)

  TEMPLATE = core
  UNIQUE = requests_total (unique suffix for this metric, results in a metric 
named "solr_metrics_core_requests_total")
  KEYSELECTOR = endswith(".requestTimes") (filter to select the specific key 
for this metric)
  METRIC = count
  TYPE = COUNTER

  Some templates may have a default type, so you can omit that from your 
template reference, such as:

  $jq:core(requests_total, endswith(".requestTimes"), count)

  Uses the defaultType=COUNTER as many uses of the core template are counts.

  If a template reference omits the metric, then the unique suffix is used, for 
instance:

  $jq:core-query(1minRate, endswith(".distrib.requestTimes"))

  Creates a GAUGE metric (default type) named 
"solr_metrics_core_query_1minRate" using the 1minRate value from the selected 
JSON object.
{code}
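The template-reference grammar above can be handled with a single named-group regex; the Python sketch below is an illustrative reconstruction (the actual pattern used by the exporter is not shown in this thread), including the documented defaulting of METRIC to the unique suffix and TYPE to GAUGE:

```python
import re

# Illustrative named-group regex for "$jq:TEMPLATE( UNIQUE, KEYSELECTOR, METRIC, TYPE )"
# where METRIC and TYPE are optional. Not the shipped solr-exporter pattern.
TEMPLATE_REF = re.compile(
    r"\$jq:(?P<template>[\w-]+)\(\s*"      # $jq:TEMPLATE(
    r"(?P<unique>[^,)]+?)\s*,\s*"          # UNIQUE,
    r"(?P<keyselector>.+?)"                # KEYSELECTOR (may contain parens)
    r"(?:\s*,\s*(?P<metric>\w+))?"         # optional METRIC
    r"(?:\s*,\s*(?P<type>[A-Z]+))?\s*\)$"  # optional TYPE, closing paren
)

def parse_ref(ref, default_type="GAUGE"):
    m = TEMPLATE_REF.match(ref.strip())
    if m is None:
        return None
    g = m.groupdict()
    return {
        "template": g["template"],
        "unique": g["unique"],
        "keyselector": g["keyselector"],
        "metric": g["metric"] or g["unique"],  # metric defaults to the unique suffix
        "type": g["type"] or default_type,     # type defaults per template (GAUGE here)
    }

ref = parse_ref('$jq:core-query(1minRate, endswith(".distrib.requestTimes"))')
print("solr_metrics_%s_%s" % (ref["template"].replace("-", "_"), ref["unique"]))
# -> solr_metrics_core_query_1minRate
```

The last two lines reproduce the metric-naming example from the description ("solr_metrics_core_query_1minRate").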

Just so people don't have to go digging in the large diff on the config XML, 
here are the query metrics I'm adding to the exporter config with use of the 
templates idea:
{code}
  
$jq:core-query(errors_1minRate, endswith(".errors"), 1minRate)
  
  
$jq:core-query(client_errors_1minRate, endswith(".clientErrors"), 
1minRate)
  
  
$jq:core-query(1minRate, endswith(".distrib.requestTimes"))
  
  
$jq:core-query(5minRate, endswith(".distrib.requestTimes"))
  
  
$jq:core-query(median_ms, endswith(".distrib.requestTimes"))
  
  
$jq:core-query(p75_ms, endswith(".distrib.requestTimes"))
  
  
$jq:core-query(p95_ms, endswith(".distrib.requestTimes"))
  
  
$jq:core-query(p99_ms, endswith(".distrib.requestTimes"))
  
  
$jq:core-query(mean_rate, endswith(".distrib.requestTimes"), 
meanRate)
  
  
$jq:core-query(local_1minRate, endswith(".local.requestTimes"), 
1minRate)
  
  
$jq:core-query(local_5minRate, endswith(".local.requestTimes"), 
5minRate)
  
  
$jq:core-query(local_median_ms, endswith(".local.requestTimes"), 
median_ms)
  
  
$jq:core-query(local_p75_ms, endswith(".local.requestTimes"), 
p75_ms)
  
  
$jq:core-query(local_p95_ms, endswith(".local.requestTimes"), 
p95_ms)
  
  
$jq:core-query(local_p99_ms, endswith(".local.requestTimes"), 
p99_ms)
  
  
$jq:core-query(local_mean_rate, endswith(".local.requestTimes"), 
meanRate)
  
  
$jq:core-query(local_count, endswith(".local.requestTimes"), count, 
COUNT

[GitHub] [lucene-solr] tflobbe commented on pull request #2163: LUCENE-9647 Add back github action for Ant

2020-12-23 Thread GitBox


tflobbe commented on pull request #2163:
URL: https://github.com/apache/lucene-solr/pull/2163#issuecomment-750441504


   > That we didn't have this for 8.x before isn't a reason not to add it now.
   +1! didn't mean we shouldn't, I was just referring to the fact that I didn't 
remove it for a particular reason, it just never existed for this branch.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] tflobbe edited a comment on pull request #2163: LUCENE-9647 Add back github action for Ant

2020-12-23 Thread GitBox


tflobbe edited a comment on pull request #2163:
URL: https://github.com/apache/lucene-solr/pull/2163#issuecomment-750441504


   > That we didn't have this for 8.x before isn't a reason not to add it now.
   
   +1! didn't mean we shouldn't, I was just referring to the fact that I didn't 
remove it for a particular reason, it just never existed for this branch.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] madrob closed pull request #413: SOLR-11431: Leader candidate cannot become leader if replica responds 500 to PeerSync

2020-12-23 Thread GitBox


madrob closed pull request #413:
URL: https://github.com/apache/lucene-solr/pull/413


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] madrob commented on pull request #413: SOLR-11431: Leader candidate cannot become leader if replica responds 500 to PeerSync

2020-12-23 Thread GitBox


madrob commented on pull request #413:
URL: https://github.com/apache/lucene-solr/pull/413#issuecomment-750444015


   Closing a stale PR, it looks like this might have had changes once upon a 
time but then they got lost in a rebase.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] madrob commented on pull request #383: In ContextQuery, use a more comprehensive check for an empty prefix automaton.

2020-12-23 Thread GitBox


madrob commented on pull request #383:
URL: https://github.com/apache/lucene-solr/pull/383#issuecomment-750447704


   Going through old patches trying to clean some up before the end of the 
year. @jtibshirani this patch looks good to me, would you like to commit it?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] jtibshirani commented on pull request #383: In ContextQuery, use a more comprehensive check for an empty prefix automaton.

2020-12-23 Thread GitBox


jtibshirani commented on pull request #383:
URL: https://github.com/apache/lucene-solr/pull/383#issuecomment-750451014


   Thanks @madrob for pinging on this, I'll commit in the next few days.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] madrob commented on pull request #908: Change the file format of README files from README.txt to README.md a…

2020-12-23 Thread GitBox


madrob commented on pull request #908:
URL: https://github.com/apache/lucene-solr/pull/908#issuecomment-750451742


   Done in #1450



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] madrob closed pull request #908: Change the file format of README files from README.txt to README.md a…

2020-12-23 Thread GitBox


madrob closed pull request #908:
URL: https://github.com/apache/lucene-solr/pull/908


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] madrob merged pull request #2163: LUCENE-9647 Add back github action for Ant

2020-12-23 Thread GitBox


madrob merged pull request #2163:
URL: https://github.com/apache/lucene-solr/pull/2163


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9647) Add back ant precommit on PR for branch 8

2020-12-23 Thread Mike Drob (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated LUCENE-9647:
--
Priority: Minor  (was: Major)

> Add back ant precommit on PR for branch 8
> -
>
> Key: LUCENE-9647
> URL: https://issues.apache.org/jira/browse/LUCENE-9647
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Minor
> Fix For: master (9.0)
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When migrating everything to gradle only, we accidentally deleted our 
> branch_8x PR precommit action.
> The file needs to be on master, but the branch specification should specify 
> branch_8x I believe.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-9647) Add back ant precommit on PR for branch 8

2020-12-23 Thread Mike Drob (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob resolved LUCENE-9647.
---
Fix Version/s: master (9.0)
   Resolution: Fixed

> Add back ant precommit on PR for branch 8
> -
>
> Key: LUCENE-9647
> URL: https://issues.apache.org/jira/browse/LUCENE-9647
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Reporter: Mike Drob
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When migrating everything to gradle only, we accidentally deleted our 
> branch_8x PR precommit action.
> The file needs to be on master, but the branch specification should specify 
> branch_8x I believe.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9647) Add back ant precommit on PR for branch 8

2020-12-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254234#comment-17254234
 ] 

ASF subversion and git services commented on LUCENE-9647:
-

Commit ba8221098e4f13a57676d66a33093e6d5a7575ac in lucene-solr's branch 
refs/heads/master from Mike Drob
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ba82210 ]

LUCENE-9647 Add back github action for Ant (#2163)



> Add back ant precommit on PR for branch 8
> -
>
> Key: LUCENE-9647
> URL: https://issues.apache.org/jira/browse/LUCENE-9647
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Minor
> Fix For: master (9.0)
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When migrating everything to gradle only, we accidentally deleted our 
> branch_8x PR precommit action.
> The file needs to be on master, but the branch specification should specify 
> branch_8x I believe.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-9647) Add back ant precommit on PR for branch 8

2020-12-23 Thread Mike Drob (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob reassigned LUCENE-9647:
-

Assignee: Mike Drob

> Add back ant precommit on PR for branch 8
> -
>
> Key: LUCENE-9647
> URL: https://issues.apache.org/jira/browse/LUCENE-9647
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When migrating everything to gradle only, we accidentally deleted our 
> branch_8x PR precommit action.
> The file needs to be on master, but the branch specification should specify 
> branch_8x I believe.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] dweiss commented on pull request #2163: LUCENE-9647 Add back github action for Ant

2020-12-23 Thread GitBox


dweiss commented on pull request #2163:
URL: https://github.com/apache/lucene-solr/pull/2163#issuecomment-750461143


   Why merge it with master though? It does not apply to master - only to 8x?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14608) Faster sorting for the /export handler

2020-12-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254250#comment-17254250
 ] 

ASF subversion and git services commented on SOLR-14608:


Commit 0a7ea0ef20d7b280b7d4381ad851024096addf27 in lucene-solr's branch 
refs/heads/jira/SOLR-14608-export from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0a7ea0e ]

SOLR-14608: Reuse docvalues when possible


> Faster sorting for the /export handler
> --
>
> Key: SOLR-14608
> URL: https://issues.apache.org/jira/browse/SOLR-14608
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> The largest cost of the export handler is the sorting. This ticket will 
> implement an improved algorithm for sorting that should greatly increase 
> overall throughput for the export handler.
> *The current algorithm is as follows:*
> Collect a bitset of matching docs. Iterate over that bitset, materialize the 
> top level ordinals for the sort fields in each document, and add them to a 
> priority queue of size 30,000. Then export the top 30,000 docs, turn off their 
> bits in the bit set, and iterate again until all docs are sorted and sent. 
> There are two performance bottlenecks with this approach:
> 1) Materializing the top level ordinals adds a huge amount of overhead to the 
> sorting process.
> 2) The size of the priority queue, 30,000, adds significant overhead to sorting 
> operations.
> *The new algorithm:*
> Has a top level *merge sort iterator* that wraps segment level iterators that 
> perform segment level priority queue sorts.
> *Segment level:*
> The segment level docset will be iterated and the segment level ordinals for 
> the sort fields will be materialized and added to a segment level priority 
> queue. As the segment level iterator pops docs from the priority queue the 
> top level ordinals for the sort fields are materialized. Because the top 
> level ordinals are materialized AFTER the sort, they only need to be looked 
> up when the segment level ordinal changes. This takes advantage of the sort 
> to limit the lookups into the top level ordinal structures. This also 
> eliminates redundant lookups of top level ordinals that occur during the 
> multiple passes over the matching docset.
> The segment level priority queues can be kept smaller than 30,000 to improve 
> performance of the sorting operations because the overall batch size will 
> still be 30,000 or greater when all the segment priority queue sizes are 
> added up. This allows for batch sizes much larger than 30,000 without using a 
> single large priority queue. The increased batch size means fewer iterations 
> over the matching docset and the decreased priority queue size means faster 
> sorting operations.
> *Top level:*
> A top level iterator does a merge sort over the segment level iterators by 
> comparing the top level ordinals materialized when the segment level docs are 
> popped from the segment level priority queues. This requires no extra memory 
> and will be very performant.
>  
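The two-level scheme above — small per-segment priority queues whose outputs are merge-sorted by a top-level iterator — can be sketched compactly. Python's heapq stands in for the segment-level structures and plain integers stand in for the materialized sort ordinals; this is purely illustrative, not Solr code:

```python
import heapq

def segment_iter(segment_docs):
    # Segment level: a priority queue over this segment's matches only, so each
    # queue stays small even when the overall export batch is large.
    heap = list(segment_docs)
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)

def export_sorted(segments):
    # Top level: merge sort over the already-sorted segment iterators; only one
    # candidate per segment is held at a time, so the merge needs no extra memory.
    return list(heapq.merge(*(segment_iter(s) for s in segments)))

print(export_sorted([[9, 3, 7], [4, 1], [8, 2, 6]]))  # -> [1, 2, 3, 4, 6, 7, 8, 9]
```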



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (SOLR-15060) Introduce DelegatingDirectoryFactory

2020-12-23 Thread Bruno Roustant (Jira)
Bruno Roustant created SOLR-15060:
-

 Summary: Introduce DelegatingDirectoryFactory
 Key: SOLR-15060
 URL: https://issues.apache.org/jira/browse/SOLR-15060
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Bruno Roustant
Assignee: Bruno Roustant


FilterDirectory already exists to delegate to a Directory, but there is no 
delegating DirectoryFactory.

+Use cases:+

A DelegatingDirectoryFactory could be used in SOLR-15051 to make 
BlobDirectoryFactory easily delegate to any DirectoryFactory (e.g. 
MMapDirectoryFactory). If the DelegatingDirectoryFactory delegation can be 
configured in solrconfig.xml then it allows any user to change the delegate 
DirectoryFactory without code.

A DelegatingDirectoryFactory could be used by an EncryptingDirectoryFactory 
that could encrypt/decrypt any delegate DirectoryFactory. Here again it should 
be configurable in solrconfig.xml.

+Problem:+
But currently DirectoryFactory delegation does not work with a delegate 
CachingDirectoryFactory because the get() method creates internally a Directory 
instance of a type which cannot be controlled by the caller (the 
DelegatingDirectoryFactory).

+Proposal:+
So here we propose to change the DirectoryFactory.get() method by adding a fourth 
parameter, Function<Directory, Directory>, that allows the caller to wrap the 
internal Directory with a custom FilterDirectory when it is created.

+Benefit:+
Hence we would have a DelegatingDirectoryFactory that could delegate the 
creation of some FilterDirectory.
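The proposed shape can be sketched with simplified stand-in types. Note that Directory, FilterDirectory, and DirectoryFactory are modeled here as minimal interfaces for illustration only; the real Lucene/Solr classes are richer, and the names RamDirectory and TrackingFilterDirectory are invented for the example.

```java
import java.util.function.Function;

public class WrappingFactorySketch {
    interface Directory { String describe(); }

    static class RamDirectory implements Directory {
        public String describe() { return "RamDirectory"; }
    }

    // Stand-in for a custom FilterDirectory supplied by the caller.
    static class TrackingFilterDirectory implements Directory {
        private final Directory in;
        TrackingFilterDirectory(Directory in) { this.in = in; }
        public String describe() { return "Filter(" + in.describe() + ")"; }
    }

    interface DirectoryFactory {
        // Proposed shape: a wrapping function applied to the internally
        // created Directory at creation time.
        Directory get(String path, Function<Directory, Directory> wrapping);

        // Existing shape: delegates to the new method with the identity function.
        default Directory get(String path) { return get(path, Function.identity()); }
    }

    public static void main(String[] args) {
        // A caching-style factory controls which Directory it creates internally...
        DirectoryFactory factory = (path, wrapping) -> wrapping.apply(new RamDirectory());
        // ...but the caller can now wrap that Directory without subclassing the factory.
        System.out.println(factory.get("/index").describe());
        System.out.println(factory.get("/index", TrackingFilterDirectory::new).describe());
    }
}
```

The key point is that the delegate factory still decides which concrete Directory it creates; the caller only gets to wrap it.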






[jira] [Updated] (SOLR-15060) Introduce DelegatingDirectoryFactory

2020-12-23 Thread Bruno Roustant (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Roustant updated SOLR-15060:
--
Description: 
FilterDirectory already exists to delegate to a Directory, but there is no 
delegating DirectoryFactory.

+Use cases:+

A DelegatingDirectoryFactory could be used in SOLR-15051 to make 
BlobDirectoryFactory easily delegate to any DirectoryFactory (e.g. 
MMapDirectoryFactory). If the DelegatingDirectoryFactory delegation can be 
configured in solrconfig.xml then it allows any user to change the delegate 
DirectoryFactory without code.

A DelegatingDirectoryFactory could be used by an EncryptingDirectoryFactory 
that could encrypt/decrypt any delegate DirectoryFactory. Here again it should 
be configurable in solrconfig.xml.

+Problem:+
But currently DirectoryFactory delegation does not work with a delegate 
CachingDirectoryFactory because the get() method creates internally a Directory 
instance of a type which cannot be controlled by the caller (the 
DelegatingDirectoryFactory).

+Proposal:+
 So here we propose to change the DirectoryFactory.get() method by adding a fourth 
parameter, Function<Directory, Directory>, that allows the caller to wrap the 
internal Directory with a custom FilterDirectory when it is created.

+Benefit:+
Hence we would have a DelegatingDirectoryFactory that could delegate the 
creation of some FilterDirectory.

  was:
FilterDirectory already exists to delegate to a Directory, but there is no 
delegating DirectoryFactory.

+Use cases:+

A DelegatingDirectoryFactory could be used in SOLR-15051 to make 
BlobDirectoryFactory easily delegate to any DirectoryFactory (e.g. 
MMapDirectoryFactory). If the DelegatingDirectoryFactory delegation can be 
configured in solrconfig.xml then it allows any user to change the delegate 
DirectoryFactory without code.

A DelegatingDirectoryFactory could be used by an EncryptingDirectoryFactory 
that could encrypt/decrypt any delegate DirectoryFactory. Here again it should 
be configurable in solrconfig.xml.

+Problem:
+But currently DirectoryFactory delegation does not work with a delegate 
CachingDirectoryFactory because the get() method creates internally a Directory 
instance of a type which cannot be controlled by the caller (the 
DelegatingDirectoryFactory).

+Proposal:+
So here we propose to change DirectoryFactory.get() method by adding a fourth 
parameter Function that allows the caller to wrap the 
internal Directory with a custom FilterDirectory when it is created.

+Benefit:
+Hence we would have a DelegatingDirectoryFactory that could delegate the 
creation of some FilterDirectory.


> Introduce DelegatingDirectoryFactory
> 
>
> Key: SOLR-15060
> URL: https://issues.apache.org/jira/browse/SOLR-15060
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bruno Roustant
>Assignee: Bruno Roustant
>Priority: Major
>
> FilterDirectory already exists to delegate to a Directory, but there is no 
> delegating DirectoryFactory.
> +Use cases:+
> A DelegatingDirectoryFactory could be used in SOLR-15051 to make 
> BlobDirectoryFactory easily delegate to any DirectoryFactory (e.g. 
> MMapDirectoryFactory). If the DelegatingDirectoryFactory delegation can be 
> configured in solrconfig.xml then it allows any user to change the delegate 
> DirectoryFactory without code.
> A DelegatingDirectoryFactory could be used by an EncryptingDirectoryFactory 
> that could encrypt/decrypt any delegate DirectoryFactory. Here again it 
> should be configurable in solrconfig.xml.
> +Problem:+
> But currently DirectoryFactory delegation does not work with a delegate 
> CachingDirectoryFactory because the get() method creates internally a 
> Directory instance of a type which cannot be controlled by the caller (the 
> DelegatingDirectoryFactory).
> +Proposal:+
>  So here we propose to change the DirectoryFactory.get() method by adding a 
> fourth parameter, Function<Directory, Directory>, that allows the caller to 
> wrap the internal Directory with a custom FilterDirectory when it is created.
> +Benefit:+
> Hence we would have a DelegatingDirectoryFactory that could delegate the 
> creation of some FilterDirectory.






[GitHub] [lucene-solr] bruno-roustant opened a new pull request #2166: SOLR-15060: Introduce DelegatingDirectoryFactory.

2020-12-23 Thread GitBox


bruno-roustant opened a new pull request #2166:
URL: https://github.com/apache/lucene-solr/pull/2166


   For the description, see https://issues.apache.org/jira/browse/SOLR-15060
   
   This PR adds a new method get(String path, DirContext dirContext, String 
rawLockType, Function<Directory, Directory> wrappingFunction) in 
DirectoryFactory, with a new wrappingFunction parameter. The existing 
get(String path, DirContext dirContext, String rawLockType) calls the new one 
with Function.identity().
   
   I wonder if this change should be 8x or 9x.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Updated] (SOLR-15060) Introduce DelegatingDirectoryFactory

2020-12-23 Thread Bruno Roustant (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Roustant updated SOLR-15060:
--
Description: 
FilterDirectory already exists to delegate to a Directory, but there is no 
delegating DirectoryFactory.

+Use cases:+

A DelegatingDirectoryFactory could be used in SOLR-15051 to make 
BlobDirectoryFactory easily delegate to any DirectoryFactory (e.g. 
MMapDirectoryFactory). If the DelegatingDirectoryFactory delegation can be 
configured in solrconfig.xml then it allows any user to change the delegate 
DirectoryFactory without code.

A DelegatingDirectoryFactory could be used by an EncryptingDirectoryFactory 
that could encrypt/decrypt any delegate DirectoryFactory. Here again it should 
be configurable in solrconfig.xml.

+Problem:+
 But currently DirectoryFactory delegation does not work with a delegate 
CachingDirectoryFactory because the get() method creates internally a Directory 
instance of a type which cannot be controlled by the caller (the 
DelegatingDirectoryFactory).

+Proposal:+
 So here we propose to change the DirectoryFactory.get() method by adding a fourth 
parameter, Function<Directory, Directory>, that allows the caller to wrap the 
internal Directory with a custom FilterDirectory when it is created. Hence we 
would have a DelegatingDirectoryFactory that could delegate the creation of 
some FilterDirectory.

  was:
FilterDirectory already exists to delegate to a Directory, but there is no 
delegating DirectoryFactory.

+Use cases:+

A DelegatingDirectoryFactory could be used in SOLR-15051 to make 
BlobDirectoryFactory easily delegate to any DirectoryFactory (e.g. 
MMapDirectoryFactory). If the DelegatingDirectoryFactory delegation can be 
configured in solrconfig.xml then it allows any user to change the delegate 
DirectoryFactory without code.

A DelegatingDirectoryFactory could be used by an EncryptingDirectoryFactory 
that could encrypt/decrypt any delegate DirectoryFactory. Here again it should 
be configurable in solrconfig.xml.

+Problem:+
But currently DirectoryFactory delegation does not work with a delegate 
CachingDirectoryFactory because the get() method creates internally a Directory 
instance of a type which cannot be controlled by the caller (the 
DelegatingDirectoryFactory).

+Proposal:+
 So here we propose to change DirectoryFactory.get() method by adding a fourth 
parameter Function that allows the caller to wrap the 
internal Directory with a custom FilterDirectory when it is created.

+Benefit:+
Hence we would have a DelegatingDirectoryFactory that could delegate the 
creation of some FilterDirectory.


> Introduce DelegatingDirectoryFactory
> 
>
> Key: SOLR-15060
> URL: https://issues.apache.org/jira/browse/SOLR-15060
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bruno Roustant
>Assignee: Bruno Roustant
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> FilterDirectory already exists to delegate to a Directory, but there is no 
> delegating DirectoryFactory.
> +Use cases:+
> A DelegatingDirectoryFactory could be used in SOLR-15051 to make 
> BlobDirectoryFactory easily delegate to any DirectoryFactory (e.g. 
> MMapDirectoryFactory). If the DelegatingDirectoryFactory delegation can be 
> configured in solrconfig.xml then it allows any user to change the delegate 
> DirectoryFactory without code.
> A DelegatingDirectoryFactory could be used by an EncryptingDirectoryFactory 
> that could encrypt/decrypt any delegate DirectoryFactory. Here again it 
> should be configurable in solrconfig.xml.
> +Problem:+
>  But currently DirectoryFactory delegation does not work with a delegate 
> CachingDirectoryFactory because the get() method creates internally a 
> Directory instance of a type which cannot be controlled by the caller (the 
> DelegatingDirectoryFactory).
> +Proposal:+
>  So here we propose to change the DirectoryFactory.get() method by adding a 
> fourth parameter, Function<Directory, Directory>, that allows the caller to 
> wrap the internal Directory with a custom FilterDirectory when it is created. 
> Hence we would have a DelegatingDirectoryFactory that could delegate the 
> creation of some FilterDirectory.






[GitHub] [lucene-solr] dweiss opened a new pull request #2167: LUCENE-9643: git revisions for .git-blame-ignore-revs need to be full

2020-12-23 Thread GitBox


dweiss opened a new pull request #2167:
URL: https://github.com/apache/lucene-solr/pull/2167


   









[GitHub] [lucene-solr] dweiss merged pull request #2167: LUCENE-9643: git revisions for .git-blame-ignore-revs need to be full

2020-12-23 Thread GitBox


dweiss merged pull request #2167:
URL: https://github.com/apache/lucene-solr/pull/2167


   









[jira] [Commented] (LUCENE-9643) Create .git-blame-ignore-revs file with revisions that only apply reformatting

2020-12-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254293#comment-17254293
 ] 

ASF subversion and git services commented on LUCENE-9643:
-

Commit 409a0ac125d4e524c6dc12fd87d1c92a57f67c54 in lucene-solr's branch 
refs/heads/master from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=409a0ac ]

LUCENE-9643: git revisions for .git-blame-ignore-revs need to be full sha1, not 
abbreviated. (#2167)



> Create .git-blame-ignore-revs file with revisions that only apply reformatting
> --
>
> Key: LUCENE-9643
> URL: https://issues.apache.org/jira/browse/LUCENE-9643
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Interim squashed merges from LUCENE-9570 to master are recorded in 
> .git-blame-ignore-revs.






[jira] [Commented] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254295#comment-17254295
 ] 

Dawid Weiss commented on LUCENE-9570:
-

Seems like those refs need to be complete sha checksums, not abbreviated ones. 
I corrected the .git-blame-ignore-revs and it's working for me now.

> Review code diffs after automatic formatting and correct problems before it 
> is applied
> --
>
> Key: LUCENE-9570
> URL: https://issues.apache.org/jira/browse/LUCENE-9570
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Review and correct all the javadocs before they're messed up by automatic 
> formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
> it should be relatively quick.
> *Reviewing diffs manually*
> * switch to branch jira/LUCENE-9570 which the PR is based on:
> {code}
> git remote add dweiss g...@github.com:dweiss/lucene-solr.git
> git fetch dweiss
> git checkout jira/LUCENE-9570
> {code}
> * Open gradle/validation/spotless.gradle and locate the project/ package you 
> wish to review. Enable it in spotless.gradle by creating a corresponding 
> switch case block (refer to existing examples), for example:
> {code}
>   case ":lucene:highlighter":
> target "src/**"
> targetExclude "**/resources/**", "**/overview.html"
> break
> {code}
> * Reformat the code:
> {code}
> gradlew tidy && git diff -w > /tmp/diff.patch && git status
> {code}
> * Look at what has changed (git status) and review the differences manually 
> (/tmp/diff.patch). If everything looks ok, commit it directly to 
> jira/LUCENE-9570 or make a PR against that branch.
> {code}
> git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
> {code}
> *Packages remaining* (put your name next to a module you're working on to 
> avoid duplication).
> *  case ":lucene:analysis:common": (partially done, dweiss)
> *  case ":lucene:analysis:icu":
> *  case ":lucene:analysis:kuromoji":
> *  case ":lucene:analysis:morfologik":
> *  case ":lucene:analysis:nori":
>  * case ":lucene:analysis:opennlp":
>  * case ":lucene:analysis:phonetic":
>  * case ":lucene:analysis:smartcn":
>  * case ":lucene:analysis:stempel":
>  * case ":lucene:backward-codecs":
>  * case ":lucene:benchmark":
>  * case ":lucene:classification":
>  * case ":lucene:codecs":
>  * case ":lucene:demo":
>  * case ":lucene:expressions":
>  * case ":lucene:facet":
>  * case ":lucene:grouping":
>  * case ":lucene:join":
>  * case ":lucene:luke":
>  * case ":lucene:memory":
>  * case ":lucene:misc":
>  * case ":lucene:monitor":
>  * case ":lucene:queries":
>  * case ":lucene:queryparser":
>  * case ":lucene:replicator":
>  * case ":lucene:sandbox":
>  * case ":lucene:spatial3d":
>  * case ":lucene:spatial-extras":
>  * case ":lucene:suggest":
>  * case ":lucene:test-framework":






[GitHub] [lucene-solr] dweiss commented on pull request #2163: LUCENE-9647 Add back github action for Ant

2020-12-23 Thread GitBox


dweiss commented on pull request #2163:
URL: https://github.com/apache/lucene-solr/pull/2163#issuecomment-750516702


   bq. It needs to be in the default branch. 
   
   Oh, ok. So it needs to be on the default branch even if it applies only to a 
filtered set of branches? Got it.









[jira] [Updated] (SOLR-15048) collapse + query elevation behaves inconsistently w/ 'null group' docs depending on group head selector

2020-12-23 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-15048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter updated SOLR-15048:
--
Attachment: SOLR-15048.patch
Status: Open  (was: Open)

There are two distinct but interconnected problems that were contributing to 
the symptoms I was seeing before (which were only a subset of the actual 
problems involved, due to the natural order of the docs in my test):

* the abstract base "strategy" classes expect that (in the nullPolicy=expand 
case) _if_ a null doc was boosted, then it would not *also* be listed in the 
'nullScores' array (and the nullScoreIndex would not be increased in the "null 
group but boosted" code path of the finish() method) -- but not all of the 
concrete strategy impls were returning immediately when detecting a boosted 
doc, meaning that "extra" scores would be in the nullScores array, leading to 
using the wrong index to try and find the score of (non-boosted) null docs
* the finish() method in the abstract base "strategy" class takes care of purging 
any "group heads" that have the same group value as a known boosted doc, but 
this code wasn't also cleaning the "nullDoc" group head of the null group (ie: 
it assumed nullPolicy=ignore or nullPolicy=expand)

I'm attaching a patch with more beefed-up testing, and the minimal changes to 
get all of the various strategies/collectors in agreement on how to deal with 
null groups -- I plan to do a lot of deduplication refactoring prior to 
committing (copy/paste coding appears to be how we got into this mess) but I 
wanted to post the patch in this state since it makes it more evident where the 
"wrong" code was.
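The first problem above can be boiled down to a small sketch. This is illustrative code, not Solr's actual collapse strategies; the method and variable names are invented to show the shape of the bug: when the strategy fails to skip a boosted null-group doc, its score leaks into nullScores, and every later lookup by index reads the wrong entry.

```java
import java.util.ArrayList;
import java.util.List;

public class NullScoresSketch {
    // returnEarly=true models the correct strategies, which skip boosted docs;
    // returnEarly=false models the buggy ones, which fall through and record
    // the boosted doc's score anyway.
    static List<Float> collectNullScores(float[] scores, boolean[] boosted,
                                         boolean returnEarly) {
        List<Float> nullScores = new ArrayList<>();
        for (int i = 0; i < scores.length; i++) {
            if (boosted[i]) {
                if (returnEarly) continue; // boosted docs are handled elsewhere
                // buggy path: no early return, score leaks into nullScores
            }
            nullScores.add(scores[i]);
        }
        return nullScores;
    }

    public static void main(String[] args) {
        float[] scores = {1f, 2f, 3f};
        boolean[] boosted = {false, true, false}; // doc 1 is boosted
        System.out.println("correct=" + collectNullScores(scores, boosted, true));
        System.out.println("buggy=" + collectNullScores(scores, boosted, false));
    }
}
```

With the extra entry present, any code that walks nullScores by a running index is off by one for every null doc after the boosted one.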

> collapse + query elevation behaves inconsistently w/ 'null group' docs 
> depending on group head selector
> --
>
> Key: SOLR-15048
> URL: https://issues.apache.org/jira/browse/SOLR-15048
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-15048.patch, SOLR-15048.patch, SOLR-15048.patch
>
>
> While working on SOLR-15047, I realized I wasn't really clear on what the 
> _expected_ semantics were supposed to be when "boosting" 
> docs that had null values in the collapse field.
> I expanded on my test from that Jira, to demonstrate the logic I (thought) I 
> understood from the Ord based collector - but then discovered that depending 
> on the group head selector used (ie: OrdScoreCollector vs 
> OrdFieldValueCollector+OrdIntStrategy vs 
> OrdFieldValueCollector+OrdValueSourceStrategy, etc...) you get different 
> behavior - not just in what group head is selected, but even when the 
> behavior should be functionally equivalent, you can get different sets of 
> groups. (even for simple string field collapsing, independent of the bugs in 
> numeric field collapsing).
>  
> I have not dug into WTF is happening here, but I'll attach my WIP test






[jira] [Commented] (SOLR-14923) Indexing performance is unacceptable when child documents are involved

2020-12-23 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254315#comment-17254315
 ] 

David Smiley commented on SOLR-14923:
-

Today I made further modifications to NestedShardedAtomicUpdateTest so that it 
didn't always call /get to make some assertions on the results -- I guarded 
this with a random boolean.  I'm aware that /get can _sometimes_ trigger a new 
realtime searcher, and so I wanted to test that the side effect of this wasn't 
masking bugs.  It did find a bug :-(. And I tracked it down; it's limited to 
in-place-updates of a child doc because these updates are added to the update 
log ID'ed by the document itself, not the root ID it is a part of.  
Consequently, a /get later of the root ID asking for child docs (fl=*,[child]) 
can slip past the updateLog (because the ulog has no updates of this root ID) 
and the _existing_ realtimeSearcher will be used, returning a stale value of 
the updated child document.  This is despite {{mustUseRealtimeSearcher==true}} 
because that only matters if a doc is found in the updateLog.  RTG goes to 
great lengths (with added complexity) to avoid calling 
{{ulog.openRealtimeSearcher}}.

I've tried some ideas and I'll think more on a fix for this one.
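The lookup order described above can be sketched with plain maps. This is a hypothetical illustration, not Solr's RTG code: the two maps stand in for the update log and the last-opened realtime searcher, and the document ids are invented. The point is that an in-place child update is keyed by the child's own id, so a /get by the root id misses the update log entirely and serves the stale snapshot.

```java
import java.util.HashMap;
import java.util.Map;

public class RtgStaleReadSketch {
    public static void main(String[] args) {
        Map<String, String> updateLog = new HashMap<>();        // keyed by doc id
        Map<String, String> realtimeSearcher = new HashMap<>(); // last opened snapshot
        realtimeSearcher.put("root-1", "root-1 with child v1");

        // In-place update of the child is logged under the child's own id...
        updateLog.put("child-1", "child v2");

        // ...so RTG by the root id finds nothing in the ulog, never decides it
        // must open a new realtime searcher, and serves the stale snapshot.
        String result = updateLog.getOrDefault("root-1", realtimeSearcher.get("root-1"));
        System.out.println(result);
    }
}
```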

> Indexing performance is unacceptable when child documents are involved
> --
>
> Key: SOLR-14923
> URL: https://issues.apache.org/jira/browse/SOLR-14923
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update, UpdateRequestProcessors
>Affects Versions: 8.3, 8.4, 8.5, 8.6, 8.7, master (9.0)
>Reporter: Thomas Wöckinger
>Priority: Critical
>  Labels: performance, pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Parallel indexing does not make sense at the moment when child documents are used.
> The org.apache.solr.update.processor.DistributedUpdateProcessor checks at the 
> end of the method doVersionAdd if Ulog caches should be refreshed.
> This check will return true if any child document is included in the 
> AddUpdateCommand.
> If so, ulog.openRealtimeSearcher() is called; this call is very expensive 
> and executed in a synchronized block of the UpdateLog instance, therefore all 
> other operations on the UpdateLog are blocked too.
> Because every important UpdateLog method (add, delete, ...) is done using a 
> synchronized block, almost every operation is blocked.
> This reduces multi threaded index update to a single thread behavior.
> The described behavior is not depending on any option of the UpdateRequest, 
> so it does not make any difference if 'waitFlush', 'waitSearcher' or 
> 'softCommit'  is true or false.
> The described behavior makes the usage of ChildDocuments useless, because the 
> performance is unacceptable.
>  
>  
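The contention pattern described in the report can be demonstrated in miniature. This is a sketch, not Solr code: the shared monitor stands in for the UpdateLog instance, and the sleep stands in for the expensive openRealtimeSearcher() call. Despite eight indexing threads, at most one is ever inside the critical section, which is exactly the single-thread behavior the report describes.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class UlogContentionSketch {
    static final Object monitor = new Object();      // stands in for the UpdateLog
    static final AtomicInteger inside = new AtomicInteger();
    static final AtomicInteger maxInside = new AtomicInteger();

    static void add() {
        synchronized (monitor) {                     // every ulog op takes the same lock
            int now = inside.incrementAndGet();
            maxInside.accumulateAndGet(now, Math::max);
            try {
                Thread.sleep(5);                     // stand-in for openRealtimeSearcher()
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            inside.decrementAndGet();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(UlogContentionSketch::add);
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // Despite 8 threads, at most one was ever inside the critical section.
        System.out.println("maxConcurrent=" + maxInside.get());
    }
}
```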






[jira] [Updated] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-9570:
---
Description: 
Review and correct all the javadocs before they're messed up by automatic 
formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
it should be relatively quick.

*Reviewing diffs manually*
 * switch to branch jira/LUCENE-9570 which the PR is based on:
{code:java}
git remote add dweiss g...@github.com:dweiss/lucene-solr.git
git fetch dweiss
git checkout jira/LUCENE-9570
{code}

 * Open gradle/validation/spotless.gradle and locate the project/ package you 
wish to review. Enable it in spotless.gradle by creating a corresponding switch 
case block (refer to existing examples), for example:
{code:java}
  case ":lucene:highlighter":
target "src/**"
targetExclude "**/resources/**", "**/overview.html"
break
{code}

 * Reformat the code:
{code:java}
gradlew tidy && git diff -w > /tmp/diff.patch && git status
{code}

 * Look at what has changed (git status) and review the differences manually 
(/tmp/diff.patch). If everything looks ok, commit it directly to 
jira/LUCENE-9570 or make a PR against that branch.
{code:java}
git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
{code}

*Packages remaining* (put your name next to a module you're working on to avoid 
duplication).
 * case ":lucene:analysis:common": (partially done, dweiss)
 * case ":lucene:analysis:icu":
 * case ":lucene:analysis:kuromoji":
 * case ":lucene:analysis:morfologik":
 * case ":lucene:analysis:nori":
 * case ":lucene:analysis:opennlp":
 * case ":lucene:analysis:phonetic":
 * case ":lucene:analysis:smartcn":
 * case ":lucene:analysis:stempel":
 * case ":lucene:backward-codecs":
 * case ":lucene:benchmark":
 * case ":lucene:classification":
 * case ":lucene:codecs":
 * case ":lucene:demo":
 * case ":lucene:expressions":
 * case ":lucene:facet":
 * case ":lucene:grouping":
 * case ":lucene:join":
 * case ":lucene:luke":
 * case ":lucene:memory":
 * case ":lucene:misc":
 * case ":lucene:monitor":
 * case ":lucene:queries": (Erick Erickson)
 * case ":lucene:queryparser":
 * case ":lucene:replicator":
 * case ":lucene:sandbox":
 * case ":lucene:spatial3d":
 * case ":lucene:spatial-extras":
 * case ":lucene:suggest":
 * case ":lucene:test-framework":

  was:
Review and correct all the javadocs before they're messed up by automatic 
formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
it should be relatively quick.

*Reviewing diffs manually*

* switch to branch jira/LUCENE-9570 which the PR is based on:
{code}
git remote add dweiss g...@github.com:dweiss/lucene-solr.git
git fetch dweiss
git checkout jira/LUCENE-9570
{code}
* Open gradle/validation/spotless.gradle and locate the project/ package you 
wish to review. Enable it in spotless.gradle by creating a corresponding switch 
case block (refer to existing examples), for example:
{code}
  case ":lucene:highlighter":
target "src/**"
targetExclude "**/resources/**", "**/overview.html"
break
{code}
* Reformat the code:
{code}
gradlew tidy && git diff -w > /tmp/diff.patch && git status
{code}
* Look at what has changed (git status) and review the differences manually 
(/tmp/diff.patch). If everything looks ok, commit it directly to 
jira/LUCENE-9570 or make a PR against that branch.
{code}
git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
{code}

*Packages remaining* (put your name next to a module you're working on to avoid 
duplication).

*  case ":lucene:analysis:common": (partially done, dweiss)
*  case ":lucene:analysis:icu":
*  case ":lucene:analysis:kuromoji":
*  case ":lucene:analysis:morfologik":
*  case ":lucene:analysis:nori":
 * case ":lucene:analysis:opennlp":
 * case ":lucene:analysis:phonetic":
 * case ":lucene:analysis:smartcn":
 * case ":lucene:analysis:stempel":
 * case ":lucene:backward-codecs":
 * case ":lucene:benchmark":
 * case ":lucene:classification":
 * case ":lucene:codecs":
 * case ":lucene:demo":
 * case ":lucene:expressions":
 * case ":lucene:facet":
 * case ":lucene:grouping":
 * case ":lucene:join":
 * case ":lucene:luke":
 * case ":lucene:memory":
 * case ":lucene:misc":
 * case ":lucene:monitor":
 * case ":lucene:queries":
 * case ":lucene:queryparser":
 * case ":lucene:replicator":
 * case ":lucene:sandbox":
 * case ":lucene:spatial3d":
 * case ":lucene:spatial-extras":
 * case ":lucene:suggest":
 * case ":lucene:test-framework":



> Review code diffs after automatic formatting and correct problems before it 
> is applied
> 

[jira] [Commented] (LUCENE-9570) Review code diffs after automatic formatting and correct problems before it is applied

2020-12-23 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254382#comment-17254382
 ] 

Erick Erickson commented on LUCENE-9570:


[~dweiss] I've run tidy, checked the diffs, run buildsite to check a couple of 
javadocs that looked odd (they are OK) and am running check. Assuming check 
passes, is there anything else you recommend before pushing to your repo?

> Review code diffs after automatic formatting and correct problems before it 
> is applied
> --
>
> Key: LUCENE-9570
> URL: https://issues.apache.org/jira/browse/LUCENE-9570
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Review and correct all the javadocs before they're messed up by automatic 
> formatting. Apply project-by-project, review diff, correct. Lots of diffs but 
> it should be relatively quick.
> *Reviewing diffs manually*
>  * switch to branch jira/LUCENE-9570 which the PR is based on:
> {code:java}
> git remote add dweiss g...@github.com:dweiss/lucene-solr.git
> git fetch dweiss
> git checkout jira/LUCENE-9570
> {code}
>  * Open gradle/validation/spotless.gradle and locate the project/ package you 
> wish to review. Enable it in spotless.gradle by creating a corresponding 
> switch case block (refer to existing examples), for example:
> {code:java}
>   case ":lucene:highlighter":
> target "src/**"
> targetExclude "**/resources/**", "**/overview.html"
> break
> {code}
>  * Reformat the code:
> {code:java}
> gradlew tidy && git diff -w > /tmp/diff.patch && git status
> {code}
>  * Look at what has changed (git status) and review the differences manually 
> (/tmp/diff.patch). If everything looks ok, commit it directly to 
> jira/LUCENE-9570 or make a PR against that branch.
> {code:java}
> git commit -am ":lucene:core - src/**/org/apache/lucene/document/**"
> {code}
> *Packages remaining* (put your name next to a module you're working on to 
> avoid duplication).
>  * case ":lucene:analysis:common": (partially done, dweiss)
>  * case ":lucene:analysis:icu":
>  * case ":lucene:analysis:kuromoji":
>  * case ":lucene:analysis:morfologik":
>  * case ":lucene:analysis:nori":
>  * case ":lucene:analysis:opennlp":
>  * case ":lucene:analysis:phonetic":
>  * case ":lucene:analysis:smartcn":
>  * case ":lucene:analysis:stempel":
>  * case ":lucene:backward-codecs":
>  * case ":lucene:benchmark":
>  * case ":lucene:classification":
>  * case ":lucene:codecs":
>  * case ":lucene:demo":
>  * case ":lucene:expressions":
>  * case ":lucene:facet":
>  * case ":lucene:grouping":
>  * case ":lucene:join":
>  * case ":lucene:luke":
>  * case ":lucene:memory":
>  * case ":lucene:misc":
>  * case ":lucene:monitor":
>  * case ":lucene:queries": (Erick Erickson)
>  * case ":lucene:queryparser":
>  * case ":lucene:replicator":
>  * case ":lucene:sandbox":
>  * case ":lucene:spatial3d":
>  * case ":lucene:spatial-extras":
>  * case ":lucene:suggest":
>  * case ":lucene:test-framework":





