[jira] [Commented] (SOLR-14993) Unable to download zookeeper files of 1byte in size

2020-11-11 Thread Allen Sooredoo (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229875#comment-17229875
 ] 

Allen Sooredoo commented on SOLR-14993:
---

Hi,
 
sorry about that, I wanted to do a very quick fix in GitHub and link it here, 
but it turns out I need much more time to write all the tests and submit a 
proper PR.
 
Use case:
- I add to every configset a file called id.txt which contains the id of the 
configset (a positive integer)
- When I want to update it (to add new synonyms, for instance), I first use 
`zkClient.downloadConfig(configSetName, directory)` (SolrJ) to download the 
configset and check the id inside the file before proceeding.
 
Issue:
- If the id fits in 1 byte, the file is created but its content is not 
downloaded because of this check:

private static int copyDataDown(SolrZkClient zkClient, String zkPath, File file)
    throws IOException, KeeperException, InterruptedException {
  byte[] data = zkClient.getData(zkPath, null, null, true);
  if (data != null && data.length > 1) { // There are apparently basically empty ZNodes.
 
Changing it to `data.length > 0` would solve my problem; however, I may need 
more time to test that the "empty ZNodes" issue doesn't reappear as a side 
effect. Do those "empty ZNodes" have a size of 1 byte? Why not 0?
 
I'm curious whether the check `> 1` (instead of the expected `> 0`) is a typo 
or an actual fix for a known problem. Do you have more information about that?
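To make the 1-byte boundary concrete, here is a self-contained sketch of the guard's behavior. This is a stand-in, not the actual ZkMaintenanceUtils code: the `minLength` parameter and class name are invented solely to contrast the current `> 1` check with the proposed `> 0` one.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Stand-in for the guard in copyDataDown: `fetched` plays the role of
// zkClient.getData(...). With minLength = 1 (the current check) a 1-byte
// id.txt is skipped; with minLength = 0 (the proposed fix) it is written.
public class CopyDataDownSketch {
  static int copyDataDown(byte[] fetched, File file, int minLength) throws IOException {
    if (fetched != null && fetched.length > minLength) {
      Files.write(file.toPath(), fetched);
      return fetched.length;
    }
    return 0; // treated as a "basically empty" ZNode: nothing is written
  }

  public static void main(String[] args) throws IOException {
    byte[] oneByteId = "7".getBytes(); // a configset id that fits in one byte
    File out = File.createTempFile("id", ".txt");
    System.out.println(copyDataDown(oneByteId, out, 1)); // current check: drops the byte
    System.out.println(copyDataDown(oneByteId, out, 0)); // proposed check: writes 1 byte
    out.delete();
  }
}
```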
 

> Unable to download zookeeper files of 1byte in size
> ---
>
> Key: SOLR-14993
> URL: https://issues.apache.org/jira/browse/SOLR-14993
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud, SolrJ
>Affects Versions: 8.5.1
>Reporter: Allen Sooredoo
>Priority: Minor
>
> When downloading a file from Zookeeper using the Solrj client, files of size 
> 1 byte are ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org




[jira] [Comment Edited] (SOLR-14993) Unable to download zookeeper files of 1byte in size

2020-11-11 Thread Allen Sooredoo (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229875#comment-17229875
 ] 

Allen Sooredoo edited comment on SOLR-14993 at 11/11/20, 9:49 AM:
--

Hi,
  
 sorry about that, I wanted to do a very quick fix in GitHub and link it here, 
but it turns out I need much more time to write all the tests and submit a 
proper PR.
  
 Use case:
 - I add to every configset a file called id.txt which contains the id of the 
configset (a positive integer)
 - When I want to update it (to add new synonyms, for instance), I first use 
`zkClient.downloadConfig(configSetName, directory)` (SolrJ) to download the 
configset and check the id inside the file before proceeding.
  
 Issue:
 - If the id fits in 1 byte, the file is created but its content is not 
downloaded because of this check:

 private static int copyDataDown(SolrZkClient zkClient, String zkPath, File file)
     throws IOException, KeeperException, InterruptedException {
   byte[] data = zkClient.getData(zkPath, null, null, true);
   if (data != null && data.length > 1) { // There are apparently basically empty ZNodes.

https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/cloud/ZkMaintenanceUtils.java

 
 Changing it to `data.length > 0` would solve my problem; however, I may need 
more time to test that the "empty ZNodes" issue doesn't reappear as a side 
effect. Do those "empty ZNodes" have a size of 1 byte? Why not 0?
  
 I'm curious whether the check `> 1` (instead of the expected `> 0`) is a typo 
or an actual fix for a known problem. Do you have more information about that?
  








[GitHub] [lucene-solr] dweiss commented on a change in pull request #2068: LUCENE-8982: Separate out native code to another module to allow cpp build with gradle

2020-11-11 Thread GitBox


dweiss commented on a change in pull request #2068:
URL: https://github.com/apache/lucene-solr/pull/2068#discussion_r521238097



##
File path: .gitignore
##
@@ -8,6 +8,10 @@ build/
 /.idea/
 #IntelliJ creates this folder, ignore.
 /dev-tools/missing-doclet/out/
+*.iml

Review comment:
   Hmm... how come you have IntelliJ's old-style files and not the 
folder-based layout? :)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14975) Optimize CoreContainer.getAllCoreNames and getLoadedCoreNames

2020-11-11 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229896#comment-17229896
 ] 

ASF subversion and git services commented on SOLR-14975:


Commit 67f9245ce30bb21d3976c05548856c81cf7ee8a1 in lucene-solr's branch 
refs/heads/master from Bruno Roustant
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=67f9245 ]

SOLR-14975: Optimize CoreContainer.getAllCoreNames and getLoadedCoreNames.
Also optimize getCoreDescriptors.


> Optimize CoreContainer.getAllCoreNames and getLoadedCoreNames 
> --
>
> Key: SOLR-14975
> URL: https://issues.apache.org/jira/browse/SOLR-14975
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> The methods CoreContainer.getAllCoreNames and getLoadedCoreNames hold a lock 
> while they grab core names to put into a TreeSet.  When there are *many* 
> cores, this delay is noticeable.  Holding this lock effectively blocks 
> queries since queries lookup a core; so it's critically important that these 
> methods are *fast*.  The tragedy here is that some callers merely want to 
> know if a particular name is in the set, or what the aggregated size is.  
> Some callers want to iterate the names but don't really care what the 
> iteration order is.
> I propose that some callers of these two methods find suitable alternatives, 
> like getCoreDescriptor to check for null.  And I propose that these methods 
> return a HashSet -- no order.  If the caller wants it sorted, it can do so 
> itself.
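The proposal above can be sketched as follows. `CoreNameRegistry` is a hypothetical stand-in, not the actual CoreContainer code: the point is that the lock is held only for an O(n) unordered copy, and callers who want order sort outside the lock.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Illustrative registry: names are copied into an unordered HashSet while the
// lock is held, instead of being sorted into a TreeSet under the lock.
class CoreNameRegistry {
  private final Map<String, Object> cores = new HashMap<>();

  void put(String name) {
    synchronized (cores) { cores.put(name, new Object()); }
  }

  // O(n) copy under the lock, instead of an O(n log n) TreeSet build.
  Set<String> getLoadedCoreNames() {
    synchronized (cores) {
      return new HashSet<>(cores.keySet());
    }
  }

  // A caller that really wants sorted output pays for the sort outside the lock.
  Set<String> getLoadedCoreNamesSorted() {
    return new TreeSet<>(getLoadedCoreNames());
  }
}
```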






[jira] [Commented] (SOLR-14975) Optimize CoreContainer.getAllCoreNames and getLoadedCoreNames

2020-11-11 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229899#comment-17229899
 ] 

ASF subversion and git services commented on SOLR-14975:


Commit 91ee53d41873ef235e5a6ac6c67d03988e509562 in lucene-solr's branch 
refs/heads/master from Bruno Roustant
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=91ee53d ]

SOLR-14975: Add entry in CHANGES.txt





[jira] [Commented] (SOLR-14683) Review the metrics API to ensure consistent placeholders for missing values

2020-11-11 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229939#comment-17229939
 ] 

ASF subversion and git services commented on SOLR-14683:


Commit 9454c57430bd33c5e94e0c2cc936760c1f66795c in lucene-solr's branch 
refs/heads/branch_8x from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9454c57 ]

SOLR-14683: Metrics API should ensure consistent placeholders for missing 
values.


> Review the metrics API to ensure consistent placeholders for missing values
> ---
>
> Key: SOLR-14683
> URL: https://issues.apache.org/jira/browse/SOLR-14683
> Project: Solr
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: SOLR-14683.patch, SOLR-14683.patch
>
>
> Spin-off from SOLR-14657. Some gauges can legitimately be missing or in an 
> unknown state at some points in time, e.g. during SolrCore startup or shutdown.
> Currently the API returns placeholders with either impossible values for 
> numeric gauges (such as index size -1) or empty maps / strings for other 
> non-numeric gauges.
> [~hossman] noticed that the values for these placeholders may be misleading, 
> depending on how the user treats them - if the client has no special logic to 
> treat them as "missing values" it may erroneously treat them as valid data. 
> E.g. numeric values of -1 or 0 may severely skew averages and produce 
> misleading peaks / valleys in metrics histories.
> On the other hand returning a literal {{null}} value instead of the expected 
> number may also cause unexpected client issues - although in this case it's 
> clearer that there's actually no data available, so long-term this may be a 
> better strategy than returning impossible values, even if it means that the 
> client should learn to handle {{null}} values appropriately.
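The skew described above is easy to demonstrate with made-up numbers; `indexSizes` and the -1 placeholder below are illustrative, not real metrics output.

```java
import java.util.Arrays;
import java.util.List;
import java.util.OptionalDouble;

// A -1 placeholder for a missing index-size gauge drags the average down and
// produces a spurious valley; skipping the missing sample does not.
public class PlaceholderSkew {
  public static void main(String[] args) {
    List<Integer> indexSizes = Arrays.asList(100, 100, -1); // -1 = "missing"
    OptionalDouble naive = indexSizes.stream().mapToInt(Integer::intValue).average();
    OptionalDouble filtered = indexSizes.stream()
        .filter(v -> v >= 0)                     // treat the placeholder as absent
        .mapToInt(Integer::intValue).average();
    System.out.println(naive.getAsDouble());     // ~66.3, a misleading dip
    System.out.println(filtered.getAsDouble());  // 100.0
  }
}
```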






[GitHub] [lucene-solr] sigram commented on a change in pull request #2065: SOLR-14977 : ContainerPlugins should be configurable

2020-11-11 Thread GitBox


sigram commented on a change in pull request #2065:
URL: https://github.com/apache/lucene-solr/pull/2065#discussion_r521322184



##
File path: solr/core/src/test/org/apache/solr/handler/TestContainerPlugin.java
##
@@ -366,7 +381,7 @@ public void m2(SolrQueryRequest req, SolrQueryResponse rsp) 
{
 
   }
 
-  public static class CConfig extends PluginMeta {
+  public static class CConfig implements ReflectMapWriter {

Review comment:
   I don't understand this change. Do all config beans have to implement 
`ReflectMapWriter`?
   
   If so, then that bound is what the generic type in `ConfigurablePlugin` 
should use. If not, then this test is misleading again, because it suggests 
you have to implement additional interfaces to have a usable config bean.








[GitHub] [lucene-solr] sigram commented on a change in pull request #2065: SOLR-14977 : ContainerPlugins should be configurable

2020-11-11 Thread GitBox


sigram commented on a change in pull request #2065:
URL: https://github.com/apache/lucene-solr/pull/2065#discussion_r521323363



##
File path: solr/core/src/java/org/apache/solr/api/ContainerPluginsRegistry.java
##
@@ -114,6 +118,16 @@ public synchronized ApiInfo getPlugin(String name) {
 return currentPlugins.get(name);
   }
 
+  static class PluginMetaHolder {
+private final Map original;
+private final PluginMeta meta;

Review comment:
   It made sense when all plugins were request handlers, which is no longer 
true. For plugins that are not handlers, a "standard" `pathPrefix` property 
doesn't make sense.








[GitHub] [lucene-solr] sigram commented on a change in pull request #2065: SOLR-14977 : ContainerPlugins should be configurable

2020-11-11 Thread GitBox


sigram commented on a change in pull request #2065:
URL: https://github.com/apache/lucene-solr/pull/2065#discussion_r521325260



##
File path: solr/core/src/java/org/apache/solr/api/AnnotatedApi.java
##
@@ -62,6 +63,8 @@
 
 public class AnnotatedApi extends Api implements PermissionNameProvider , 
Closeable {
   private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+  private static final ObjectMapper mapper = 
SolrJacksonAnnotationInspector.createObjectMapper()

Review comment:
   Oh, I didn't notice before that it was created for each request. That 
was horrible!
   
   Still, we already have one instance here and another (configured the same 
way) in `ContainerPluginsRegistry`. We should probably expose this instance 
here and reuse it in `ContainerPluginsRegistry`.
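The reuse being suggested can be sketched like this. `Mapper` is a hypothetical stand-in for the Jackson ObjectMapper configured via `SolrJacksonAnnotationInspector` (all names here are invented); the construction counter just makes the once-only behavior observable.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for an expensive, thread-safe-once-built helper such as a
// configured ObjectMapper.
final class Mapper {
  static final AtomicInteger CONSTRUCTED = new AtomicInteger();
  Mapper() { CONSTRUCTED.incrementAndGet(); } // imagine heavy configuration here
}

final class AnnotatedApiLike {
  // One shared instance, exposed so another registry can reuse the same
  // configuration instead of building its own identically-configured copy.
  static final Mapper MAPPER = new Mapper();

  // Per-request code only reads the shared field; nothing is constructed here.
  Mapper mapperForRequest() {
    return MAPPER;
  }
}
```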










[GitHub] [lucene-solr] sigram commented on a change in pull request #2065: SOLR-14977 : ContainerPlugins should be configurable

2020-11-11 Thread GitBox


sigram commented on a change in pull request #2065:
URL: https://github.com/apache/lucene-solr/pull/2065#discussion_r521325883



##
File path: solr/core/src/java/org/apache/solr/api/ConfigurablePlugin.java
##
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.api;
+
+/**Implement this interface if your plugin needs to accept some configuration
+ * 
+ * @param  the configuration Object type
+ */
+public interface ConfigurablePlugin {

Review comment:
   Should this be `` ? See my comment below in 
`TestContainerPlugin.java`











[jira] [Updated] (SOLR-14683) Review the metrics API to ensure consistent placeholders for missing values

2020-11-11 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki updated SOLR-14683:

Fix Version/s: (was: master (9.0))
   8.8

> Review the metrics API to ensure consistent placeholders for missing values
> ---
>
> Key: SOLR-14683
> URL: https://issues.apache.org/jira/browse/SOLR-14683
> Project: Solr
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.8
>
> Attachments: SOLR-14683.patch, SOLR-14683.patch
>
>
> Spin-off from SOLR-14657. Some gauges can legitimately be missing or in an 
> unknown state at some points in time, e.g. during SolrCore startup or shutdown.
> Currently the API returns placeholders with either impossible values for 
> numeric gauges (such as index size -1) or empty maps / strings for other 
> non-numeric gauges.
> [~hossman] noticed that the values for these placeholders may be misleading, 
> depending on how the user treats them - if the client has no special logic to 
> treat them as "missing values" it may erroneously treat them as valid data. 
> E.g. numeric values of -1 or 0 may severely skew averages and produce 
> misleading peaks / valleys in metrics histories.
> On the other hand returning a literal {{null}} value instead of the expected 
> number may also cause unexpected client issues - although in this case it's 
> clearer that there's actually no data available, so long-term this may be a 
> better strategy than returning impossible values, even if it means that the 
> client should learn to handle {{null}} values appropriately.






[jira] [Resolved] (SOLR-14749) Provide a clean API for cluster-level event processing

2020-11-11 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-14749.
-
Resolution: Fixed

> Provide a clean API for cluster-level event processing
> --
>
> Key: SOLR-14749
> URL: https://issues.apache.org/jira/browse/SOLR-14749
> Project: Solr
>  Issue Type: Improvement
>  Components: AutoScaling
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
>  Labels: clean-api
> Fix For: master (9.0)
>
>  Time Spent: 22h
>  Remaining Estimate: 0h
>
> This is a companion issue to SOLR-14613 and it aims at providing a clean, 
> strongly typed API for the functionality formerly known as "triggers" - that 
> is, a component for generating cluster-level events corresponding to changes 
> in the cluster state, and a pluggable API for processing these events.
> The 8x triggers have been removed so this functionality is currently missing 
> in 9.0. However, this functionality is crucial for implementing the automatic 
> collection repair and re-balancing as the cluster state changes (nodes going 
> down / up, becoming overloaded / unused / decommissioned, etc).
> For this reason we need this API and a default implementation of triggers 
> that at least can perform automatic collection repair (maintaining the 
> desired replication factor in presence of live node changes).
> As before, the actual changes to the collections will be executed using 
> existing CollectionAdmin API, which in turn may use the placement plugins 
> from SOLR-14613.
> h3. Division of responsibility
>  * built-in Solr components (non-pluggable):
>  ** cluster state monitoring and event generation,
>  ** simple scheduler to periodically generate scheduled events
>  * plugins:
>  ** automatic collection repair on {{nodeLost}} events (provided by default)
>  ** re-balancing of replicas (periodic or on {{nodeAdded}} events)
>  ** reporting (eg. requesting additional node provisioning)
>  ** scheduled maintenance (eg. removing inactive shards after split)
> h3. Other considerations
> These plugins (unlike the placement plugins) need to execute on one 
> designated node in the cluster. Currently the easiest way to implement this 
> is to run them on the Overseer leader node.






[jira] [Resolved] (SOLR-14948) Autoscaling maxComputeOperations override causes exceptions

2020-11-11 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-14948.
-
Resolution: Fixed

> Autoscaling maxComputeOperations override causes exceptions
> ---
>
> Key: SOLR-14948
> URL: https://issues.apache.org/jira/browse/SOLR-14948
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 8.6.3
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
>
> The maximum number of operations to calculate in {{ComputePlanAction}} is 
> estimated based on the average collection replication factor and the size of 
> the cluster.
> In some cases this estimate may be insufficient, and there's an override 
> property that can be defined in {{autoscaling.json}} named 
> {{maxComputeOperations}}. However, the code in {{ComputePlanAction}} makes an 
> explicit cast to {{Integer}} whereas the value coming from a parsed JSON is 
> of type {{Long}}. This results in a {{ClassCastException}} being thrown.
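A minimal sketch of the failure mode described above (map key and value are illustrative, not Solr's actual parsing code): JSON parsers typically deserialize integral numbers as {{Long}}, so a blind cast to {{Integer}} fails at runtime, while widening through {{Number}} does not.

```java
import java.util.HashMap;
import java.util.Map;

public class CastDemo {
    public static void main(String[] args) {
        Map<String, Object> parsed = new HashMap<>();
        parsed.put("maxComputeOperations", 100L); // parsed JSON yields a Long

        try {
            // Compiles (the value is an Object) but throws at runtime.
            Integer broken = (Integer) parsed.get("maxComputeOperations");
            System.out.println(broken);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException caught");
        }

        // Safer: widen through Number instead of casting to a concrete box type.
        int max = ((Number) parsed.get("maxComputeOperations")).intValue();
        System.out.println(max); // 100
    }
}
```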






[jira] [Resolved] (SOLR-14275) Policy calculations are very slow for large clusters and large operations

2020-11-11 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-14275.
-
Resolution: Won't Fix

This implementation is EOL in Solr 8x.

> Policy calculations are very slow for large clusters and large operations
> -
>
> Key: SOLR-14275
> URL: https://issues.apache.org/jira/browse/SOLR-14275
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.7.2, 8.4.1
>Reporter: Andrzej Bialecki
>Priority: Major
>  Labels: scaling
> Attachments: SOLR-14275.patch, scenario.txt
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Replica placement calculations performed during collection creation take 
> extremely long time (several minutes) when using a large cluster and creating 
> a large collection (eg. 1000 nodes, 500 shards, 4 replicas).
> Profiling shows that most of the time is spent in 
> {{Row.computeCacheIfAbsent}}, which probably doesn't reuse this cache as much 
> as it should.






[jira] [Resolved] (SOLR-13641) Undocumented and untested "cleanupThread" functionality in LFUCache and FastLRUCache

2020-11-11 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-13641.
-
Resolution: Won't Fix

LFUCache is EOL in Solr 8x and removed in Solr 9.0.

> Undocumented and untested "cleanupThread" functionality in LFUCache and 
> FastLRUCache
> 
>
> Key: SOLR-13641
> URL: https://issues.apache.org/jira/browse/SOLR-13641
> Project: Solr
>  Issue Type: Bug
>Reporter: Andrzej Bialecki
>Priority: Major
>
> Both LFUCache and FastLRUCache support a functionality for running evictions 
> asynchronously, in a thread different from the one that executes a {{put(K, 
> V)}} operation.
> Additionally, these asynchronous evictions can use either a one-off thread 
> created after each put, or a single long-running cleanup thread.
> However, this functionality is not documented anywhere and it's not tested. 
> It should either be removed, if it's not used, or properly documented and 
> tested.






[jira] [Resolved] (SOLR-14233) JsonSchemaCreator should support Map payloads

2020-11-11 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-14233.
-
Resolution: Fixed

This has been fixed in SOLR-14871.

> JsonSchemaCreator should support Map payloads
> -
>
> Key: SOLR-14233
> URL: https://issues.apache.org/jira/browse/SOLR-14233
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 8.4.1
>Reporter: Andrzej Bialecki
>Priority: Major
> Attachments: schema.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While working on v2 API for SOLR-13579 I discovered that it's currently not 
> possible to use API methods where the payload content is a {{java.util.Map}}. 
> This is needed when passing arguments that are arbitrary key-value maps and 
> not strictly-defined beans.
> Specifically, I needed a method like this:
> {code}
> @Command(name = "setparams")
> public void setParams(SolrQueryRequest req, SolrQueryResponse rsp, 
> PayloadObj> payload) {
> ...
> }
> {code}
> But this declaration produced confusing errors during API registration.
> Upon further digging I discovered that {{JsonSchemaCreator}} doesn't support 
> Map payloads.
> Attached patch seems to fix it.






[jira] [Resolved] (SOLR-12821) IndexSizeTriggerTest.testMixedBounds() failures

2020-11-11 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-12821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-12821.
-
Resolution: Won't Fix

Autoscaling framework (the code in 8x) is EOL.

> IndexSizeTriggerTest.testMixedBounds() failures
> ---
>
> Key: SOLR-12821
> URL: https://issues.apache.org/jira/browse/SOLR-12821
> Project: Solr
>  Issue Type: Bug
>  Components: Tests
>Reporter: Steven Rowe
>Assignee: Andrzej Bialecki
>Priority: Major
>
> From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2077/], 
> reproduced 5/5 iterations for me on Linux:
> {noformat}
> Checking out Revision 03c9c04353ce1b5ace33fddd5bd99059e63ed507 
> (refs/remotes/origin/master)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=IndexSizeTriggerTest -Dtests.method=testMixedBounds 
> -Dtests.seed=9336AB152EE44632 -Dtests.slow=true -Dtests.locale=hr 
> -Dtests.timezone=America/Guayaquil -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 50.8s J1 | IndexSizeTriggerTest.testMixedBounds <<<
>[junit4]> Throwable #1: java.lang.AssertionError: 
> expected: but was:
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([9336AB152EE44632:99B514B8635F4D68]:0)
>[junit4]>  at 
> org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds(IndexSizeTriggerTest.java:669)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene80): 
> {foo=PostingsFormat(name=MockRandom), id=PostingsFormat(name=Direct)}, 
> docValues:{_version_=DocValuesFormat(name=Asserting), 
> foo=DocValuesFormat(name=Asserting), id=DocValuesFormat(name=Lucene70)}, 
> maxPointsInLeafNode=452, maxMBSortInHeap=5.552665847709986, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@cc0bab0),
>  locale=hr, timezone=America/Guayaquil
>[junit4]   2> NOTE: SunOS 5.11 amd64/Oracle Corporation 1.8.0_172 
> (64-bit)/cpus=3,threads=1,free=191495432,total=518979584
> {noformat}






[jira] [Commented] (SOLR-14993) Unable to download zookeeper files of 1byte in size

2020-11-11 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229973#comment-17229973
 ] 

Erick Erickson commented on SOLR-14993:
---

Hmmm, that really doesn't look right, why > 1 IDK (and I wrote it. Siiih). 
> 0 makes more sense. I'll deal with it. For this case, I'll grab it.

Thanks for sleuthing!
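A minimal sketch of the fix under discussion, with simplified stand-in types (the real method takes a {{SolrZkClient}} and lives in SolrJ): with {{data.length > 1}} a 1-byte znode produces an empty file, while {{data.length > 0}} lets its content through.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CopyDataDownSketch {
    // Simplified stand-in for the ZooKeeper client call in the snippet above.
    interface ZkReader {
        byte[] getData(String zkPath);
    }

    static int copyDataDown(ZkReader zk, String zkPath, Path file) throws IOException {
        byte[] data = zk.getData(zkPath);
        if (data != null && data.length > 0) { // was: data.length > 1
            Files.write(file, data);
            return data.length;
        }
        return 0; // znode empty: file is created elsewhere but nothing written
    }

    public static void main(String[] args) throws IOException {
        Path out = Files.createTempFile("id", ".txt");
        // A 1-byte payload ("1") now gets written instead of being skipped.
        int written = copyDataDown(p -> new byte[]{'1'}, "/configs/demo/id.txt", out);
        System.out.println(written); // 1
    }
}
```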

> Unable to download zookeeper files of 1byte in size
> ---
>
> Key: SOLR-14993
> URL: https://issues.apache.org/jira/browse/SOLR-14993
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud, SolrJ
>Affects Versions: 8.5.1
>Reporter: Allen Sooredoo
>Priority: Minor
>
> When downloading a file from Zookeeper using the Solrj client, files of size 
> 1 byte are ignored.






[jira] [Assigned] (SOLR-14993) Unable to download zookeeper files of 1byte in size

2020-11-11 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-14993:
-

Assignee: Erick Erickson

> Unable to download zookeeper files of 1byte in size
> ---
>
> Key: SOLR-14993
> URL: https://issues.apache.org/jira/browse/SOLR-14993
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud, SolrJ
>Affects Versions: 8.5.1
>Reporter: Allen Sooredoo
>Assignee: Erick Erickson
>Priority: Minor
>
> When downloading a file from Zookeeper using the Solrj client, files of size 
> 1 byte are ignored.






[jira] [Comment Edited] (SOLR-14993) Unable to download zookeeper files of 1byte in size

2020-11-11 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229973#comment-17229973
 ] 

Erick Erickson edited comment on SOLR-14993 at 11/11/20, 1:04 PM:
--

Hmmm, that really doesn't look right, why > 1 IDK (and I wrote it. Siiih). 
> 0 makes more sense. I'll deal with it. For this case, I'll grab it.

Thanks for sleuthing!

As a work around you could make your IDs bigger, a hack to be sure...


was (Author: erickerickson):
Hmmm, that really doesn't look right, why > 1 IDK (and I wrote it. Siiih). 
> 0 makes more sense. I'll deal with it. For this case, I'll grab it.

Thanks for sleuthing!

> Unable to download zookeeper files of 1byte in size
> ---
>
> Key: SOLR-14993
> URL: https://issues.apache.org/jira/browse/SOLR-14993
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud, SolrJ
>Affects Versions: 8.5.1
>Reporter: Allen Sooredoo
>Assignee: Erick Erickson
>Priority: Minor
>
> When downloading a file from Zookeeper using the Solrj client, files of size 
> 1 byte are ignored.






[jira] [Commented] (SOLR-14993) Unable to download zookeeper files of 1byte in size

2020-11-11 Thread Allen Sooredoo (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229979#comment-17229979
 ] 

Allen Sooredoo commented on SOLR-14993:
---

Thanks a lot.

My current hack is to add a space after the id inside the file ( "1 "). That 
solves the issue for small values.

> Unable to download zookeeper files of 1byte in size
> ---
>
> Key: SOLR-14993
> URL: https://issues.apache.org/jira/browse/SOLR-14993
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud, SolrJ
>Affects Versions: 8.5.1
>Reporter: Allen Sooredoo
>Assignee: Erick Erickson
>Priority: Minor
>
> When downloading a file from Zookeeper using the Solrj client, files of size 
> 1 byte are ignored.






[jira] [Resolved] (LUCENE-9499) Clean up package name conflicts between modules (split packages)

2020-11-11 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida resolved LUCENE-9499.
---
Resolution: Fixed

> Clean up package name conflicts between modules (split packages)
> 
>
> Key: LUCENE-9499
> URL: https://issues.apache.org/jira/browse/LUCENE-9499
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (9.0)
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We have lots of package name conflicts (shared package names) between modules 
> in the source tree. It is not only annoying for devs/users but also indeed 
> bad practice since Java 9 (according to my understanding), and we already 
> have some problems with Javadocs due to these split packages as some of us 
> would know. Also split packages make migrating to the Java 9 module system 
> impossible.
> This is the placeholder to fix all package name conflicts in Lucene.
> See the dev list thread for more background. 
>  
> [https://lists.apache.org/thread.html/r6496963e89a5e0615e53206429b6843cc5d3e923a2045cc7b7a1eb03%40%3Cdev.lucene.apache.org%3E]
> Modules that need to be fixed / cleaned up:
>  - analyzers-common (LUCENE-9317)
>  - analyzers-icu (LUCENE-9558)
>  - backward-codecs (LUCENE-9318)
>  - sandbox (LUCENE-9319)
>  - misc (LUCENE-9600)
>  - (test-framework: this can be excluded for the moment)
> Also lucene-core will be heavily affected (some classes have to be moved into 
> {{core}}, or some classes' and methods' in {{core}} visibility have to be 
> relaxed).
> Probably most work would be done in a parallel manner, but conflicts can 
> happen. If someone want to help out, please open an issue before working and 
> share your thoughts with me and others.
> I set "Fix version" to 9.0 - means once we make a commit on here, this will 
> be a blocker for release 9.0.0. (I don't think the changes should be 
> delivered across two major releases; all changes have to be out at once in a 
> major release.) If there are any objections or concerns, please leave 
> comments. For now I have no idea about the total volume of changes or 
> technical obstacles that have to be handled.






[jira] [Created] (LUCENE-9604) Add (precommit) checks to prevent split packages

2020-11-11 Thread Tomoko Uchida (Jira)
Tomoko Uchida created LUCENE-9604:
-

 Summary: Add (precommit) checks to prevent split packages
 Key: LUCENE-9604
 URL: https://issues.apache.org/jira/browse/LUCENE-9604
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Tomoko Uchida


(Follow-up issue for LUCENE-9499)

We could scan all java files via sourceSets and check if there's any conflicts, 
or collect all "package-info.java" files and extract package names from them ?






[jira] [Updated] (LUCENE-9604) Add (precommit) checks to prevent split packages

2020-11-11 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated LUCENE-9604:
--
Description: 
(Follow-up issue for LUCENE-9499)

We could scan all java files via sourceSets and check if there's any conflicts, 
or collect all "package-info.java" files and extract package names from them 
(we'd need to ensure all packages have package-info.java) ?

  was:
(Follow-up issue for LUCENE-9499)

We could scan all java files via sourceSets and check if there's any conflicts, 
or collect all "package-info.java" files and extract package names from them ?


> Add (precommit) checks to prevent split packages
> 
>
> Key: LUCENE-9604
> URL: https://issues.apache.org/jira/browse/LUCENE-9604
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Tomoko Uchida
>Priority: Minor
>
> (Follow-up issue for LUCENE-9499)
> We could scan all java files via sourceSets and check if there's any 
> conflicts, or collect all "package-info.java" files and extract package names 
> from them (we'd need to ensure all packages have package-info.java) ?
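One possible shape of the sourceSets-scan idea above, sketched as a standalone program rather than a Gradle task (directory arguments and layout are illustrative, not Lucene's actual build wiring): collect package declarations per module and flag any package seen in more than one.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.TreeSet;
import java.util.stream.Stream;

public class SplitPackageCheck {
    public static void main(String[] args) throws IOException {
        Map<String, Set<String>> packageToModules = new HashMap<>();
        for (String module : args) { // e.g. lucene/core lucene/misc
            try (Stream<Path> files = Files.walk(Paths.get(module))) {
                files.filter(p -> p.toString().endsWith(".java"))
                     .map(SplitPackageCheck::packageOf)
                     .filter(Objects::nonNull)
                     .forEach(pkg -> packageToModules
                         .computeIfAbsent(pkg, k -> new TreeSet<>()).add(module));
            }
        }
        // A package owned by two or more modules is a split package.
        packageToModules.forEach((pkg, modules) -> {
            if (modules.size() > 1) {
                System.out.println("split package: " + pkg + " in " + modules);
            }
        });
    }

    // Extract the declared package of a source file, or null if none found.
    static String packageOf(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.map(String::trim)
                        .filter(l -> l.startsWith("package ") && l.endsWith(";"))
                        .map(l -> l.substring(8, l.length() - 1).trim())
                        .findFirst().orElse(null);
        } catch (IOException e) {
            return null;
        }
    }
}
```

The package-info.java variant would be the same map-building step over a much smaller file set, at the cost of requiring every package to have one.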






[GitHub] [lucene-solr] gerlowskija commented on pull request #2056: SOLR-14971: Handle atomic-removes on uncommitted docs

2020-11-11 Thread GitBox


gerlowskija commented on pull request #2056:
URL: https://github.com/apache/lucene-solr/pull/2056#issuecomment-725554469


   Deciding to keep the implementation as-is for now.  We can revisit if other 
type-related bugs pop up, or anyone has insight into what the downstream 
effects of the bulk `toNativeType` call might be on uncommitted docs.
   
   One last note, in writing the tests for this PR I noticed that we have an 
abundance of test files for atomic updates, with a lot of overlap among them.  
I didn't want to dive into sorting those out here, but I'm going to file a 
follow up ticket for cutting out some of the overlap among these. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [lucene-solr] gerlowskija merged pull request #2056: SOLR-14971: Handle atomic-removes on uncommitted docs

2020-11-11 Thread GitBox


gerlowskija merged pull request #2056:
URL: https://github.com/apache/lucene-solr/pull/2056


   






[jira] [Commented] (SOLR-14971) AtomicUpdate 'remove' fails on 'pints' fields

2020-11-11 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17230121#comment-17230121
 ] 

ASF subversion and git services commented on SOLR-14971:


Commit a7197ac0ce8333ce7019f49c6fab690a96ff7d77 in lucene-solr's branch 
refs/heads/master from Jason Gerlowski
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a7197ac ]

SOLR-14971: Handle atomic-removes on uncommitted docs (#2056)

Docs fetched from the update log via RTG look different than docs
fetched from commits in the index: the types of
field-values may be different between the two, etc.

This is a problem for atomic add/remove of field values, where matching
existing values has historically been done by object equals() calls (via
Collection operations).  This relies on equality checks which don't have
flexible enough semantics to match values across these different types.
(For example, `new Long(1).equals(new Integer(1))` returns `false`).
This was causing some add-distinct and remove operations on
uncommitted values to silently fail to remove field values.

This commit patches over this by converting between types in the more
common cases before using the fallback behavior.
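The boxed-equality pitfall the commit message describes can be reproduced in a few lines (variable names are illustrative): boxed numeric types never compare equal across classes, so matching must first convert both sides to a common type.

```java
public class EqualsDemo {
    public static void main(String[] args) {
        Long fromTlog = 1L;     // value as deserialized from the update log
        Integer fromIndex = 1;  // value as read from a committed document

        // Long.equals requires the argument to also be a Long, so this is false
        // even though both represent the value 1.
        System.out.println(fromTlog.equals(fromIndex)); // false

        // Converting to a shared native type makes the match succeed.
        System.out.println(fromTlog.longValue() == fromIndex.longValue()); // true
    }
}
```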

> AtomicUpdate 'remove' fails on 'pints' fields
> -
>
> Key: SOLR-14971
> URL: https://issues.apache.org/jira/browse/SOLR-14971
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 8.5.2
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: reproduce.sh
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The "remove" atomic update action on multivalued int fields fails if the 
> document being changed is uncommitted.
> At first glance this appears to be a type-related issue.  
> AtomicUpdateDocumentMerger attempts to handle multivalued int fields by 
> processing the List type, but in uncommitted docs int fields are 
> still List in the tlog.  Conceptually this feels similar to 
> SOLR-13331.
> It's likely this issue also affects other numeric and date fields.
> Attached is a simple script to reproduce, meant to be run from the root of a 
> Solr install.






[jira] [Commented] (SOLR-14788) Solr: The Next Big Thing

2020-11-11 Thread Ilan Ginzburg (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17230125#comment-17230125
 ] 

Ilan Ginzburg commented on SOLR-14788:
--

??Most of the work done by the overseer is useless??

Well... Overseer is running all Collection API commands (can't be qualified as 
useless) and handles all the cluster state updates.

Some of the cluster state updates are definitely useless (a node going down 
triggering potentially thousands of replica state updates) but most other 
cluster updates are useful (sharding changes, added/removed replicas etc.).

I believe getting rid of the massive updates due to node up/down is the low 
hanging fruit here. Doesn't feel too complicated to do (until reality kicks in 
at some later point...).

Eventually though (for performance and code maintainability) I do believe we 
need to get rid of Overseer, but I plan to show proof that this is actually the 
case before suggesting we do that change.

> Solr: The Next Big Thing
> 
>
> Key: SOLR-14788
> URL: https://issues.apache.org/jira/browse/SOLR-14788
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Critical
>
> h3. 
> [!https://www.unicode.org/consortium/aacimg/1F46E.png!|https://www.unicode.org/consortium/adopted-characters.html#b1F46E]{color:#00875a}*The
>  Policeman is on duty!*{color}
> {quote}_{color:#de350b}*When The Policeman is on duty, sit back, relax, and 
> have some fun. Try to make some progress. Don't stress too much about the 
> impact of your changes or maintaining stability and performance and 
> correctness so much. Until the end of phase 1, I've got your back. I have a 
> variety of tools and contraptions I have been building over the years and I 
> will continue training them on this branch. I will review your changes and 
> peer out across the land and course correct where needed. As Mike D will be 
> thinking, "Sounds like a bottleneck Mark." And indeed it will be to some 
> extent. Which is why once stage one is completed, I will flip The Policeman 
> to off duty. When off duty, I'm always* {color:#de350b}*occasionally*{color} 
> *down for some vigilante justice, but I won't be walking the beat, all that 
> stuff about sit back and relax goes out the window.*{color}_
> {quote}
>  
> I have stolen this title from Ishan or Noble and Ishan.
> This issue is meant to capture the work of a small team that is forming to 
> push Solr and SolrCloud to the next phase.
> I have kicked off the work with an effort to create a very fast and solid 
> base. That work is not 100% done, but it's ready to join the fight.
> Tim Potter has started giving me a tremendous hand in finishing up. Ishan and 
> Noble have already contributed support and testing and have plans for 
> additional work to shore up some of our current shortcomings.
> Others have expressed an interest in helping and hopefully they will pop up 
> here as well.
> Let's organize and discuss our efforts here and in various sub issues.






[jira] [Comment Edited] (SOLR-14788) Solr: The Next Big Thing

2020-11-11 Thread Ilan Ginzburg (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17230125#comment-17230125
 ] 

Ilan Ginzburg edited comment on SOLR-14788 at 11/11/20, 5:38 PM:
-

??Most of the work done by the overseer is useless??

Well... Overseer is running all Collection API commands (can't be qualified as 
useless) and handles all the cluster state updates.

Some of the cluster state updates are definitely useless (a node going down 
triggering potentially thousands of replica state updates) but most other 
cluster updates are useful (sharding changes, added/removed replicas etc.).

I believe getting rid of the massive updates due to node up/down is the low 
hanging fruit here. Doesn't feel too complicated to do (until reality kicks in 
at some later point...).

Eventually though (for performance, scale and code maintainability) I do 
believe we need to get rid of Overseer, but I plan to show proof that this is 
actually the case before suggesting we do that change.


was (Author: murblanc):
??Most of the work done by the overseer is useless??

Well... Overseer is running all Collection API commands (can't be qualified as 
useless) and handles all the cluster state updates.

Some of the cluster state updates are definitely useless (a node going down 
triggering potentially thousands of replica state updates) but most other 
cluster updates are useful (sharding changes, added/removed replicas etc.).

I believe getting rid of the massive updates due to node up/down is the low 
hanging fruit here. Doesn't feel too complicated to do (until reality kicks in 
at some later point...).

Eventually though (for performance and code maintainability) I do believe we 
need to get rid of Overseer, but I plan to show proof that this is actually the 
case before suggesting we do that change.

> Solr: The Next Big Thing
> 
>
> Key: SOLR-14788
> URL: https://issues.apache.org/jira/browse/SOLR-14788
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Critical
>
> h3. 
> [!https://www.unicode.org/consortium/aacimg/1F46E.png!|https://www.unicode.org/consortium/adopted-characters.html#b1F46E]{color:#00875a}*The
>  Policeman is on duty!*{color}
> {quote}_{color:#de350b}*When The Policeman is on duty, sit back, relax, and 
> have some fun. Try to make some progress. Don't stress too much about the 
> impact of your changes or maintaining stability and performance and 
> correctness so much. Until the end of phase 1, I've got your back. I have a 
> variety of tools and contraptions I have been building over the years and I 
> will continue training them on this branch. I will review your changes and 
> peer out across the land and course correct where needed. As Mike D will be 
> thinking, "Sounds like a bottleneck Mark." And indeed it will be to some 
> extent. Which is why once stage one is completed, I will flip The Policeman 
> to off duty. When off duty, I'm always* {color:#de350b}*occasionally*{color} 
> *down for some vigilante justice, but I won't be walking the beat, all that 
> stuff about sit back and relax goes out the window.*{color}_
> {quote}
>  
> I have stolen this title from Ishan or Noble and Ishan.
> This issue is meant to capture the work of a small team that is forming to 
> push Solr and SolrCloud to the next phase.
> I have kicked off the work with an effort to create a very fast and solid 
> base. That work is not 100% done, but it's ready to join the fight.
> Tim Potter has started giving me a tremendous hand in finishing up. Ishan and 
> Noble have already contributed support and testing and have plans for 
> additional work to shore up some of our current shortcomings.
> Others have expressed an interest in helping and hopefully they will pop up 
> here as well.
> Let's organize and discuss our efforts here and in various sub issues.






[jira] [Commented] (SOLR-14971) AtomicUpdate 'remove' fails on 'pints' fields

2020-11-11 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17230134#comment-17230134
 ] 

ASF subversion and git services commented on SOLR-14971:


Commit 8e9db02530d42a45c9ef89d93ea37f340c9a426c in lucene-solr's branch 
refs/heads/branch_8x from Jason Gerlowski
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8e9db02 ]

SOLR-14971: Handle atomic-removes on uncommitted docs (#2056)

Docs fetched from the update log via RTG look different than docs
fetched from commits in the index: the types of
field-values may be different between the two, etc.

This is a problem for atomic add/remove of field values, where matching
existing values has historically been done by object equals() calls (via
Collection operations).  This relies on equality checks which don't have
flexible enough semantics to match values across these different types.
(For example, `new Long(1).equals(new Integer(1))` returns `false`).
This was causing some add-distinct and remove operations on
uncommitted values to silently fail to remove field values.

This commit patches over this by converting between types in the more
common cases before using the fallback behavior.
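
The equals() mismatch the commit message describes can be reproduced, and patched over, in a few standalone lines. This is an illustrative sketch, not Solr's actual merger code; `numericEquals` and `removeMatching` are hypothetical names standing in for the type-converting comparison the fix introduces.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AtomicRemoveSketch {

    // Boxed wrappers of different types are never equals(), even for the
    // same numeric value -- the root cause of the silent remove failure.
    public static boolean numericEquals(Object a, Object b) {
        if (a instanceof Number && b instanceof Number) {
            // Compare as long; sufficient for the integral types shown here.
            return ((Number) a).longValue() == ((Number) b).longValue();
        }
        return a != null && a.equals(b);
    }

    // Remove every element of 'toRemove' from 'existing', matching across
    // Integer/Long the way the fix converts between types.
    public static List<Object> removeMatching(List<Object> existing, List<Object> toRemove) {
        List<Object> result = new ArrayList<>();
        for (Object e : existing) {
            boolean matched = false;
            for (Object r : toRemove) {
                if (numericEquals(e, r)) { matched = true; break; }
            }
            if (!matched) result.add(e);
        }
        return result;
    }

    public static void main(String[] args) {
        // The failure mode: plain equals() does not match across boxed types...
        System.out.println(Long.valueOf(1).equals(Integer.valueOf(1))); // false
        // ...so a type-aware comparison is needed:
        List<Object> tlogValues = new ArrayList<>(Arrays.asList(1L, 2L, 3L)); // Longs from the tlog
        List<Object> request = Arrays.asList(2, 3);                           // Integers from the request
        System.out.println(removeMatching(tlogValues, request)); // [1]
    }
}
```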


> AtomicUpdate 'remove' fails on 'pints' fields
> -
>
> Key: SOLR-14971
> URL: https://issues.apache.org/jira/browse/SOLR-14971
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 8.5.2
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: reproduce.sh
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The "remove" atomic update action on multivalued int fields fails if the 
> document being changed is uncommitted.
> At first glance this appears to be a type-related issue.  
> AtomicUpdateDocumentMerger attempts to handle multivalued int fields by 
> processing the List<Integer> type, but in uncommitted docs int fields are 
> still List<Long> in the tlog.  Conceptually this feels similar to 
> SOLR-13331.
> It's likely this issue also affects other numeric and date fields.
> Attached is a simple script to reproduce, meant to be run from the root of a 
> Solr install.






[jira] [Commented] (SOLR-14971) AtomicUpdate 'remove' fails on 'pints' fields

2020-11-11 Thread Jason Gerlowski (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17230138#comment-17230138
 ] 

Jason Gerlowski commented on SOLR-14971:


On the github PR, Munendra pointed out that this bug also affects the 
'add-distinct' operation, so the commits above fix that too.

Munendra proposed an alternative approach to the one I ended up going with that 
might be worth looking into in the future.  He suggested iterating over the 
'original' collection of field values and converting each with 'toNativeType'.  
This would ensure that the needle and haystack are all of the same Java type, so 
to speak, and would save us a lot of manual type checking.  I liked this better 
conceptually, but I wasn't confident enough about what effect changing the type 
of ALL the uncommitted field values might have downstream to pursue it in this 
PR.  If other cases are found, though, the ugliness of the type checking might 
make this worth a second look.
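
The normalize-first alternative can be sketched like so. `toNativeType` here is a hypothetical stand-in for a field type's conversion method, not Solr's actual API; the point is that once both collections are coerced to one native type, ordinary equals()-based Collection operations work.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NormalizeFirstSketch {

    // Hypothetical per-field conversion: coerce any integral numeric to Long.
    public static Object toNativeType(Object val) {
        if (val instanceof Number && !(val instanceof Long)) {
            return ((Number) val).longValue();
        }
        return val;
    }

    // Normalize both haystack and needles once, then rely on plain equals().
    public static List<Object> remove(List<Object> existing, List<Object> toRemove) {
        List<Object> normalized = new ArrayList<>();
        for (Object v : existing) normalized.add(toNativeType(v));
        List<Object> needles = new ArrayList<>();
        for (Object v : toRemove) needles.add(toNativeType(v));
        normalized.removeAll(needles);  // now safe: everything is a Long
        return normalized;
    }

    public static void main(String[] args) {
        // Integers from the request now match Longs from the tlog.
        System.out.println(remove(new ArrayList<>(Arrays.asList(1, 2L, 3)), Arrays.asList(2, 3L)));
        // prints [1]
    }
}
```

The trade-off noted in the comment applies: this changes the runtime type of every uncommitted field value, which may have downstream effects.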

Closing this out. 







[jira] [Resolved] (SOLR-14971) AtomicUpdate 'remove' fails on 'pints' fields

2020-11-11 Thread Jason Gerlowski (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski resolved SOLR-14971.

Fix Version/s: 8.8
   master (9.0)
   Resolution: Fixed

> AtomicUpdate 'remove' fails on 'pints' fields
> -
>
> Key: SOLR-14971
> URL: https://issues.apache.org/jira/browse/SOLR-14971
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 8.5.2
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
> Fix For: master (9.0), 8.8
>
> Attachments: reproduce.sh
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>






[GitHub] [lucene-solr] dweiss commented on pull request #2068: LUCENE-8982: Separate out native code to another module to allow cpp build with gradle

2020-11-11 Thread GitBox


dweiss commented on pull request #2068:
URL: https://github.com/apache/lucene-solr/pull/2068#issuecomment-725648046


   Added the kill switch, Zach. I think you need to merge with master and then 
update the overview file that Tomoko moved to package-info.java (grep for 
"build-native-unix" and you'll see old instructions currently in overview.html).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Commented] (SOLR-14986) Restrict the properties possible to define with "property.name=value" when creating a collection

2020-11-11 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17230270#comment-17230270
 ] 

Erick Erickson commented on SOLR-14986:
---

It's a sticky wicket.

Short form: I don't see any way to make the code "do the right thing" or to 
document under what conditions specifying various options will succeed. So I'm 
thinking of just changing the ref guide for the collections API CREATE and 
ADDREPLICA commands to warn that using *property.name* is an expert option that 
should only be used with a thorough understanding of Solr.

IOW "Don't call us if you try to set these properties and it doesn't work".

There are tests that create a collection with *property.dataDir=someDir* for 
example. Which works fine in the test, since it's creating a single 1x1 
collection.

However, I can specify an absolute path and allow Solr to use it by setting 
*-Dsolr.allowPaths=/tmp/eoe* and try to create a collection with these 
parameters

*numShards=2&property.dataDir=/tmp/somedir*

which times out, and you have to go to the log to find out why, and even then 
it's rather opaque:

Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by this 
virtual machine: /private/tmp/eoe/index/write.lock

Unsurprising since both replicas are pointing to the same dataDir. NOTE: if I 
use a relative path, things are fine. E.g.

*numShards=2&property.dataDir=eoe*

In that case, each replica has a dataDir underneath it called "eoe" but they're 
under separate nodes.

[~romseygeek] Pinging you since you wrote 2 of the 3 tests that use this; 
what's your opinion? BTW, I'm very glad these tests are here, because I could 
have introduced a horrible problem if people are relying on this behavior:

CollectionsAPISolrJTest.testCreateCollectionWithPropertyParam and 
CoreAdminCreateDiscoverTest.testInstanceDirAsPropertyParam

[~shalin] wrote the other one a long time ago, so do you have an opinion 
either way? 

CollectionsAPIDistributedZkTest.addReplicaTest

 

So what I'm thinking now is that catching all the possibilities in the code is 
nearly impossible to get right, and it doesn't feel like it's a good use of 
time. Explaining when you can use even one of these "special" properties in the 
ref guide makes my head explode. It starts to look like this:

"When you create a collection, if you specify a *property.dataDir* that is an 
absolute path, the operation will fail if Solr tries to create two replicas in 
the same Solr instance (which may or may not happen, depending on whether you 
have more replicas than Solr instances, or possibly because of any custom core 
placement rules). In that case the collection creation will time out and the 
solr log will contain the reason. BTW, if Solr happens to create only one 
replica per instance, the first time you use this property, the call will 
succeed. But when you try to CREATE another collection or ADDREPLICA and use 
the same dataDir, it will fail if another replica already exists on that Solr 
instance using that dataDir. If you use relative paths, Solr will create a 
dataDir under the replica's directory." Yuuuccckkk!

And that's just one property. I don't even want to think about interactions 
between multiple properties...

So barring objections, I'll just change the ref guide.

> Restrict the properties possible to define with "property.name=value" when 
> creating a collection
> 
>
> Key: SOLR-14986
> URL: https://issues.apache.org/jira/browse/SOLR-14986
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> This came to light when I was looking at two user-list questions where people 
> try to manually define core.properties to define _replicas_ in SolrCloud. 
> There are two related issues:
> 1> You can do things like "action=CREATE&name=eoe&property.collection=blivet" 
> which results in an opaque error about "could not create replica." I 
> propose we return a better error here like "property.collection should not be 
> specified when creating a collection". What do people think about the rest of 
> the auto-created properties on collection creation? 
> coreNodeName
> collection.configName
> name
> numShards
> shard
> collection
> replicaType
> "name" seems to be OK to change, although I don't see anyplace anyone can 
> actually see it afterwards.
> 2> Change the ref guide to steer people away from attempting to manually 
> create a core.properties file to define cores/replicas in SolrCloud. There's 
> no warning on the "defining-core-properties.adoc" for instance. Additionally 
> there should be some kind of message on

[GitHub] [lucene-solr] noblepaul commented on a change in pull request #2065: SOLR-14977 : ContainerPlugins should be configurable

2020-11-11 Thread GitBox


noblepaul commented on a change in pull request #2065:
URL: https://github.com/apache/lucene-solr/pull/2065#discussion_r521707369



##
File path: solr/core/src/java/org/apache/solr/api/AnnotatedApi.java
##
@@ -62,6 +63,8 @@
 
 public class AnnotatedApi extends Api implements PermissionNameProvider , 
Closeable {
   private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+  private static final ObjectMapper mapper = 
SolrJacksonAnnotationInspector.createObjectMapper()

Review comment:
   It was not created per request. it was created per API








[GitHub] [lucene-solr] noblepaul commented on a change in pull request #2065: SOLR-14977 : ContainerPlugins should be configurable

2020-11-11 Thread GitBox


noblepaul commented on a change in pull request #2065:
URL: https://github.com/apache/lucene-solr/pull/2065#discussion_r521707653



##
File path: solr/core/src/test/org/apache/solr/handler/TestContainerPlugin.java
##
@@ -366,7 +381,7 @@ public void m2(SolrQueryRequest req, SolrQueryResponse rsp) 
{
 
   }
 
-  public static class CConfig extends PluginMeta {
+  public static class CConfig implements ReflectMapWriter {

Review comment:
   All beans must implement `MapWriter`








[GitHub] [lucene-solr] noblepaul commented on a change in pull request #2065: SOLR-14977 : ContainerPlugins should be configurable

2020-11-11 Thread GitBox


noblepaul commented on a change in pull request #2065:
URL: https://github.com/apache/lucene-solr/pull/2065#discussion_r521707503



##
File path: solr/core/src/java/org/apache/solr/api/ConfigurablePlugin.java
##
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.api;
+
+/** Implement this interface if your plugin needs to accept some configuration
+ *
+ * @param <T> the configuration Object type
+ */
+public interface ConfigurablePlugin<T> {

Review comment:
   may be ``
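
The shape of such a typed plugin-configuration interface can be sketched in isolation. Everything below is a simplified stand-in: `ConfigurablePlugin`, `CacheConfig`, and `CachePlugin` mimic the pattern in the PR but are not Solr's actual classes (real Solr config beans implement `MapWriter`).

```java
public class ConfigurablePluginSketch {

    /** Stand-in for the interface in the PR; T is the config bean type. */
    interface ConfigurablePlugin<T> {
        void configure(T cfg);
    }

    /** Hypothetical config bean populated by the framework from JSON. */
    static class CacheConfig {
        final int maxSize;
        CacheConfig(int maxSize) { this.maxSize = maxSize; }
    }

    /** A plugin that receives its typed configuration at load time. */
    static class CachePlugin implements ConfigurablePlugin<CacheConfig> {
        int maxSize = -1;
        @Override public void configure(CacheConfig cfg) { this.maxSize = cfg.maxSize; }
    }

    public static int demo() {
        CachePlugin p = new CachePlugin();
        p.configure(new CacheConfig(128)); // framework passes the parsed bean
        return p.maxSize;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 128
    }
}
```

The generic parameter is what lets the registry hand each plugin a config object of the type it declares, rather than a raw map.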








[GitHub] [lucene-solr] noblepaul commented on a change in pull request #2065: SOLR-14977 : ContainerPlugins should be configurable

2020-11-11 Thread GitBox


noblepaul commented on a change in pull request #2065:
URL: https://github.com/apache/lucene-solr/pull/2065#discussion_r521708062



##
File path: solr/core/src/java/org/apache/solr/api/ContainerPluginsRegistry.java
##
@@ -114,6 +118,16 @@ public synchronized ApiInfo getPlugin(String name) {
 return currentPlugins.get(name);
   }
 
+  static class PluginMetaHolder {
+private final Map<String, Object> original;
+private final PluginMeta meta;

Review comment:
   `PluginMeta` is the information used by the framework. It's optional, so 
your plugin can choose to ignore it
   
   








[jira] [Created] (LUCENE-9605) update snowball to latest (adds Yiddish stemmer)

2020-11-11 Thread Robert Muir (Jira)
Robert Muir created LUCENE-9605:
---

 Summary: update snowball to latest (adds Yiddish stemmer)
 Key: LUCENE-9605
 URL: https://issues.apache.org/jira/browse/LUCENE-9605
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir


I'm trying to find time to upstream our snowball diffs... it helps to be 
reasonably up to date with their sources. Plus there is a new stemmer added.






[jira] [Updated] (LUCENE-9605) update snowball to latest (adds Yiddish stemmer)

2020-11-11 Thread Robert Muir (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-9605:

Fix Version/s: master (9.0)

> update snowball to latest (adds Yiddish stemmer)
> 
>
> Key: LUCENE-9605
> URL: https://issues.apache.org/jira/browse/LUCENE-9605
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
>Priority: Major
> Fix For: master (9.0)
>
>
> I'm trying to find time to upstream our snowball diffs... it helps to be 
> reasonably up to date with their sources. Plus there is a new stemmer added.






[GitHub] [lucene-solr] rmuir opened a new pull request #2077: LUCENE-9605: update snowball to d8cf01ddf37a, adds Yiddish

2020-11-11 Thread GitBox


rmuir opened a new pull request #2077:
URL: https://github.com/apache/lucene-solr/pull/2077


   Just merged their master branch, regenerated the patch file, and reran 
`gradlew snowball`, precommit and tests.






[jira] [Commented] (SOLR-14975) Optimize CoreContainer.getAllCoreNames and getLoadedCoreNames

2020-11-11 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17230334#comment-17230334
 ] 

Erick Erickson commented on SOLR-14975:
---

[~broustant]

1> do you intend to backport to 8x? If not,  can this be closed?

> Optimize CoreContainer.getAllCoreNames and getLoadedCoreNames 
> --
>
> Key: SOLR-14975
> URL: https://issues.apache.org/jira/browse/SOLR-14975
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> The methods CoreContainer.getAllCoreNames and getLoadedCoreNames hold a lock 
> while they grab core names to put into a TreeSet.  When there are *many* 
> cores, this delay is noticeable.  Holding this lock effectively blocks 
> queries since queries lookup a core; so it's critically important that these 
> methods are *fast*.  The tragedy here is that some callers merely want to 
> know if a particular name is in the set, or what the aggregated size is.  
> Some callers want to iterate the names but don't really care what the 
> iteration order is.
> I propose that some callers of these two methods find suitable alternatives, 
> like getCoreDescriptor to check for null.  And I propose that these methods 
> return a HashSet -- no order.  If the caller wants it sorted, it can do so 
> itself.
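
The proposal above can be sketched with a toy stand-in (this is not `CoreContainer` itself; the class and method names are illustrative). The "before" method sorts while holding the lock; the "after" variants copy without ordering or avoid building a set entirely.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class CoreNamesSketch {
    private final Map<String, Object> cores = new HashMap<>();
    private final Object lock = new Object();

    public void put(String name) {
        synchronized (lock) { cores.put(name, new Object()); }
    }

    // Before: builds a TreeSet under the lock -- O(n log n) while queries wait.
    public Set<String> loadedCoreNamesSorted() {
        synchronized (lock) { return new TreeSet<>(cores.keySet()); }
    }

    // After: plain unordered copy under the lock; callers that want order
    // can sort the copy themselves, outside the lock.
    public Set<String> loadedCoreNames() {
        synchronized (lock) { return new HashSet<>(cores.keySet()); }
    }

    // Membership checks need no set at all (analogous to checking
    // getCoreDescriptor for null).
    public boolean isLoaded(String name) {
        synchronized (lock) { return cores.containsKey(name); }
    }
}
```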






[GitHub] [lucene-solr] zacharymorn commented on a change in pull request #2068: LUCENE-8982: Separate out native code to another module to allow cpp build with gradle

2020-11-11 Thread GitBox


zacharymorn commented on a change in pull request #2068:
URL: https://github.com/apache/lucene-solr/pull/2068#discussion_r521801016



##
File path: .gitignore
##
@@ -8,6 +8,10 @@ build/
 /.idea/
 #IntelliJ creates this folder, ignore.
 /dev-tools/missing-doclet/out/
+*.iml

Review comment:
   Hmm, I seem to recall these files were generated after running 
`./gradlew idea`, but I removed them in the latest commit anyway.








[GitHub] [lucene-solr] zacharymorn commented on a change in pull request #2068: LUCENE-8982: Separate out native code to another module to allow cpp build with gradle

2020-11-11 Thread GitBox


zacharymorn commented on a change in pull request #2068:
URL: https://github.com/apache/lucene-solr/pull/2068#discussion_r521802817



##
File path: lucene/misc/native/src/main/posix/NativePosixUtil.cpp
##
@@ -38,12 +38,12 @@
 
 #ifdef LINUX
 /*
- * Class: org_apache_lucene_store_NativePosixUtil
+ * Class: org_apache_lucene_misc_store_NativePosixUtil
  * Method:posix_fadvise
  * Signature: (Ljava/io/FileDescriptor;JJI)V
  */
 extern "C"
-JNIEXPORT jint JNICALL 
Java_org_apache_lucene_store_NativePosixUtil_posix_1fadvise(JNIEnv *env, jclass 
_ignore, jobject fileDescriptor, jlong offset, jlong len, jint advice)
+JNIEXPORT jint JNICALL 
Java_org_apache_lucene_misc_store_NativePosixUtil_posix_1fadvise(JNIEnv *env, 
jclass _ignore, jobject fileDescriptor, jlong offset, jlong len, jint advice)

Review comment:
   These paths need to be adjusted as the native code has been moved in 
d1110394e9c
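
The rename follows mechanically from the JNI naming rule: the exported C symbol is "Java_" plus the mangled fully-qualified class name plus the mangled method name, where '.' becomes '_' and a literal '_' is escaped as "_1". A small sketch of that rule (hypothetical helper, not part of the patch) shows why moving the class to `org.apache.lucene.misc.store` forces the C++ symbol change:

```java
public class JniNames {

    /** JNI symbol for a native method: "Java_" + mangled class + "_" + mangled method. */
    public static String jniSymbol(String className, String methodName) {
        return "Java_" + mangle(className) + "_" + mangle(methodName);
    }

    /** Apply the basic JNI escapes: '.' -> '_', '_' -> "_1". */
    private static String mangle(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c == '.') sb.append('_');
            else if (c == '_') sb.append("_1");
            else sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(jniSymbol("org.apache.lucene.misc.store.NativePosixUtil", "posix_fadvise"));
        // prints Java_org_apache_lucene_misc_store_NativePosixUtil_posix_1fadvise
    }
}
```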








[GitHub] [lucene-solr] zacharymorn commented on pull request #2068: LUCENE-8982: Separate out native code to another module to allow cpp build with gradle

2020-11-11 Thread GitBox


zacharymorn commented on pull request #2068:
URL: https://github.com/apache/lucene-solr/pull/2068#issuecomment-725807511


   > Added the kill switch, Zach. I think you need to merge with master and 
then update the overview file that Tomoko moved to package-info.java (grep for 
"build-native-unix" and you'll see old instructions currently in overview.html).
   
   Updated.






[GitHub] [lucene-solr] zacharymorn commented on pull request #2068: LUCENE-8982: Separate out native code to another module to allow cpp build with gradle

2020-11-11 Thread GitBox


zacharymorn commented on pull request #2068:
URL: https://github.com/apache/lucene-solr/pull/2068#issuecomment-725853994


   > All of it looks good, Zach. Sorry about my lack of consistency here, but on 
second thought I think we should add a safety switch to force-disable the 
native build - just in case we commit it in and have to disable it for some 
reason.
   > 
   > I'll work on this later today and commit it in.
   > 
   > I also feel tempted to add tests that utilize this native code. This can 
come as a separate issue though.
   
   I just pushed up a commit that adds tests to exercise the native library 
loading part. It seems to be working fine on my Mac. Could you please give it a 
try to see if it works on Windows as well? 
   
   On the other hand, I think these tests will break if run from IDEs. Do we 
need to support that in this PR?


