[GitHub] [lucene-solr] noblepaul merged pull request #997: SOLR-13822: Isolated Classloading from packages

2019-11-06 Thread GitBox
noblepaul merged pull request #997: SOLR-13822: Isolated Classloading from 
packages
URL: https://github.com/apache/lucene-solr/pull/997
 
 
   





[jira] [Commented] (SOLR-13822) Isolated Classloading from packages

2019-11-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968177#comment-16968177
 ] 

ASF subversion and git services commented on SOLR-13822:


Commit 37059eb5949cca064df73c0486e088b9d201775f in lucene-solr's branch 
refs/heads/branch_8x from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=37059eb ]

SOLR-13822: Isolated Classloading from packages (#997)

* SOLR-13821: A Package store to store and load package artifacts 

* SOLR-13822: A Package management system with the following features. A 
packages.json in ZK to store
  the configuration, APIs to read/edit them and isolated classloaders to load 
the classes from
  those packages if the 'class' attribute is prefixed with `:` 

*  SOLR-13841: Provide mappings for jackson annotation @JsonProperty to use 
Jackson deserializer

> Isolated Classloading from packages
> ---
>
> Key: SOLR-13822
> URL: https://issues.apache.org/jira/browse/SOLR-13822
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
>Priority: Major
> Attachments: SOLR-13822.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Design is here: 
> [https://docs.google.com/document/d/15b3m3i3NFDKbhkhX_BN0MgvPGZaBj34TKNF2-UNC3U8/edit?ts=5d86a8ad#]
>  
> main features:
>  * A new file for packages definition (/packages.json) in ZK
>  * Public APIs to edit/read the file
>  * The APIs are registered at {{/api/cluster/package}}
>  * Classes can be loaded from the package classloader using the 
> {{:}} syntax






[jira] [Commented] (SOLR-13841) Provide mappings for jackson annotation @JsonProperty to use Jackson deserializer

2019-11-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968178#comment-16968178
 ] 

ASF subversion and git services commented on SOLR-13841:


Commit 37059eb5949cca064df73c0486e088b9d201775f in lucene-solr's branch 
refs/heads/branch_8x from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=37059eb ]

SOLR-13822: Isolated Classloading from packages (#997)

* SOLR-13821: A Package store to store and load package artifacts 

* SOLR-13822: A Package management system with the following features. A 
packages.json in ZK to store
  the configuration, APIs to read/edit them and isolated classloaders to load 
the classes from
  those packages if the 'class' attribute is prefixed with `:` 

*  SOLR-13841: Provide mappings for jackson annotation @JsonProperty to use 
Jackson deserializer

> Provide mappings for jackson annotation @JsonProperty to use Jackson 
> deserializer
> -
>
> Key: SOLR-13841
> URL: https://issues.apache.org/jira/browse/SOLR-13841
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We can start using annotations in SolrJ to minimize the amount of code we 
> write & improve readability. Jackson is a widely used library and everyone is 
> already familiar with it
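
As a rough illustration of the kind of mapping this issue describes (a minimal sketch: 
the POJO and its fields are invented for the example and are not actual SolrJ classes):

{code:java}
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonPropertyDemo {

  // Hypothetical response object; only the annotation usage mirrors the proposal.
  public static class CollectionStatus {
    @JsonProperty("name")
    public String name;

    @JsonProperty("numShards")
    public int numShards;
  }

  public static void main(String[] args) throws Exception {
    String json = "{\"name\":\"testCollection\",\"numShards\":2}";
    CollectionStatus status = new ObjectMapper().readValue(json, CollectionStatus.class);
    System.out.println(status.name + " has " + status.numShards + " shards");
  }
}
{code}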






[jira] [Commented] (SOLR-13821) Package Store

2019-11-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968176#comment-16968176
 ] 

ASF subversion and git services commented on SOLR-13821:


Commit 37059eb5949cca064df73c0486e088b9d201775f in lucene-solr's branch 
refs/heads/branch_8x from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=37059eb ]

SOLR-13822: Isolated Classloading from packages (#997)

* SOLR-13821: A Package store to store and load package artifacts 

* SOLR-13822: A Package management system with the following features. A 
packages.json in ZK to store
  the configuration, APIs to read/edit them and isolated classloaders to load 
the classes from
  those packages if the 'class' attribute is prefixed with `:` 

*  SOLR-13841: Provide mappings for jackson annotation @JsonProperty to use 
Jackson deserializer

> Package Store
> -
>
> Key: SOLR-13821
> URL: https://issues.apache.org/jira/browse/SOLR-13821
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Package store is a storage managed by Solr that holds the package artifacts. 
> This is replicated across nodes.
> Design is here: 
> [https://docs.google.com/document/d/15b3m3i3NFDKbhkhX_BN0MgvPGZaBj34TKNF2-UNC3U8/edit?ts=5d86a8ad#]
> The package store is powered by an underlying filestore. This filestore is a 
> fully replicated p2p filesystem storage for artifacts.
> The APIs are as follows
> {code:java}
> # add a file
> POST  /api/cluster/files/path/to/file.jar
> #retrieve a file
> GET /api/cluster/files/path/to/file.jar
> #list files in the /path/to directory
> GET /api/cluster/files/path/to
> #GET meta info of the jar
> GET /api/cluster/files/path/to/file.jar?meta=true
> {code}
> This store keeps two files per artifact:
>  # The actual file, say {{myplugin.jar}}
>  # A metadata file {{.myplugin.jar.json}} in the same directory
> The contents of the metadata file are:
> {code:json}
> {
>   "sha512" : "",
>   "sig" : {
>     "" : ""
>   }
> }
> {code}
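
To make the API shape above concrete, here is a minimal client-side sketch (plain 
HttpURLConnection; the host, port and jar path are assumptions for the example, and 
this is not an official SolrJ helper):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FileStoreMetaExample {
  public static void main(String[] args) throws Exception {
    // Fetch the metadata entry kept alongside the artifact (?meta=true), as described above.
    URL url = new URL("http://localhost:8983/api/cluster/files/path/to/file.jar?meta=true");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);   // the metadata JSON with the sha512 and sig entries
      }
    } finally {
      conn.disconnect();
    }
  }
}
{code}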






[jira] [Commented] (SOLR-13822) Isolated Classloading from packages

2019-11-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968175#comment-16968175
 ] 

ASF subversion and git services commented on SOLR-13822:


Commit 37059eb5949cca064df73c0486e088b9d201775f in lucene-solr's branch 
refs/heads/branch_8x from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=37059eb ]

SOLR-13822: Isolated Classloading from packages (#997)

* SOLR-13821: A Package store to store and load package artifacts 

* SOLR-13822: A Package management system with the following features. A 
packages.json in ZK to store
  the configuration, APIs to read/edit them and isolated classloaders to load 
the classes from
  those packages if the 'class' attribute is prefixed with `:` 

*  SOLR-13841: Provide mappings for jackson annotation @JsonProperty to use 
Jackson deserializer

> Isolated Classloading from packages
> ---
>
> Key: SOLR-13822
> URL: https://issues.apache.org/jira/browse/SOLR-13822
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
>Priority: Major
> Attachments: SOLR-13822.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Design is here: 
> [https://docs.google.com/document/d/15b3m3i3NFDKbhkhX_BN0MgvPGZaBj34TKNF2-UNC3U8/edit?ts=5d86a8ad#]
>  
> main features:
>  * A new file for packages definition (/packages.json) in ZK
>  * Public APIs to edit/read the file
>  * The APIs are registered at {{/api/cluster/package}}
>  * Classes can be loaded from the package classloader using the 
> {{:}} syntax






[jira] [Created] (SOLR-13899) zkstatus page incorrectly reports zookeeper in error when Zookeeper observers are present

2019-11-06 Thread Salvatore Papa (Jira)
Salvatore Papa created SOLR-13899:
-

 Summary: zkstatus page incorrectly reports zookeeper in error when 
Zookeeper observers are present
 Key: SOLR-13899
 URL: https://issues.apache.org/jira/browse/SOLR-13899
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 8.3.0
Reporter: Salvatore Papa
 Attachments: zkstatus.png

When a zookeeper ensemble has 'observers', the zkstatus page incorrectly says 
Zookeeper status is in error (See attachment.)

This is because the 
[ZookeeperStatusHandler|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java]
 does not account for the 
'[observer|https://zookeeper.apache.org/doc/current/zookeeperObservers.html]' 
role whatsoever.

This should be an easy fix - I see there being two options;

1. Treat observers as followers by changing 
[L112|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java#L112]
 to
{code:java}
if ("follower".equals(state) || "observer".equals(state)) {
{code}
 
 2. Ignore observers from the required follower count by changing 
[L116|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java#L116]
 to
{code:java}
  reportedFollowers = 
Integer.parseInt(String.valueOf(stat.get("zk_synced_followers")));
{code}
Option 1 will make the zkstatus page show error when an observer is down.
 Option 2 will not make the zkstatus page show error when an observer is down.

*Ideally*, additional logic to account for observers should be added, showing 
STATUS_YELLOW when any observers are down (but all followers are up), as this 
means the ensemble is only in a degraded, but still functional, state.
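
A standalone sketch of what that observer-aware logic could look like (this is not 
the actual ZookeeperStatusHandler code; the method and status names are illustrative 
and rely only on the mntr fields shown in the output below):

{code:java}
import java.util.List;
import java.util.Map;

public class ZkEnsembleStatusSketch {

  enum Status { GREEN, YELLOW, RED }

  // Each map holds the parsed "mntr" output of one ensemble member.
  static Status ensembleStatus(List<Map<String, String>> nodes) {
    int leaders = 0, followers = 0, observers = 0, expectedFollowers = 0, expectedObservers = 0;
    for (Map<String, String> stat : nodes) {
      String state = stat.get("zk_server_state");
      if ("leader".equals(state)) {
        leaders++;
        // In the sample output below, zk_followers counts all learners (followers + observers)
        // while zk_synced_followers counts only the voting followers.
        int learners = Integer.parseInt(stat.get("zk_followers"));
        expectedFollowers = Integer.parseInt(stat.get("zk_synced_followers"));
        expectedObservers = learners - expectedFollowers;
      } else if ("follower".equals(state)) {
        followers++;
      } else if ("observer".equals(state)) {
        observers++;
      }
    }
    if (leaders != 1 || followers < expectedFollowers) {
      return Status.RED;                                                  // quorum members missing
    }
    return observers < expectedObservers ? Status.YELLOW : Status.GREEN;  // degraded but functional
  }
}
{code}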

Happy to create a PR, however I don't have a lot of free time at home at the 
moment, so it may take a week or two.

 

Additional info:

See below for example mntr output for the Leader/Follower/Observer roles, 
noting the Leader's zk_followers and zk_synced_followers values, and the values 
of zk_server_state.

Leader:
{code:java}
[root@master1 ~]# echo mntr | nc master3 12181
zk_version 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 
20:18 GMT
zk_avg_latency 0
zk_max_latency 2
zk_min_latency 0
zk_packets_received 97
zk_packets_sent 96
zk_num_alive_connections 2
zk_outstanding_requests 0
zk_server_state leader
zk_znode_count 92
zk_watch_count 7
zk_ephemerals_count 9
zk_approximate_data_size 236333
zk_open_file_descriptor_count 64
zk_max_file_descriptor_count 4096
zk_followers 4
zk_synced_followers 2
zk_pending_syncs 0
zk_last_proposal_size -1
zk_max_proposal_size -1
zk_min_proposal_size -1
{code}
Follower:
{code:java}
[root@master1 ~]# echo mntr | nc master2 12181
zk_version  3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
zk_avg_latency  0
zk_max_latency  6
zk_min_latency  0
zk_packets_received 97
zk_packets_sent 96
zk_num_alive_connections  2
zk_outstanding_requests 0
zk_server_state follower
zk_znode_count  92
zk_watch_count  7
zk_ephemerals_count 9
zk_approximate_data_size  236333
zk_open_file_descriptor_count 60
zk_max_file_descriptor_count  4096
{code}
Observer:
{code:java}
[root@master1 ~]# echo mntr | nc slave1 12181
zk_version  3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
zk_avg_latency  0
zk_max_latency  8
zk_min_latency  0
zk_packets_received 174
zk_packets_sent 173
zk_num_alive_connections  2
zk_outstanding_requests 0
zk_server_state observer
zk_znode_count  92
zk_watch_count  7
zk_ephemerals_count 9
zk_approximate_data_size  236333
zk_open_file_descriptor_count 59
zk_max_file_descriptor_count  4096
{code}






[jira] [Commented] (SOLR-13899) zkstatus page incorrectly reports zookeeper in error when Zookeeper observers are present

2019-11-06 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968388#comment-16968388
 ] 

Erick Erickson commented on SOLR-13899:
---

Thanks for raising this, and _especially_ for providing such detailed analysis! 
I won't be able to get to this either, but thought a "good job" was in order ;)

> zkstatus page incorrectly reports zookeeper in error when Zookeeper observers 
> are present
> -
>
> Key: SOLR-13899
> URL: https://issues.apache.org/jira/browse/SOLR-13899
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.3.0
>Reporter: Salvatore
>Priority: Trivial
>  Labels: easyfix
> Attachments: zkstatus.png
>
>
> When a zookeeper ensemble has 'observers', the zkstatus page incorrectly says 
> Zookeeper status is in error (See attachment.)
> This is because the 
> [ZookeeperStatusHandler|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java]
>  does not account for the 
> '[observer|https://zookeeper.apache.org/doc/current/zookeeperObservers.html]' 
> role whatsoever.
> This should be an easy fix - I see there being two options;
> 1. Treat observers as followers by changing 
> [L112|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java#L112]
>  to
> {code:java}
> if ("follower".equals(state) || "observer".equals(state)) {
> {code}
>  
>  2. Ignore observers from the required follower count by changing 
> [L116|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java#L116]
>  to
> {code:java}
>   reportedFollowers = 
> Integer.parseInt(String.valueOf(stat.get("zk_synced_followers")));
> {code}
> Option 1 will make the zkstatus page show error when an observer is down.
>  Option 2 will not make the zkstatus page show error when an observer is down.
> *Ideally*, additional logic to account for observers should be added, and 
> show a STATUS_YELLOW when any observers are down (but followers are all up), 
> as this means the ensemble is only in a degraded, but functional state.
> Happy to create a PR, however I don't have a lot of free time at home at the 
> moment, so it may take a week or two.
>  
> Additional info:
> See below for example mntr output for the Leader/Follower/Observer roles, 
> noting the Leader's zk_followers and zk_synced_followers values, and the 
> values of zk_server_state.
> Leader:
> {code:java}
> [root@master1 ~]# echo mntr | nc master3 12181
> zk_version 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 
> 10/08/2019 20:18 GMT
> zk_avg_latency 0
> zk_max_latency 2
> zk_min_latency 0
> zk_packets_received 97
> zk_packets_sent 96
> zk_num_alive_connections 2
> zk_outstanding_requests 0
> zk_server_state leader
> zk_znode_count 92
> zk_watch_count 7
> zk_ephemerals_count 9
> zk_approximate_data_size 236333
> zk_open_file_descriptor_count 64
> zk_max_file_descriptor_count 4096
> zk_followers 4
> zk_synced_followers 2
> zk_pending_syncs 0
> zk_last_proposal_size -1
> zk_max_proposal_size -1
> zk_min_proposal_size -1
> {code}
> Follower:
> {code:java}
> [root@master1 ~]# echo mntr | nc master2 12181
> zk_version3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 
> 10/08/2019 20:18 GMT
> zk_avg_latency0
> zk_max_latency6
> zk_min_latency0
> zk_packets_received   97
> zk_packets_sent   96
> zk_num_alive_connections  2
> zk_outstanding_requests   0
> zk_server_state   follower
> zk_znode_count92
> zk_watch_count7
> zk_ephemerals_count   9
> zk_approximate_data_size  236333
> zk_open_file_descriptor_count 60
> zk_max_file_descriptor_count  4096
> {code}
> Observer:
> {code:java}
> [root@master1 ~]# echo mntr | nc slave1 12181
> zk_version3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 
> 10/08/2019 20:18 GMT
> zk_avg_latency0
> zk_max_latency8
> zk_min_latency0
> zk_packets_received   174
> zk_packets_sent   173
> zk_num_alive_connections  2
> zk_outstanding_requests   0
> zk_server_state   observer
> zk_znode_count92
> zk_watch_count7
> zk_ephemerals_count   9
> zk_approximate_data_size  236333
> zk_open_file_descriptor_count 59
> zk_max_file_descriptor_count  4096
> {code}




[jira] [Created] (SOLR-13900) Permissions deleting works wrong

2019-11-06 Thread Yuliia Sydoruk (Jira)
Yuliia Sydoruk created SOLR-13900:
-

 Summary: Permissions deleting works wrong
 Key: SOLR-13900
 URL: https://issues.apache.org/jira/browse/SOLR-13900
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Authorization, security
Reporter: Yuliia Sydoruk


Permission indexes in the security.json file do not match the indexes used while 
deleting.

The line

{{(141) setIndex(p);}}

in 
[solr/security/AutorizationEditOperation.java|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/security/AutorizationEditOperation.java]
 renumbers the indexes before the deletion, which leads to wrong behavior.

*USE CASE 1:*

There are 2 new permissions added to security.json (with indexes 13 and 14):
{code:java}

  { 
"role":"admin", 
"name":"schema-edit", 
"index":12},
  {
"collection":"",
"path":"/schema/*",
"role":"test-role",
"index":13},
  {
"path":"/admin/collections",
"params":{"collection":["testCollection"]},
"role":"test-role",
"index":14}

{code}
Step 1: remove the permission with index=13; result: permission is deleted 
correctly, security.json is next:
{code:java}

  { 
"role":"admin", 
"name":"schema-edit", 
"index":12,
  {
"path":"/admin/collections",
"params":{"collection":["testCollection"]},
"role":"test-role",
"index":14}

{code}
Step 2: try to remove the permission with index=14; result: "No such index: 14" 
error is returned.

*USE CASE 2:*

There are 3 new permissions added to security.json (with indexes 13, 14 and 15):
{code:json}

  { 
"role":"admin", 
"name":"schema-edit", 
"index":12},
  {
"collection":"",
"path":"/schema/*",
"role":"test-role",
"index":13},
  {
"path":"/admin/collections",
"params":{"collection":["testCollection"]},
"role":"test-role",
"index":14}, 
  { 
"path":"/admin/collections", 
"params":\{"collection":["anotherTestCollection"]}, 
"role":"test-role", 
"index":15}

{code}
Step 1: remove the permission with index=13; result: permission is deleted 
correctly, security.json becomes next:
{code:json}

   { 
"role":"admin", 
"name":"schema-edit", 
"index":12},
   {
"path":"/admin/collections", 
"params":{"collection":["testCollection"]}, 
"role":"test-role", "index":14}, 
   { 
"path":"/admin/collections", 
"params":{"collection":["anotherTestCollection"]}, 
"role":"test-role", 
"index":15}

{code}
 
 Step 2: try to remove the permission with index=14; result: permission with 
index 15 is deleted, which is *wrong*
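
A self-contained sketch of the effect described above (purely illustrative; it does 
not use the real AutorizationEditOperation code, just a list of permission maps keyed 
by "index"):

{code:java}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PermissionDeleteSketch {

  // Buggy order of operations: renumber first, then look up the requested index.
  static void renumberThenDelete(List<Map<String, Object>> permissions, int indexToDelete) {
    for (int i = 0; i < permissions.size(); i++) {
      permissions.get(i).put("index", 12 + i);     // 12 stays 12, 14 becomes 13, 15 becomes 14
    }
    permissions.removeIf(p -> ((Integer) p.get("index")) == indexToDelete);
  }

  public static void main(String[] args) {
    List<Map<String, Object>> perms = new ArrayList<>();
    for (int idx : new int[] {12, 14, 15}) {        // state right after permission 13 was removed
      Map<String, Object> p = new LinkedHashMap<>();
      p.put("index", idx);
      p.put("original", idx);                       // remember which permission this really was
      perms.add(p);
    }
    renumberThenDelete(perms, 14);                  // the caller asks to delete index 14 ...
    System.out.println(perms);                      // ... but the entry that was index 15 is gone
  }
}
{code}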






[jira] [Updated] (SOLR-13900) Permissions deleting works wrong

2019-11-06 Thread Yuliia Sydoruk (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuliia Sydoruk updated SOLR-13900:
--
Description: 
Permissions indexes in security.json file do not correspond to indexes while 
deleting.

The line 

{{(141) setIndex(p);}}

in 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/security/AutorizationEditOperation.java]
 makes indexes renumber before deleting and it leads to wrong behavior.

*USE CASE 1:*

There are 2 new permissions added to security.json (with indexes 13 and 14):
{code:java}

  { 
"role":"admin", 
"name":"schema-edit", 
"index":12},
  {
"collection":"",
"path":"/schema/*",
"role":"test-role",
"index":13},
  {
"path":"/admin/collections",
"params":{"collection":["testCollection"]},
"role":"test-role",
"index":14}

{code}
Step 1: remove the permission with index=13; result: permission is deleted 
correctly, security.json is next:
{code:java}

  { 
"role":"admin", 
"name":"schema-edit", 
"index":12,
  {
"path":"/admin/collections",
"params":{"collection":["testCollection"]},
"role":"test-role",
"index":14}

{code}
Step 2: try to remove the permission with index=14; result: "No such index: 14" 
error is returned.

*USE CASE 2:*

There are 3 new permissions added to security.json (with indexes 13, 14 and 15):
{code:json}

  { 
"role":"admin", 
"name":"schema-edit", 
"index":12},
  {
"collection":"",
"path":"/schema/*",
"role":"test-role",
"index":13},
  {
"path":"/admin/collections",
"params":{"collection":["testCollection"]},
"role":"test-role",
"index":14}, 
  { 
"path":"/admin/collections", 
"params":\{"collection":["anotherTestCollection"]}, 
"role":"test-role", 
"index":15}

{code}
Step 1: remove the permission with index=13; result: permission is deleted 
correctly, security.json becomes next:
{code:json}

   { 
"role":"admin", 
"name":"schema-edit", 
"index":12},
   {
"path":"/admin/collections", 
"params":{"collection":["testCollection"]}, 
"role":"test-role", "index":14}, 
   { 
"path":"/admin/collections", 
"params":{"collection":["anotherTestCollection"]}, 
"role":"test-role", 
"index":15}

{code}
 
 Step 2: try to remove the permission with index=14; result: permission with 
index 15 is deleted, which is *wrong*

  was:
Permissions indexes in security.json file do not correspond to indexes while 
deleting.

The line 

{{(141) setIndex(p);}}

in 
[solr/security/AutorizationEditOperation.java|[https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/security/AutorizationEditOperation.java]]
 makes indexes renumber before deleting and it leads to wrong behavior.

*USE CASE 1:*

There are 2 new permissions added to security.json (with indexes 13 and 14):
{code:java}

  { 
"role":"admin", 
"name":"schema-edit", 
"index":12},
  {
"collection":"",
"path":"/schema/*",
"role":"test-role",
"index":13},
  {
"path":"/admin/collections",
"params":{"collection":["testCollection"]},
"role":"test-role",
"index":14}

{code}
Step 1: remove the permission with index=13; result: permission is deleted 
correctly, security.json is next:
{code:java}

  { 
"role":"admin", 
"name":"schema-edit", 
"index":12,
  {
"path":"/admin/collections",
"params":{"collection":["testCollection"]},
"role":"test-role",
"index":14}

{code}
Step 2: try to remove the permission with index=14; result: "No such index: 14" 
error is returned.

*USE CASE 2:*

There are 3 new permissions added to security.json (with indexes 13, 14 and 15):
{code:json}

  { 
"role":"admin", 
"name":"schema-edit", 
"index":12},
  {
"collection":"",
"path":"/schema/*",
"role":"test-role",
"index":13},
  {
"path":"/admin/collections",
"params":{"collection":["testCollection"]},
"role":"test-role",
"index":14}, 
  { 
"path":"/admin/collections", 
"params":\{"collection":["anotherTestCollection"]}, 
"role":"test-role", 
"index":15}

{code}
Step 1: remove the permission with index=13; result: permission is deleted 
correctly, security.json becomes next:
{code:json}

   { 
"role":"admin", 
"name":"schema-edit", 
"index":12},
   {
"path":"/admin/collections", 
"params":{"collection":["testCollec

[GitHub] [lucene-solr] CaoManhDat merged pull request #995: SOLR-13844: Fixing tests related to ShardTerms recovery removal

2019-11-06 Thread GitBox
CaoManhDat merged pull request #995: SOLR-13844: Fixing tests related to 
ShardTerms recovery removal
URL: https://github.com/apache/lucene-solr/pull/995
 
 
   





[jira] [Commented] (SOLR-13844) Remove recovering shard term with corresponding core shard term.

2019-11-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968526#comment-16968526
 ] 

ASF subversion and git services commented on SOLR-13844:


Commit 5c7215fabfc6669422e764cbdb87718eab053964 in lucene-solr's branch 
refs/heads/master from Houston Putman
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5c7215f ]

SOLR-13844: Fixing tests related to ShardTerms recovery removal (#995)



> Remove recovering shard term with corresponding core shard term.
> 
>
> Key: SOLR-13844
> URL: https://issues.apache.org/jira/browse/SOLR-13844
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
>Reporter: Houston Putman
>Assignee: Cao Manh Dat
>Priority: Minor
> Fix For: master (9.0), 8.4
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently if a recovering replica (solr 7.3+) is deleted, the term for that 
> core in the shard's terms in ZK is removed. However the {{_recovering}} 
> term is not removed as well. This can create clutter and confusion in the 
> shard terms ZK node.






[jira] [Commented] (SOLR-13844) Remove recovering shard term with corresponding core shard term.

2019-11-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968527#comment-16968527
 ] 

ASF subversion and git services commented on SOLR-13844:


Commit f4eea9b2f5cfd7c6e0087e9bae155896d93e6e4b in lucene-solr's branch 
refs/heads/branch_8x from Houston Putman
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f4eea9b ]

SOLR-13844: Fixing tests related to ShardTerms recovery removal (#995)



> Remove recovering shard term with corresponding core shard term.
> 
>
> Key: SOLR-13844
> URL: https://issues.apache.org/jira/browse/SOLR-13844
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
>Reporter: Houston Putman
>Assignee: Cao Manh Dat
>Priority: Minor
> Fix For: master (9.0), 8.4
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently if a recovering replica (solr 7.3+) is deleted, the term for that 
> core in the shard's terms in ZK is removed. However the {{_recovering}} 
> term is not removed as well. This can create clutter and confusion in the 
> shard terms ZK node.






[jira] [Commented] (SOLR-13844) Remove recovering shard term with corresponding core shard term.

2019-11-06 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968532#comment-16968532
 ] 

Cao Manh Dat commented on SOLR-13844:
-

thanks [~houstonputman] and [~hossman]

> Remove recovering shard term with corresponding core shard term.
> 
>
> Key: SOLR-13844
> URL: https://issues.apache.org/jira/browse/SOLR-13844
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
>Reporter: Houston Putman
>Assignee: Cao Manh Dat
>Priority: Minor
> Fix For: master (9.0), 8.4
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently if a recovering replica (solr 7.3+) is deleted, the term for that 
> core in the shard's terms in ZK is removed. However the {{_recovering}} 
> term is not removed as well. This can create clutter and confusion in the 
> shard terms ZK node.






[jira] [Commented] (LUCENE-9038) Evaluate Caffeine for LruQueryCache

2019-11-06 Thread Ben Manes (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968535#comment-16968535
 ] 

Ben Manes commented on LUCENE-9038:
---

In retrospect, a separate {{QueryCache}} should be implemented. 
{{LruQueryCache}} declares in its contract that methods like {{onHit}}, 
{{onQueryEviction}}, etc. are executed under the global lock. This means 
implementations may rely on this exclusive read/write access to data 
structures, a requirement that cannot be supported efficiently. There are 
external usages of these hooks, such as in ElasticSearch, which would need to 
be reviewed. A deprecation and migration would be a safer approach for a 
transition.

> Evaluate Caffeine for LruQueryCache
> ---
>
> Key: LUCENE-9038
> URL: https://issues.apache.org/jira/browse/LUCENE-9038
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ben Manes
>Priority: Major
>
> [LRUQueryCache|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/LRUQueryCache.java]
>  appears to play a central role in Lucene's performance. There are many 
> issues discussing its performance, such as LUCENE-7235, LUCENE-7237, 
> LUCENE-8027, LUCENE-8213, and LUCENE-9002. It appears that the cache's 
> overhead can be just as much of a benefit as a liability, causing various 
> workarounds and complexity.
> When reviewing the discussions and code, the following issues are concerning:
> # The cache is guarded by a single lock for all reads and writes.
> # All computations are performed outside of any locking to avoid 
> penalizing other callers. This doesn't handle cache stampedes, meaning 
> that multiple threads may cache miss, compute the value, and try to store it. 
> That redundant work becomes expensive under load and can be mitigated with 
> per-key locks.
> # The cache queries the entry to see if it's even worth caching. At first 
> glance one assumes that is so that inexpensive entries don't bang on the lock 
> or thrash the LRU. However, this is also used to indicate data dependencies 
> for uncachable items (per JIRA), which perhaps shouldn't be invoking the 
> cache.
> # The cache lookup is skipped if the global lock is held and the value is 
> computed, but not stored. This means a busy lock reduces performance across 
> all usages and the cache's effectiveness degrades. This is not counted in the 
> miss rate, giving a false impression.
> # An attempt was made to perform computations asynchronously, due to their 
> heavy cost on tail latencies. That work was reverted due to test failures and 
> is being worked on.
> # An [in-progress change|https://github.com/apache/lucene-solr/pull/940] 
> tries to avoid LRU thrashing due to large, infrequently used items being 
> cached.
> # The cache is tightly intertwined with business logic, making it hard to 
> tease apart core algorithms and data structures from the usage scenarios.
> It seems that more and more items skip being cached because of concurrency 
> and hit rate performance, causing special case fixes based on knowledge of 
> the external code flows. Since the developers are experts on search, not 
> caching, it seems justified to evaluate if an off-the-shelf library would be 
> more helpful in terms of developer time, code complexity, and performance. 
> Solr has already introduced [Caffeine|https://github.com/ben-manes/caffeine] 
> in SOLR-8241 and SOLR-13817.
> The proposal is to replace the internals {{LruQueryCache}} so that external 
> usages are not affected in terms of the API. However, like in {{SolrCache}}, 
> a difference is that Caffeine only bounds by either the number of entries or 
> an accumulated size (e.g. bytes), but not both constraints. This likely is an 
> acceptable divergence in how the configuration is honored.
> cc [~ab], [~dsmiley]
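
For reference, a minimal sketch of the Caffeine API the proposal refers to, bounding 
by an accumulated weight through a weigher (the key and value types and the weights 
here are placeholders, not Lucene's actual cache entries):

{code:java}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class CaffeineBoundExample {
  public static void main(String[] args) {
    // Bound the cache by accumulated "bytes" rather than by entry count.
    Cache<String, byte[]> cache = Caffeine.newBuilder()
        .maximumWeight(64L * 1024 * 1024)                         // ~64 MB budget
        .weigher((String key, byte[] value) -> value.length)
        .recordStats()
        .build();

    cache.put("query:foo", new byte[1024]);
    byte[] cached = cache.get("query:foo", key -> new byte[0]);   // atomic compute-if-absent
    System.out.println("cached bytes: " + cached.length + ", stats: " + cache.stats());
  }
}
{code}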






[jira] [Updated] (LUCENE-9018) Separator for ConcatenateGraphFilterFactory

2019-11-06 Thread Stanislav Mikulchik (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanislav Mikulchik updated LUCENE-9018:

Attachment: LUCENE-9018.patch

> Separator for ConcatenateGraphFilterFactory
> ---
>
> Key: LUCENE-9018
> URL: https://issues.apache.org/jira/browse/LUCENE-9018
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Stanislav Mikulchik
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-9018.patch, LUCENE-9018.patch
>
>
> I would like to have an option to choose a separator to use for token 
> concatenation. Currently ConcatenateGraphFilterFactory can use only "\u001F" 
> symbol.






[jira] [Commented] (LUCENE-9018) Separator for ConcatenateGraphFilterFactory

2019-11-06 Thread Stanislav Mikulchik (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968549#comment-16968549
 ] 

Stanislav Mikulchik commented on LUCENE-9018:
-

Unfortunately there is no getCharacter method that tolerates an optional 
parameter, so I created my own.

Renamed separator to tokenSeparator.

Replaced preserveSep with Character tokenSeparator.

In the first try I didn't notice the usage of ConcatenateGraphFilter 
constructor in the other classes, so I fixed that too.

[^LUCENE-9018.patch]
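
If the patch is applied, usage could look roughly like the following (a sketch only: 
it assumes the factory is registered under the SPI name "concatenateGraph" and that 
the patch exposes the new tokenSeparator option as a factory argument):

{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.custom.CustomAnalyzer;

public class ConcatenateGraphSeparatorExample {
  public static void main(String[] args) throws Exception {
    // Build an analyzer whose output token is the whole input joined by '+' instead of "\u001F".
    Analyzer analyzer = CustomAnalyzer.builder()
        .withTokenizer("whitespace")
        .addTokenFilter("concatenateGraph", "tokenSeparator", "+")
        .build();
    analyzer.close();
  }
}
{code}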

> Separator for ConcatenateGraphFilterFactory
> ---
>
> Key: LUCENE-9018
> URL: https://issues.apache.org/jira/browse/LUCENE-9018
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Stanislav Mikulchik
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-9018.patch, LUCENE-9018.patch
>
>
> I would like to have an option to choose a separator to use for token 
> concatenation. Currently ConcatenateGraphFilterFactory can use only "\u001F" 
> symbol.






[jira] [Commented] (LUCENE-9020) Find a way to publish Solr RefGuide without checking into git

2019-11-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/LUCENE-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968641#comment-16968641
 ] 

Jan Høydahl commented on LUCENE-9020:
-

Uwe, what is the INFRA issue number regarding the process for publishing 
sub-sections of the site through svn instead of git/pelican?

> Find a way to publish Solr RefGuide without checking into git
> -
>
> Key: LUCENE-9020
> URL: https://issues.apache.org/jira/browse/LUCENE-9020
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Jan Høydahl
>Priority: Major
>
> Currently we check in all versions of RefGuide (hundreds of small html files) 
> into svn to publish as part of the site. With new site we should find a 
> smoother way to do this.






[jira] [Updated] (SOLR-13898) Non-atomic use of SolrCache get / put

2019-11-06 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki updated SOLR-13898:

Attachment: SOLR-13898.patch

> Non-atomic use of SolrCache get / put
> -
>
> Key: SOLR-13898
> URL: https://issues.apache.org/jira/browse/SOLR-13898
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.3
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Attachments: SOLR-13898.patch
>
>
> As pointed out by [~ben.manes] in SOLR-13817 Solr code base in many key 
> places uses a similar pattern of non-atomic get / put calls to SolrCache-s. 
> In multi-threaded environment this leads to cache misses and additional 
> unnecessary computations when competing threads both discover a missing 
> value, non-atomically compute it and update the cache.
> Some of these places are known performance bottlenecks where efficient 
> caching is especially important, such as {{SolrIndexSearcher}}, 
> {{SolrDocumentFetcher}}, {{UninvertedField}} and join queries .
> I propose to add {{SolrCache.computeIfAbsent(key, mappingFunction)}} that 
> will atomically retrieve existing values or compute and update the cache. 
> This will require also changing how the {{SolrCache.get(...)}} is used in 
> many components.
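
A generic illustration of the race described above and of the proposed atomic 
alternative (sketched against a plain ConcurrentMap rather than the SolrCache 
interface itself):

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

public class CacheRaceSketch {

  static final ConcurrentMap<String, Object> cache = new ConcurrentHashMap<>();

  // Non-atomic pattern: two threads can both miss, both compute, and both put.
  static Object getOrPutRacy(String key, Function<String, Object> mappingFunction) {
    Object value = cache.get(key);
    if (value == null) {
      value = mappingFunction.apply(key);   // duplicated work under contention
      cache.put(key, value);                // last writer wins
    }
    return value;
  }

  // Atomic pattern the issue proposes exposing as SolrCache.computeIfAbsent(key, mappingFunction).
  static Object getOrComputeAtomic(String key, Function<String, Object> mappingFunction) {
    return cache.computeIfAbsent(key, mappingFunction);
  }
}
{code}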






[jira] [Commented] (SOLR-13898) Non-atomic use of SolrCache get / put

2019-11-06 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968661#comment-16968661
 ] 

Andrzej Bialecki commented on SOLR-13898:
-

Initial patch:
* I added {{SolrCache.computeIfAbsent}} method and implemented it in each 
subclass.
* {{UninvertedField}} and {{SolrDocumentFetcher}} (plus a few other minor use 
cases) have been changed to use {{computeIfAbsent}}. However, the way caching 
is used in {{SolrIndexSearcher}} was too scary for me to touch ... so this 
still needs to be modified.
* I'd like to add some performance tests to see how the hit ratio is affected 
by concurrent initialization when using get / put vs. computeIfAbsent.

All existing tests are passing.

> Non-atomic use of SolrCache get / put
> -
>
> Key: SOLR-13898
> URL: https://issues.apache.org/jira/browse/SOLR-13898
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.3
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Attachments: SOLR-13898.patch
>
>
> As pointed out by [~ben.manes] in SOLR-13817 Solr code base in many key 
> places uses a similar pattern of non-atomic get / put calls to SolrCache-s. 
> In multi-threaded environment this leads to cache misses and additional 
> unnecessary computations when competing threads both discover a missing 
> value, non-atomically compute it and update the cache.
> Some of these places are known performance bottlenecks where efficient 
> caching is especially important, such as {{SolrIndexSearcher}}, 
> {{SolrDocumentFetcher}}, {{UninvertedField}} and join queries .
> I propose to add {{SolrCache.computeIfAbsent(key, mappingFunction)}} that 
> will atomically retrieve existing values or compute and update the cache. 
> This will require also changing how the {{SolrCache.get(...)}} is used in 
> many components.






[jira] [Updated] (SOLR-13898) Non-atomic use of SolrCache get / put

2019-11-06 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki updated SOLR-13898:

Fix Version/s: 8.4

> Non-atomic use of SolrCache get / put
> -
>
> Key: SOLR-13898
> URL: https://issues.apache.org/jira/browse/SOLR-13898
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.3
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.4
>
> Attachments: SOLR-13898.patch
>
>
> As pointed out by [~ben.manes] in SOLR-13817 Solr code base in many key 
> places uses a similar pattern of non-atomic get / put calls to SolrCache-s. 
> In multi-threaded environment this leads to cache misses and additional 
> unnecessary computations when competing threads both discover a missing 
> value, non-atomically compute it and update the cache.
> Some of these places are known performance bottlenecks where efficient 
> caching is especially important, such as {{SolrIndexSearcher}}, 
> {{SolrDocumentFetcher}}, {{UninvertedField}} and join queries .
> I propose to add {{SolrCache.computeIfAbsent(key, mappingFunction)}} that 
> will atomically retrieve existing values or compute and update the cache. 
> This will require also changing how the {{SolrCache.get(...)}} is used in 
> many components.






[jira] [Commented] (SOLR-13899) zkstatus page incorrectly reports zookeeper in error when Zookeeper observers are present

2019-11-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968669#comment-16968669
 ] 

Jan Høydahl commented on SOLR-13899:


Appreciate your Jira report (as the one who built the original zk status page). 
There is no big hurry to fix this, as the UI still works and it is just wrong to 
call it an error. So I'd appreciate it if you would attempt a fix and come back 
with a Pull Request when you have time. Of your two options I prefer option 1, 
perhaps as a first step. Then of course it would be nice to handle observers 
specifically, as you suggest.

> zkstatus page incorrectly reports zookeeper in error when Zookeeper observers 
> are present
> -
>
> Key: SOLR-13899
> URL: https://issues.apache.org/jira/browse/SOLR-13899
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.3.0
>Reporter: Salvatore
>Priority: Trivial
>  Labels: easyfix
> Attachments: zkstatus.png
>
>
> When a zookeeper ensemble has 'observers', the zkstatus page incorrectly says 
> Zookeeper status is in error (See attachment.)
> This is because the 
> [ZookeeperStatusHandler|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java]
>  does not account for the 
> '[observer|https://zookeeper.apache.org/doc/current/zookeeperObservers.html]' 
> role whatsoever.
> This should be an easy fix - I see there being two options;
> 1. Treat observers as followers by changing 
> [L112|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java#L112]
>  to
> {code:java}
> if ("follower".equals(state) || "observer".equals(state)) {
> {code}
>  
>  2. Ignore observers from the required follower count by changing 
> [L116|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java#L116]
>  to
> {code:java}
>   reportedFollowers = 
> Integer.parseInt(String.valueOf(stat.get("zk_synced_followers")));
> {code}
> Option 1 will make the zkstatus page show error when an observer is down.
>  Option 2 will not make the zkstatus page show error when an observer is down.
> *Ideally*, additional logic to account for observers should be added, and 
> show a STATUS_YELLOW when any observers are down (but followers are all up), 
> as this means the ensemble is only in a degraded, but functional state.
> Happy to create a PR, however I don't have a lot of free time at home at the 
> moment, so it may take a week or two.
>  
> Additional info:
> See below for example mntr output for the Leader/Follower/Observer roles, 
> noting the Leader's zk_followers and zk_synced_followers values, and the 
> values of zk_server_state.
> Leader:
> {code:java}
> [root@master1 ~]# echo mntr | nc master3 12181
> zk_version 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 
> 10/08/2019 20:18 GMT
> zk_avg_latency 0
> zk_max_latency 2
> zk_min_latency 0
> zk_packets_received 97
> zk_packets_sent 96
> zk_num_alive_connections 2
> zk_outstanding_requests 0
> zk_server_state leader
> zk_znode_count 92
> zk_watch_count 7
> zk_ephemerals_count 9
> zk_approximate_data_size 236333
> zk_open_file_descriptor_count 64
> zk_max_file_descriptor_count 4096
> zk_followers 4
> zk_synced_followers 2
> zk_pending_syncs 0
> zk_last_proposal_size -1
> zk_max_proposal_size -1
> zk_min_proposal_size -1
> {code}
> Follower:
> {code:java}
> [root@master1 ~]# echo mntr | nc master2 12181
> zk_version3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 
> 10/08/2019 20:18 GMT
> zk_avg_latency0
> zk_max_latency6
> zk_min_latency0
> zk_packets_received   97
> zk_packets_sent   96
> zk_num_alive_connections  2
> zk_outstanding_requests   0
> zk_server_state   follower
> zk_znode_count92
> zk_watch_count7
> zk_ephemerals_count   9
> zk_approximate_data_size  236333
> zk_open_file_descriptor_count 60
> zk_max_file_descriptor_count  4096
> {code}
> Observer:
> {code:java}
> [root@master1 ~]# echo mntr | nc slave1 12181
> zk_version3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 
> 10/08/2019 20:18 GMT
> zk_avg_latency0
> zk_max_latency8
> zk_min_latency0
> zk_packets_received   174
> zk_packets_sent   173
> zk_num_alive_connections  2
> zk_outstanding_requests   0
> zk_server_state   observer
> zk_znode_count92
> zk_watch_count7
> zk_ephemerals_count   9
> zk_approximate_data_size  236333
> zk_open_file_descriptor_count 59
> zk_max_file_descriptor_count  4096
> {code}

[jira] [Commented] (SOLR-13900) Permissions deleting works wrong

2019-11-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968673#comment-16968673
 ] 

Jan Høydahl commented on SOLR-13900:


Thanks for your report. I suspect you use the authorization REST API to edit 
permissions.

Do you think you are able to propose a fix in a Pull Request?

> Permissions deleting works wrong
> 
>
> Key: SOLR-13900
> URL: https://issues.apache.org/jira/browse/SOLR-13900
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authorization, security
>Reporter: Yuliia Sydoruk
>Priority: Major
>
> Permissions indexes in security.json file do not correspond to indexes while 
> deleting.
> The line 
> {{(141) setIndex(p);}}
> in 
> [https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/security/AutorizationEditOperation.java]
>  makes indexes renumber before deleting and it leads to wrong behavior.
> *USE CASE 1:*
> There are 2 new permissions added to security.json (with indexes 13 and 14):
> {code:java}
> 
>   { 
> "role":"admin", 
> "name":"schema-edit", 
> "index":12},
>   {
> "collection":"",
> "path":"/schema/*",
> "role":"test-role",
> "index":13},
>   {
> "path":"/admin/collections",
> "params":{"collection":["testCollection"]},
> "role":"test-role",
> "index":14}
> 
> {code}
> Step 1: remove the permission with index=13; result: permission is deleted 
> correctly, security.json is next:
> {code:java}
> 
>   { 
> "role":"admin", 
> "name":"schema-edit", 
> "index":12,
>   {
> "path":"/admin/collections",
> "params":{"collection":["testCollection"]},
> "role":"test-role",
> "index":14}
> 
> {code}
> Step 2: try to remove the permission with index=14; result: "No such index: 
> 14" error is returned.
> *USE CASE 2:*
> There are 3 new permissions added to security.json (with indexes 13, 14 and 
> 15):
> {code:json}
> 
>   { 
> "role":"admin", 
> "name":"schema-edit", 
> "index":12},
>   {
> "collection":"",
> "path":"/schema/*",
> "role":"test-role",
> "index":13},
>   {
> "path":"/admin/collections",
> "params":{"collection":["testCollection"]},
> "role":"test-role",
> "index":14}, 
>   { 
> "path":"/admin/collections", 
> "params":\{"collection":["anotherTestCollection"]}, 
> "role":"test-role", 
> "index":15}
> 
> {code}
> Step 1: remove the permission with index=13; result: permission is deleted 
> correctly, security.json becomes next:
> {code:json}
> 
>{ 
> "role":"admin", 
> "name":"schema-edit", 
> "index":12},
>{
> "path":"/admin/collections", 
> "params":{"collection":["testCollection"]}, 
> "role":"test-role", "index":14}, 
>{ 
> "path":"/admin/collections", 
> "params":{"collection":["anotherTestCollection"]}, 
> "role":"test-role", 
> "index":15}
> 
> {code}
>  
>  Step 2: try to remove the permission with index=14; result: permission with 
> index 15 is deleted, which is *wrong*






[jira] [Commented] (SOLR-13898) Non-atomic use of SolrCache get / put

2019-11-06 Thread Ben Manes (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968676#comment-16968676
 ] 

Ben Manes commented on SOLR-13898:
--

FYI,
{{ConcurrentHashMap#computeIfAbsent}} is pessimistic and locks more 
aggressively than you might expect. In Java 8 it will always lock, even if 
present. In Java 9 it will perform a partial prescreen - if the first entry in 
the hashbin is the desired entry then return, else lock and scan the bucket. 
Caffeine always does a full prescreen, which means a present entry is always 
retrieved lock-free. When 
[talking|http://jsr166-concurrency.10961.n7.nabble.com/Re-ConcurrentHashMap-computeIfAbsent-td11687.html]
 to Doug Lea in the Java 8 time frame, he adopted a pessimistic stance because 
biased locking and safepoints at 32+ cores favor that approach.

You may want to offer a similar optimization in your non-Caffeine 
implementations to perform a `get(key)` and fall back to {{computeIfAbsent}}. I 
haven't benchmarked on Java 9, but you can see the results in Java 8 were 
[dramatic|https://github.com/ben-manes/caffeine/wiki/Benchmarks#compute].
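
As a small sketch of that suggestion (a generic helper, not existing Solr code): do 
a lock-free get first and only fall back to computeIfAbsent on a miss.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class PrescreenComputeIfAbsent {

  // Lock-free fast path for present entries; computeIfAbsent only runs on a miss.
  static <K, V> V getOrCompute(ConcurrentHashMap<K, V> map, K key, Function<K, V> mappingFunction) {
    V value = map.get(key);
    return (value != null) ? value : map.computeIfAbsent(key, mappingFunction);
  }

  public static void main(String[] args) {
    ConcurrentHashMap<String, Integer> lengths = new ConcurrentHashMap<>();
    System.out.println(getOrCompute(lengths, "solr", String::length));   // computed once
    System.out.println(getOrCompute(lengths, "solr", String::length));   // served by the plain get
  }
}
{code}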

> Non-atomic use of SolrCache get / put
> -
>
> Key: SOLR-13898
> URL: https://issues.apache.org/jira/browse/SOLR-13898
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.3
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.4
>
> Attachments: SOLR-13898.patch
>
>
> As pointed out by [~ben.manes] in SOLR-13817 Solr code base in many key 
> places uses a similar pattern of non-atomic get / put calls to SolrCache-s. 
> In multi-threaded environment this leads to cache misses and additional 
> unnecessary computations when competing threads both discover a missing 
> value, non-atomically compute it and update the cache.
> Some of these places are known performance bottlenecks where efficient 
> caching is especially important, such as {{SolrIndexSearcher}}, 
> {{SolrDocumentFetcher}}, {{UninvertedField}} and join queries .
> I propose to add {{SolrCache.computeIfAbsent(key, mappingFunction)}} that 
> will atomically retrieve existing values or compute and update the cache. 
> This will require also changing how the {{SolrCache.get(...)}} is used in 
> many components.






[jira] [Commented] (SOLR-13894) Solr 8.3 streaming expreessions do not return all fields (select)

2019-11-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968715#comment-16968715
 ] 

Jörn Franke commented on SOLR-13894:


This is also reproducible in Solr 8.3.0 server. Just go to the admin UI, any 
collection and execute the following on the streaming handler:


select(search(testcollection,q=“test”,df=“Default”,defType=“edismax”,fl=“id”, qt=“/export”, sort=“id asc”),id,if(eq(1,1),Y,N) as found)

In 8.3 it returns only the id field => INCORRECT
In 8.2 it returns id,found field => CORRECT

Since found is generated by select (and not coming from the collection) there 
must be an issue with select.

Debug logs do not show any error and the expression is correctly received by 
Solr.

> Solr 8.3 streaming expreessions do not return all fields (select)
> -
>
> Key: SOLR-13894
> URL: https://issues.apache.org/jira/browse/SOLR-13894
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud, streaming expressions
>Affects Versions: 8.3.0
>Reporter: Jörn Franke
>Priority: Major
>
> I use streaming expressions, e.g.
> sortsSelect(search(...),id,if(eq(1,1),Y,N) as found), by=“field A asc”)
> (Using export handler, sort is not really mandatory, I will remove it later 
> anyway)
> This works perfectly fine if I use Solr 8.2.0 (server + client). It returns 
> Tuples in the form \{ “id”,”12345”, “found”:”Y”}
> However, if I use Solr 8.2.0 as server and Solr 8.3.0 as client then the above 
> statement only returns the id field, but not the "found" field.
> Questions:
> 1) is this expected behavior, ie Solr client 8.3.0 is in this case not 
> compatible with Solr 8.2.0 and server upgrade to Solr 8.3.0 will fix this?
> 2) has the syntax for the above expression changed? If so how?
> 3) is this not expected behavior and I should create a Jira for it?






[jira] [Comment Edited] (SOLR-13894) Solr 8.3 streaming expreessions do not return all fields (select)

2019-11-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968715#comment-16968715
 ] 

Jörn Franke edited comment on SOLR-13894 at 11/6/19 9:14 PM:
-

This is also reproducible in Solr 8.3.0 server. Just go to the admin UI, any 
collection and execute the following on the streaming handler:

select(search(testcollection,q="test",df="Default",defType="edismax",fl="id",
 qt="/export", sort="id asc"),id,if(eq(1,1),Y,N) as found)

In 8.3 it returns only the id field => INCORRECT
In 8.2 it returns the id and found fields => CORRECT

Since "found" is generated by select (and not coming from the 
collection) there must be an issue with select.

Debug logs do not show any error and the expression is correctly 
received by Solr.


was (Author: jornfranke):
This is also reproducible in Solr 8.3.0 server. Just go to the admin UI, any 
collection and execute the following on the streaming handler:


select(search(testcollection,q="test",df="Default",defType="edismax",fl="id",
 qt="/export", sort="id asc"),id,if(eq(1,1),Y,N) as found)

In 8.3 it returns only the id field => INCORRECT
In 8.2 it returns the id and found fields => CORRECT

Since found is generated by select (and not coming from the 
collection) there must be an issue with select.

Debug logs do not show any error and the expression is correctly 
received by Solr.

> Solr 8.3 streaming expressions do not return all fields (select)
> -
>
> Key: SOLR-13894
> URL: https://issues.apache.org/jira/browse/SOLR-13894
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud, streaming expressions
>Affects Versions: 8.3.0
>Reporter: Jörn Franke
>Priority: Major
>
> I use streaming expressions, e.g.
> sort(select(search(...),id,if(eq(1,1),Y,N) as found), by="field A asc")
> (Using the export handler; sort is not really mandatory, I will 
> remove it later anyway)
> This works perfectly fine if I use Solr 8.2.0 (server + 
> client). It returns Tuples in the form \{ "id":"12345", "found":"Y"}
> However, if I use Solr 8.2.0 as server and Solr 8.3.0 as 
> client then the above statement only returns the id field, but not the 
> "found" field.
> Questions:
> 1) Is this expected behavior, i.e. Solr client 8.3.0 is in this 
> case not compatible with Solr 8.2.0, and a server upgrade to Solr 8.3.0 will fix 
> this?
> 2) Has the syntax for the above expression changed? If so, 
> how?
> 3) Is this not expected behavior and I should create a Jira 
> for it?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9037) ArrayIndexOutOfBoundsException due to repeated IOException during indexing

2019-11-06 Thread Ilan Ginzburg (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilan Ginzburg updated LUCENE-9037:
--
Attachment: TestIndexWriterTermsHashOverflow.java

> ArrayIndexOutOfBoundsException due to repeated IOException during indexing
> --
>
> Key: LUCENE-9037
> URL: https://issues.apache.org/jira/browse/LUCENE-9037
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.1
>Reporter: Ilan Ginzburg
>Priority: Minor
> Attachments: TestIndexWriterTermsHashOverflow.java
>
>
> There is a limit to the number of tokens that can be held in memory by Lucene 
> when docs are indexed using DocumentsWriter; once it is exceeded, bad things happen. The 
> limit can be reached by submitting a really large document, by submitting a 
> large number of documents without doing a commit (see LUCENE-8118) or by 
> repeatedly submitting documents that fail to get indexed in some specific 
> ways, leading to Lucene not cleaning up the in-memory data structures that 
> eventually overflow.
> The overflow is due to a 32 bit (signed) integer wrapping around to negative 
> territory, which then causes an ArrayIndexOutOfBoundsException. 
> The failure path that we are reliably hitting is due to an IOException during 
> doc tokenization. A tokenizer implementing TokenStream throws an exception 
> from incrementToken() which causes indexing of that doc to fail. 
> The IOException bubbles back up to DocumentsWriter.updateDocument() (or 
> DocumentsWriter.updateDocuments() in some other cases) where it is not 
> treated as an AbortingException, so it does not cause a reset of the 
> DocumentsWriterPerThread. On repeated failures (without any successful 
> indexing in between), if the upper layer (client via Solr) resubmits the doc 
> that fails again, DocumentsWriterPerThread will eventually cause 
> TermsHashPerField data structures to grow and overflow, leading to an 
> exception stack similar to the one in LUCENE-8118 (below stack trace copied 
> from a test run repro on 7.1):
> java.lang.ArrayIndexOutOfBoundsException: -65536
>  at __randomizedtesting.SeedInfo.seed([394FAB2B91B1D90A:C86FB3F3CE001AA8]:0) 
> at 
> org.apache.lucene.index.TermsHashPerField.writeByte(TermsHashPerField.java:198)
>  at 
> org.apache.lucene.index.TermsHashPerField.writeVInt(TermsHashPerField.java:221)
>  at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.writeProx(FreqProxTermsWriterPerField.java:80)
>  at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.addTerm(FreqProxTermsWriterPerField.java:171)
>  at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:185) 
> at 
> org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:792)
>  at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:430)
>  at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:392)
>  at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:239)
>  at 
> org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:481)
>  at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1717) 
> at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1462)
> Using tokens composed only of lowercase letters, it takes less than 
> 130,000,000 different tokens (the shortest ones) to overflow 
> TermsHashPerField.
> Using a single document (composed of the 20,000 shortest lowercase tokens) 
> submitted repeatedly for indexing requires 6352 submissions all failing with 
> an IOException on incrementToken() to trigger the 
> ArrayIndexOutOfBoundsException.
> A proposed fix is to treat an IOException in DocumentsWriter.updateDocument() and 
> DocumentsWriter.updateDocuments() the same way we treat an 
> AbortingException.
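
To make the failure path above concrete, here is a minimal, hypothetical sketch of an analyzer whose tokenizer throws from incrementToken(); it is not the attached test, and the class names are made up, but indexing documents through something like this is the kind of repeated failure the description refers to:

{code:java}
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Tokenizer;

/**
 * Hypothetical tokenizer that always fails: incrementToken() throws an
 * IOException, so the document fails to index without the
 * DocumentsWriterPerThread being reset.
 */
final class FailingTokenizer extends Tokenizer {
  @Override
  public boolean incrementToken() throws IOException {
    // Simulate a transient analysis failure (e.g. a broken underlying Reader).
    throw new IOException("simulated tokenization failure");
  }
}

/** Analyzer wiring the failing tokenizer in, e.g. for an IndexWriter in a test. */
final class FailingAnalyzer extends Analyzer {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    return new TokenStreamComponents(new FailingTokenizer());
  }
}
{code}

Per the report, resubmitting documents through such an analyzer, with every attempt failing and no successful indexing in between, eventually overflows TermsHashPerField.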



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] murblanc opened a new pull request #998: LUCENE-9037: ArrayIndexOutOfBoundsException due to repeated IOExcepti…

2019-11-06 Thread GitBox
murblanc opened a new pull request #998: LUCENE-9037: 
ArrayIndexOutOfBoundsException due to repeated IOExcepti…
URL: https://github.com/apache/lucene-solr/pull/998
 
 
   LUCENE-9037: ArrayIndexOutOfBoundsException due to repeated IOException 
during indexing
   
   # Description
   
   This patch forces a reset of the in-memory indexing data structures when an 
IOException is thrown during tokenization.
   
   # Solution
   
   # Tests
   
   Attached to the Jira (see first comment) is a test that fails without the 
patch, but it fails in a different way with the patch... I wasn't able to 
come up with a realistic test that would pass with the patch (the test does pass in 
Solr 7.1 with a patch doing something equivalent, but the behavior in Solr 8 
changed).
   
   # Checklist
   
   - [ ] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [X] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [X] I am authorized to contribute this code to the ASF and have removed 
any code I do not have a license to distribute.
   - [ ] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [X] I have developed this patch against the `master` branch.
   - [ ] I have run `ant precommit` and the appropriate test suite.
   - [ ] I have added tests for my changes.
   - [ ] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9037) ArrayIndexOutOfBoundsException due to repeated IOException during indexing

2019-11-06 Thread Ilan Ginzburg (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968784#comment-16968784
 ] 

Ilan Ginzburg commented on LUCENE-9037:
---

[^TestIndexWriterTermsHashOverflow.java]

I'm trying to port my fix and test from 7.1 to master, but the code is 
different.

The test class (attached) works great. Seeing same issues as in 7.1 (i.e. able 
to reproduce the ArrayIndexOutOfBoundsException in about 2:30 minutes).

But in Solr 8 as opposed to 7.1, there is no exception catching in 
DocumentsWriter (AbortingException has vanished altogether). A call to 
DocumentsWriterPerThread.onAbortingException() is used to notify of an aborting 
exception (and DocumentsWriterPerThread.hasHitAbortingException() to later 
check if one was hit).

Effect of an exception causing abort (difference from 7.1) is that the 
IndexWriter is now closed (and with it the DocumentsWriterPerThread). So the 
tests in the attached class fail on stock Solr 8 with 
ArrayIndexOutOfBoundsException, and with the patch that treats an IOException 
as aborting 
(https://github.com/apache/lucene-solr/pull/998),
 they fail too, with "AlreadyClosedException: this IndexWriter is closed" (except 
testSingleLargeDocFails, which verifies a failure that happens with or without 
the patch).

Changing the test to reopen an IndexWriter after each failure is not an option 
(creates a new DocumentsWriterPerThread). So basically I have this test showing 
a failure in Solr 8 but I have no way of showing what my proposed fix fixes.

 

> ArrayIndexOutOfBoundsException due to repeated IOException during indexing
> --
>
> Key: LUCENE-9037
> URL: https://issues.apache.org/jira/browse/LUCENE-9037
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.1
>Reporter: Ilan Ginzburg
>Priority: Minor
> Attachments: TestIndexWriterTermsHashOverflow.java
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There is a limit to the number of tokens that can be held in memory by Lucene 
> when docs are indexed using DocumentsWriter; once it is exceeded, bad things happen. The 
> limit can be reached by submitting a really large document, by submitting a 
> large number of documents without doing a commit (see LUCENE-8118) or by 
> repeatedly submitting documents that fail to get indexed in some specific 
> ways, leading to Lucene not cleaning up the in-memory data structures that 
> eventually overflow.
> The overflow is due to a 32 bit (signed) integer wrapping around to negative 
> territory, which then causes an ArrayIndexOutOfBoundsException. 
> The failure path that we are reliably hitting is due to an IOException during 
> doc tokenization. A tokenizer implementing TokenStream throws an exception 
> from incrementToken() which causes indexing of that doc to fail. 
> The IOException bubbles back up to DocumentsWriter.updateDocument() (or 
> DocumentsWriter.updateDocuments() in some other cases) where it is not 
> treated as an AbortingException, so it does not cause a reset of the 
> DocumentsWriterPerThread. On repeated failures (without any successful 
> indexing in between), if the upper layer (client via Solr) resubmits the doc 
> that fails again, DocumentsWriterPerThread will eventually cause 
> TermsHashPerField data structures to grow and overflow, leading to an 
> exception stack similar to the one in LUCENE-8118 (below stack trace copied 
> from a test run repro on 7.1):
> java.lang.ArrayIndexOutOfBoundsException: -65536
>  at __randomizedtesting.SeedInfo.seed([394FAB2B91B1D90A:C86FB3F3CE001AA8]:0) 
> at 
> org.apache.lucene.index.TermsHashPerField.writeByte(TermsHashPerField.java:198)
>  at 
> org.apache.lucene.index.TermsHashPerField.writeVInt(TermsHashPerField.java:221)
>  at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.writeProx(FreqProxTermsWriterPerField.java:80)
>  at 
> org.apache.lucene.index.FreqProxTermsWriterPerField.addTerm(FreqProxTermsWriterPerField.java:171)
>  at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:185) 
> at 
> org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:792)
>  at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:430)
>  at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:392)
>  at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:239)
>  at 
> org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:481)
>  at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1717) 
> at org.apache.lucene.index.Ind

[jira] [Commented] (SOLR-13888) SolrCloud 2

2019-11-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968794#comment-16968794
 ] 

David Smiley commented on SOLR-13888:
-

I hope you can be unplugged and come back to us well rested and energized to 
tackle this important work. I’m confused about two things: there is no info to 
take, no code. Your detailed, clear messages don’t count :-). And is “Won’t Fix” 
really appropriate? Seems premature.

> SolrCloud 2
> ---
>
> Key: SOLR-13888
> URL: https://issues.apache.org/jira/browse/SOLR-13888
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: solrscreen.png
>
>
> As devs discuss dropping the SolrCloud name on the dev list, here is an issue 
> titled SolrCloud 2.
> A couple times now I've pulled on the sweater thread that is our broken 
> tests. It leads to one place - SolrCloud is sick and devs are adding spotty 
> code on top of it at a rate that will lead to the system falling in on 
> itself. As it is, it's a very slow, very inefficient, very unreliable, very 
> buggy system.
> This is not why I am here. This is the opposite of why I am here.
> So please, let's stop. We can't build on that thing as it is.
>  
> I need some time, I lost a lot of work at one point, the scope has expanded 
> since I realized how problematic some things really are, but I have an 
> alternative path that is not so duct tape and straw. As the building climbs, 
> that foundation is going to kill us all.
>  
> This is not about an architecture change - the architecture is fine. The 
> implementation is broken and getting worse.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13860) Enable back TestTlogReplica

2019-11-06 Thread Tomas Eduardo Fernandez Lobbe (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968823#comment-16968823
 ] 

Tomas Eduardo Fernandez Lobbe commented on SOLR-13860:
--

The behavior I'm seeing in the most recent Jenkins failure is:

The test kills jetty, validates some things, and then starts it back up. Before 
finishing, it waits until the replicas are all active (the killed jetty is one 
that holds a replica) and asserts that it can query all the replicas, 
expecting the right number of docs. What seems to be happening is:
 * ...
 * Jetty is restarted
 * Replicas recover/become active
 * Replicas respond to queries
 * Delete collection is issued (tearDown)
 * The overseer fails to talk with the node recently restarted (NoHttpResponse)

So, my question here is: why can't the Overseer talk with this node if the 
test can? Is the connection not reset?
The delete command actually succeeds (even if unloading the replica on the 
recently restarted node fails).
The next test tries to create a collection, but it fails for the same reason: 
the Overseer can't talk with the recently restarted node. This is what caused 
the next test to fail.
{noformat}
   [junit4]   2> 3203381 INFO  (qtp1449030325-48263) [n:127.0.0.1:58757_solr 
c:tlog_replica_test_kill_tlog_replica s:shard1 r:core_node3 
x:tlog_replica_test_kill_tlog_replica_shard1_replica_t1 ] o.a.s.c.S.Request 
[tlog_replica_test_kill_tlog_replica_shard1_replica_t1]  webapp=/solr 
path=/select params={q=*:*&wt=javabin&version=2} hits=2 status=0 QTime=0
   [junit4]   2> 3203383 INFO  (qtp1312503125-48169) [n:127.0.0.1:58760_solr 
c:tlog_replica_test_kill_tlog_replica s:shard1 r:core_node4 
x:tlog_replica_test_kill_tlog_replica_shard1_replica_t2 ] o.a.s.c.S.Request 
[tlog_replica_test_kill_tlog_replica_shard1_replica_t2]  webapp=/solr 
path=/select params={q=*:*&wt=javabin&version=2} hits=2 status=0 QTime=0
   [junit4]   2> 3203386 INFO  
(TEST-TestTlogReplica.testKillTlogReplica-seed#[355EF1911B9643DA]) [ ] 
o.a.s.c.TestTlogReplica tearDown deleting collection
   [junit4]   2> 3203386 INFO  
(TEST-TestTlogReplica.testKillTlogReplica-seed#[355EF1911B9643DA]) [ ] 
o.a.h.i.e.RetryExec I/O exception (java.net.SocketException) caught when 
processing request to {}->http://127.0.0.1:58757: Software caused connection 
abort: recv failed
   [junit4]   2> 3203386 INFO  
(TEST-TestTlogReplica.testKillTlogReplica-seed#[355EF1911B9643DA]) [ ] 
o.a.h.i.e.RetryExec Retrying request to {}->http://127.0.0.1:58757
   [junit4]   2> 3203387 INFO  (qtp1449030325-48259) [n:127.0.0.1:58757_solr
 ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :delete with params 
name=tlog_replica_test_kill_tlog_replica&action=DELETE&wt=javabin&version=2 and 
sendToOCPQueue=true
   [junit4]   2> 3203391 INFO  
(OverseerThreadFactory-15915-thread-2-processing-n:127.0.0.1:58760_solr) 
[n:127.0.0.1:58760_solr ] o.a.s.c.a.c.OverseerCollectionMessageHandler 
Executing Collection 
Cmd=action=UNLOAD&deleteInstanceDir=true&deleteDataDir=true&deleteMetricsHistory=true,
 asyncId=null
   [junit4]   2> 3203392 ERROR 
(OverseerThreadFactory-15915-thread-2-processing-n:127.0.0.1:58760_solr) 
[n:127.0.0.1:58760_solr ] o.a.s.c.a.c.OverseerCollectionMessageHandler 
Error from shard: http://127.0.0.1:58757/solr
   [junit4]   2>   => org.apache.solr.client.solrj.SolrServerException: 
IOException occurred when talking to server at: http://127.0.0.1:58757/solr
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:679)
   [junit4]   2> org.apache.solr.client.solrj.SolrServerException: IOException 
occurred when talking to server at: http://127.0.0.1:58757/solr
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:679)
 ~[java/:?]
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:265)
 ~[java/:?]
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
 ~[java/:?]
   [junit4]   2>at 
org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1290) ~[java/:?]
   [junit4]   2>at 
org.apache.solr.handler.component.HttpShardHandlerFactory$1.request(HttpShardHandlerFactory.java:176)
 ~[java/:?]
   [junit4]   2>at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:200)
 ~[java/:?]
   [junit4]   2>at 
java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
   [junit4]   2>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
   [junit4]   2>at 
java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
   [junit4]   2>at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutor

[jira] [Commented] (LUCENE-9031) UnsupportedOperationException on highlighting Interval Query

2019-11-06 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968840#comment-16968840
 ] 

Lucene/Solr QA commented on LUCENE-9031:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} highlighter in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} queries in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  3m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | lucene.search.uhighlight.TestUnifiedHighlighterMTQ |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-9031 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12985023/LUCENE-9031.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 5c7215fabfc |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
| unit | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/222/artifact/out/patch-unit-lucene_highlighter.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/222/testReport/ |
| modules | C: lucene/highlighter lucene/queries U: lucene |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/222/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> UnsupportedOperationException on highlighting Interval Query
> 
>
> Key: LUCENE-9031
> URL: https://issues.apache.org/jira/browse/LUCENE-9031
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queries
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.4
>
> Attachments: LUCENE-9031.patch, LUCENE-9031.patch, LUCENE-9031.patch, 
> LUCENE-9031.patch, LUCENE-9031.patch, LUCENE-9031.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When UnifiedHighlighter highlights Interval Query it encounters 
> UnsupportedOperationException. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] jgq2008303393 edited a comment on issue #940: LUCENE-9002: Query caching leads to absurdly slow queries

2019-11-06 Thread GitBox
jgq2008303393 edited a comment on issue #940: LUCENE-9002: Query caching leads 
to absurdly slow queries
URL: https://github.com/apache/lucene-solr/pull/940#issuecomment-549842236
 
 
   @jpountz I have resolved the CHANGES.txt conflict again. Please help to 
merge. Thanks very much : )


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] ctargett opened a new pull request #999: SOLR-13782: Remove PDF Ref Guide build

2019-11-06 Thread GitBox
ctargett opened a new pull request #999: SOLR-13782: Remove PDF Ref Guide build
URL: https://github.com/apache/lucene-solr/pull/999
 
 
   # Description
   
   The community recently decided to stop building the PDF in order to focus on 
the HTML and make it the "official" version of the Ref Guide. This PR pulls out 
all the PDF-related elements of the Ref Guide build.
   
   # Solution
   
   This PR makes several changes:
   
   - Removes the PDF-related build targets from build.xml
   - Removes PDF-specific validation rules in tools classes
   - Changes the Jenkins build script for the Ref Guide to only build the HTML 
version
   - Updates Ref Guide publication process docs with new steps
   - Removes link to "current PDF" from Ref Guide nav
   - Miscellaneous other cleanups to excise PDF as an available version.
   
   # Tests
   
   I've built the HTML version of the Ref Guide and also run precommit to be sure 
they pass.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [x] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [x] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [x] I am authorized to contribute this code to the ASF and have removed 
any code I do not have a license to distribute.
   - [x] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [x] I have developed this patch against the `master` branch.
   - [x] I have run `ant precommit` and the appropriate test suite.
   - [ ] I have added tests for my changes.
   - [x] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13782) Make HTML Ref Guide the primary release vehicle instead of PDF

2019-11-06 Thread Cassandra Targett (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968876#comment-16968876
 ] 

Cassandra Targett commented on SOLR-13782:
--

With Hoss' help there ended up being several changes here. From the pull 
request:

* Removes the PDF-related build targets from build.xml
* Removes PDF-specific validation rules in tools classes
* Changes the Jenkins build script for the Ref Guide to only build the HTML 
version
* Updates Ref Guide publication process docs with new steps
* Removes link to "current PDF" from Ref Guide nav
* Miscellaneous other cleanups to excise PDF as an available version.

If no one has any comments in the PR, I'll try to commit in the next day or two.

> Make HTML Ref Guide the primary release vehicle instead of PDF
> --
>
> Key: SOLR-13782
> URL: https://issues.apache.org/jira/browse/SOLR-13782
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 8.3
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As discussed in a recent mail thread [1], we have agreed that it is time for 
> us to stop treating the PDF version of the Ref Guide as the "official" 
> version and instead emphasize the HTML version as the official version.
> The arguments for/against this decision are in the linked thread, but for the 
> purpose of this issue there are a couple of things to do:
> - Modify the publication process docs (under 
> {{solr/solr-ref-guide/src/meta-docs}}
> - Announce to the solr-user list that this is happening
> A separate issue will be created to automate parts of the publication 
> process, since they require some discussion and possibly coordination with 
> Infra on the options there.
> [1] 
> https://lists.apache.org/thread.html/f517b3b74a0a33e5e6fa87e888459fc007decc49d27a4f49822ca2ee@%3Cdev.lucene.apache.org%3E



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13888) SolrCloud 2

2019-11-06 Thread Mark Robert Miller (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968916#comment-16968916
 ] 

Mark Robert Miller commented on SOLR-13888:
---

I don't know if I'll fix :) But I'd like to kill this issue briefly because I 
have to transition to real life.

I'll be handing my work off to some of my good committer friends that I work 
with, it's in 0 shape for them, good luck.

> SolrCloud 2
> ---
>
> Key: SOLR-13888
> URL: https://issues.apache.org/jira/browse/SOLR-13888
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: solrscreen.png
>
>
> As devs discuss dropping the SolrCloud name on the dev list, here is an issue 
> titled SolrCloud 2.
> A couple times now I've pulled on the sweater thread that is our broken 
> tests. It leads to one place - SolrCloud is sick and devs are adding spotty 
> code on top of it at a rate that will lead to the system falling in on 
> itself. As it is, it's a very slow, very inefficient, very unreliable, very 
> buggy system.
> This is not why I am here. This is the opposite of why I am here.
> So please, let's stop. We can't build on that thing as it is.
>  
> I need some time, I lost a lot of work at one point, the scope has expanded 
> since I realized how problematic some things really are, but I have an 
> alternative path that is not so duct tape and straw. As the building climbs, 
> that foundation is going to kill us all.
>  
> This is not about an architecture change - the architecture is fine. The 
> implementation is broken and getting worse.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8836) Optimize DocValues TermsDict to continue scanning from the last position when possible

2019-11-06 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968986#comment-16968986
 ] 

Mikhail Khludnev commented on LUCENE-8836:
--

I'm not sure if {{SortedSetDocValues.termsEnum()}} is relevant to this? 

> Optimize DocValues TermsDict to continue scanning from the last position when 
> possible
> --
>
> Key: LUCENE-8836
> URL: https://issues.apache.org/jira/browse/LUCENE-8836
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Bruno Roustant
>Priority: Major
>  Labels: docValues, optimization
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Lucene80DocValuesProducer.TermsDict is used to look up either a term or a 
> term ordinal.
> Currently it does not have the optimization the FSTEnum has: to be able to 
> continue a sequential scan from where the last lookup was in the IndexInput. 
> For sparse lookups (when searching only a few terms or ordinals) it is not an 
> issue. But for multiple lookups in a row this optimization could save 
> re-scanning all the terms from the block start (since they are delta encoded).
> This patch proposes the optimization.
> To estimate the gain, we ran 3 Lucene tests while counting the seeks and the 
> term reads in the IndexInput, with and without the optimization:
> TestLucene70DocValuesFormat - the optimization saves 24% seeks and 15% term 
> reads.
> TestDocValuesQueries - the optimization adds 0.7% seeks and 0.003% term reads.
> TestDocValuesRewriteMethod.testRegexps - the optimization saves 71% seeks and 
> 82% term reads.
> In some cases, when scanning many terms in lexicographical order, the 
> optimization saves a lot. In other cases, when only looking for a few sparse 
> terms, the optimization does not bring improvement, but does not penalize 
> either. It seems worth always having it.
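
To make the idea concrete, a small hypothetical sketch of the "continue from the last position" pattern described above; the interface and class names are invented for illustration and do not match the actual TermsDict internals:

{code:java}
import java.io.IOException;

/**
 * Illustration only: a sequential scanner over delta-encoded terms that
 * remembers where the previous lookup stopped, so a later lookup for a higher
 * ordinal in the same block can continue forward instead of re-decoding from
 * the block start.
 */
final class ScanningTermsDictSketch {

  interface BlockReader {
    /** Positions the underlying input at the start of the block containing ord. */
    void seekBlockStart(long ord) throws IOException;

    /** Decodes and returns the next term in the current block. */
    String decodeNextTerm() throws IOException;
  }

  private final BlockReader in;
  private long currentOrd = -1; // ordinal of the last decoded term, -1 if none
  private String currentTerm;

  ScanningTermsDictSketch(BlockReader in) {
    this.in = in;
  }

  /** Returns the term for targetOrd, scanning forward from the last position when possible. */
  String seekExact(long targetOrd, long blockStartOrd) throws IOException {
    if (currentOrd < blockStartOrd || currentOrd > targetOrd) {
      // Different block, or the target is behind the last position: restart from the block start.
      in.seekBlockStart(targetOrd);
      currentOrd = blockStartOrd - 1;
    }
    // Continue the sequential scan from wherever the previous lookup stopped.
    while (currentOrd < targetOrd) {
      currentTerm = in.decodeNextTerm();
      currentOrd++;
    }
    return currentTerm;
  }
}
{code}

The saved work is exactly the re-scan from the block start that the measurements above count as extra seeks and term reads.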



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9036) ExitableDirectoryReader to interrupt DocValues as well

2019-11-06 Thread Mikhail Khludnev (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-9036:
-
Status: Patch Available  (was: Open)

> ExitableDirectoryReader to interrupt DocValues as well
> --
>
> Key: LUCENE-9036
> URL: https://issues.apache.org/jira/browse/LUCENE-9036
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: LUCENE-9036.patch
>
>
> This allows making AnalyticsComponent and json.facet sensitive to the time 
> allowed. 
> Does it make sense? Is it enough to check on DV creation, i.e. per 
> field/segment, or is it worth checking every Nth doc? 
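
A hypothetical sketch of one way the per-doc check could look: wrap a NumericDocValues instance and sample a QueryTimeout every N visited docs. This is not the attached patch, the class name and sampling interval are made up, and a real implementation would throw ExitableDirectoryReader's ExitingReaderException rather than a plain RuntimeException:

{code:java}
import java.io.IOException;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.index.QueryTimeout;

/** Illustration only: time-limited iteration over doc values, checked every Nth doc. */
final class TimeoutCheckingNumericDocValues extends NumericDocValues {
  private static final int SAMPLE_INTERVAL = 1000; // check the clock every 1000 docs

  private final NumericDocValues in;
  private final QueryTimeout timeout;
  private int sinceLastCheck;

  TimeoutCheckingNumericDocValues(NumericDocValues in, QueryTimeout timeout) {
    this.in = in;
    this.timeout = timeout;
  }

  private void checkTimeoutSampled() {
    if (++sinceLastCheck >= SAMPLE_INTERVAL) {
      sinceLastCheck = 0;
      if (timeout.shouldExit()) {
        // Stand-in for ExitableDirectoryReader.ExitingReaderException.
        throw new RuntimeException("Time limit exceeded while iterating doc values");
      }
    }
  }

  @Override
  public long longValue() throws IOException {
    return in.longValue();
  }

  @Override
  public boolean advanceExact(int target) throws IOException {
    checkTimeoutSampled();
    return in.advanceExact(target);
  }

  @Override
  public int docID() {
    return in.docID();
  }

  @Override
  public int nextDoc() throws IOException {
    checkTimeoutSampled();
    return in.nextDoc();
  }

  @Override
  public int advance(int target) throws IOException {
    checkTimeoutSampled();
    return in.advance(target);
  }

  @Override
  public long cost() {
    return in.cost();
  }
}
{code}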



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9036) ExitableDirectoryReader to interrupt DocValues as well

2019-11-06 Thread Mikhail Khludnev (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-9036:
-
Attachment: LUCENE-9036.patch
Status: Open  (was: Open)

What do you think about [^LUCENE-9036.patch]? Tests are coming a little bit 
later.

> ExitableDirectoryReader to interrupt DocValues as well
> --
>
> Key: LUCENE-9036
> URL: https://issues.apache.org/jira/browse/LUCENE-9036
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: LUCENE-9036.patch
>
>
> This allows making AnalyticsComponent and json.facet sensitive to the time 
> allowed. 
> Does it make sense? Is it enough to check on DV creation, i.e. per 
> field/segment, or is it worth checking every Nth doc? 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9031) UnsupportedOperationException on highlighting Interval Query

2019-11-06 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969002#comment-16969002
 ] 

Mikhail Khludnev commented on LUCENE-9031:
--

I think I'll disable the MTQ Intervals test, commit it as-is unless [~romseygeek] 
provides more ideas, and address MTQ later; it won't be easy.

> UnsupportedOperationException on highlighting Interval Query
> 
>
> Key: LUCENE-9031
> URL: https://issues.apache.org/jira/browse/LUCENE-9031
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queries
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.4
>
> Attachments: LUCENE-9031.patch, LUCENE-9031.patch, LUCENE-9031.patch, 
> LUCENE-9031.patch, LUCENE-9031.patch, LUCENE-9031.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When UnifiedHighlighter highlights Interval Query it encounters 
> UnsupportedOperationException. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org