[jira] [Commented] (SOLR-13808) Query DSL should let to cache filter

2019-12-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005193#comment-17005193
 ] 

ASF subversion and git services commented on SOLR-13808:


Commit 3ae1a0b3bad84cdfaa3941b87a1a7fcad63a66d4 in lucene-solr's branch 
refs/heads/gradle-master from Mikhail Khludnev
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3ae1a0b ]

SOLR-13808: remove redundant @Repeat


> Query DSL should let to cache filter
> 
>
> Key: SOLR-13808
> URL: https://issues.apache.org/jira/browse/SOLR-13808
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.5
>
> Attachments: SOLR-13808.patch, SOLR-13808.patch
>
>
> Query DSL lets one express Lucene BQ's filter:
>  
> {code:java}
> { query: {bool: { filter: {term: {f:name,query:"foo bar"}}} }}
> {code}
> However, one may easily need to cache that filter in the filter cache. This 
> might rely on ExtensibleQuery and QParser: 
> {code:java}
> { query: {bool: { filter: {term: {f:name,query:"foo bar", cache:true}}} }}
> {code}
>  
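
A minimal usage sketch (not from the issue itself) of sending the proposed cache flag to Solr's JSON /query endpoint; the host, collection name and field below are placeholders:

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical sketch: POST the JSON Query DSL body, with the cache flag
// proposed in the issue, to a local Solr collection named "techproducts".
public class CachedFilterQueryExample {
  public static void main(String[] args) throws Exception {
    String body = "{\"query\": {\"bool\": {\"filter\": "
        + "{\"term\": {\"f\": \"name\", \"query\": \"foo bar\", \"cache\": true}}}}}";
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8983/solr/techproducts/query"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body());
  }
}
{code}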






[jira] [Commented] (SOLR-13808) Query DSL should let to cache filter

2019-12-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005192#comment-17005192
 ] 

ASF subversion and git services commented on SOLR-13808:


Commit 3f29fe0b804baf0a40af378441c82ee7c6b8ec19 in lucene-solr's branch 
refs/heads/gradle-master from Mikhail Khludnev
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3f29fe0 ]

SOLR-13808: caching {!bool filter=..} by default.








[jira] [Commented] (SOLR-14141) eliminate JKS keystore from solr SSL docs

2019-12-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005194#comment-17005194
 ] 

ASF subversion and git services commented on SOLR-14141:


Commit 1cb6e35058bd0d36b20eb44326c4cf7c79696391 in lucene-solr's branch 
refs/heads/gradle-master from Robert Muir
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1cb6e35 ]

SOLR-14141: eliminate JKS keystore from solr ssl docs.

Currently the documentation pretends to create a JKS keystore. It is
only actually a JKS keystore on Java 8: on Java 9+ it is a PKCS12
keystore with a .jks extension (because PKCS12 is the new Java default).
It works even though solr explicitly tells the JDK
(SOLR_SSL_KEY_STORE_TYPE=JKS) that it is JKS when it is in fact not, due
to how keystore backwards compatibility was implemented.

Fix docs to explicitly create a PKCS12 keystore with .p12 extension and
so on instead of a PKCS12 keystore masquerading as a JKS one. This
simplifies the SSL steps since the "conversion" step (which was doing
nothing) from .JKS -> .P12 can be removed.
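
As a side note (a sketch, not part of the commit), the JDK's default keystore type can be observed directly, and a PKCS12 keystore can be loaded explicitly regardless of its file extension; the file name and password below are placeholders:

{code:java}
import java.io.FileInputStream;
import java.security.KeyStore;

public class KeystoreTypeCheck {
  public static void main(String[] args) throws Exception {
    // Prints "pkcs12" on Java 9+ and "jks" on Java 8.
    System.out.println(KeyStore.getDefaultType());

    // Load a PKCS12 keystore explicitly; the file extension does not matter.
    KeyStore ks = KeyStore.getInstance("PKCS12");
    try (FileInputStream in = new FileInputStream("solr-ssl.keystore.p12")) {
      ks.load(in, "secret".toCharArray());
    }
    System.out.println("entries: " + ks.size());
  }
}
{code}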


> eliminate JKS keystore from solr SSL docs
> -
>
> Key: SOLR-14141
> URL: https://issues.apache.org/jira/browse/SOLR-14141
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Robert Muir
>Assignee: Robert Muir
>Priority: Major
> Fix For: 8.5
>
> Attachments: SOLR-14141.patch, SOLR-14141.patch
>
>
> On the "Enabling SSL" page: 
> https://lucene.apache.org/solr/guide/8_3/enabling-ssl.html#enabling-ssl
> The first step is currently to create a JKS keystore. The next step 
> immediately converts the JKS keystore into PKCS12, so that openssl can then 
> be used to extract key material in PEM format for use with curl.
> Now that PKCS12 is Java's default keystore format, why not omit step 1 
> entirely? What am I missing? PKCS12 is a more commonly 
> understood/standardized format.






[jira] [Created] (SOLR-14154) Return correct isolation level when retrieving it from the SQL Connection

2019-12-30 Thread Nick Vercammen (Jira)
Nick Vercammen created SOLR-14154:
-

 Summary: Return correct isolation level when retrieving it from 
the SQL Connection
 Key: SOLR-14154
 URL: https://issues.apache.org/jira/browse/SOLR-14154
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Parallel SQL
Affects Versions: 8.4
Reporter: Nick Vercammen
 Fix For: master (9.0)


When calling getTransactionIsolation() on the Sql.ConnectionImpl, an 
UnsupportedException is thrown. It would be better to return TRANSACTION_NONE 
so clients can determine for themselves that it is not supported, without 
receiving an exception.
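
A minimal sketch of the proposed behaviour, purely illustrative (this is not the actual Sql.ConnectionImpl source):

{code:java}
import java.sql.Connection;
import java.sql.SQLException;

// Illustrative sketch: report TRANSACTION_NONE instead of throwing, so JDBC
// clients can detect that transactions are unsupported without an exception.
class ConnectionImplSketch {
  public int getTransactionIsolation() throws SQLException {
    return Connection.TRANSACTION_NONE;
  }
}
{code}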






[GitHub] [lucene-solr] asfgit closed pull request #1105: LUCENE-9105: UniformSplit postings format detects corrupted index

2019-12-30 Thread GitBox
asfgit closed pull request #1105: LUCENE-9105: UniformSplit postings format 
detects corrupted index
URL: https://github.com/apache/lucene-solr/pull/1105
 
 
   





[jira] [Commented] (LUCENE-9105) UniformSplit postings format should detect corrupted index

2019-12-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005266#comment-17005266
 ] 

ASF subversion and git services commented on LUCENE-9105:
-

Commit bbb6e418e42ae518a74fc0f97360cd0666a78e80 in lucene-solr's branch 
refs/heads/master from Bruno Roustant
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=bbb6e41 ]

LUCENE-9105: UniformSplit postings format detects corrupted index and better 
handles IO exceptions.

Closes #1105


> UniformSplit postings format should detect corrupted index
> --
>
> Key: LUCENE-9105
> URL: https://issues.apache.org/jira/browse/LUCENE-9105
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Bruno Roustant
>Assignee: Bruno Roustant
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> BlockTree postings format has some checks when reading index metadata to 
> detect index corruption. UniformSplit should have the same. Additionally, 
> UniformSplit has assertions in BlockReader that should be runtime checks, to 
> also detect index corruption (this case has been encountered in a production 
> environment).
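
For illustration only (not the actual UniformSplit code), an assertion turned into a runtime corruption check might look like this; the helper name and the checked condition are hypothetical:

{code:java}
import org.apache.lucene.index.CorruptIndexException;

// Hypothetical helper: asserts are disabled in production, so a runtime check
// is needed to surface corruption as a CorruptIndexException.
final class BlockHeaderCheck {
  static void checkBlockSize(int blockSize, String resourceDescription)
      throws CorruptIndexException {
    // assert blockSize > 0;  // would be silently skipped with -da (the default)
    if (blockSize <= 0) {
      throw new CorruptIndexException("Illegal block size: " + blockSize, resourceDescription);
    }
  }
}
{code}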






[GitHub] [lucene-solr] janhoy merged pull request #1128: SOLR-14153 Word choice should be "starting", not "staring"

2019-12-30 Thread GitBox
janhoy merged pull request #1128: SOLR-14153 Word choice should be "starting", 
not "staring"
URL: https://github.com/apache/lucene-solr/pull/1128
 
 
   





[GitHub] [lucene-solr] janhoy commented on issue #1128: SOLR-14153 Word choice should be "starting", not "staring"

2019-12-30 Thread GitBox
janhoy commented on issue #1128: SOLR-14153 Word choice should be "starting", 
not "staring"
URL: https://github.com/apache/lucene-solr/pull/1128#issuecomment-569657070
 
 
   Thanks!





[jira] [Updated] (SOLR-14153) Fix typo on upgrade page in Ref Guide

2019-12-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-14153:
---
Fix Version/s: 8.4

> Fix typo on upgrade page in Ref Guide
> -
>
> Key: SOLR-14153
> URL: https://issues.apache.org/jira/browse/SOLR-14153
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 8.4
>Reporter: David Eric Pugh
>Priority: Trivial
> Fix For: master (9.0), 8.4
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Use of the word "staring" instead of "starting" on Ref Guide page.






[jira] [Assigned] (SOLR-14153) Fix typo on upgrade page in Ref Guide

2019-12-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-14153:
--

Assignee: Jan Høydahl







[jira] [Resolved] (SOLR-14153) Fix typo on upgrade page in Ref Guide

2019-12-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-14153.

Resolution: Fixed







[jira] [Commented] (SOLR-13486) race condition between leader's "replay on startup" and non-leader's "recover from leader" can leave replicas out of sync (TestCloudConsistency)

2019-12-30 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005284#comment-17005284
 ] 

Andrzej Bialecki commented on SOLR-13486:
-

[~noble.paul] {{SolrMetricManager}} is initialized and the {{MetricsHandler}} 
is initialized *before* any core is loaded, so in this sense metrics are always 
"up" - however, if the metrics for this specific core are missing (e.g. because 
the core is not there) then an empty response is returned.

However, in this case, because we're getting an NPE, I think it was an 
IOException due to some other error that caused this.

Can you guys check the logs of the failing runs and see if there are any INFO 
lines starting with "Error on getting remote info, trying again:"?

> race condition between leader's "replay on startup" and non-leader's "recover 
> from leader" can leave replicas out of sync (TestCloudConsistency)
> 
>
> Key: SOLR-13486
> URL: https://issues.apache.org/jira/browse/SOLR-13486
> Project: Solr
>  Issue Type: Bug
>Reporter: Chris M. Hostetter
>Assignee: Erick Erickson
>Priority: Major
> Attachments: 
> apache_Lucene-Solr-BadApples-NightlyTests-master_61.log.txt.gz, 
> apache_Lucene-Solr-BadApples-Tests-8.x_102.log.txt.gz
>
>
> I've been investigating some jenkins failures from TestCloudConsistency, 
> which at first glance suggest a problem w/replica(s) recovering after a 
> network partition from the leader - but in digging into the logs the root 
> cause actually seems to be a thread race condition when a replica (the 
> leader) is first registered...
>  * The {{ZkContainer.registerInZk(...)}} method (which is called by 
> {{CoreContainer.registerCore(...)}} & {{CoreContainer.load()}}) is typically 
> run in a background thread (via the {{ZkContainer.coreZkRegister}} 
> ExecutorService)
>  * {{ZkContainer.registerInZk(...)}} delegates to 
> {{ZKController.register(...)}} which is ultimately responsible for checking 
> if there are any "old" tlogs on disk, and if so handling the "Replaying tlog 
> for  during startup" logic
>  * Because this happens in a background thread, other logic/requests can be 
> handled by this core/replica in the meantime - before it starts (or while in 
> the middle of) replaying the tlogs
>  ** Notably: *leaders that have not yet replayed tlogs on startup will 
> erroneously respond to RTG / Fingerprint / PeerSync requests from other 
> replicas w/incomplete data*
> ...In general, it seems scary / fishy to me that a replica can (apparently) 
> become *ACTIVE* before it has finished its {{registerInZk}} + "Replaying tlog 
> ... during startup" logic ... particularly since this can happen even for 
> replicas that are/become leaders. It seems like this could potentially cause 
> a whole host of problems, only one of which manifests in this particular test 
> failure:
>  * *BEFORE* replicaX's "coreZkRegister" thread reaches the "Replaying tlog 
> ... during startup" check:
>  ** replicaX can recognize (via zk terms) that it should be the leader(X)
>  ** this leaderX can then instruct some other replicaY to recover from it
>  ** replicaY can send RTG / PeerSync / FetchIndex requests to the leaderX 
> (either on it's own volition, or because it was instructed to by leaderX) in 
> an attempt to recover
>  *** the responses to these recovery requests will not include updates in the 
> tlog files that existed on leaderX prior to startup that have not yet been 
> replayed
>  * *AFTER* replicaY has finished its recovery, leaderX's "Replaying tlog ... 
> during startup" can finish
>  ** replicaY now thinks it is in sync with leaderX, but leaderX has 
> (replayed) updates the other replicas know nothing about






[jira] [Updated] (SOLR-13971) CVE-2019-17558: Velocity custom template RCE vulnerability

2019-12-30 Thread Erik Hatcher (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-13971:

Security: Public  (was: Private (Security Issue))

> CVE-2019-17558: Velocity custom template RCE vulnerability
> --
>
> Key: SOLR-13971
> URL: https://issues.apache.org/jira/browse/SOLR-13971
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.0, 5.5.5, 6.0, 6.6.5, 7.0, 7.7, 8.0, 8.3
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: 8.4
>
> Attachments: SOLR-13971.patch
>
>
> We need to disable this. There is a zero-day attack in the wild. 41 stars on 
> this GitHub project: 
> # https://github.com/jas502n/solr_rce
> # https://gist.github.com/s00py/a1ba36a3689fa13759ff910e179fc133
> We need to disable this in a way that cannot be re-enabled using the Config 
> API.






[jira] [Updated] (SOLR-14025) Velocity response writer RCE vulnerability persists after 8.3.1

2019-12-30 Thread Erik Hatcher (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-14025:

Security: Public  (was: Private (Security Issue))

> Velocity response writer RCE vulnerability persists after 8.3.1
> ---
>
> Key: SOLR-14025
> URL: https://issues.apache.org/jira/browse/SOLR-14025
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - Velocity
>Affects Versions: 8.3.1
>Reporter: Ishan Chattopadhyaya
>Assignee: Erik Hatcher
>Priority: Blocker
> Fix For: 8.4
>
> Attachments: SOLR-14025.patch, SOLR-14025.patch, SOLR-14025.patch, 
> SOLR-14025.patch, SOLR-14025.patch
>
>
> [~gezapeti] from Cloudera kindly reported this to me:
> {code}
> Hi Ishan! I’d like to raise (yet an other) issue with SOLR-13971 and the 
> Velocity templates. I’m working at Cloudera on Solr and have taken the time 
> to test out whether the fix in 8.3.1 is sufficient to mitigate the issue. The 
> sad thing is: It’s possible to upload a properties file into ZK and add the 
> resource loaders in that file. I think we should add yet-an-other option to 
> make the init-from-property file functionality off by default.
> https://github.com/apache/lucene-solr/blob/master/solr/contrib/velocity/src/java/org/apache/solr/response/VelocityResponseWriter.java#L73
>  this property loads the file here 
> https://github.com/apache/lucene-solr/blob/master/solr/contrib/velocity/src/java/org/apache/solr/response/VelocityResponseWriter.java#L141
> {code}
> Seems like our mitigation wasn't good enough; there's another way to load 
> resources.
> I've requested him to follow the procedure here 
> (https://cwiki.apache.org/confluence/display/solr/SolrSecurity). Meanwhile, I 
> opened this JIRA anyway.






[jira] [Commented] (SOLR-13486) race condition between leader's "replay on startup" and non-leader's "recover from leader" can leave replicas out of sync (TestCloudConsistency)

2019-12-30 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005313#comment-17005313
 ] 

Dawid Weiss commented on SOLR-13486:


I attached an example log report file from the failure, perhaps it'll be of 
some help.







[jira] [Updated] (SOLR-13486) race condition between leader's "replay on startup" and non-leader's "recover from leader" can leave replicas out of sync (TestCloudConsistency)

2019-12-30 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated SOLR-13486:
---
Attachment: org.apache.solr.cloud.TestCloudConsistency.zip







[jira] [Commented] (SOLR-13486) race condition between leader's "replay on startup" and non-leader's "recover from leader" can leave replicas out of sync (TestCloudConsistency)

2019-12-30 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005322#comment-17005322
 ] 

Erick Erickson commented on SOLR-13486:
---

[~ab] Yes, I do get that error on some failed tests. I notice that this can be 
logged in two places, and the second one (AutoScalingSnitch.getRemoteInfo) 
throws an explicit exception. My guess is that one should continue to throw a 
SERVER_ERROR exception, correct?

[~dweiss] I'm seeing two distinct errors now, but neither one of them is 
exactly yours. One is coming from assertDocExists like yours, but the root 
cause seems to be "Server refused connection". Do you see that in your stack 
traces?

The other one I'm getting is a timeout waiting for the collection to become 
active.

Digging more...







[jira] [Commented] (SOLR-13486) race condition between leader's "replay on startup" and non-leader's "recover from leader" can leave replicas out of sync (TestCloudConsistency)

2019-12-30 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005323#comment-17005323
 ] 

Dawid Weiss commented on SOLR-13486:


The one I attached is by far the one I see most often. Like I said -- this 
happens when running the full test suite.







[jira] [Comment Edited] (SOLR-13486) race condition between leader's "replay on startup" and non-leader's "recover from leader" can leave replicas out of sync (TestCloudConsistency)

2019-12-30 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005322#comment-17005322
 ] 

Erick Erickson edited comment on SOLR-13486 at 12/30/19 2:04 PM:
-

[~ab] Yes, I do get that error on some failed tests. I notice that this can be 
logged in two places, and the second one (AutoScalingSnitch.getRemoteInfo) 
throws an explicit exception. My guess is that one should continue to throw a 
SERVER_ERROR exception, correct?

[~dweiss] I'm seeing two distinct errors now, but neither one of them is 
exactly yours. One is coming from assertDocExists like yours, but the root 
cause seems to be "Server refused connection". Do you see that in your stack 
traces?

The other one I'm getting is a timeout waiting for the collection to become 
active.

I'm getting about 130 of these, or about 65 retries for each replica trying 
to recover:
{code}
109782 ERROR (recoveryExecutor-173-thread-1-processing-n:127.0.0.1:64320_solr 
x:outOfSyncReplicasCannotBecomeLeader-false_shard1_replica_n3 
c:outOfSyncReplicasCannotBecomeLeader-false s:shard1 r:core_node4) 
[n:127.0.0.1:64320_solr c:outOfSyncReplicasCannotBecomeLeader-false s:shard1 
r:core_node4 x:outOfSyncReplicasCannotBecomeLeader-false_shard1_replica_n3 ] 
o.a.s.c.RecoveryStrategy Failed to connect leader http://127.0.0.1:64314/solr 
on recovery, try again
{code}

This comes from RecoveryStrategy, which pauses half a second then rebuilds the 
connection, so I'm a bit mystified. I wonder if this has anything to do with 
the proxy in the test...




was (Author: erickerickson):
[~ab] Yes, I do get that error on some failed tests. I notice that this can be 
logged in two places, and the second one (AutoScalingSnitch.getRemoteInfo) 
throws an explicit exception. My guess is that one should continue to throw a 
SERVER_ERROR exception, correct?

[~dweiss] I'm seeing two distinct errors now, but neither one of them is 
exactly yours. One is coming from assertDocExists like yours, but the root 
cause seems to be "Server refused connection". Do you see that in your stack 
traces?

The other one I'm getting is a timeout waiting for the collection to become 
active.

Digging more...


[jira] [Commented] (SOLR-13486) race condition between leader's "replay on startup" and non-leader's "recover from leader" can leave replicas out of sync (TestCloudConsistency)

2019-12-30 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005357#comment-17005357
 ] 

Erick Erickson commented on SOLR-13486:
---

Right. WDYT about just @Ignore-ing the test locally for now? Hoss' rollups show 
it failing once in the last week in 30 runs, and trying to run the entire suite 
enough times to have a hope of showing it fixed will take forever; there's no 
sense in you being burdened with this wonkiness.

So I'll try to clean up what I can reproduce, then push it to the repo and 
maybe then go from there.







[jira] [Commented] (SOLR-13486) race condition between leader's "replay on startup" and non-leader's "recover from leader" can leave replicas out of sync (TestCloudConsistency)

2019-12-30 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005364#comment-17005364
 ] 

Dawid Weiss commented on SOLR-13486:


You can leave that in -- I'll post more stack traces once I have them, no 
problem. I just wanted to let you guys know this one has been pretty bad for me 
recently.







[jira] [Commented] (SOLR-14106) SSL with SOLR_SSL_NEED_CLIENT_AUTH not working since v8.2.0

2019-12-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005376#comment-17005376
 ] 

ASF subversion and git services commented on SOLR-14106:


Commit b6ea9919442bbf8ef506434ca668967d2e7cabcf in lucene-solr's branch 
refs/heads/branch_8_4 from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b6ea991 ]

SOLR-14106: Cleanup Jetty SslContextFactory usage

Jetty 9.4.16.v20190411 and up introduced separate
client and server SslContextFactory implementations.
This split requires the proper use of
SslContextFactory in client and server configs.

This fixes the following
* SSL with SOLR_SSL_NEED_CLIENT_AUTH not working since v8.2.0
* Http2SolrClient SSL not working in branch_8x

(Cherry-picked from 3f23002456f7b991dd51601e3228ddbc033eb6b7)
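
As a rough sketch of what the split means for configuration code (not the patch itself; paths and passwords below are placeholders):

{code:java}
import org.eclipse.jetty.util.ssl.SslContextFactory;

// Illustrative only: Jetty 9.4.16+ distinguishes server-side and client-side
// SSL context factories instead of one shared SslContextFactory.
public class SslFactorySketch {
  static SslContextFactory.Server serverFactory() {
    SslContextFactory.Server server = new SslContextFactory.Server();
    server.setKeyStorePath("/path/to/solr-ssl.keystore.p12");
    server.setKeyStorePassword("secret");
    server.setNeedClientAuth(true); // what SOLR_SSL_NEED_CLIENT_AUTH=true asks for
    return server;
  }

  static SslContextFactory.Client clientFactory() {
    SslContextFactory.Client client = new SslContextFactory.Client();
    client.setTrustStorePath("/path/to/solr-ssl.truststore.p12");
    client.setTrustStorePassword("secret");
    return client;
  }
}
{code}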


> SSL with SOLR_SSL_NEED_CLIENT_AUTH not working since v8.2.0
> ---
>
> Key: SOLR-14106
> URL: https://issues.apache.org/jira/browse/SOLR-14106
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.2, 8.3, 8.4, 8.3.1
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
>  Labels: jetty, ssl
> Fix For: 8.5, 8.4.1
>
> Attachments: SOLR-14106.patch, SOLR-14106.patch, SOLR-14106.patch, 
> deprecation-warning.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> For a client we use SSL certificate authentication with Solr through the 
> {{SOLR_SSL_NEED_CLIENT_AUTH=true}} setting. The client must then prove 
> through a local pem file that it has the correct client certificate.
> This works well until Solr 8.1.1, but fails with Solr 8.2 and also 8.3.1. 
> There has been a Jetty upgrade from jetty-9.4.14 to jetty-9.4.19, and I 
> see some deprecation warnings in the log of 8.3.1:
> {noformat}
> o.e.j.x.XmlConfiguration Deprecated method public void 
> org.eclipse.jetty.util.ssl.SslContextFactory.setWantClientAuth(boolean) in 
> file:///opt/solr-8.3.1/server/etc/jetty-ssl.xml
> {noformat}
> I have made a simple reproduction script using Docker to reproduce first the 
> 8.1.1 behaviour that succeeds, then 8.3.1 which fails:
> {code}
> wget https://www.dropbox.com/s/fkjcez1i5anh42i/tls.tgz
> tar -xvzf tls.tgz
> cd tls
> ./repro.sh
> {code}






[GitHub] [lucene-solr] janhoy closed pull request #1096: SOLR-14109 Always log to stdout from zkcli.sh

2019-12-30 Thread GitBox
janhoy closed pull request #1096: SOLR-14109 Always log to stdout from zkcli.sh
URL: https://github.com/apache/lucene-solr/pull/1096
 
 
   





[GitHub] [lucene-solr] janhoy opened a new pull request #1129: SOLR-14109 Always log to stdout from zkcli.sh

2019-12-30 Thread GitBox
janhoy opened a new pull request #1129: SOLR-14109 Always log to stdout from 
zkcli.sh
URL: https://github.com/apache/lucene-solr/pull/1129
 
 
   There should be no reason to log anywhere else. This fixes a problem with 
bogus logging when running the tool from Solr's Docker image.





[GitHub] [lucene-solr] janhoy closed pull request #1129: SOLR-14109 Always log to stdout from zkcli.sh

2019-12-30 Thread GitBox
janhoy closed pull request #1129: SOLR-14109 Always log to stdout from zkcli.sh
URL: https://github.com/apache/lucene-solr/pull/1129
 
 
   





[GitHub] [lucene-solr] janhoy opened a new pull request #1130: SOLR-14109 Always log to stdout from zkcli.sh

2019-12-30 Thread GitBox
janhoy opened a new pull request #1130: SOLR-14109 Always log to stdout from 
zkcli.sh
URL: https://github.com/apache/lucene-solr/pull/1130
 
 
   There should be no reason to log anywhere else. This fixes a problem with 
bogus logging when running the tool from Solr's Docker image.





[GitHub] [lucene-solr] janhoy merged pull request #1130: SOLR-14109 Always log to stdout from zkcli.sh

2019-12-30 Thread GitBox
janhoy merged pull request #1130: SOLR-14109 Always log to stdout from zkcli.sh
URL: https://github.com/apache/lucene-solr/pull/1130
 
 
   





[jira] [Commented] (SOLR-14109) zkcli.sh and zkcli.bat barfs when LOG4J_PROPS is set

2019-12-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005396#comment-17005396
 ] 

ASF subversion and git services commented on SOLR-14109:


Commit 33bd811fb8b2a9bee595548e96c2a74721aa11b3 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=33bd811 ]

SOLR-14109: Always log to stdout from 
server/scripts/cloud-scripts/zkcli.{bat|sh} (#1130)



> zkcli.sh and zkcli.bat barfs when LOG4J_PROPS is set
> 
>
> Key: SOLR-14109
> URL: https://issues.apache.org/jira/browse/SOLR-14109
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I noticed this when running {{zkcli.sh}} from solr's docker image. [The 
> docker image sets the variable 
> {{LOG4J_PROPS}}|https://github.com/docker-solr/docker-solr/blob/master/8.3/Dockerfile],
>  causing the zkcli script to pick up and use that logger instead of the 
> console logger. The problem with that is that Solr's log4j2 config relies on the 
> {{solr.log.dir}} sysprop being set, which it is not when running this script.
> So either fix the wrapper script to set {{solr.log.dir}} or, better, always 
> log to stdout.






[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1123: LUCENE-9093: Unified highlighter with word separator never gives context to the left

2019-12-30 Thread GitBox
dsmiley commented on a change in pull request #1123: LUCENE-9093: Unified 
highlighter with word separator never gives context to the left
URL: https://github.com/apache/lucene-solr/pull/1123#discussion_r362015879
 
 

 ##
 File path: 
lucene/highlighter/src/java/org/apache/lucene/search/uhighlight/LengthGoalBreakIterator.java
 ##
 @@ -167,68 +146,70 @@ public int previous() {
 
   @Override
   public int following(int matchEndIndex) {
-final int targetIdx = (matchEndIndex + 1) + (int)(lengthGoal * (1.f - 
fragmentAlignment));
+return following(matchEndIndex, (matchEndIndex + 1) + (int)(lengthGoal * 
(1.f - fragmentAlignment)));
+  }
+
+  private int following(int matchEndIndex, int targetIdx) {
 if (targetIdx >= getText().getEndIndex()) {
-  return baseIter.last();
+  if (currentCache == baseIter.last()) {
+return DONE;
+  }
+  return currentCache = baseIter.last();
 }
 final int afterIdx = baseIter.following(targetIdx - 1);
 if (afterIdx == DONE) {
-  return baseIter.current();
+  currentCache = baseIter.last();
+  return DONE;
 }
 if (afterIdx == targetIdx) { // right on the money
-  return afterIdx;
+  return currentCache = afterIdx;
 }
 if (isMinimumLength) { // thus never undershoot
-  return afterIdx;
+  return currentCache = afterIdx;
 }
 
 // note: it is a shame that we invoke preceding() *one more time*; BI's 
are sometimes expensive.
 
 // Find closest break to target
 final int beforeIdx = baseIter.preceding(targetIdx);
 if (targetIdx - beforeIdx < afterIdx - targetIdx && beforeIdx > 
matchEndIndex) {
-  return beforeIdx;
+  return currentCache = beforeIdx;
 }
-return afterIdx;
-  }
-
-  private int moveToBreak(int idx) { // precondition: idx is a known break
-// bi.isBoundary(idx) has side-effect of moving the position.  Not obvious!
-//boolean moved = baseIter.isBoundary(idx); // probably not particularly 
expensive
-//assert moved && current() == idx;
-
-// TODO fix: Would prefer to do "- 1" instead of "- 2" but 
CustomSeparatorBreakIterator has a bug.
-int current = baseIter.following(idx - 2);
-assert current == idx : "following() didn't move us to the expected 
index.";
-return idx;
+// moveToBreak is necessary for when getSummaryPassagesNoHighlight calls 
next and current() is used
+return currentCache = afterIdx;
   }
 
   // called at start of new Passage given first word start offset
   @Override
   public int preceding(int matchStartIndex) {
 final int targetIdx = (matchStartIndex - 1) - (int)(lengthGoal * 
fragmentAlignment);
 if (targetIdx <= 0) {
-  return 0;
+  if (currentCache == baseIter.first()) {
+return DONE;
+  }
+  return currentCache = baseIter.first();
 }
 final int beforeIdx = baseIter.preceding(targetIdx + 1);
 if (beforeIdx == DONE) {
-  return 0;
+  currentCache = baseIter.first();
+  return DONE;
 }
 if (beforeIdx == targetIdx) { // right on the money
-  return beforeIdx;
+  return currentCache = beforeIdx;
 }
 if (isMinimumLength) { // thus never undershoot
-  return beforeIdx;
+  return currentCache = beforeIdx;
 }
 
 // note: it is a shame that we invoke following() *one more time*; BI's 
are sometimes expensive.
 
 // Find closest break to target
 final int afterIdx = baseIter.following(targetIdx - 1);
 if (afterIdx - targetIdx < targetIdx - beforeIdx && afterIdx < 
matchStartIndex) {
-  return afterIdx;
+  return currentCache = afterIdx;
 }
-return beforeIdx;
+// moveToBreak is for consistency
 
 Review comment:
   I'll remove this comment on commit


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1123: LUCENE-9093: Unified highlighter with word separator never gives context to the left

2019-12-30 Thread GitBox
dsmiley commented on a change in pull request #1123: LUCENE-9093: Unified 
highlighter with word separator never gives context to the left
URL: https://github.com/apache/lucene-solr/pull/1123#discussion_r362016176
 
 

 ##
 File path: 
lucene/highlighter/src/java/org/apache/lucene/search/uhighlight/LengthGoalBreakIterator.java
 ##
 @@ -60,12 +83,15 @@ private LengthGoalBreakIterator(BreakIterator baseIter, 
int lengthGoal, boolean
   @Override
   public String toString() {
 String goalDesc = isMinimumLength ? "minLen" : "targetLen";
-return getClass().getSimpleName() + "{" + goalDesc + "=" + lengthGoal + ", 
baseIter=" + baseIter + "}";
+return getClass().getSimpleName() + "{" + goalDesc + "=" + lengthGoal + ", 
fragAlign=" + fragmentAlignment +
+", baseIter=" + baseIter + ", currentCache=" + currentCache + "}";
 
 Review comment:
   I'm going to remove currentCache from the toString when I commit because I 
don't think it belongs.  It's an internal detail.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9093) Unified highlighter with word separator never gives context to the left

2019-12-30 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005403#comment-17005403
 ] 

David Smiley commented on LUCENE-9093:
--

The PR is concluding and I want to summarize here:

Proposed LUCENE CHANGES.txt:

(two places so we draw attention to something)

API Changes:

LUCENE-9093: Not an API change but a change in behavior of the 
UnifiedHighlighter's LengthGoalBreakIterator that will yield Passages sized a 
little differently, because the sizing pivot is now the center of the first 
match and not its left edge.

Improvements:

LUCENE-9093: UnifiedHighlighter's LengthGoalBreakIterator has a new 
fragmentAlignment option to better center the first match in the passage. 
Also, the sizing point now pivots at the center of the first match term and not 
its left edge. This yields Passages that won't be identical to the previous 
behavior. (Nándor Mátravölgyi, David Smiley)

Proposed SOLR CHANGES.txt:

Improvements:

LUCENE-9093: The Unified highlighter has two new passage sizing parameters, 
hl.fragAlignRatio and hl.fragsizeIsMinimum, with defaults that aim to better 
center matches in fragments than previously.  See the ref guide.  Regardless of 
the settings, the passages may be sized differently than before. (Nándor 
Mátravölgyi, David Smiley)
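
For readers following the PR, the new sizing arithmetic boils down to two target 
offsets computed around the first match. This is only a sketch distilled from the 
patch above, not the full iterator; the class and method names are illustrative:

{code:java}
// Sketch only, distilled from the patch above; names are illustrative.
// fragmentAlignment controls how much of lengthGoal falls to the left of the
// first match: 0.0 puts the whole goal to the right (the old left-anchored look),
// 0.5 centers the match, 1.0 puts the whole goal to the left.
public class FragmentAlignmentSketch {
  static int followingTarget(int matchEndIndex, int lengthGoal, float fragmentAlignment) {
    return (matchEndIndex + 1) + (int) (lengthGoal * (1.f - fragmentAlignment));
  }

  static int precedingTarget(int matchStartIndex, int lengthGoal, float fragmentAlignment) {
    return (matchStartIndex - 1) - (int) (lengthGoal * fragmentAlignment);
  }

  public static void main(String[] args) {
    // A goal of 100 chars around a match spanning offsets [40, 45), centered:
    System.out.println(precedingTarget(40, 100, 0.5f)); // 39 - 50 = -11 (the iterator treats <= 0 as start of text)
    System.out.println(followingTarget(45, 100, 0.5f)); // 46 + 50 = 96
  }
}
{code}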

> Unified highlighter with word separator never gives context to the left
> ---
>
> Key: LUCENE-9093
> URL: https://issues.apache.org/jira/browse/LUCENE-9093
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: Tim Retout
>Priority: Major
> Attachments: LUCENE-9093.patch
>
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> When using the unified highlighter with hl.bs.type=WORD, I am not able to get 
> context to the left of the matches returned; only words to the right of each 
> match are shown.  I see this behaviour on both Solr 6.4 and Solr 7.1.
> Without context to the left of a match, the highlighted snippets are much 
> less useful for understanding where the match appears in a document.
> As an example, using the techproducts data with Solr 7.1, given a search for 
> "apple", highlighting the "features" field:
> http://localhost:8983/solr/techproducts/select?hl.fl=features&hl=on&q=apple&hl.bs.type=WORD&hl.fragsize=30&hl.method=unified
> I see this snippet:
> "Apple Lossless, H.264 video"
> Note that "Apple" is anchored to the left.  Compare with the original 
> highlighter:
> http://localhost:8983/solr/techproducts/select?hl.fl=features&hl=on&q=apple&hl.fragsize=30
> And the match has context either side:
> ", Audible, Apple Lossless, H.264 video"
> (To complicate this, in general I am not sure that the unified highlighter is 
> respecting the hl.fragsize parameter, although [SOLR-9935] suggests support 
> was added.  I included the hl.fragsize param in the unified URL too, but it's 
> making no difference unless set to 0.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13813) Shared storage online split support

2019-12-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005409#comment-17005409
 ] 

ASF subversion and git services commented on SOLR-13813:


Commit 8429a1c7f6cc2c9e3fcf017212975ade845836a7 in lucene-solr's branch 
refs/heads/jira/SOLR-13101 from Yonik Seeley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8429a1c ]

SOLR-13813: split-test: prohibit update failures, randomize commits


> Shared storage online split support
> ---
>
> Key: SOLR-13813
> URL: https://issues.apache.org/jira/browse/SOLR-13813
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Yonik Seeley
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The strategy for online shard splitting is the same as that for normal (non 
> SHARED shards.)
> During a split, the leader will forward updates to sub-shard leaders, those 
> updates will be buffered by the transaction log while the split is in 
> progress, and then the buffered updates are replayed.
> One change that was added was to push the local index to blob store after 
> buffered updates are applied (but before it is marked as ACTIVE):
> See 
> https://github.com/apache/lucene-solr/commit/fe17c813f5fe6773c0527f639b9e5c598b98c7d4#diff-081b7c2242d674bb175b41b6afc21663
> This issue is about adding tests and ensuring that online shard splitting 
> (while updates are flowing) works reliably.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14109) zkcli.sh and zkcli.bat barfs when LOG4J_PROPS is set

2019-12-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005420#comment-17005420
 ] 

ASF subversion and git services commented on SOLR-14109:


Commit 523b783f6336395d0bbe45bcf43f8235fe4637f7 in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=523b783 ]

SOLR-14109: Always log to stdout from 
server/scripts/cloud-scripts/zkcli.{bat|sh} (#1130)

(cherry picked from commit 33bd811fb8b2a9bee595548e96c2a74721aa11b3)


> zkcli.sh and zkcli.bat barfs when LOG4J_PROPS is set
> 
>
> Key: SOLR-14109
> URL: https://issues.apache.org/jira/browse/SOLR-14109
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I noticed this when running {{zkcli.sh}} from solr's docker image. [The 
> docker image sets the variable 
> {{LOG4J_PROPS}}|https://github.com/docker-solr/docker-solr/blob/master/8.3/Dockerfile],
>  causing the zkcli script to pick up and use that logger instead of the 
> console logger. Problem with that is that Solr's log4j2 config relies on the 
> {{solr.log.dir}} sysprop being set, which it is not when running this script.
> So either fix the wrapper script to set {{solr.log.dir}} or, better, always 
> log to stdout.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14109) zkcli.sh and zkcli.bat barfs when LOG4J_PROPS is set

2019-12-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005424#comment-17005424
 ] 

ASF subversion and git services commented on SOLR-14109:


Commit 2df5274617205a1b61bbb42cecb305eb39fb7a54 in lucene-solr's branch 
refs/heads/branch_8_4 from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2df5274 ]

SOLR-14109: Always log to stdout from 
server/scripts/cloud-scripts/zkcli.{bat|sh} (#1130)

(cherry picked from commit 33bd811fb8b2a9bee595548e96c2a74721aa11b3)


> zkcli.sh and zkcli.bat barfs when LOG4J_PROPS is set
> 
>
> Key: SOLR-14109
> URL: https://issues.apache.org/jira/browse/SOLR-14109
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I noticed this when running {{zkcli.sh}} from solr's docker image. [The 
> docker image sets the variable 
> {{LOG4J_PROPS}}|https://github.com/docker-solr/docker-solr/blob/master/8.3/Dockerfile],
>  causing the zkcli script to pick up and use that logger instead of the 
> console logger. Problem with that is that Solr's log4j2 config relies on the 
> {{solr.log.dir}} sysprop being set, which it is not when running this script.
> So either fix the wrapper script to set {{solr.log.dir}} or, better, always 
> log to stdout.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14109) zkcli.sh and zkcli.bat barfs when LOG4J_PROPS is set

2019-12-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-14109.

Resolution: Fixed

> zkcli.sh and zkcli.bat barfs when LOG4J_PROPS is set
> 
>
> Key: SOLR-14109
> URL: https://issues.apache.org/jira/browse/SOLR-14109
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.5, 8.4.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I noticed this when running {{zkcli.sh}} from solr's docker image. [The 
> docker image sets the variable 
> {{LOG4J_PROPS}}|https://github.com/docker-solr/docker-solr/blob/master/8.3/Dockerfile],
>  causing the zkcli script to pick up and use that logger instead of the 
> console logger. Problem with that is that Solr's log4j2 config relies on the 
> {{solr.log.dir}} sysprop being set, which it is not when running this script.
> So either fix the wrapper script to set {{solr.log.dir}} or, better, always 
> log to stdout.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14109) zkcli.sh and zkcli.bat barfs when LOG4J_PROPS is set

2019-12-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-14109:
---
Fix Version/s: 8.4.1
   8.5

> zkcli.sh and zkcli.bat barfs when LOG4J_PROPS is set
> 
>
> Key: SOLR-14109
> URL: https://issues.apache.org/jira/browse/SOLR-14109
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.5, 8.4.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I noticed this when running {{zkcli.sh}} from solr's docker image. [The 
> docker image sets the variable 
> {{LOG4J_PROPS}}|https://github.com/docker-solr/docker-solr/blob/master/8.3/Dockerfile],
>  causing the zkcli script to pick up and use that logger instead of the 
> console logger. Problem with that is that Solr's log4j2 config relies on the 
> {{solr.log.dir}} sysprop being set, which it is not when running this script.
> So either fix the wrapper script to set {{solr.log.dir}} or, better, always 
> log to stdout.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9105) UniformSplit postings format should detect corrupted index

2019-12-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005429#comment-17005429
 ] 

ASF subversion and git services commented on LUCENE-9105:
-

Commit 43e9897a237a067ffe81d647f40e4b4ab0324198 in lucene-solr's branch 
refs/heads/branch_8x from Bruno Roustant
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=43e9897 ]

LUCENE-9105: UniformSplit postings format detects corrupted index and better 
handles IO exceptions.


> UniformSplit postings format should detect corrupted index
> --
>
> Key: LUCENE-9105
> URL: https://issues.apache.org/jira/browse/LUCENE-9105
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Bruno Roustant
>Assignee: Bruno Roustant
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> BlockTree postings format has some checks when reading index metadata to 
> detect index corruption. UniformSplit should have the same. Additionally 
> UniformSplit has assertions in BlockReader that should be runtime checks to 
> also detect index corruption (this case has been encountered in production 
> environment).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy merged pull request #1127: Minor fixes to the release wizard.

2019-12-30 Thread GitBox
janhoy merged pull request #1127: Minor fixes to the release wizard.
URL: https://github.com/apache/lucene-solr/pull/1127
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14129) Reuse Jackson ObjectMapper in AuditLoggerPlugin

2019-12-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005440#comment-17005440
 ] 

ASF subversion and git services commented on SOLR-14129:


Commit c4993bc99ca4e9b1780c900e8bfa242d540ff8b5 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c4993bc ]

SOLR-14129: Reuse Jackson ObjectMapper in AuditLoggerPlugin (#1104)



> Reuse Jackson ObjectMapper in AuditLoggerPlugin
> ---
>
> Key: SOLR-14129
> URL: https://issues.apache.org/jira/browse/SOLR-14129
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Auditlogging
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: perfomance
> Fix For: 8.5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As reported in 
> [https://lists.apache.org/thread.html/7565410ab2d9429b5cada98c70dfde18d9543b63ef8a5cf8723d99d8%40%3Cdev.lucene.apache.org%3E]
>  
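
For context, the pattern behind this improvement (sketch only; the class and field 
names below are illustrative, not the plugin's actual members): Jackson's 
ObjectMapper is comparatively expensive to construct but thread-safe once 
configured, so it should be created once and reused rather than instantiated for 
every audit event.

{code:java}
// Illustrative sketch; names are not AuditLoggerPlugin's actual members.
import com.fasterxml.jackson.databind.ObjectMapper;

public class AuditJsonSketch {
  // Before: a new ObjectMapper() per event. After: one shared, pre-configured instance.
  private static final ObjectMapper MAPPER = new ObjectMapper();

  public static String toJson(Object auditEvent) throws Exception {
    return MAPPER.writeValueAsString(auditEvent);
  }

  public static void main(String[] args) throws Exception {
    System.out.println(toJson(java.util.Collections.singletonMap("status", 200)));
  }
}
{code}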



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy merged pull request #1104: SOLR-14129: Reuse Jackson ObjectMapper in AuditLoggerPlugin

2019-12-30 Thread GitBox
janhoy merged pull request #1104: SOLR-14129: Reuse Jackson ObjectMapper in 
AuditLoggerPlugin
URL: https://github.com/apache/lucene-solr/pull/1104
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14129) Reuse Jackson ObjectMapper in AuditLoggerPlugin

2019-12-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005442#comment-17005442
 ] 

ASF subversion and git services commented on SOLR-14129:


Commit 6eff727590f47a32cbd696614e36be9f4c2ff318 in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6eff727 ]

SOLR-14129: Reuse Jackson ObjectMapper in AuditLoggerPlugin (#1104)

(cherry picked from commit c4993bc99ca4e9b1780c900e8bfa242d540ff8b5)


> Reuse Jackson ObjectMapper in AuditLoggerPlugin
> ---
>
> Key: SOLR-14129
> URL: https://issues.apache.org/jira/browse/SOLR-14129
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Auditlogging
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: perfomance
> Fix For: 8.5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As reported in 
> [https://lists.apache.org/thread.html/7565410ab2d9429b5cada98c70dfde18d9543b63ef8a5cf8723d99d8%40%3Cdev.lucene.apache.org%3E]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14129) Reuse Jackson ObjectMapper in AuditLoggerPlugin

2019-12-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-14129.

Resolution: Fixed

> Reuse Jackson ObjectMapper in AuditLoggerPlugin
> ---
>
> Key: SOLR-14129
> URL: https://issues.apache.org/jira/browse/SOLR-14129
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Auditlogging
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: perfomance
> Fix For: 8.5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As reported in 
> [https://lists.apache.org/thread.html/7565410ab2d9429b5cada98c70dfde18d9543b63ef8a5cf8723d99d8%40%3Cdev.lucene.apache.org%3E]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13756) ivy cannot download org.restlet.ext.servlet jar

2019-12-30 Thread Zsolt Gyulavari (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005462#comment-17005462
 ] 

Zsolt Gyulavari commented on SOLR-13756:


[~jbernste] I will definitely look into the hosting question.

I also suspected the redirect, and could build newer versions by configuring the 
redirected address 
( [https://maven.restlet.talend.com|https://maven.restlet.talend.com/org/restlet/jee/org.restlet.ext.servlet/2.1.4/org.restlet.ext.servlet-2.1.4.pom]
 ) in the properties file for ivy.

I'm still working on older versions though.

> ivy cannot download org.restlet.ext.servlet jar
> ---
>
> Key: SOLR-13756
> URL: https://issues.apache.org/jira/browse/SOLR-13756
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chongchen Chen
>Priority: Major
>
> I checkout the project and run `ant idea`, it will try to download jars. But  
> https://repo1.maven.org/maven2/org/restlet/jee/org.restlet.ext.servlet/2.3.0/org.restlet.ext.servlet-2.3.0.jar
>  will return 404 now.  
> [ivy:retrieve] public: tried
> [ivy:retrieve]  
> https://repo1.maven.org/maven2/org/restlet/jee/org.restlet.ext.servlet/2.3.0/org.restlet.ext.servlet-2.3.0.jar
> [ivy:retrieve]::
> [ivy:retrieve]::  FAILED DOWNLOADS::
> [ivy:retrieve]:: ^ see resolution messages for details  ^ ::
> [ivy:retrieve]::
> [ivy:retrieve]:: 
> org.restlet.jee#org.restlet;2.3.0!org.restlet.jar
> [ivy:retrieve]:: 
> org.restlet.jee#org.restlet.ext.servlet;2.3.0!org.restlet.ext.servlet.jar
> [ivy:retrieve]::



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (LUCENE-9112) OpenNLP tokenizer is fooled by text containing spurious punctuation

2019-12-30 Thread Markus Jelsma (Jira)
Markus Jelsma created LUCENE-9112:
-

 Summary: OpenNLP tokenizer is fooled by text containing spurious 
punctuation
 Key: LUCENE-9112
 URL: https://issues.apache.org/jira/browse/LUCENE-9112
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: master (9.0)
Reporter: Markus Jelsma
 Fix For: master (9.0)


The OpenNLP tokenizer shows weird behaviour when text contains spurious 
punctuation, such as triple dots trailing a sentence...

# the first dot becomes part of the token, so 'sentence.' becomes the token
# much further down the text, a seemingly unrelated token is then suddenly 
split up; in my example the name 'Baron' is split into 'Baro' and 'n', and this 
is the real problem

The problems never seem to occur when using small texts in unit tests but they 
certainly do in real-world examples. Depending on how many 'spurious' dots there 
are, a completely different term can become split, or the same term in just a 
different location.

I am not too sure whether this is actually a problem in the Lucene code, but it 
is a problem, and I have a Lucene unit test proving it.







--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9112) OpenNLP tokenizer is fooled by text containing spurious punctuation

2019-12-30 Thread Markus Jelsma (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated LUCENE-9112:
--
Attachment: (was: LUCENE-8740.patch)

> OpenNLP tokenizer is fooled by text containing spurious punctuation
> ---
>
> Key: LUCENE-9112
> URL: https://issues.apache.org/jira/browse/LUCENE-9112
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: master (9.0)
>Reporter: Markus Jelsma
>Priority: Major
>  Labels: opennlp
> Fix For: master (9.0)
>
>
> The OpenNLP tokenizer show weird behaviour when text contains spurious 
> punctuation such as having triple dots trailing a sentence...
> # the first dot becomes part of the token, having 'sentence.' becomes the 
> token
> # much further down the text, a seemingly unrelated token is then suddenly 
> split up, in my example the name 'Baron' is  split into 'Baro' and 'n', this 
> is the real problem
> The problems never seem to occur when using small texts in unit tests but it 
> certainly does in real world examples. Depending on how many 'spurious' dots, 
> a completely different term can become split, or the same term in just a 
> different location.
> I am not too sure if this is actually a problem in the Lucene code, but it is 
> a problem and i have a Lucene unit test proving the problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9112) OpenNLP tokenizer is fooled by text containing spurious punctuation

2019-12-30 Thread Markus Jelsma (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated LUCENE-9112:
--
Attachment: LUCENE-8740.patch

> OpenNLP tokenizer is fooled by text containing spurious punctuation
> ---
>
> Key: LUCENE-9112
> URL: https://issues.apache.org/jira/browse/LUCENE-9112
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: master (9.0)
>Reporter: Markus Jelsma
>Priority: Major
>  Labels: opennlp
> Fix For: master (9.0)
>
>
> The OpenNLP tokenizer show weird behaviour when text contains spurious 
> punctuation such as having triple dots trailing a sentence...
> # the first dot becomes part of the token, having 'sentence.' becomes the 
> token
> # much further down the text, a seemingly unrelated token is then suddenly 
> split up, in my example the name 'Baron' is  split into 'Baro' and 'n', this 
> is the real problem
> The problems never seem to occur when using small texts in unit tests but it 
> certainly does in real world examples. Depending on how many 'spurious' dots, 
> a completely different term can become split, or the same term in just a 
> different location.
> I am not too sure if this is actually a problem in the Lucene code, but it is 
> a problem and i have a Lucene unit test proving the problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9112) OpenNLP tokenizer is fooled by text containing spurious punctuation

2019-12-30 Thread Markus Jelsma (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated LUCENE-9112:
--
Attachment: LUCENE-9112-unittest.patch

> OpenNLP tokenizer is fooled by text containing spurious punctuation
> ---
>
> Key: LUCENE-9112
> URL: https://issues.apache.org/jira/browse/LUCENE-9112
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: master (9.0)
>Reporter: Markus Jelsma
>Priority: Major
>  Labels: opennlp
> Fix For: master (9.0)
>
> Attachments: LUCENE-9112-unittest.patch
>
>
> The OpenNLP tokenizer show weird behaviour when text contains spurious 
> punctuation such as having triple dots trailing a sentence...
> # the first dot becomes part of the token, having 'sentence.' becomes the 
> token
> # much further down the text, a seemingly unrelated token is then suddenly 
> split up, in my example the name 'Baron' is  split into 'Baro' and 'n', this 
> is the real problem
> The problems never seem to occur when using small texts in unit tests but it 
> certainly does in real world examples. Depending on how many 'spurious' dots, 
> a completely different term can become split, or the same term in just a 
> different location.
> I am not too sure if this is actually a problem in the Lucene code, but it is 
> a problem and i have a Lucene unit test proving the problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9112) OpenNLP tokenizer is fooled by text containing spurious punctuation

2019-12-30 Thread Markus Jelsma (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated LUCENE-9112:
--
Description: 
The OpenNLP tokenizer shows weird behaviour when text contains spurious 
punctuation, such as triple dots trailing a sentence...

# the first dot becomes part of the token, so 'sentence.' becomes the token
# much further down the text, a seemingly unrelated token is then suddenly 
split up; in my example (see attached unit test) the name 'Baron' is split 
into 'Baro' and 'n', and this is the real problem

The problems never seem to occur when using small texts in unit tests but they 
certainly do in real-world examples. Depending on how many 'spurious' dots there 
are, a completely different term can become split, or the same term in just a 
different location.

I am not too sure whether this is actually a problem in the Lucene code, but it 
is a problem, and I have a Lucene unit test proving it.





  was:
The OpenNLP tokenizer show weird behaviour when text contains spurious 
punctuation such as having triple dots trailing a sentence...

# the first dot becomes part of the token, having 'sentence.' becomes the token
# much further down the text, a seemingly unrelated token is then suddenly 
split up, in my example the name 'Baron' is  split into 'Baro' and 'n', this is 
the real problem

The problems never seem to occur when using small texts in unit tests but it 
certainly does in real world examples. Depending on how many 'spurious' dots, a 
completely different term can become split, or the same term in just a 
different location.

I am not too sure if this is actually a problem in the Lucene code, but it is a 
problem and i have a Lucene unit test proving the problem.






> OpenNLP tokenizer is fooled by text containing spurious punctuation
> ---
>
> Key: LUCENE-9112
> URL: https://issues.apache.org/jira/browse/LUCENE-9112
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: master (9.0)
>Reporter: Markus Jelsma
>Priority: Major
>  Labels: opennlp
> Fix For: master (9.0)
>
> Attachments: LUCENE-9112-unittest.patch
>
>
> The OpenNLP tokenizer show weird behaviour when text contains spurious 
> punctuation such as having triple dots trailing a sentence...
> # the first dot becomes part of the token, having 'sentence.' becomes the 
> token
> # much further down the text, a seemingly unrelated token is then suddenly 
> split up, in my example (see attached unit test) the name 'Baron' is  split 
> into 'Baro' and 'n', this is the real problem
> The problems never seem to occur when using small texts in unit tests but it 
> certainly does in real world examples. Depending on how many 'spurious' dots, 
> a completely different term can become split, or the same term in just a 
> different location.
> I am not too sure if this is actually a problem in the Lucene code, but it is 
> a problem and i have a Lucene unit test proving the problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14106) SSL with SOLR_SSL_NEED_CLIENT_AUTH not working since v8.2.0

2019-12-30 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005494#comment-17005494
 ] 

Jan Høydahl commented on SOLR-14106:


I merged to branch_8_4 today; the only change in the cherry-pick is the CHANGES 
entry. You may now resolve this jira.

> SSL with SOLR_SSL_NEED_CLIENT_AUTH not working since v8.2.0
> ---
>
> Key: SOLR-14106
> URL: https://issues.apache.org/jira/browse/SOLR-14106
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.2, 8.3, 8.4, 8.3.1
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
>  Labels: jetty, ssl
> Fix For: 8.5, 8.4.1
>
> Attachments: SOLR-14106.patch, SOLR-14106.patch, SOLR-14106.patch, 
> deprecation-warning.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> For a client we use SSL certificate authentication with Solr through the 
> {{SOLR_SSL_NEED_CLIENT_AUTH=true}} setting. The client must then prove 
> through a local pem file that it has the correct client certificate.
> This works well until Solr 8.1.1, but fails with Solr 8.2 and also 8.3.1. 
> There has been a Jetty upgrade from from jetty-9.4.14 to jetty-9.4.19 and I 
> see some deprecation warnings in the log of 8.3.1:
> {noformat}
> o.e.j.x.XmlConfiguration Deprecated method public void 
> org.eclipse.jetty.util.ssl.SslContextFactory.setWantClientAuth(boolean) in 
> file:///opt/solr-8.3.1/server/etc/jetty-ssl.xml
> {noformat}
> I have made a simple reproduction script using Docker to reproduce first the 
> 8.1.1 behaviour that succeeds, then 8.3.1 which fails:
> {code}
> wget https://www.dropbox.com/s/fkjcez1i5anh42i/tls.tgz
> tar -xvzf tls.tgz
> cd tls
> ./repro.sh
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-13759) Optimize Queries when query filtering by TRA router.field

2019-12-30 Thread Gus Heck (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-13759:

Attachment: QueryVisitorExample.java

> Optimize Queries when query filtering by TRA router.field
> -
>
> Key: SOLR-13759
> URL: https://issues.apache.org/jira/browse/SOLR-13759
> Project: Solr
>  Issue Type: Sub-task
>Reporter: mosh
>Assignee: Gus Heck
>Priority: Minor
> Attachments: QueryVisitorExample.java, SOLR-13759.patch, 
> SOLR-13759.patch, image-2019-12-09-22-45-51-721.png
>
>
> We are currently testing TRA using Solr 7.7, having >300 shards in the alias, 
> with much growth in the coming months.
> The "hot" data(in our case, more recent) will be stored on stronger 
> nodes(SSD, more RAM, etc).
> A proposal of optimizing queries will be by filtering query by date range, by 
> that we will be able to querying the specific TRA collections taking 
> advantage of the TRA mechanism of partitioning data based on date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #1033: SOLR-13965: Use Plugin to add new expressions to GraphHandler

2019-12-30 Thread GitBox
cpoerschke commented on a change in pull request #1033: SOLR-13965: Use Plugin 
to add new expressions to GraphHandler
URL: https://github.com/apache/lucene-solr/pull/1033#discussion_r362065317
 
 

 ##
 File path: solr/core/src/java/org/apache/solr/handler/GraphHandler.java
 ##
 @@ -92,24 +104,29 @@ public void inform(SolrCore core) {
 }
 
 // This pulls all the overrides and additions from the config
+List pluginInfos = 
core.getSolrConfig().getPluginInfos(Expressible.class.getName());
+
+// Check deprecated approach.
 Object functionMappingsObj = initArgs.get("streamFunctions");
 if(null != functionMappingsObj){
+  log.warn("solrconfig.xml:  is deprecated for adding 
additional streaming functions to GraphHandler.");
   NamedList functionMappings = (NamedList)functionMappingsObj;
   for(Entry functionMapping : functionMappings) {
 String key = functionMapping.getKey();
 PluginInfo pluginInfo = new PluginInfo(key, 
Collections.singletonMap("class", functionMapping.getValue()));
-
-if (pluginInfo.pkgName == null) {
-  Class clazz = 
core.getResourceLoader().findClass((String) functionMapping.getValue(),
-  Expressible.class);
-  streamFactory.withFunctionName(key, clazz);
-} else {
-  StreamHandler.ExpressibleHolder holder = new 
StreamHandler.ExpressibleHolder(pluginInfo, core, 
SolrConfig.classVsSolrPluginInfo.get(Expressible.class));
-  streamFactory.withFunctionName(key, () -> holder.getClazz());
-}
-
+pluginInfos.add(pluginInfo);
   }
+}
 
+for (PluginInfo pluginInfo : pluginInfos) {
+  if (pluginInfo.pkgName != null) {
+ExpressibleHolder holder = new ExpressibleHolder(pluginInfo, core, 
SolrConfig.classVsSolrPluginInfo.get(Expressible.class));
+streamFactory.withFunctionName(pluginInfo.name,
+() -> holder.getClazz());
+  } else {
+Class clazz = 
core.getMemClassLoader().findClass(pluginInfo.className, Expressible.class);
 
 Review comment:
   Thanks @epugh for resolving the merge conflicts! The previous 
`getResourceLoader` is now replaced with `getMemClassLoader` here; just 
checking whether that's intended or perhaps accidental. I haven't yet looked at 
what the difference (if any) between the two methods' behaviour is.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13759) Optimize Queries when query filtering by TRA router.field

2019-12-30 Thread Gus Heck (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005625#comment-17005625
 ] 

Gus Heck commented on SOLR-13759:
-

After fiddling with QueryVisitor, I admit it would be nicer if it had some 
better views into non-textual queries. In the attached example I was able to 
grab the dates from individual terms matching a particular field with only 
minimal hand-parsing of strings. Something along these lines will at least make 
us robust against complex fq's containing other fields or multiple instances of 
the field we are interested in.
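
For anyone else experimenting, a rough sketch of the QueryVisitor approach (the 
field name and helper are illustrative; only term-based clauses are surfaced this 
way, which is why better views into non-textual queries would help):

{code:java}
// Rough sketch; collects raw term values for the router.field from a query tree.
// Point/range clauses are not surfaced as terms, hence the wish for better views
// into non-textual queries.
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryVisitor;

public class RouterFieldTermCollector {
  public static List<String> routerFieldTerms(Query query, String routerField) {
    final List<String> values = new ArrayList<>();
    query.visit(new QueryVisitor() {
      @Override
      public void consumeTerms(Query q, Term... terms) {
        for (Term t : terms) {
          if (routerField.equals(t.field())) {
            values.add(t.text()); // still needs hand parsing into dates
          }
        }
      }
    });
    return values;
  }
}
{code}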

> Optimize Queries when query filtering by TRA router.field
> -
>
> Key: SOLR-13759
> URL: https://issues.apache.org/jira/browse/SOLR-13759
> Project: Solr
>  Issue Type: Sub-task
>Reporter: mosh
>Assignee: Gus Heck
>Priority: Minor
> Attachments: QueryVisitorExample.java, SOLR-13759.patch, 
> SOLR-13759.patch, image-2019-12-09-22-45-51-721.png
>
>
> We are currently testing TRA using Solr 7.7, having >300 shards in the alias, 
> with much growth in the coming months.
> The "hot" data(in our case, more recent) will be stored on stronger 
> nodes(SSD, more RAM, etc).
> A proposal of optimizing queries will be by filtering query by date range, by 
> that we will be able to querying the specific TRA collections taking 
> advantage of the TRA mechanism of partitioning data based on date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #1033: SOLR-13965: Use Plugin to add new expressions to GraphHandler

2019-12-30 Thread GitBox
cpoerschke commented on a change in pull request #1033: SOLR-13965: Use Plugin 
to add new expressions to GraphHandler
URL: https://github.com/apache/lucene-solr/pull/1033#discussion_r362070735
 
 

 ##
 File path: solr/core/src/java/org/apache/solr/handler/GraphHandler.java
 ##
 @@ -92,24 +104,29 @@ public void inform(SolrCore core) {
 }
 
 // This pulls all the overrides and additions from the config
+List pluginInfos = 
core.getSolrConfig().getPluginInfos(Expressible.class.getName());
+
+// Check deprecated approach.
 Object functionMappingsObj = initArgs.get("streamFunctions");
 if(null != functionMappingsObj){
+  log.warn("solrconfig.xml:  is deprecated for adding 
additional streaming functions to GraphHandler.");
   NamedList functionMappings = (NamedList)functionMappingsObj;
   for(Entry functionMapping : functionMappings) {
 String key = functionMapping.getKey();
 PluginInfo pluginInfo = new PluginInfo(key, 
Collections.singletonMap("class", functionMapping.getValue()));
-
-if (pluginInfo.pkgName == null) {
-  Class clazz = 
core.getResourceLoader().findClass((String) functionMapping.getValue(),
-  Expressible.class);
-  streamFactory.withFunctionName(key, clazz);
-} else {
-  StreamHandler.ExpressibleHolder holder = new 
StreamHandler.ExpressibleHolder(pluginInfo, core, 
SolrConfig.classVsSolrPluginInfo.get(Expressible.class));
-  streamFactory.withFunctionName(key, () -> holder.getClazz());
-}
-
+pluginInfos.add(pluginInfo);
 
 Review comment:
   The `pluginInfos.add` here raises an interesting question w.r.t. both 
`streamFunctions` and `expressible` being present and what should happen if 
both were to configure the same 'key' ...
   
   My first thought was that simply disallowing the combination is easiest, but 
actually that's not a good idea: `streamFunctions` is at the handler level 
whereas `expressible` is at the config level, so the presence of existing 
`expressible` elements would stop already-present `streamFunctions` from 
working, which would not be very backwards-compatible for users.
   
   If use of both elements is supported though then the deprecated variant 
being appended to the list could (at least theoretically) result in the 
deprecated `streamFunctions` configuration for name 'foo' using 'FooClass' 
overriding (for the GraphHandler only) any `expressible` mapping from name 
'foo' to 'BarClass'. But then again, adding validation e.g. to reject any 
`streamFunctions` elements that are already present-and-different in 
`expressible` would add code complexity.
   
   @joel-bernstein would you have any thoughts on this? Beyond the scope of 
this change here the question essentially is about multiple 
[StreamFactory.withFunctionName](https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.4.0/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/expr/StreamFactory.java#L85-L88)
 calls for the same function name.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9112) OpenNLP tokenizer is fooled by text containing spurious punctuation

2019-12-30 Thread Steven Rowe (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005798#comment-17005798
 ] 

Steven Rowe commented on LUCENE-9112:
-

Your unit test depends on a test model created with very little training data 
(< 100 sentences; see {{opennlp/src/tools/test-model-data/tokenizer.txt}}), so 
it's not at all surprising that you see weird behavior. I would not consider 
this indicative of a bug in Lucene's OpenNLP support.

I think you should open an OPENNLP issue for this problem, but it's likely that 
the most you'll get from them is a pointer to the training data they used to 
create the model they publish.  The most likely outcome is that you will have 
to create a training set that performs better against data you see, and then 
create a model from that.  If you can do that in a way that is shareable with 
other OpenNLP users, I'm sure they would be interested in your contribution.
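
One way to confirm that it is the model rather than Lucene's integration is to run 
the same text through OpenNLP's TokenizerME directly, using the model you actually 
deploy, and compare the output. A rough sketch (the model path and sample text are 
placeholders):

{code:java}
// Rough sketch: tokenize the same text with OpenNLP directly, outside Lucene,
// to see whether the deployed model (path is a placeholder) also splits 'Baron'.
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import opennlp.tools.tokenize.TokenizerME;
import opennlp.tools.tokenize.TokenizerModel;

public class DirectTokenizeCheck {
  public static void main(String[] args) throws Exception {
    try (InputStream in = Files.newInputStream(Paths.get("en-token.bin"))) {
      TokenizerModel model = new TokenizerModel(in);
      TokenizerME tokenizer = new TokenizerME(model);
      String text = "This is a sentence... Baron lived here.";
      for (String token : tokenizer.tokenize(text)) {
        System.out.println(token);
      }
    }
  }
}
{code}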


> OpenNLP tokenizer is fooled by text containing spurious punctuation
> ---
>
> Key: LUCENE-9112
> URL: https://issues.apache.org/jira/browse/LUCENE-9112
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: master (9.0)
>Reporter: Markus Jelsma
>Priority: Major
>  Labels: opennlp
> Fix For: master (9.0)
>
> Attachments: LUCENE-9112-unittest.patch
>
>
> The OpenNLP tokenizer show weird behaviour when text contains spurious 
> punctuation such as having triple dots trailing a sentence...
> # the first dot becomes part of the token, having 'sentence.' becomes the 
> token
> # much further down the text, a seemingly unrelated token is then suddenly 
> split up, in my example (see attached unit test) the name 'Baron' is  split 
> into 'Baro' and 'n', this is the real problem
> The problems never seem to occur when using small texts in unit tests but it 
> certainly does in real world examples. Depending on how many 'spurious' dots, 
> a completely different term can become split, or the same term in just a 
> different location.
> I am not too sure if this is actually a problem in the Lucene code, but it is 
> a problem and i have a Lucene unit test proving the problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1124: SOLR-14151 :Schema components to be loadable from packages

2019-12-30 Thread GitBox
dsmiley commented on a change in pull request #1124: SOLR-14151 :Schema 
components to be loadable from packages
URL: https://github.com/apache/lucene-solr/pull/1124#discussion_r362097171
 
 

 ##
 File path: solr/core/src/java/org/apache/solr/core/ConfigSet.java
 ##
 @@ -51,8 +53,11 @@ public SolrConfig getSolrConfig() {
 return solrconfig;
   }
 
+  public Supplier getIndexSchemaSupplier() {
 
 Review comment:
   Nevermind Future; I was thinking FutureTask specifically, but another look 
shows it'd block. However, Guava `Suppliers.memoize` is perfect.
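
   For reference, a minimal, self-contained sketch of the memoize behaviour (the 
expensive-build method is a stand-in for schema construction, not actual Solr 
code): the computation runs once, on first get(), and later calls return the 
cached result, without blocking up front the way a pre-computed Future would.

{code:java}
// Minimal sketch; expensiveBuild() is a stand-in for schema construction.
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;

public class MemoizeSketch {
  static String expensiveBuild() {
    System.out.println("building once");
    return "schema";
  }

  public static void main(String[] args) {
    Supplier<String> memo = Suppliers.memoize(MemoizeSketch::expensiveBuild);
    memo.get(); // builds on first access
    memo.get(); // returns the cached instance, no second build
  }
}
{code}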


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] andyvuong opened a new pull request #1131: SOLR-14134 : Add lazy and time-based eviction of shared core concurrency metada…

2019-12-30 Thread GitBox
andyvuong opened a new pull request #1131: SOLR-14134 : Add lazy and time-based 
eviction of shared core concurrency metada…
URL: https://github.com/apache/lucene-solr/pull/1131
 
 
   Two main changes:
   - This change switches the cache from a ConcurrentHashMap to a 
TimeAwareLruCache whose underlying implementation is simply a Caffeine cache. 
The default configuration values (the delay after which entries that have not 
been read or written are evicted, and the maximum cache size) are currently 
chosen arbitrarily and will later be made configurable; see the sketch below 
this list.
   - In ZkController#register, we evict any entry for the core that already 
exists in the SharedCoreConcurrencyController cache. This supports the 
collection reuse case described in the JIRA. It is done here because all 
ADDREPLICA paths logically flow through this method, and we want to evict 
entries before the replica gets marked ACTIVE (done here)
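
   A minimal sketch of the Caffeine configuration the wrapper is built around 
(the size bound, expiry and keys below are placeholders, not this PR's defaults):

{code:java}
// Sketch only; the bound, expiry and keys are placeholders, not the PR's defaults.
import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class SharedCoreCacheSketch {
  public static void main(String[] args) {
    Cache<String, Object> cache = Caffeine.newBuilder()
        .maximumSize(1_000)                       // size bound; eviction based on recency/frequency
        .expireAfterAccess(600, TimeUnit.SECONDS) // drop entries not read or written recently
        .build();

    Object metadata = cache.get("coreName", k -> new Object()); // compute-if-absent
    cache.invalidate("coreName"); // explicit eviction, e.g. on re-registration
    cache.cleanUp();              // force pending maintenance; eviction is otherwise amortized
    System.out.println(metadata);
  }
}
{code}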


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14106) SSL with SOLR_SSL_NEED_CLIENT_AUTH not working since v8.2.0

2019-12-30 Thread Kevin Risden (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-14106:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~janhoy]

> SSL with SOLR_SSL_NEED_CLIENT_AUTH not working since v8.2.0
> ---
>
> Key: SOLR-14106
> URL: https://issues.apache.org/jira/browse/SOLR-14106
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.2, 8.3, 8.4, 8.3.1
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
>  Labels: jetty, ssl
> Fix For: 8.5, 8.4.1
>
> Attachments: SOLR-14106.patch, SOLR-14106.patch, SOLR-14106.patch, 
> deprecation-warning.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> For a client we use SSL certificate authentication with Solr through the 
> {{SOLR_SSL_NEED_CLIENT_AUTH=true}} setting. The client must then prove 
> through a local pem file that it has the correct client certificate.
> This works well until Solr 8.1.1, but fails with Solr 8.2 and also 8.3.1. 
> There has been a Jetty upgrade from from jetty-9.4.14 to jetty-9.4.19 and I 
> see some deprecation warnings in the log of 8.3.1:
> {noformat}
> o.e.j.x.XmlConfiguration Deprecated method public void 
> org.eclipse.jetty.util.ssl.SslContextFactory.setWantClientAuth(boolean) in 
> file:///opt/solr-8.3.1/server/etc/jetty-ssl.xml
> {noformat}
> I have made a simple reproduction script using Docker to reproduce first the 
> 8.1.1 behaviour that succeeds, then 8.3.1 which fails:
> {code}
> wget https://www.dropbox.com/s/fkjcez1i5anh42i/tls.tgz
> tar -xvzf tls.tgz
> cd tls
> ./repro.sh
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (SOLR-14155) Load all other SolrCore plugins from packages

2019-12-30 Thread Noble Paul (Jira)
Noble Paul created SOLR-14155:
-

 Summary: Load all other SolrCore plugins from packages
 Key: SOLR-14155
 URL: https://issues.apache.org/jira/browse/SOLR-14155
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14130) Add postlogs command line tool for indexing Solr logs

2019-12-30 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14130:
--
Attachment: SOLR-14130.patch

> Add postlogs command line tool for indexing Solr logs
> -
>
> Key: SOLR-14130
> URL: https://issues.apache.org/jira/browse/SOLR-14130
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-14130.patch, SOLR-14130.patch, SOLR-14130.patch, 
> Screen Shot 2019-12-19 at 2.04.41 PM.png, Screen Shot 2019-12-19 at 2.16.01 
> PM.png, Screen Shot 2019-12-19 at 2.35.41 PM.png, Screen Shot 2019-12-21 at 
> 8.46.51 AM.png
>
>
> This ticket adds a simple command line tool for posting Solr logs to a solr 
> index. The tool works with the out of the box Solr log format. Still a work 
> in progress but currently indexes:
>  * queries
>  * updates
>  * commits
>  * new searchers
>  * errors - including stack traces
> Attached are some sample visualizations using Solr Streaming Expressions and 
> Math Expressions after the data has been loaded. The visualizations show: 
> time series, scatter plots, histograms and quantile plots, but really this is 
> just scratching the surface of the visualizations that can be done with the 
> Solr logs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] ben-manes commented on a change in pull request #1131: SOLR-14134: Add lazy and time-based eviction of shared core concurrency metada…

2019-12-30 Thread GitBox
ben-manes commented on a change in pull request #1131: SOLR-14134: Add lazy and 
time-based eviction of shared core concurrency metada…
URL: https://github.com/apache/lucene-solr/pull/1131#discussion_r362153253
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/store/shared/TimeAwareLruCache.java
 ##
 @@ -0,0 +1,137 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.store.shared;
+
+import java.util.concurrent.TimeUnit;
+import java.util.function.Function;
+
+import com.github.benmanes.caffeine.cache.Cache;
+import com.github.benmanes.caffeine.cache.Caffeine;
+import com.github.benmanes.caffeine.cache.RemovalCause;
+import com.github.benmanes.caffeine.cache.RemovalListener;
+import com.google.common.annotations.VisibleForTesting;
+
+/**
+ * Abstract class representing an in-memory time-aware LRU cache 
+ * 
+ * Time-based LRU cache should evict entries that haven't been accessed
+ * recently and frequently as the size of the cache approaches the specified
+ * limit or if an entry expires before its accessed again
+ */
+public abstract class TimeAwareLruCache<K, V> {
+
+  /**
+   * Retrieves an entry from the cache if it exists, otherwise invokes the
+   * callable to compute the value and adds it to the cache
+   */
+  public abstract V computeIfAbsent(K key, Function<K, V> func) throws Exception;
+
+  /**
+   * Retrieves an entry from the cache if it exists, returns null otherwise 
+   */
+  public abstract V get(K key);
+
+  /**
+   * Adds or removes an entry from the cache 
+   */
+  public abstract void put(K key, V value);
+
+  /**
+   * Removes an entry from the cache. Returns true if the entry existed and was removed
+   */
+  public abstract boolean invalidate(K key);
+
+  /**
+   * Forces the cache to remove any expired entries, or entries to be removed as deemed
+   * by the underlying eviction policies 
+   */
+  public abstract void cleanUp();
+
+  /**
+   * An object that performs some function when an entry is removed from the cache. This 
+   * should be used in cache implementations that support removal listeners
+   */
+  @FunctionalInterface
+  public interface CacheRemovalTrigger<K, V> {
+    /**
+     * @param key of the value being removed
+     * @param value of the key being removed
+     * @param desc string description of the removal cause
+     */
+    void call(K key, V value, String desc);
+  }
+
+  /**
+   * Caffeine-based implementation of a {@link TimeAwareLruCache}. By default, eviction policy
+   * work is performed asynchronously. Therefore entries are evicted after some read or write 
+   * operations. Calling cleanup will perform any pending maintenance operations required by 
+   * the cache
+   * https://github.com/ben-manes/caffeine/wiki/Cleanup
+   */
+  public static class EvictingCache<K, V> extends TimeAwareLruCache<K, V> {
+
+    private Cache<K, V> cache;
+
+    public EvictingCache(int maxSize, long expirationTimeSeconds) {
+      this(maxSize, expirationTimeSeconds, null);
+    }
+
+    @VisibleForTesting
+    protected EvictingCache(int maxSize, long expirationTimeSeconds, CacheRemovalTrigger<K, V> removalTrigger) {
+      Caffeine<Object, Object> builder = Caffeine.newBuilder();
+      builder.maximumSize(maxSize).expireAfterAccess(expirationTimeSeconds, TimeUnit.SECONDS);
+      if (removalTrigger != null) {
+        builder.removalListener(new RemovalListener<K, V>() {
+          @Override
+          public void onRemoval(K key, V value, RemovalCause cause) {
+            removalTrigger.call(key, value, cause.name());
+          }
+        });
+      }
+      cache = builder.build();
+    }
+
+    @Override
+    public V computeIfAbsent(K key, Function<K, V> func) throws Exception {
+      return cache.get(key, func);
+    }
+
+    @Override
+    public V get(K key) {
+      return cache.getIfPresent(key);
+    }
+
+    @Override
+    public void put(K key, V value) {
+      cache.put(key, value);
+    }
+
+    @Override
+    public void cleanUp() {
+      cache.cleanUp();
+    }
+
+    @Override
+    public boolean invalidate(K key) {
 
 Review comment:
   This could be a one-liner :)
   ```java
   return cache.asMap().remove(key) != null;
   ```

-

[GitHub] [lucene-solr] ben-manes commented on issue #1131: SOLR-14134: Add lazy and time-based eviction of shared core concurrency metada…

2019-12-30 Thread GitBox
ben-manes commented on issue #1131: SOLR-14134: Add lazy and time-based 
eviction of shared core concurrency metada…
URL: https://github.com/apache/lucene-solr/pull/1131#issuecomment-569870591
 
 
   Just curious if you foresee alternative implementations? If not, you might 
consider just using the API directly to avoid the facade and test cases that 
assert basic implementation details that the library's tests check as well.
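
   A minimal sketch of the "use the API directly" option, with illustrative key/value types, settings, and class name (nothing here is taken from the PR itself):
```java
import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;

public class DirectCaffeineSketch {
  public static void main(String[] args) {
    // Build the cache once with size- and time-based eviction plus a removal listener,
    // then call Caffeine's Cache API where the facade methods were being used.
    Cache<String, Long> cache = Caffeine.newBuilder()
        .maximumSize(1_000)
        .expireAfterAccess(60, TimeUnit.SECONDS)
        .removalListener((String key, Long value, RemovalCause cause) ->
            System.out.println("evicted " + key + " (" + cause + ")"))
        .build();

    cache.get("core1", k -> 42L);                             // compute-if-absent
    Long hit = cache.getIfPresent("core1");                   // plain lookup, null if absent
    cache.put("core2", 7L);                                   // add or replace
    boolean removed = cache.asMap().remove("core1") != null;  // invalidate, reporting whether it existed
    cache.cleanUp();                                          // run any pending maintenance work

    System.out.println("hit=" + hit + ", removed=" + removed);
  }
}
```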


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] gunasekhardora opened a new pull request #1132: SOLR-13839:maxScore shouldn't be NaN

2019-12-30 Thread GitBox
gunasekhardora opened a new pull request #1132: SOLR-13839:maxScore shouldn't 
be NaN
URL: https://github.com/apache/lucene-solr/pull/1132
 
 
   When group.query doesn't match anything, maxScore now returns 0 instead of 
NaN in the response.
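
   For illustration only, a hypothetical grouped-response fragment for a group.query that matches nothing (field names follow Solr's grouped response layout; the values are made up). Previously maxScore came back as NaN in this doclist; with this change it is 0:
```json
"grouped": {
  "title:nomatch": {
    "matches": 0,
    "doclist": {
      "numFound": 0,
      "start": 0,
      "maxScore": 0.0,
      "docs": []
    }
  }
}
```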
   
   
   
   
   # Description
   
   Please provide a short description of the changes you're making with this 
pull request.
   
   # Solution
   
   Please provide a short description of the approach taken to implement your 
solution.
   
   # Tests
   
   Please describe the tests you've developed or run to confirm this patch 
implements the feature or solves the problem.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [x] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [x] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [x] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [x] I have developed this patch against the `master` branch.
   - [ ] I have run `ant precommit` and the appropriate test suite.
   - [x] I have added tests for my changes.
   - [ ] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org