[jira] [Commented] (SOLR-14357) solrj: using insecure namedCurves

2020-05-06 Thread Bernd Wahlen (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100542#comment-17100542
 ] 

Bernd Wahlen commented on SOLR-14357:
-

Update:
After I updated to AdoptOpenJDK (build 14.0.1+7) and my patched java.security 
file was accidentally overwritten with the JDK default, it still works without 
the error above. But I updated some other things in the meantime (mainly centos 
7.7->7.8 and solrj to 8.5.1). I will try to work out how to reproduce it later.
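For reference, the workaround described in this issue amounts to filtering the 
X9.62 entries out of the jdk.disabled.namedCurves value in java.security. A 
minimal sketch of that filtering follows; the helper name and the sample list 
are illustrative only, not JDK or SolrJ API, and note that re-enabling these 
curves may itself be a security risk, as the report points out:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class CurveFilter {
    // Drops the X9.62 binary-field curves (names starting with "c2tnb" or
    // "c2pnb") from a comma-separated curve list, mirroring the manual edit
    // of jdk.disabled.namedCurves described above. Hypothetical helper, not
    // part of SolrJ or the JDK.
    static String dropX962Curves(String disabledCurves) {
        return Arrays.stream(disabledCurves.split(","))
                .map(String::trim)
                .filter(c -> !c.startsWith("c2tnb") && !c.startsWith("c2pnb"))
                .collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        // A few entries in the style of the JDK 14 default value:
        String sample = "secp112r1, c2tnb191v1, c2pnb163v1, sect113r1";
        System.out.println(dropX962Curves(sample));
        // prints: secp112r1, sect113r1
    }
}
```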

> solrj: using insecure namedCurves
> -
>
> Key: SOLR-14357
> URL: https://issues.apache.org/jira/browse/SOLR-14357
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bernd Wahlen
>Priority: Major
>
> I tried to run our backend with solrj 8.4.1 on jdk14 and got the 
> following error:
> Caused by: java.lang.IllegalArgumentException: Error in security property. 
> Constraint unknown: c2tnb191v1
> After I removed all the X9.62 algorithms from the property 
> jdk.disabled.namedCurves in
> /usr/lib/jvm/java-14-openjdk-14.0.0.36-1.rolling.el7.x86_64/conf/security/java.security
> everything runs.
> This does not happen on staging (I think because it has only 1 Solr node and 
> does not use the lb client).
> We do not set or change any ssl settings in solr.in.sh.
> I don't know how to fix this (default config? apache client settings?), but 
> I think using insecure algorithms may be a security risk and not only a 
> jdk14 issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14426) forbidden api error during precommit DateMathFunction

2020-05-06 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100563#comment-17100563
 ] 

Dawid Weiss commented on SOLR-14426:


Hmm... How do you reproduce the behavior you guys describe? My thinking 
would be that if you touch the source file (DateMathFunction.java), both class 
files should be regenerated. And both class files should have a source attribute 
pointing back at DateMathFunction.java (so tools should be able to tell where a 
class comes from).

> forbidden api error during precommit DateMathFunction
> -
>
> Key: SOLR-14426
> URL: https://issues.apache.org/jira/browse/SOLR-14426
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mike Drob
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When running `./gradlew precommit` I'll occasionally see
> {code}
> * What went wrong:
> Execution failed for task ':solr:contrib:analytics:forbiddenApisMain'.
> > de.thetaphi.forbiddenapis.ForbiddenApiException: Check for forbidden API 
> > calls failed while scanning class 
> > 'org.apache.solr.analytics.function.mapping.DateMathFunction' 
> > (DateMathFunction.java): java.lang.ClassNotFoundException: 
> > org.apache.solr.analytics.function.mapping.DateMathValueFunction (while 
> > looking up details about referenced class 
> > 'org.apache.solr.analytics.function.mapping.DateMathValueFunction')
> {code}
> `./gradlew clean` fixes this, but I don't understand what happens here or 
> why. Feels like a gradle issue?






[GitHub] [lucene-solr] uschindler opened a new pull request #1488: LUCENE-9321: Refactor renderJavadoc to allow relative links with multiple Gradle tasks

2020-05-06 Thread GitBox


uschindler opened a new pull request #1488:
URL: https://github.com/apache/lucene-solr/pull/1488


   This is a WIP task to convert the renderJavadoc task in the Gradle build to 
run twice:
   - once for the Maven artifacts, with absolute links (the standard "javadoc" 
Gradle task replacement; output is project-local). This task will be used for 
precommit (limited checks, as cross-module links cannot be checked) and is 
meant for everyday use.
   - another task that renders all javadocs to the global documentation folder 
(one for Lucene, one for Solr). All links inside will be relative.
   
   This PR currently contains:
   - Refactoring of the task into its own class, so we can create multiple 
named tasks
   - A new task property to generate relative links (a new closure was added 
to do this: it produces a relative link from the Gradle project path of the 
current project to the linked project)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-05-06 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100582#comment-17100582
 ] 

Uwe Schindler commented on LUCENE-9321:
---

I started a PR with this idea: https://github.com/apache/lucene-solr/pull/1488

It's WIP, so have a look. Basically it refactors [~tomoko]'s code into a class 
with properties, so it can be reused for different tasks with completely 
independent settings. The whole thing does not yet work; I just wanted to post 
some early results.

The hardest part was to find a good way to create relative links. I started 
with URI#relativize, but this one does not help here, as it needs a common 
prefix and never creates "..".
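The limitation can be seen with two sibling documentation folders: 
java.net.URI#relativize only relativizes when the base path is a prefix of the 
target path and otherwise returns the target unchanged, while 
java.nio.file.Path#relativize does walk up with "..", which is what a 
cross-module javadoc link needs. A minimal sketch (the paths are illustrative):

```java
import java.net.URI;
import java.nio.file.Path;
import java.nio.file.Paths;

public class RelativizeDemo {
    public static void main(String[] args) {
        // URI#relativize requires the base to be a prefix of the target;
        // for siblings it just returns the target as-is, never "..".
        URI base = URI.create("file:/docs/lucene/core/");
        URI target = URI.create("file:/docs/lucene/analysis/common/");
        System.out.println(base.relativize(target));
        // prints the target unchanged: file:/docs/lucene/analysis/common/

        // Path#relativize does produce "..", so a relative cross-module
        // link can be computed from it.
        Path from = Paths.get("/docs/lucene/core");
        Path to = Paths.get("/docs/lucene/analysis/common");
        System.out.println(from.relativize(to));
        // prints (on Unix): ../analysis/common
    }
}
```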

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[jira] [Updated] (SOLR-14457) SolrClient leaks connections on compressed responses if the response is malformed

2020-05-06 Thread Roger Lehmann (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roger Lehmann updated SOLR-14457:
-
Attachment: multiple-wrapped-entities.png
content-gzipped.png

> SolrClient leaks connections on compressed responses if the response is 
> malformed
> -
>
> Key: SOLR-14457
> URL: https://issues.apache.org/jira/browse/SOLR-14457
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.7.2
> Environment: Solr version: 7.7.2
> Solr cloud enabled
> Cluster topology: 6 nodes, 1 single collection, 10 shards and 3 replicas. 1 
> HTTP LB using
> Round Robin over all nodes
> All cluster nodes have gzip enabled for all paths, all HTTP verbs and all 
> MIME types.
> Solr client: HttpSolrClient targeting the HTTP LB
>Reporter: Samuel García Martínez
>Priority: Major
> Attachments: content-gzipped.png, multiple-wrapped-entities.png
>
>
> h3. Summary
> When SolrJ receives a malformed response entity, for example like the one 
> described in SOLR-14456, the client leaks the connection forever, as it is 
> never released back to the pool.
> h3. Problem description
> HttpSolrClient must have compression enabled, so that it uses the 
> compression interceptors.
> When the response is marked with "Content-Encoding: gzip" but for whatever 
> reason the body is not in GZIP format, HttpSolrClient tries to close the 
> connection using Utils.consumeFully(), which tries to create the 
> GZIPInputStream, fails, and throws an exception. That exception makes it 
> impossible to reach the underlying InputStream to close it, so the 
> connection is leaked.
> Even though the response content should honour the headers specified for 
> it, SolrJ should be robust enough to prevent the connection leak when this 
> scenario happens. On top of the bug itself, not being able to set a timeout 
> while waiting for a connection to become available makes any application 
> unresponsive, as it will eventually run out of threads.
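The failure mode described above can be reproduced with the JDK alone: the 
GZIPInputStream constructor reads the gzip header eagerly and throws a 
ZipException on a non-gzip body, so the caller never receives a stream it 
could close. A minimal sketch of the defensive cleanup (the helper is 
hypothetical, not SolrJ code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

public class GzipLeakDemo {
    // Returns true if the stream had to be closed defensively because the
    // advertised gzip body could not be decoded. Illustrative helper only.
    static boolean closeDefensively(InputStream raw) {
        try {
            // Constructor reads the header and throws ZipException on a
            // non-gzip body -- before any wrapping stream exists.
            new GZIPInputStream(raw).close();
            return false;
        } catch (IOException e) {
            try {
                raw.close(); // release the underlying (pooled) stream anyway
            } catch (IOException ignored) {
            }
            return true;
        }
    }

    public static void main(String[] args) {
        InputStream notGzip = new ByteArrayInputStream("plain text".getBytes());
        System.out.println(closeDefensively(notGzip)); // prints: true
    }
}
```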






[jira] [Commented] (LUCENE-9148) Move the BKD index to its own file.

2020-05-06 Thread Ignacio Vera (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100589#comment-17100589
 ] 

Ignacio Vera commented on LUCENE-9148:
--

It would be nice to consider LUCENE-9291 for this change. The idea of that 
refactoring was exactly to avoid spilling all those {{IndexOutput}} parameters 
into the BKDWriter interface, making it even harder to read. Otherwise the 
change makes sense to me, +1.

> Move the BKD index to its own file.
> ---
>
> Key: LUCENE-9148
> URL: https://issues.apache.org/jira/browse/LUCENE-9148
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lucene60PointsWriter stores both inner nodes and leaf nodes in the same file, 
> interleaved. For instance if you have two fields, you would have 
> {{}}. It's not 
> ideal since leaves and inner nodes have quite different access patterns. 
> Should we split this into two files? In the case when the BKD index is 
> off-heap, this would also help force it into RAM with 
> {{MMapDirectory#setPreload}}.
> Note that Lucene60PointsFormat already has a file that it calls "index" but 
> it's really only about mapping fields to file pointers in the other file and 
> not what I'm discussing here. But we could possibly store the BKD indices in 
> this existing file if we want to avoid creating a new one.






[jira] [Commented] (SOLR-14457) SolrClient leaks connections on compressed responses if the response is malformed

2020-05-06 Thread Roger Lehmann (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100590#comment-17100590
 ] 

Roger Lehmann commented on SOLR-14457:
--

Sorry for the confusion!
We're using the HTTP1 client, since I think Http2SolrClient doesn't have 
allowCompression() available yet?

The bug happens exactly as you say; I was just confused by the multiple wrapped 
objects and the content buffer seemingly being gzipped (see the Debug view in 
the screenshots below). I'm not a Java programmer, I just stumbled upon this 
issue when we tried to use GZIP in our production environment and it ran into a 
wall. So unfortunately I can't help much with further debugging or developing a 
fix.

I used a small test application here with a prefilled Solr running in a Docker 
container to test this.
In our production environment we experience the same issue when using GZIP, 
without Docker.


{panel:title=Screenshots}
!multiple-wrapped-entities.png!

!content-gzipped.png! 
{panel}


> SolrClient leaks connections on compressed responses if the response is 
> malformed
> -
>
> Key: SOLR-14457
> URL: https://issues.apache.org/jira/browse/SOLR-14457
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.7.2
> Environment: Solr version: 7.7.2
> Solr cloud enabled
> Cluster topology: 6 nodes, 1 single collection, 10 shards and 3 replicas. 1 
> HTTP LB using
> Round Robin over all nodes
> All cluster nodes have gzip enabled for all paths, all HTTP verbs and all 
> MIME types.
> Solr client: HttpSolrClient targeting the HTTP LB
>Reporter: Samuel García Martínez
>Priority: Major
> Attachments: content-gzipped.png, multiple-wrapped-entities.png
>
>






[jira] [Comment Edited] (SOLR-14457) SolrClient leaks connections on compressed responses if the response is malformed

2020-05-06 Thread Roger Lehmann (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100590#comment-17100590
 ] 

Roger Lehmann edited comment on SOLR-14457 at 5/6/20, 8:40 AM:
---

Sorry for the confusion!
We're using the HTTP1 client, since I think Http2SolrClient doesn't have 
allowCompression() available yet?

The bug happens exactly as you say; I was just confused by the multiple wrapped 
objects and the content buffer seemingly being gzipped (see the Debug view in 
the screenshots below). I'm not a Java programmer, I just stumbled upon this 
issue when we tried to use GZIP in our production environment and it ran into a 
wall. So unfortunately I can't help much with further debugging or developing a 
fix.

I used a small test application here with a prefilled Solr running in a Docker 
container to test this.
In our production environment we experience the same issue when using GZIP, 
without Docker.

!multiple-wrapped-entities.png!

!content-gzipped.png!




> SolrClient leaks connections on compressed responses if the response is 
> malformed
> -
>
> Key: SOLR-14457
> URL: https://issues.apache.org/jira/browse/SOLR-14457
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.7.2
> Environment: Solr version: 7.7.2
> Solr cloud enabled
> Cluster topology: 6 nodes, 1 single collection, 10 shards and 3 replicas. 1 
> HTTP LB using
> Round Robin over all nodes
> All cluster nodes have gzip enabled for all paths, all HTTP verbs and all 
> MIME types.
> Solr client: HttpSolrClient targeting the HTTP LB
>Reporter: Samuel García Martínez
>Priority: Major
> Attachments: content-gzipped.png, multiple-wrapped-entities.png
>
>






[GitHub] [lucene-solr] sigram commented on a change in pull request #1486: SOLR-14423: Use SolrClientCache instance managed by CoreContainer

2020-05-06 Thread GitBox


sigram commented on a change in pull request #1486:
URL: https://github.com/apache/lucene-solr/pull/1486#discussion_r420638269



##
File path: solr/core/src/java/org/apache/solr/handler/sql/CalciteSolrDriver.java
##
@@ -59,11 +81,346 @@ public Connection connect(String url, Properties info) 
throws SQLException {
 if(schemaName == null) {
   throw new SQLException("zk must be set");
 }
-rootSchema.add(schemaName, new SolrSchema(info));
+SolrSchema solrSchema = new SolrSchema(info);
+rootSchema.add(schemaName, solrSchema);
 
 // Set the default schema
 calciteConnection.setSchema(schemaName);
 
-return connection;
+return new SolrCalciteConnectionWrapper(calciteConnection, solrSchema);
+  }
+
+  // the sole purpose of this class is to be able to invoke SolrSchema.close()
+  // when the connection is closed.
+  private static final class SolrCalciteConnectionWrapper implements 
CalciteConnection {

Review comment:
   Sure, good idea.








[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #1486: SOLR-14423: Use SolrClientCache instance managed by CoreContainer

2020-05-06 Thread GitBox


cpoerschke commented on a change in pull request #1486:
URL: https://github.com/apache/lucene-solr/pull/1486#discussion_r420621435



##
File path: solr/core/src/java/org/apache/solr/handler/GraphHandler.java
##
@@ -83,6 +84,7 @@
   private StreamFactory streamFactory = new DefaultStreamFactory();
   private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
   private String coreName;
+  private SolrClientCache solrClientCache;

Review comment:
   minor: here the ordering is coreName/solrClientCache, but in the inform 
method the use order is solrClientCache/coreName. No particular order seems to 
be technically necessary, so for code readability it might help to use the same 
ordering (one or the other) in both places?

##
File path: solr/core/src/java/org/apache/solr/handler/sql/CalciteSolrDriver.java
##
@@ -59,11 +81,346 @@ public Connection connect(String url, Properties info) 
throws SQLException {
 if(schemaName == null) {
   throw new SQLException("zk must be set");
 }
-rootSchema.add(schemaName, new SolrSchema(info));
+SolrSchema solrSchema = new SolrSchema(info);
+rootSchema.add(schemaName, solrSchema);
 
 // Set the default schema
 calciteConnection.setSchema(schemaName);
 
-return connection;
+return new SolrCalciteConnectionWrapper(calciteConnection, solrSchema);
+  }
+
+  // the sole purpose of this class is to be able to invoke SolrSchema.close()
+  // when the connection is closed.
+  private static final class SolrCalciteConnectionWrapper implements 
CalciteConnection {

Review comment:
   > ... a new top level class that is a no-frills Wrapper ...
   
   You mean like a `FilterCalciteConnection` assuming `Filter...` is still the 
naming convention e.g. as in 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.5.1/lucene/core/src/java/org/apache/lucene/index/FilterCodecReader.java
 say?
   
   I was going to suggest w.r.t. test coverage to ensure all that needs to be 
overridden has been overridden (both now and in future) -- similar to 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.5.1/lucene/core/src/test/org/apache/lucene/index/TestFilterCodecReader.java#L29
 -- and yeah if the no-frills wrapper has the test coverage then it would be 
easier to see how `SolrCalciteConnectionWrapper extends 
FilterCalciteConnection` is only different w.r.t. the close() method.
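The `Filter...` forwarding convention plus the reflective "everything is 
overridden" check can be sketched with a tiny stand-in interface; `Conn` and 
`FilterConn` below are hypothetical and far smaller than the real 
`CalciteConnection`, so this is only an illustration of the pattern, not the 
proposed class:

```java
import java.lang.reflect.Method;

// Hypothetical two-method stand-in for a large connection interface.
interface Conn {
    String schema();
    void close();
}

// Forwarding class in the Filter... convention: delegates every method,
// so a subclass only overrides what it needs (e.g. close()).
class FilterConn implements Conn {
    protected final Conn in;
    FilterConn(Conn in) { this.in = in; }
    @Override public String schema() { return in.schema(); }
    @Override public void close() { in.close(); }
}

public class FilterPatternDemo {
    // Reflective coverage check in the spirit of TestFilterCodecReader:
    // every public interface method must be declared by the filter class
    // itself, i.e. nothing is accidentally left un-forwarded.
    static boolean overridesAll(Class<?> filter, Class<?> iface) throws Exception {
        for (Method m : iface.getMethods()) {
            Method impl = filter.getMethod(m.getName(), m.getParameterTypes());
            if (!impl.getDeclaringClass().equals(filter)) return false;
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(overridesAll(FilterConn.class, Conn.class));
        // prints: true
    }
}
```

A test like `overridesAll` is what guards the wrapper "both now and in future": 
adding a method to the interface makes the check fail until the filter forwards it.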

##
File path: 
solr/core/src/java/org/apache/solr/metrics/reporters/solr/SolrReporter.java
##
@@ -306,11 +312,11 @@ public String toString() {
   // We delegate to registries anyway, so having a dummy registry is harmless.
   private static final MetricRegistry dummyRegistry = new MetricRegistry();
 
-  public SolrReporter(HttpClient httpClient, Supplier urlProvider, 
SolrMetricManager metricManager,
+  public SolrReporter(SolrClientCache solrClientCache, Supplier 
urlProvider, SolrMetricManager metricManager,
   List metrics, String handler,
   String reporterId, TimeUnit rateUnit, TimeUnit 
durationUnit,
   SolrParams params, boolean skipHistograms, boolean 
skipAggregateValues,
-  boolean cloudClient, boolean compact) {
+  boolean cloudClient, boolean compact, boolean 
closeClientCache) {

Review comment:
   minor/subjective: `SolrClientCache solrClientCache` and `boolean 
closeClientCache` arguments being adjacent might help with readability in the 
builder caller, though of course keeping all the `boolean` arguments together 
has advantages too

##
File path: solr/core/src/java/org/apache/solr/search/join/XCJFQuery.java
##
@@ -261,7 +261,7 @@ private TupleStream createSolrStream() {
 }
 
 private DocSet getDocSet() throws IOException {
-  SolrClientCache solrClientCache = new SolrClientCache();
+  SolrClientCache solrClientCache = 
searcher.getCore().getCoreContainer().getSolrClientCache();

Review comment:
   Need the `solrClientCache.close();` further down in the method be 
removed since a shared cache is now used?








[jira] [Created] (LUCENE-9363) TestIndexWriterDelete.testDeleteAllNoDeadLock failed on CI

2020-05-06 Thread Simon Willnauer (Jira)
Simon Willnauer created LUCENE-9363:
---

 Summary: TestIndexWriterDelete.testDeleteAllNoDeadLock failed on CI
 Key: LUCENE-9363
 URL: https://issues.apache.org/jira/browse/LUCENE-9363
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Simon Willnauer


{noformat}
[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriterDelete 
-Dtests.method=testDeleteAllNoDeadLock -Dtests.seed=7C9973DFB2835976 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=ro-MD 
-Dtests.timezone=SystemV/AST4ADT -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.25s J2 | TestIndexWriterDelete.testDeleteAllNoDeadLock <<<
   [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([7C9973DFB2835976:4F809CC13240E320]:0)
   [junit4]>at 
org.apache.lucene.index.IndexWriter.abortMerges(IndexWriter.java:2497)
   [junit4]>at 
org.apache.lucene.index.IndexWriter.deleteAll(IndexWriter.java:2424)
   [junit4]>at 
org.apache.lucene.index.RandomIndexWriter.deleteAll(RandomIndexWriter.java:373)
   [junit4]>at 
org.apache.lucene.index.TestIndexWriterDelete.testDeleteAllNoDeadLock(TestIndexWriterDelete.java:348)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]>at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)
   [junit4]>at java.base/java.lang.Thread.run(Thread.java:832)
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/J2/temp/lucene.index.TestIndexWriterDelete_7C9973DFB2835976-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene84): 
{field=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene84)), 
city=Lucene84, 
contents=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene84)),
 id=Lucene84, value=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
content=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, 
docValues:{dv=DocValuesFormat(name=Asserting)}, maxPointsInLeafNode=2033, 
maxMBSortInHeap=6.1560152382287825, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@c9cd3c9),
 locale=ro-MD, timezone=SystemV/AST4ADT
   [junit4]   2> NOTE: Linux 5.3.0-46-generic amd64/Oracle Corporation 15-ea 
(64-bit)/cpus=16,threads=1,free=389663704,total=518979584
   [junit4]   2> NOTE: All tests run in this JVM: [TestIntroSorter, 
TestNGramPhraseQuery, TestUpgradeIndexMergePolicy, TestBasicModelIne, 
TestTragicIndexWriterDeadlock, TestSimpleExplanations, TestDirectory, 
TestSegmentToThreadMapping, TestNIOFSDirectory, TestAxiomaticF3LOG, 
TestIndexOptions, TestSpanNotQuery, TestBinaryDocument, 
TestFixedLengthBytesRefArray, TestForUtil, TestParallelLeafReader, 
TestDirectoryReader, TestBytesRef, TestLongRange, TestByteSlices, 
TestRectangle2D, TestBufferedChecksum, TestIndexOrDocValuesQuery, 
TestFilterDirectoryReader, TestSleepingLockWrapper, TestDocIdSetIterator, 
TestMultiPhraseQuery, TestSearchWithThreads, TestPointQueries, 
TestForceMergeForever, TestNewestSegment, TestCrash, TestIntSet, 
Test4GBStoredFields, TestDirectPacked, TestLiveFieldValues, 
TestTopDocsCollector, Test2BPositions, TestAtomicUpdate, 
TestFSTDirectAddressing, TestXYPointQueries, TestLatLonLineShapeQueries, 
TestStressAdvance, TestBasics, TestLucene50StoredFieldsFormat, 
TestQueryRescorer, TestConstantScoreScorer, TestAssertions, TestDemo, 
TestExternalCodecs, TestMergeSchedulerExternal, TestAnalyzerWrapper, 
TestCachingTokenFilter, TestCharArrayMap, TestCharArraySet, TestCharFilter, 
TestCharacterUtils, TestDelegatingAnalyzerWrapper, TestGraphTokenFilter, 
TestGraphTokenizers, TestReusableStringReader, TestWordlistLoader, 
TestStandardAnalyzer, TestBytesRefAttImpl, TestCharTermAttributeImpl, 
TestCodecUtil, TestCompetitiveFreqNormAccumulator, TestFastCompressionMode, 
TestFastDecompressionMode, TestHighCompressionMode, 
TestLucene60FieldInfoFormat, TestLucene60PointsFormat, 
TestLucene70SegmentInfoFormat, TestDocument, TestDoubleRange, 
TestFeatureDoubleValues, TestFeatureField, TestFeatureSort, TestField, 
TestFieldType, TestFloatRange, TestLatLonMultiPolygonShapeQueries, 
TestXYMultiLineShapeQueries, TestXYPolygonShapeQueries, TestLine2D, 
TestTessellator, TestXYCircle, TestXYPolygon, Test2BDocs, 
TestAllFilesCheckIndexHeader, TestDocInverterPerFieldErrorInfo, 
TestDocsAndPositions, TestExceedMaxTermLength, TestFieldInfos, 
TestFieldInvertState, TestFieldUpdatesBuffer, TestFlex, TestIndexWriterCommit, 
TestIndexWriter

[jira] [Updated] (LUCENE-9363) TestIndexWriterDelete.testDeleteAllNoDeadLock failed on CI

2020-05-06 Thread Simon Willnauer (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-9363:

Issue Type: Test  (was: Improvement)

> TestIndexWriterDelete.testDeleteAllNoDeadLock failed on CI
> --
>
> Key: LUCENE-9363
> URL: https://issues.apache.org/jira/browse/LUCENE-9363
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Simon Willnauer
>Priority: Major
>
> TestFeatureDoubleValues, TestFeatureField, TestFeatureSort, TestField, 
> TestFieldType, TestFloatRange, TestLatLonMultiPolygonShapeQueri

[GitHub] [lucene-solr] markharwood opened a new pull request #1489: RegEx querying - add support for Java’s predefined character classes like \d for digits

2020-05-06 Thread GitBox


markharwood opened a new pull request #1489:
URL: https://github.com/apache/lucene-solr/pull/1489


   Jira Issue [9336](https://issues.apache.org/jira/browse/LUCENE-9336) 
proposes adding support for common regex character classes like `\w`.
   This PR adds the code to RegExp.java and associated tests.
   
   The implementation could have gone one of two ways:
   1) Extend `Kind` to introduce new types for DIGIT/WHITESPACE etc and 
corresponding case statements for each type to `make[Type]`,  rendering 
toString, toStringTree and toAutomaton or
   2) Reuse existing Kinds like range etc. by adding a simple piece of logic to 
the parser to expand `\d` into the [documented 
equivalent](https://docs.oracle.com/javase/tutorial/essential/regex/pre_char_classes.html#CHART),
 i.e. `[0-9]`.
   
   I went for option 2 which makes the code shorter/cleaner and the meaning of 
expressions like `\d` more easily readable in the code. The downside is that 
the `toString` representations of these inputs are not as succinct - rendering 
the fully expanded character lists rather than the shorthand `\x` type inputs 
that generated them.
   Happy to change if we feel this is the wrong trade-off.
   
   One other consideration is that the shorthand expressions list could perhaps 
be made configurable e.g. `\h` might be shorthand used to represent hashtags of 
the form `#\w*` if that was something users routinely searched for and wanted 
to add to the regex vocabulary.
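   To make option 2 concrete, here is a minimal, hypothetical sketch of the kind of shorthand-expansion table a parser could consult before normal parsing. Class and method names are invented for illustration; this is not the actual RegExp.java change:

```java
import java.util.Map;

// Hypothetical sketch of "option 2": expand predefined character-class
// shorthands into their documented range equivalents before parsing.
// The table mirrors Java's Pattern chart for \d, \s and \w.
public class CharClassExpansion {
    private static final Map<Character, String> EXPANSIONS = Map.of(
        'd', "[0-9]",
        'D', "[^0-9]",
        's', "[ \\t\\n\\x0B\\f\\r]",
        'S', "[^ \\t\\n\\x0B\\f\\r]",
        'w', "[a-zA-Z_0-9]",
        'W', "[^a-zA-Z_0-9]");

    /** Returns the expanded form for a shorthand like 'd', or null if unknown. */
    static String expand(char shorthand) {
        return EXPANSIONS.get(shorthand);
    }

    public static void main(String[] args) {
        System.out.println(expand('d')); // prints [0-9]
        System.out.println(expand('w')); // prints [a-zA-Z_0-9]
    }
}
```

   As noted, the trade-off is that the toString of a parsed `\d` would render the expanded `[0-9]` form rather than the shorthand that produced it.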
   
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9336) RegExp.java - add support for character classes like \w

2020-05-06 Thread Mark Harwood (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100659#comment-17100659
 ] 

Mark Harwood commented on LUCENE-9336:
--

I opened a [PR|https://github.com/apache/lucene-solr/pull/1489]

> RegExp.java - add support for character classes like \w
> ---
>
> Key: LUCENE-9336
> URL: https://issues.apache.org/jira/browse/LUCENE-9336
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Reporter: Mark Harwood
>Assignee: Mark Harwood
>Priority: Major
>
> Character classes commonly used in regular expressions like \s for whitespace 
> are not currently supported and may well be returning false negatives because 
> they don't throw any "unsupported" errors. 
> The proposal is that the RegExp class add support for the set of character 
> classes [defined by Java's Pattern 
> API|https://docs.oracle.com/javase/tutorial/essential/regex/pre_char_classes.html]
> I can work on a patch if we think this is something we want to consider.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-05-06 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100666#comment-17100666
 ] 

Dawid Weiss commented on LUCENE-9321:
-

I looked, but I think I'll wait with comments until it's closer to a workable 
state. I don't know if we absolutely have to go through URI - I'd just use Path 
methods (even in convertPath2Link). But I may be talking nonsense here.

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-05-06 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100701#comment-17100701
 ] 

Uwe Schindler commented on LUCENE-9321:
---

bq.  I'd just use Path methods (even in convertPath2Link). But I may be talking 
nonsense here.

I did try this, but it breaks on Windows, and it is hard to handle paths with 
non-existing files (Path appends slashes on directories, but only if they 
exist). So working with virtual paths is hard. Another option was 
plexus-utils, but the code here is more or less the way it works there 
(just much shorter because of Groovy). I think the code now is fine (drop the 
elements that form the common prefix of both paths, then add as many ".." in 
front as needed to get out of the base path). I see this as done for now. Ideas 
still welcome.
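The prefix-dropping scheme described above can be sketched in plain Java, working only on string lists (no java.nio.Path, hence no platform-dependent behavior). This is an illustrative reimplementation, not the actual Gradle script code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the prefix-dropping approach: drop the path elements common
// to both paths, then prepend one ".." per remaining element of the base
// path. Plain string manipulation, so it behaves the same on every OS.
public class RelativePathSketch {
    static String relativize(String from, String to) {
        List<String> f = new ArrayList<>(Arrays.asList(from.split("/")));
        List<String> t = new ArrayList<>(Arrays.asList(to.split("/")));
        // drop the common prefix of both paths
        while (!f.isEmpty() && !t.isEmpty() && f.get(0).equals(t.get(0))) {
            f.remove(0);
            t.remove(0);
        }
        // one ".." per element left in the base path, then the remainder of "to"
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < f.size(); i++) sb.append("../");
        return sb + String.join("/", t);
    }

    public static void main(String[] args) {
        System.out.println(relativize("lucene/analysis/common", "lucene/core"));
        // prints ../../core
    }
}
```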

I am currently cleaning up more, I removed hardcoded task names used to refer 
to "element-list". I think I can also cleanup the bugfix for the split package 
problem without using ant. The code is "unreadable" (it's "Ant style" how it is 
at the moment). It's a 5-liner in Groovy to filter the element-list.

I will then duplicate the "renderJavadocs" task and set the outputDirectory 
correctly. Currently its doing relative paths for testing purposes everywhere 
(debug output looks fine to me).

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-05-06 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100708#comment-17100708
 ] 

Uwe Schindler commented on LUCENE-9321:
---

bq. I did try this, but breaks on windows and it is hard to handle with paths 
with non-existing files (Path append slashes on directories, but only if they 
exist). So working with virtual paths is hard. 

Actually the code was twice as long as the current code, so simply doing some 
set magic was much easier and behaves consistently, as it's not relying on 
platform-dependent things.

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[jira] [Comment Edited] (LUCENE-9321) Port documentation task to gradle

2020-05-06 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100708#comment-17100708
 ] 

Uwe Schindler edited comment on LUCENE-9321 at 5/6/20, 11:39 AM:
-

bq. I did try this, but breaks on windows and it is hard to handle with paths 
with non-existing files (Path append slashes on directories, but only if they 
exist). So working with virtual paths is hard. 

Actually the code using nio.Path was twice as long as the current code, so 
simply doing some set/list magic was much easier and behaves consistently, as 
it's not relying on platform-dependent things.


was (Author: thetaphi):
bq. I did try this, but breaks on windows and it is hard to handle with paths 
with non-existing files (Path append slashes on directories, but only if they 
exist). So working with virtual paths is hard. 

Actually the code was twice as long as the current code, so simply doing some 
set magic was much easier and behaves consistent, as it's not relying on 
platform dependent things.

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[GitHub] [lucene-solr] mocobeta commented on a change in pull request #1488: LUCENE-9321: Refactor renderJavadoc to allow relative links with multiple Gradle tasks

2020-05-06 Thread GitBox


mocobeta commented on a change in pull request #1488:
URL: https://github.com/apache/lucene-solr/pull/1488#discussion_r420737101



##
File path: gradle/render-javadoc.gradle
##
@@ -430,3 +305,180 @@ configure(subprojects.findAll { it.path in 
[':solr:solr-ref-guide', ':solr:serve
 project.tasks.findByPath("renderJavadoc").enabled = false
   }
 }
+
+class RenderJavadocTask extends DefaultTask {
+  @InputFiles
+  @SkipWhenEmpty
+  SourceDirectorySet srcDirSet;
+  
+  @OutputDirectory
+  File outputDir;
+  
+  @InputFiles
+  @Classpath
+  FileCollection classpath;
+  
+  @Input
+  boolean linksource = false;
+  
+  @Input
+  boolean linkJUnit = false;
+  
+  @Input
+  boolean relativeProjectLinks = true;
+
+  @Input
+  def linkProjects = [];
+  
+  @Input
+  def luceneDocUrl = project.propertyOrDefault('lucene.javadoc.url', 
"https://lucene.apache.org/core/${project.baseVersion.replace(".", "_")}")
+
+  @Input
+  def solrDocUrl = project.propertyOrDefault('solr.javadoc.url', 
"https://lucene.apache.org/solr/${project.baseVersion.replace(".", "_")}")
+  
+  @TaskAction
+  public void render() {
+def thispc = project.path.split(':').drop(1);
+// Converts absolute project path (e.g., ":lucene:analysis:common") to 
+// a link in the docs; relative to current, if needed for global 
documentation
+def convertPath2Link = { path ->
+  def pc = path.split(':').drop(1);
+  if (relativeProjectLinks) {
+int toDrop = 0;
+for (int i = 0; i < Math.min(pc.size(), thispc.size()); i++) {
+  if (pc[i] == thispc[i]) {
+toDrop++;
+  } else {
+break;
+  }
+}
+// only create relative path if there is actually anything removed 
from beginning (this implies absolute link solr -> lucene):
+if (toDrop > 0) {
+  return Collections.nCopies(thispc.size() - toDrop, 
'..').plus(pc.drop(toDrop) as List).join('/').concat('/');
+} 
+  }
+  return "${(pc[0] == 'lucene') ? luceneDocUrl : 
solrDocUrl}/${pc.drop(1).join('/')}/"
+}
+
+// escapes an option with single quotes or whitespace to be passed in the 
options.txt file for
+def escapeJavadocOption = { String s -> (s =~ /[ '"]/) ? ("'" + 
s.replaceAll(/[\\'"]/, /\\$0/) + "'") : s }
+
+def relativizeURL = { String from, String to ->
+  URI fromUri = URI.create(from).normalize();
+  URI toUri = URI.create(to).normalize();
+  if (fromUri.scheme != toUri.scheme || fromUri.authority != 
toUri.authority) {
+return to;
+  }
+  // because URI#relativize can't handle relative paths, we use Path class 
as a workaround
+  Path fromPath = Paths.get("./${fromUri.path}");
+  Path toPath = Paths.get("./${toUri.path}");
+  return fromPath.relativize(toPath).toString().replace(File.separator, 
'/')
+}
+
+def libName = project.path.startsWith(":lucene") ? "Lucene" : "Solr"
+def title = "${libName} ${project.version} ${project.name} API"
+
+// absolute urls for "-linkoffline" option
+def javaSEDocUrl = "https://docs.oracle.com/en/java/javase/11/docs/api/";
+def junitDocUrl = "https://junit.org/junit4/javadoc/4.12/";
+
+def javadocCmd = 
org.gradle.internal.jvm.Jvm.current().getJavadocExecutable()
+
+def srcDirs = srcDirSet.srcDirs.findAll { dir -> dir.exists() }
+def optionsFile = project.file("${getTemporaryDir()}/javadoc-options.txt")
+
+def opts = []
+opts << [ '-overview', project.file("${srcDirs[0]}/overview.html") ]
+opts << [ '-sourcepath', srcDirs.join(File.pathSeparator) ]
+opts << [ '-subpackages', project.path.startsWith(':lucene') ? 
'org.apache.lucene' : 'org.apache.solr' ]
+opts << [ '-d', outputDir ]
+opts << '-protected'
+opts << [ '-encoding', 'UTF-8' ]
+opts << [ '-charset', 'UTF-8' ]
+opts << [ '-docencoding', 'UTF-8' ]
+opts << '-noindex'
+opts << '-author'
+opts << '-version'
+if (linksource) {
+  opts << '-linksource'
+}
+opts << '-use'
+opts << [ '-locale', 'en_US' ]
+opts << [ '-windowtitle', title ]
+opts << [ '-doctitle', title ]
+if (!classpath.isEmpty()) {
+  opts << [ '-classpath', classpath.asPath ]
+}
+opts << [ '-bottom', "Copyright © 2000-${project.buildYear} Apache 
Software Foundation. All Rights Reserved." ]
+
+opts << [ '-tag', 'lucene.experimental:a:WARNING: This API is experimental 
and might change in incompatible ways in the next release.' ]
+opts << [ '-tag', 'lucene.internal:a:NOTE: This API is for internal 
purposes only and might change in incompatible ways in the next release.' ]
+opts << [ '-tag', "lucene.spi:t:SPI Name (case-insensitive: if the name is 
'htmlStrip', 'htmlstrip' can be used when looking up the service)." ]
+
+// resolve links to JavaSE and JUnit API
+opts << [ '-linkoffline', javaSEDocUrl, 
project.project(':lucene').file('tools/javadoc/java11/') ]
+if (linkJUnit) {
+  opts <

[jira] [Commented] (LUCENE-9148) Move the BKD index to its own file.

2020-05-06 Thread Michael McCandless (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100741#comment-17100741
 ] 

Michael McCandless commented on LUCENE-9148:


{quote}Not one file per field, it would be horrible. :)
{quote}
OK phew :)
{quote}The motivation for splitting the index and data files is that they have 
different access patterns. For instance finding nearest neighbors is pretty 
intense on the index, and I believe some users might want to keep it in RAM so 
having it in a different file from the data file will help users leverage 
MmapDirectory#setPreload and FileSwitchDirectory to do so.
{quote}
This sounds great – the better locality should also be a performance win even 
if you do not explicitly warm, e.g. using {{setPreload}}.
{quote}It would be nice to consider LUCENE-9291 for this change. 
{quote}
Maybe keep these two changes separate if possible?  It is usually best to 
strongly separate rote refactoring (no functionality changed) from changes in 
functionality.  Or, perhaps this change could better inform specifically what 
approach/interfaces we would want in LUCENE-9291.

> Move the BKD index to its own file.
> ---
>
> Key: LUCENE-9148
> URL: https://issues.apache.org/jira/browse/LUCENE-9148
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lucene60PointsWriter stores both inner nodes and leaf nodes in the same file, 
> interleaved. For instance if you have two fields, you would have 
> {{}}. It's not 
> ideal since leaves and inner nodes have quite different access patterns. 
> Should we split this into two files? In the case when the BKD index is 
> off-heap, this would also help force it into RAM with 
> {{MMapDirectory#setPreload}}.
> Note that Lucene60PointsFormat already has a file that it calls "index" but 
> it's really only about mapping fields to file pointers in the other file and 
> not what I'm discussing here. But we could possibly store the BKD indices in 
> this existing file if we want to avoid creating a new one.






[jira] [Created] (LUCENE-9364) Fix side effect of logging call in LogUpdateProcessorFactory

2020-05-06 Thread Erick Erickson (Jira)
Erick Erickson created LUCENE-9364:
--

 Summary: Fix side effect of logging call in 
LogUpdateProcessorFactory
 Key: LUCENE-9364
 URL: https://issues.apache.org/jira/browse/LUCENE-9364
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson


There's a logging call in LogUpdateProcessorFactory like this:

if (log.isInfoEnabled()) {
  log.info(getLogStringAndClearRspToLog());
}

immediately followed by a WARN-level call to log slow queries if the query is slow.


getLogStringAndClearRspToLog has a call in it:
  *rsp.getToLog().clear();   // make it so SolrCore.exec won't log this 
again*

This has been true since at least Solr 7.5. It's wrong for two reasons:
1> logging calls shouldn't have side effects like this in the first place
2> Right after that call, there's also a call to (potentially) log slow 
requests, and rsp.getToLog().clear() will already have been called if 
logging at info level.

I'll fix shortly, although since it's been like this for a long time, I'm not 
in a panic thinking I introduced this recently.






[GitHub] [lucene-solr] sigram commented on a change in pull request #1486: SOLR-14423: Use SolrClientCache instance managed by CoreContainer

2020-05-06 Thread GitBox


sigram commented on a change in pull request #1486:
URL: https://github.com/apache/lucene-solr/pull/1486#discussion_r420757857



##
File path: solr/core/src/java/org/apache/solr/search/join/XCJFQuery.java
##
@@ -261,7 +261,7 @@ private TupleStream createSolrStream() {
 }
 
 private DocSet getDocSet() throws IOException {
-  SolrClientCache solrClientCache = new SolrClientCache();
+  SolrClientCache solrClientCache = 
searcher.getCore().getCoreContainer().getSolrClientCache();

Review comment:
   Yes, good catch!








[jira] [Created] (SOLR-14460) Fix side effect of logging call in LogUpdateProcessorFactory

2020-05-06 Thread Erick Erickson (Jira)
Erick Erickson created SOLR-14460:
-

 Summary: Fix side effect of logging call in 
LogUpdateProcessorFactory
 Key: SOLR-14460
 URL: https://issues.apache.org/jira/browse/SOLR-14460
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: logging
Affects Versions: 7.5
Reporter: Erick Erickson
Assignee: Erick Erickson


This section of code:
{code:java}
  if (log.isInfoEnabled()) {
log.info(getLogStringAndClearRspToLog());
  }

  if (log.isWarnEnabled() && slowUpdateThresholdMillis >= 0) {
final long elapsed = (long) req.getRequestTimer().getTime();
if (elapsed >= slowUpdateThresholdMillis) {
  log.warn("slow: " + getLogStringAndClearRspToLog());
}
  }
{code}
is wrong since getLogStringAndClearRspToLog() contains this bit:
{code:java}
  rsp.getToLog().clear();   // make it so SolrCore.exec won't log this again
{code}
so the second call, if both are executed, has already cleared the rsp.toLog.

Besides, it's poor form to have this kind of side effect in a logging call.

I'll fix this soon. Now that I know this has been around since 7.5, I'm not in 
a panic thinking I introduced it with the logging bits I did recently.
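A minimal sketch of one way to avoid the double clear: call the side-effecting method once and reuse the string for both the INFO and the slow-request WARN paths. The types below are stand-ins, not Solr's actual classes, and this is not necessarily the fix that was committed:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the rsp.getToLog() buffer and the side-effecting getter.
// Demonstrates computing the log string once so clear() runs only once.
public class LogOnceSketch {
    static List<String> toLog = new ArrayList<>(List.of("add=[1]", "commit"));

    // mimics getLogStringAndClearRspToLog(): building the string clears toLog,
    // so a second call would return an empty string
    static String getLogStringAndClear() {
        String s = String.join(" ", toLog);
        toLog.clear();
        return s;
    }

    public static void main(String[] args) {
        // fix sketch: invoke the side-effecting method once, reuse the result
        String logString = getLogStringAndClear();
        System.out.println(logString);            // INFO path
        System.out.println("slow: " + logString); // WARN path sees the same data
    }
}
```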






[GitHub] [lucene-solr] sigram commented on a change in pull request #1486: SOLR-14423: Use SolrClientCache instance managed by CoreContainer

2020-05-06 Thread GitBox


sigram commented on a change in pull request #1486:
URL: https://github.com/apache/lucene-solr/pull/1486#discussion_r420758305



##
File path: 
solr/core/src/java/org/apache/solr/metrics/reporters/solr/SolrReporter.java
##
@@ -306,11 +312,11 @@ public String toString() {
   // We delegate to registries anyway, so having a dummy registry is harmless.
   private static final MetricRegistry dummyRegistry = new MetricRegistry();
 
-  public SolrReporter(HttpClient httpClient, Supplier urlProvider, 
SolrMetricManager metricManager,
+  public SolrReporter(SolrClientCache solrClientCache, Supplier 
urlProvider, SolrMetricManager metricManager,
   List metrics, String handler,
   String reporterId, TimeUnit rateUnit, TimeUnit 
durationUnit,
   SolrParams params, boolean skipHistograms, boolean 
skipAggregateValues,
-  boolean cloudClient, boolean compact) {
+  boolean cloudClient, boolean compact, boolean 
closeClientCache) {

Review comment:
   I went a step back to keep the old constructor for back-compat - the 
class is public and its constructor is public too.








[GitHub] [lucene-solr] sigram commented on a change in pull request #1486: SOLR-14423: Use SolrClientCache instance managed by CoreContainer

2020-05-06 Thread GitBox


sigram commented on a change in pull request #1486:
URL: https://github.com/apache/lucene-solr/pull/1486#discussion_r420760142



##
File path: solr/core/src/java/org/apache/solr/handler/sql/CalciteSolrDriver.java
##
@@ -59,11 +81,346 @@ public Connection connect(String url, Properties info) 
throws SQLException {
 if(schemaName == null) {
   throw new SQLException("zk must be set");
 }
-rootSchema.add(schemaName, new SolrSchema(info));
+SolrSchema solrSchema = new SolrSchema(info);
+rootSchema.add(schemaName, solrSchema);
 
 // Set the default schema
 calciteConnection.setSchema(schemaName);
 
-return connection;
+return new SolrCalciteConnectionWrapper(calciteConnection, solrSchema);
+  }
+
+  // the sole purpose of this class is to be able to invoke SolrSchema.close()
+  // when the connection is closed.
+  private static final class SolrCalciteConnectionWrapper implements 
CalciteConnection {

Review comment:
   Fixed.








[jira] [Commented] (SOLR-14425) Fix ZK sync usage to be synchronous (blocking)

2020-05-06 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100778#comment-17100778
 ] 

Andrzej Bialecki commented on SOLR-14425:
-

[~dsmiley] could you please elaborate? The original description says that this 
may cause "spooky test failures", which seems to me worth investigating.

> Fix ZK sync usage to be synchronous (blocking)
> --
>
> Key: SOLR-14425
> URL: https://issues.apache.org/jira/browse/SOLR-14425
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As of this writing, we only use one call to ZK's "sync" method.  It's related 
> to collection aliases -- I added this.  I discovered I misunderstood the 
> semantics of the API; it syncs in the background and thus returns 
> immediately.  Looking at ZK's sync CLI command and Curator both made me 
> realize my folly.  I'm considering this only a "minor" issue because I'm not 
> sure I've seen a bug from this; or maybe I did in spooky test failures over a 
> year ago -- I'm not sure.  And we don't use this pervasively (yet).
> It occurred to me that if Solr embraced the Curator framework abstraction 
> over ZooKeeper, I would not have fallen into that trap.  I'll file a separate 
> issue for that.






[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-05-06 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100779#comment-17100779
 ] 

Uwe Schindler commented on LUCENE-9321:
---

bq. I am currently cleaning up more, I removed hardcoded task names used to 
refer to "element-list". I think I can also cleanup the bugfix for the split 
package problem without using ant. The code is "unreadable" (it's "Ant style" 
how it is at the moment). It's a 5-liner in Groovy to filter the element-list.

Fixed. It's now a 5-liner, easy to read: 
https://github.com/apache/lucene-solr/pull/1488/commits/764f378bef47f1c375b100cf8755b712c5faca4b

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[jira] [Commented] (LUCENE-9148) Move the BKD index to its own file.

2020-05-06 Thread Ignacio Vera (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100784#comment-17100784
 ] 

Ignacio Vera commented on LUCENE-9148:
--

Oops, I did not mean to suggest joining the two efforts, just that at some 
point we would benefit from some refactoring to make the logic easier for 
everyone to follow.

It makes sense to wait for this change in order to have a better picture of 
which interfaces/approach can be taken.

> Move the BKD index to its own file.
> ---
>
> Key: LUCENE-9148
> URL: https://issues.apache.org/jira/browse/LUCENE-9148
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lucene60PointsWriter stores both inner nodes and leaf nodes in the same file, 
> interleaved. For instance if you have two fields, you would have 
> {{}}. It's not 
> ideal since leaves and inner nodes have quite different access patterns. 
> Should we split this into two files? In the case when the BKD index is 
> off-heap, this would also help force it into RAM with 
> {{MMapDirectory#setPreload}}.
> Note that Lucene60PointsFormat already has a file that it calls "index" but 
> it's really only about mapping fields to file pointers in the other file and 
> not what I'm discussing here. But we could possibly store the BKD indices in 
> this existing file if we want to avoid creating a new one.






[jira] [Commented] (SOLR-14425) Fix ZK sync usage to be synchronous (blocking)

2020-05-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100795#comment-17100795
 ] 

David Smiley commented on SOLR-14425:
-

ZK sync enqueues a sync into a communication channel and returns immediately.  
The normal use is that you _then_ do a read of something, which will occur 
_after_ the sync is done.  So as a caller of ZK sync, you don't really need to 
add a callback unless there is no immediate read following -- perhaps if 
you need to return to a CLI or another process that will then do something.

I do understand the cause of the spooky test failures -- for my comment on that 
failure, see 
https://issues.apache.org/jira/browse/SOLR-12386?focusedCommentId=17089813&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17089813
I've been mulling over what to do about this, and to educate myself further on 
ZK / Curator.
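The blocking pattern described above can be sketched with a CountDownLatch. 
Note this is a minimal illustration, not Solr code: asyncSync is a hypothetical 
stand-in for ZooKeeper's callback-based sync(path, cb, ctx), which enqueues the 
sync and returns before it completes.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class BlockingSync {
    // Hypothetical stand-in for ZooKeeper#sync(path, cb, ctx): it invokes the
    // callback from another thread later and returns immediately.
    static void asyncSync(String path, Consumer<Integer> callback) {
        new Thread(() -> callback.accept(0)).start(); // 0 = OK return code
    }

    // Wrap the async call so the caller blocks until the sync has completed.
    static int syncBlocking(String path, long timeoutMs) {
        CountDownLatch done = new CountDownLatch(1);
        int[] rc = { -1 };
        asyncSync(path, code -> { rc[0] = code; done.countDown(); });
        try {
            if (!done.await(timeoutMs, TimeUnit.MILLISECONDS)) {
                throw new IllegalStateException("sync timed out for " + path);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted waiting for sync", e);
        }
        return rc[0];
    }

    public static void main(String[] args) {
        // Any read issued after this returns sees state at least as fresh
        // as the moment the sync was acknowledged.
        System.out.println(syncBlocking("/collections", 5000));
    }
}
```

The same latch trick is what a blocking wrapper over any fire-and-forget 
callback API boils down to.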

> Fix ZK sync usage to be synchronous (blocking)
> --
>
> Key: SOLR-14425
> URL: https://issues.apache.org/jira/browse/SOLR-14425
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As of this writing, we only use one call to ZK's "sync" method.  It's related 
> to collection aliases -- I added this.  I discovered I misunderstood the 
> semantics of the API; it syncs in the background and thus returns 
> immediately.  Looking at ZK's sync CLI command and Curator both made me 
> realize my folly.  I'm considering this only a "minor" issue because I'm not 
> sure I've seen a bug from this; or maybe I did in spooky test failures over a 
> year ago -- I'm not sure.  And we don't use this pervasively (yet).
> It occurred to me that if Solr embraced the Curator framework abstraction 
> over ZooKeeper, I would not have fallen into that trap.  I'll file a separate 
> issue for that.






[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2020-05-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100807#comment-17100807
 ] 

David Smiley commented on SOLR-12386:
-

For the case of an incomplete configSet (i.e. one that didn't exist and is in 
the process of coming into existence), I think the solution is somewhat simple: 
 SOLR-14446 (Use Zk.multi when uploading configSets) in combination with a 
retry mechanism on creation of ZkSolrResourceLoader that ensures the ZK based 
configSet dir exists.  That retry mechanism would probably do a Zk.sync() 
in-between.  Not sure if that retry strategy should go in SOLR-14446 or be 
another issue.

A separate issue(s) is more comprehensively thinking about how to deal with 
_modifications_ to a configSet – e.g. for schema mutations, or otherwise.  That 
would probably involve a Zk.sync as well but with as yet unspecified means to 
ensure we don't call it too often.
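The retry idea above can be sketched as follows. All names here (existsInZk, 
zkSync, waitForConfigSet) are illustrative stand-ins, not actual 
ZkSolrResourceLoader or SolrZkClient methods: check for the configSet 
directory, and if it is missing, sync with the ZK leader and re-check a 
bounded number of times.

```java
import java.util.function.BooleanSupplier;

public class ConfigSetRetry {
    // Retry until the configSet dir is visible, forcing a ZK sync between
    // attempts so our view catches up with the leader.
    static boolean waitForConfigSet(BooleanSupplier existsInZk, Runnable zkSync,
                                    int attempts, long backoffMs) {
        for (int i = 0; i < attempts; i++) {
            if (existsInZk.getAsBoolean()) {
                return true;
            }
            zkSync.run();  // hypothetical blocking Zk.sync() in-between
            try {
                Thread.sleep(backoffMs);  // small pause before re-reading
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return existsInZk.getAsBoolean();
    }

    public static void main(String[] args) {
        // Simulate a dir that becomes visible on the third check.
        java.util.concurrent.atomic.AtomicInteger calls =
            new java.util.concurrent.atomic.AtomicInteger();
        boolean ok = waitForConfigSet(() -> calls.incrementAndGet() >= 3,
                                      () -> {}, 5, 10);
        System.out.println(ok);
    }
}
```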

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have failed 
> sporadically failed with the message "Can't find resource" pertaining to a 
> file that is in the default ConfigSet yet mysteriously can't be found.  This 
> happens when a collection is being created that ultimately fails for this 
> reason.






[jira] [Commented] (SOLR-14425) Fix ZK sync usage to be synchronous (blocking)

2020-05-06 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100839#comment-17100839
 ] 

Andrzej Bialecki commented on SOLR-14425:
-

Ok, thanks for the explanation.

> Fix ZK sync usage to be synchronous (blocking)
> --
>
> Key: SOLR-14425
> URL: https://issues.apache.org/jira/browse/SOLR-14425
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As of this writing, we only use one call to ZK's "sync" method.  It's related 
> to collection aliases -- I added this.  I discovered I misunderstood the 
> semantics of the API; it syncs in the background and thus returns 
> immediately.  Looking at ZK's sync CLI command and Curator both made me 
> realize my folly.  I'm considering this only a "minor" issue because I'm not 
> sure I've seen a bug from this; or maybe I did in spooky test failures over a 
> year ago -- I'm not sure.  And we don't use this pervasively (yet).
> It occurred to me that if Solr embraced the Curator framework abstraction 
> over ZooKeeper, I would not have fallen into that trap.  I'll file a separate 
> issue for that.






[jira] [Resolved] (SOLR-14459) Close SolrClientCache in ColStatus

2020-05-06 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-14459.
-
Resolution: Duplicate

This is being fixed as part of SOLR-14423.

> Close SolrClientCache in ColStatus
> --
>
> Key: SOLR-14459
> URL: https://issues.apache.org/jira/browse/SOLR-14459
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
>
> As David pointed out in SOLR-13292 {{ColStatus}} creates a new SCC and never 
> closes it.






[GitHub] [lucene-solr] uschindler commented on a change in pull request #1488: LUCENE-9321: Refactor renderJavadoc to allow relative links with multiple Gradle tasks

2020-05-06 Thread GitBox


uschindler commented on a change in pull request #1488:
URL: https://github.com/apache/lucene-solr/pull/1488#discussion_r420868795



##
File path: gradle/render-javadoc.gradle
##
@@ -430,3 +305,180 @@ configure(subprojects.findAll { it.path in 
[':solr:solr-ref-guide', ':solr:serve
 project.tasks.findByPath("renderJavadoc").enabled = false
   }
 }
+
+class RenderJavadocTask extends DefaultTask {
+  @InputFiles
+  @SkipWhenEmpty
+  SourceDirectorySet srcDirSet;
+  
+  @OutputDirectory
+  File outputDir;
+  
+  @InputFiles
+  @Classpath
+  FileCollection classpath;
+  
+  @Input
+  boolean linksource = false;
+  
+  @Input
+  boolean linkJUnit = false;
+  
+  @Input
+  boolean relativeProjectLinks = true;
+
+  @Input
+  def linkProjects = [];
+  
+  @Input
+  def luceneDocUrl = project.propertyOrDefault('lucene.javadoc.url', 
"https://lucene.apache.org/core/${project.baseVersion.replace(".", "_")}")
+
+  @Input
+  def solrDocUrl = project.propertyOrDefault('solr.javadoc.url', 
"https://lucene.apache.org/solr/${project.baseVersion.replace(".", "_")}")
+  
+  @TaskAction
+  public void render() {
+def thispc = project.path.split(':').drop(1);
+// Converts absolute project path (e.g., ":lucene:analysis:common") to 
+// a link in the docs; relative to current, if needed for global 
documentation
+def convertPath2Link = { path ->
+  def pc = path.split(':').drop(1);
+  if (relativeProjectLinks) {
+int toDrop = 0;
+for (int i = 0; i < Math.min(pc.size(), thispc.size()); i++) {
+  if (pc[i] == thispc[i]) {
+toDrop++;
+  } else {
+break;
+  }
+}
+// only create relative path if there is actually anything removed 
from beginning (this implies absolute link solr -> lucene):
+if (toDrop > 0) {
+  return Collections.nCopies(thispc.size() - toDrop, 
'..').plus(pc.drop(toDrop) as List).join('/').concat('/');
+} 
+  }
+  return "${(pc[0] == 'lucene') ? luceneDocUrl : 
solrDocUrl}/${pc.drop(1).join('/')}/"
+}
+
+// escapes an option with single quotes or whitespace to be passed in the 
options.txt file for
+def escapeJavadocOption = { String s -> (s =~ /[ '"]/) ? ("'" + 
s.replaceAll(/[\\'"]/, /\\$0/) + "'") : s }
+
+def relativizeURL = { String from, String to ->
+  URI fromUri = URI.create(from).normalize();
+  URI toUri = URI.create(to).normalize();
+  if (fromUri.scheme != toUri.scheme || fromUri.authority != 
toUri.authority) {
+return to;
+  }
+  // because URI#relativize can't handle relative paths, we use Path class 
as workaround
+  Path fromPath = Paths.get("./${fromUri.path}");
+  Path toPath = Paths.get("./${toUri.path}");
+  return fromPath.relativize(toPath).toString().replace(File.separator, 
'/')
+}
+
+def libName = project.path.startsWith(":lucene") ? "Lucene" : "Solr"
+def title = "${libName} ${project.version} ${project.name} API"
+
+// absolute urls for "-linkoffline" option
+def javaSEDocUrl = "https://docs.oracle.com/en/java/javase/11/docs/api/";
+def junitDocUrl = "https://junit.org/junit4/javadoc/4.12/";
+
+def javadocCmd = 
org.gradle.internal.jvm.Jvm.current().getJavadocExecutable()
+
+def srcDirs = srcDirSet.srcDirs.findAll { dir -> dir.exists() }
+def optionsFile = project.file("${getTemporaryDir()}/javadoc-options.txt")
+
+def opts = []
+opts << [ '-overview', project.file("${srcDirs[0]}/overview.html") ]
+opts << [ '-sourcepath', srcDirs.join(File.pathSeparator) ]
+opts << [ '-subpackages', project.path.startsWith(':lucene') ? 
'org.apache.lucene' : 'org.apache.solr' ]
+opts << [ '-d', outputDir ]
+opts << '-protected'
+opts << [ '-encoding', 'UTF-8' ]
+opts << [ '-charset', 'UTF-8' ]
+opts << [ '-docencoding', 'UTF-8' ]
+opts << '-noindex'
+opts << '-author'
+opts << '-version'
+if (linksource) {
+  opts << '-linksource'
+}
+opts << '-use'
+opts << [ '-locale', 'en_US' ]
+opts << [ '-windowtitle', title ]
+opts << [ '-doctitle', title ]
+if (!classpath.isEmpty()) {
+  opts << [ '-classpath', classpath.asPath ]
+}
+opts << [ '-bottom', "Copyright © 2000-${project.buildYear} Apache 
Software Foundation. All Rights Reserved." ]
+
+opts << [ '-tag', 'lucene.experimental:a:WARNING: This API is experimental 
and might change in incompatible ways in the next release.' ]
+opts << [ '-tag', 'lucene.internal:a:NOTE: This API is for internal 
purposes only and might change in incompatible ways in the next release.' ]
+opts << [ '-tag', "lucene.spi:t:SPI Name (case-insensitive: if the name is 
'htmlStrip', 'htmlstrip' can be used when looking up the service)." ]
+
+// resolve links to JavaSE and JUnit API
+opts << [ '-linkoffline', javaSEDocUrl, 
project.project(':lucene').file('tools/javadoc/java11/') ]
+if (linkJUnit) {
+  opts

[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-06 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100889#comment-17100889
 ] 

Erick Erickson commented on SOLR-11934:
---

OK, getting back to this finally. I've analyzed one set of log files, with all 
the caveats that there may be special circumstances here. This is Solr 7x. I 
still have another set to go but wanted to start the discussion. Here's the 
summary of the classes that account for over 95% of the log messages in this 
sample.

I categorized these mostly by taking fragments of text from the logging calls, 
but for some I had to do something else; for instance, HttpSolrCall just logs 
the call with no surrounding text, so I bucketed on path instead.

It looks like this cluster was having some issues, so I'm not worried too much 
about SolrClientNodeStateProvider or RecoveryStrategy. But some of the others 
are...interesting.

I propose all the ones I've marked with an asterisk be moved to DEBUG level. 
The alternative is to set the entire class to default to WARN level in the 
logging config files.

Ones I'd like some comment on are marked with a ?. For instance, the ones in 
HttpSolrCall may be coming from outside. [~ab]  do you have any 
off-the-top-of-your-head comments about the admin/metrics calls?
{code:java}
 # calls %Level   class
 17,504 (pct: 0.010756) occurrences of WARN: 
o.a.s.c.s.i.SolrClientNodeStateProvider
 22,528 (pct: 0.013843) occurrences of INFO: 
o.a.s.c.s.i.SolrClientNodeStateProvider
 22,827 (pct: 0.014027) occurrences of INFO: o.a.s.c.c.ZkStateReader
 24,330 (pct: 0.014950) occurrences of INFO: o.a.s.c.RecoveryStrategy
 25,227 (pct: 0.015501) occurrences of INFO: o.a.s.c.S.Request
 74,128 (pct: 0.045549) occurrences of INFO: o.a.s.s.HttpSolrCall
139,203 (pct: 0.085536) occurrences of INFO: o.a.s.u.SolrIndexWriter
140,589 (pct: 0.086388) occurrences of INFO: o.a.s.s.SolrIndexSearcher
142,723 (pct: 0.087699) occurrences of INFO: o.a.s.c.SolrCore
281,150 (pct: 0.172758) occurrences of INFO: o.a.s.c.QuerySenderListener
284,356 (pct: 0.174728) occurrences of INFO: o.a.s.u.DirectUpdateHandler2
381,468 (pct: 0.234401) occurrences of INFO: 
o.a.s.u.p.LogUpdateProcessorFactory


Classes Of Interest (95% of log messages come from these classes)
   Lines in SolrCore
2, msg: ERROR
3, msg: core reload 
4, msg: CLOSING SolrCore
15, msg: Updating index properties...
*   1,047, msg: config update listener called for core
*   1,078, msg: Opening new SolrCore at
*  140,576, msg: Registered new searcher 
   Lines in LogUpdateProcessorFactory
*   381,468, msg: There's only one line
   Lines in SolrIndexWriter
*   139,203, msg: Calling setCommitData with IW:
? All of these -- Lines in HttpSolrCall
1, msg: path=/admin/configs
2, msg: path=/admin/zookeeper
5, msg: path=/admin/info/logging
9, msg: Unable to write response, client closed connection or we are 
shutting down
20, msg: path=/admin/info/system
63, msg: path=/cluster/autoscaling/suggestions
78, msg: path=/admin/collections
90, msg: ERROR
?   2,122, msg: path=/admin/cores
?   71,828, msg: path=/admin/metrics
  Lines in ZkStateReader
*   35, msg: Updated live nodes from ZooKeeper...
*   22,792, msg: A cluster state change: 
  Lines in RecoveryStrategy
1, msg: seconds before trying to recover again
4, msg: Replaying buffered documents.
287, msg: ERROR
289, msg: Updating version bucket highest from index after successful 
recovery.
290, msg: PeerSync Recovery was not successful - trying replication.
291, msg: Starting Replication Recovery.
560, msg: currentVersions size=
1,428, msg: startupVersions size=
1,672, msg: Failed to connect leader 
1,814, msg: Replaying updates buffered during PeerSync.
2,101, msg: No replay needed.
2,105, msg: Starting recovery process. recoveringAfterStartup=
2,106, msg: Sending prep recovery command to
  Lines in QuerySenderListener
*   140,575, msg: QuerySenderListener sending requests to 
  Lines in DirectUpdateHandler2
3, msg: WARN
*   1,983, msg: No uncommitted changes. Skipping IW.commit.
*   141,186, msg: start (note, probably mostly commit flush, but may be 
multiple)
*   141,187, msg: end_commit_flush
  Lines in SolrIndexSearcher
*140,589, msg: Opening 
  Lines in SolrClientNodeStateProvider
17,504, msg: WARN
22,528, msg: Error on getting remote info, trying again:

{code}

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> I think we have way too much INFO level logging.

[jira] [Resolved] (LUCENE-9364) Fix side effect of logging call in LogUpdateProcessorFactory

2020-05-06 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-9364.

Resolution: Invalid

Oops, wrong project. I opened a Solr Jira for this.

> Fix side effect of logging call in LogUpdateProcessorFactory
> 
>
> Key: LUCENE-9364
> URL: https://issues.apache.org/jira/browse/LUCENE-9364
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> There's a logging call in LogUpdateProcessorFactory like this:
> if (log.isInfoEnabled()) {
> log.info(getLogStringAndClearRspToLog());
>   }
> immediately followed by a WARN-level call to log slow queries if the query is 
> slow.
> getLogStringAndClearRspToLog has a call in it:
>   *rsp.getToLog().clear();   // make it so SolrCore.exec won't log this 
> again*
> This has been true since at least Solr 7.5. It's wrong for two reasons:
> 1> logging calls shouldn't have side effects like this in the first place
> 2> Right after that call, there's also a call to (potentially) log slow 
> requests, and the rsp.getToLog().clear() will already have been called if 
> logging at info level.
> I'll fix shortly, although since it's been like this for a long time, I'm not 
> in a panic thinking I introduced this recently.






[jira] [Commented] (SOLR-14460) Fix side effect of logging call in LogUpdateProcessorFactory

2020-05-06 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101030#comment-17101030
 ] 

Erick Erickson commented on SOLR-14460:
---

Hmmm, looking at this some more, it looks like the log should only be cleared 
once it has been logged at least once. The code is still incorrect as it stands, 
since logging the slow query can happen after the rsp.toLog has already been 
cleared by the INFO-level call.
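One way to avoid the side effect entirely, sketched with illustrative names 
rather than the actual Solr methods: compute the log string once, emit it at 
every applicable level, and clear the response-to-log state exactly once, 
after all logging is done.

```java
import java.util.ArrayList;
import java.util.List;

public class SlowUpdateLogging {
    static final long SLOW_THRESHOLD_MS = 1000;

    // Returns the messages that would be emitted, so the behavior is easy to
    // verify; a real processor would call log.info/log.warn instead.
    static List<String> logUpdate(List<String> toLog, long elapsedMs,
                                  boolean infoEnabled, boolean warnEnabled) {
        List<String> emitted = new ArrayList<>();
        String msg = String.join(",", toLog);   // compute once, before clearing
        if (infoEnabled) {
            emitted.add("INFO " + msg);
        }
        if (warnEnabled && elapsedMs >= SLOW_THRESHOLD_MS) {
            emitted.add("WARN slow: " + msg);   // still sees the full message
        }
        toLog.clear();  // single, explicit side effect after all logging
        return emitted;
    }

    public static void main(String[] args) {
        List<String> toLog = new ArrayList<>(List.of("add", "commit"));
        System.out.println(logUpdate(toLog, 2000, true, true));
    }
}
```

With this shape, the slow-query WARN can never observe an already-cleared 
response, regardless of which log levels are enabled.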

> Fix side effect of logging call in LogUpdateProcessorFactory
> 
>
> Key: SOLR-14460
> URL: https://issues.apache.org/jira/browse/SOLR-14460
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 7.5
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> This section of code:
> {code:java}
>   if (log.isInfoEnabled()) {
> log.info(getLogStringAndClearRspToLog());
>   }
>   if (log.isWarnEnabled() && slowUpdateThresholdMillis >= 0) {
> final long elapsed = (long) req.getRequestTimer().getTime();
> if (elapsed >= slowUpdateThresholdMillis) {
>   log.warn("slow: " + getLogStringAndClearRspToLog());
> }
>   }
> {code}
> is wrong since getLogStringAndClearRspToLog() contains this bit:
> {code:java}
>   rsp.getToLog().clear();   // make it so SolrCore.exec won't log this 
> again
> {code}
> so the second call, if both are executed, has already cleared the rsp.toLog.
> Besides, it's poor form to have this kind of side effect in a logging call.
> I'll fix soon. I'm no longer in a panic thinking I introduced this with the 
> logging bits I did recently -- this has been around since 7.5.






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-06 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101036#comment-17101036
 ] 

Erick Erickson commented on SOLR-11934:
---

Alas, the second set of logs does not have INFO enabled, so it's not what I 
need.

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something need 
> attention for instance. If I'm troubleshooting should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked. Is this 
> independent on the logging implementation used? The SLF4J and log4j seem a 
> bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks, even starting 
> now it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!






[jira] [Commented] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-06 Thread Aaron Kalsnes (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101085#comment-17101085
 ] 

Aaron Kalsnes commented on SOLR-14105:
--

I'm seeing the same behavior with Solr 8.5.1:

{{java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
supported on *Server*}}

 

I am not a Java developer, but according to an issue on Jetty's GitHub 
([https://github.com/eclipse/jetty.project/issues/4425]), this error is 
happening because:
{quote}"The issue is that we had to split the {{SslContextFactory}} into a 
client and server version, rather than a single class for both.
If you have code that previously instantiated {{SslContextFactory}} directly, 
then it will mostly work other than SNI. The fix is to change to use
{{SslContextFactory.Server}} instead of just {{SslContextFactory}}."
{quote}
Looking at 
[https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Http2SolrClient.java],
 I do not see ".Server" anywhere. I assume that "Server" in the error message 
is referring to "SslContextFactory.Server"

 

Here is the stack trace:
{noformat}
2020-05-06 13:18:18.149 ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.SolrException: Error instantiating 
shardHandlerFactory class [HttpShardHandlerFactory]: 
java.lang.UnsupportedOperationException: X509ExtendedKeyManager only supported 
on Server
at 
org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:647)
at 
org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:263)
at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:183)
at 
org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:134)
at 
org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:751)
at 
java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
at 
java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at 
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:360)
at 
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1445)
at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1409)
at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:822)
at 
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:275)
at 
org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at 
org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:46)
at 
org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
at 
org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:513)
at 
org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:154)
at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:173)
at 
org.eclipse.jetty.deploy.providers.WebAppProvider.fileAdded(WebAppProvider.java:447)
at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:66)
at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:784)
at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:753)
at org.eclipse.jetty.util.Scanner.scan(Scanner.java:641)
at org.eclipse.jetty.util.Scanner.doStart(Scanner.java:540)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:146)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at 
org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:599)
at 
org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:249)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
at org.eclipse.jetty.server.Server.start(Server.java:407)
at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
at 
org.eclipse.jetty.se


[jira] [Commented] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101112#comment-17101112
 ] 

Jan Høydahl commented on SOLR-14105:


The message you are seeing is misleading and is improved in Jetty 9.4.24 
(https://github.com/eclipse/jetty.project/issues/4425). The new message is:

> KeyStores with multiple certificates are not supported on the base class

I believe this means that your SSL client cert for Solr's client to use cannot 
be a wildcard cert and cannot have more than one host name (alias).

> Http2SolrClient SSL not working in branch_8x
> 
>
> Key: SOLR-14105
> URL: https://issues.apache.org/jira/browse/SOLR-14105
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-14105.patch
>
>
> In branch_8x we upgraded to Jetty 9.4.24. This causes the following 
> exceptions when attempting to start server with SSL:
> {noformat}
> 2019-12-17 14:46:16.646 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: Error instantiating 
> shardHandlerFactory class [HttpShardHandlerFactory]: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:633)
> ...
> Caused by: java.lang.RuntimeException: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:224)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.<init>(Http2SolrClient.java:154)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
>   at 
> org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:321)
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:51)
>   ... 50 more
> Caused by: java.lang.UnsupportedOperationException: X509ExtendedKeyManager 
> only supported on Server
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.newSniX509ExtendedKeyManager(SslContextFactory.java:1273)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.getKeyManagers(SslContextFactory.java:1255)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:374)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101127#comment-17101127
 ] 

David Smiley commented on SOLR-11934:
-

The asterisked ones to remove seem mostly reasonable at a glance.  
*  Updated live nodes from ZooKeeper... This one is important to see 
when there is a change of nodes in the cluster.  "35" is a small number; why 
did it get your attention?

Just please ensure that if there is a commit, there is a log message about 
this and how long it took. I know there's a distinction between the physical 
flush aspect of the commit and opening of the searcher; ideally timing is 
available for both and even separately logged.

RE HttpSolrCall: as long as we log per HTTP request somehow.  For SolrCore 
handlers, I know the SolrCore will do so but for node level handlers, we want 
to ensure there is a log as well.

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something need 
> attention for instance. If I'm troubleshooting should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked. Is this 
> independent on the logging implementation used? The SLF4J and log4j seem a 
> bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks, even starting 
> now it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!






[jira] [Commented] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-06 Thread Akhmad Amirov (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101129#comment-17101129
 ] 

Akhmad Amirov commented on SOLR-14105:
--

As I stated above my log shows jetty-9.4.24.v20191120, which is part of latest 
Solr 8.5.1 package 

2020-05-06 13:16:26.831 INFO (main) [ ] o.e.j.u.log Logging initialized @738ms 
to org.eclipse.jetty.util.log.Slf4jLog
2020-05-06 13:16:26.894 INFO (main) [ ] o.e.j.u.TypeUtil JVM Runtime does not 
support Modules
2020-05-06 13:16:27.005 INFO (main) [ ] o.e.j.s.Server jetty-9.4.24.v20191120; 
built: 2019-11-20T21:37:49.771Z; git: 363d5f2df3a8a28de40604320230664b9c793c16; 
jvm 1.8.0_241-b07
2020-05-06 13:16:27.026 INFO (main) [ ] o.e.j.d.p.ScanningAppProvider 
Deployment monitor [file:///app/solr-8.5.1/server/contexts/] at interval 0
2020-05-06 13:16:27.238 INFO (main) [ ] o.e.j.w.StandardDescriptorProcessor NO 
JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
2020-05-06 13:16:27.247 INFO (main) [ ] o.e.j.s.session DefaultSessionIdManager 
workerName=node0
2020-05-06 13:16:27.247 INFO (main) [ ] o.e.j.s.session No SessionScavenger 
set, using defaults
2020-05-06 13:16:27.248 INFO (main) [ ] o.e.j.s.session node0 Scavenging every 
60ms
2020-05-06 13:16:27.294 INFO (main) [ ] o.a.s.u.c.SSLConfigurations Setting 
javax.net.ssl.keyStorePassword
2020-05-06 13:16:27.294 INFO (main) [ ] o.a.s.u.c.SSLConfigurations Setting 
javax.net.ssl.trustStorePassword
2020-05-06 13:16:27.306 INFO (main) [ ] o.a.s.s.SolrDispatchFilter Using logger 
factory org.apache.logging.slf4j.Log4jLoggerFactory
2020-05-06 13:16:27.309 INFO (main) [ ] o.a.s.s.SolrDispatchFilter ___ _ 
Welcome to Apache Solr™ version 8.5.1
2020-05-06 13:16:27.312 INFO (main) [ ] o.a.s.s.SolrDispatchFilter / __| ___| 
|_ _ Starting in cloud mode on port 8443
2020-05-06 13:16:27.312 INFO (main) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | 
'_| Install dir: /app/solr
2020-05-06 13:16:27.312 INFO (main) [ ] o.a.s.s.SolrDispatchFilter 
|___/\___/_|_| Start time: 2020-05-06T18:16:27.312Z
2020-05-06 13:16:27.330 INFO (main) [ ] o.a.s.c.SolrResourceLoader Using system 
property solr.solr.home: /app/solr/server/solr
2020-05-06 13:16:27.373 INFO (main) [ ] o.a.s.c.c.ConnectionManager Waiting for 
client to connect to ZooKeeper
2020-05-06 13:16:27.395 INFO (zkConnectionManagerCallback-2-thread-1) [ ] 
o.a.s.c.c.ConnectionManager zkClient has connected
2020-05-06 13:16:27.395 INFO (main) [ ] o.a.s.c.c.ConnectionManager Client is 
connected to ZooKeeper
2020-05-06 13:16:27.504 INFO (main) [ ] o.a.s.s.SolrDispatchFilter Loading 
solr.xml from SolrHome (not found in ZooKeeper)
2020-05-06 13:16:27.506 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading container 
configuration from /app/solr/server/solr/solr.xml
2020-05-06 13:16:27.556 INFO (main) [ ] o.a.s.c.SolrXmlConfig MBean server 
found: com.sun.jmx.mbeanserver.JmxMBeanServer@1e802ef9, but no JMX reporters 
were configured - adding default JMX reporter.
2020-05-06 13:16:27.946 INFO (main) [ ] o.a.s.h.c.HttpShardHandlerFactory Host 
whitelist initialized: WhitelistHostChecker [whitelistHosts=null, 
whitelistHostCheckingEnabled=true]
2020-05-06 13:16:27.972 WARN (main) [ ] o.a.s.c.s.i.Http2SolrClient Create 
Http2SolrClient with HTTP/1.1 transport since Java 8 or lower versions does not 
support SSL + HTTP/2
2020-05-06 13:16:28.310 INFO (main) [ ] o.e.j.u.s.SslContextFactory 
x509=X509@b5cc23a(node1.my.com,h=[10.32.101.240, node1.my.com],w=[]) for 
Client@69f63d95[provider=null,keyStore=file:///app/certificates/solr-ssl.keystore.p12,trustStore=file:///app/certificates/solr-ssl.truststore.p12]

2020-05-06 13:16:28.460 ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.SolrException: Error instantiating 
shardHandlerFactory class [HttpShardHandlerFactory]: 
java.lang.UnsupportedOperationException: X509ExtendedKeyManager only supported 
on Server
at org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:647)
at org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:263)
at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:183)
at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:134)
at org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:751)
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
at java.util.stream.Streams$ConcatSpliterator.forEachRemaining

[jira] [Comment Edited] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-06 Thread Akhmad Amirov (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101129#comment-17101129
 ] 

Akhmad Amirov edited comment on SOLR-14105 at 5/6/20, 7:54 PM:
---

As I stated above my log shows jetty-9.4.24.v20191120, which is part of latest 
Solr 8.5.1 package 

2020-05-06 13:16:26.831 INFO (main) [ ] o.e.j.u.log Logging initialized @738ms 
to org.eclipse.jetty.util.log.Slf4jLog
 2020-05-06 13:16:26.894 INFO (main) [ ] o.e.j.u.TypeUtil JVM Runtime does not 
support Modules
 2020-05-06 13:16:27.005 INFO (main) [ ] o.e.j.s.Server jetty-9.4.24.v20191120; 
built: 2019-11-20T21:37:49.771Z; git: 363d5f2df3a8a28de40604320230664b9c793c16; 
jvm 1.8.0_241-b07
 2020-05-06 13:16:27.026 INFO (main) [ ] o.e.j.d.p.ScanningAppProvider 
Deployment monitor [file:///app/solr-8.5.1/server/contexts/] at interval 0
 2020-05-06 13:16:27.238 INFO (main) [ ] o.e.j.w.StandardDescriptorProcessor NO 
JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
 2020-05-06 13:16:27.247 INFO (main) [ ] o.e.j.s.session 
DefaultSessionIdManager workerName=node0
 2020-05-06 13:16:27.247 INFO (main) [ ] o.e.j.s.session No SessionScavenger 
set, using defaults
 2020-05-06 13:16:27.248 INFO (main) [ ] o.e.j.s.session node0 Scavenging every 
60ms
 2020-05-06 13:16:27.294 INFO (main) [ ] o.a.s.u.c.SSLConfigurations Setting 
javax.net.ssl.keyStorePassword
 2020-05-06 13:16:27.294 INFO (main) [ ] o.a.s.u.c.SSLConfigurations Setting 
javax.net.ssl.trustStorePassword
 2020-05-06 13:16:27.306 INFO (main) [ ] o.a.s.s.SolrDispatchFilter Using 
logger factory org.apache.logging.slf4j.Log4jLoggerFactory
 2020-05-06 13:16:27.309 INFO (main) [ ] o.a.s.s.SolrDispatchFilter ___ _ 
Welcome to Apache Solr™ version 8.5.1
 2020-05-06 13:16:27.312 INFO (main) [ ] o.a.s.s.SolrDispatchFilter / __| ___| 
|_ _ Starting in cloud mode on port 8443
 2020-05-06 13:16:27.312 INFO (main) [ ] o.a.s.s.SolrDispatchFilter __ \/ _ \ | 
'_| Install dir: /app/solr
 2020-05-06 13:16:27.312 INFO (main) [ ] o.a.s.s.SolrDispatchFilter 
|___/___/_|_| Start time: 2020-05-06T18:16:27.312Z
 2020-05-06 13:16:27.330 INFO (main) [ ] o.a.s.c.SolrResourceLoader Using 
system property solr.solr.home: /app/solr/server/solr
 2020-05-06 13:16:27.373 INFO (main) [ ] o.a.s.c.c.ConnectionManager Waiting 
for client to connect to ZooKeeper
 2020-05-06 13:16:27.395 INFO (zkConnectionManagerCallback-2-thread-1) [ ] 
o.a.s.c.c.ConnectionManager zkClient has connected
 2020-05-06 13:16:27.395 INFO (main) [ ] o.a.s.c.c.ConnectionManager Client is 
connected to ZooKeeper
 2020-05-06 13:16:27.504 INFO (main) [ ] o.a.s.s.SolrDispatchFilter Loading 
solr.xml from SolrHome (not found in ZooKeeper)
 2020-05-06 13:16:27.506 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading 
container configuration from /app/solr/server/solr/solr.xml
 2020-05-06 13:16:27.556 INFO (main) [ ] o.a.s.c.SolrXmlConfig MBean server 
found: com.sun.jmx.mbeanserver.JmxMBeanServer@1e802ef9, but no JMX reporters 
were configured - adding default JMX reporter.
 2020-05-06 13:16:27.946 INFO (main) [ ] o.a.s.h.c.HttpShardHandlerFactory Host 
whitelist initialized: WhitelistHostChecker [whitelistHosts=null, 
whitelistHostCheckingEnabled=true]
 2020-05-06 13:16:27.972 WARN (main) [ ] o.a.s.c.s.i.Http2SolrClient Create 
Http2SolrClient with HTTP/1.1 transport since Java 8 or lower versions does not 
support SSL + HTTP/2
 2020-05-06 13:16:28.310 INFO (main) [ ] o.e.j.u.s.SslContextFactory 
x509=X509@b5cc23a(node1.my.com,h=[11.111.111.111, node1.my.com],w=[]) for 
Client@69f63d95[provider=null,keyStore=file:///app/certificates/solr-ssl.keystore.p12,trustStore=file:///app/certificates/solr-ssl.truststore.p12]

2020-05-06 13:16:28.460 ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.SolrException: Error instantiating 
shardHandlerFactory class [HttpShardHandlerFactory]: 
java.lang.UnsupportedOperationException: X509ExtendedKeyManager only supported 
on Server
at org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:647)
at org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:263)
at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:183)
at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:134)
at org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:751)
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.ja

[jira] [Commented] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101137#comment-17101137
 ] 

Jan Høydahl commented on SOLR-14105:


I meant to say that 8.5.1 uses Jetty 9.4.24 but the error message is improved 
in 9.4.25...

Try running {{keytool -list -v -keystore solr-ssl.keystore.p12}} and look under 
Extensions to see whether there are several 'DNSName' entries.

> Http2SolrClient SSL not working in branch_8x
> 
>
> Key: SOLR-14105
> URL: https://issues.apache.org/jira/browse/SOLR-14105
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-14105.patch
>
>
> In branch_8x we upgraded to Jetty 9.4.24. This causes the following 
> exceptions when attempting to start server with SSL:
> {noformat}
> 2019-12-17 14:46:16.646 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: Error instantiating 
> shardHandlerFactory class [HttpShardHandlerFactory]: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:633)
> ...
> Caused by: java.lang.RuntimeException: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:224)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.<init>(Http2SolrClient.java:154)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
>   at 
> org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:321)
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:51)
>   ... 50 more
> Caused by: java.lang.UnsupportedOperationException: X509ExtendedKeyManager 
> only supported on Server
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.newSniX509ExtendedKeyManager(SslContextFactory.java:1273)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.getKeyManagers(SslContextFactory.java:1255)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:374)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)
>  {noformat}






[GitHub] [lucene-solr] HoustonPutman commented on a change in pull request #1480: SOLR-14456: Fix Content-Type header forwarding on compressed requests

2020-05-06 Thread GitBox


HoustonPutman commented on a change in pull request #1480:
URL: https://github.com/apache/lucene-solr/pull/1480#discussion_r421078522



##
File path: 
solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java
##
@@ -568,12 +568,18 @@ private HttpEntityEnclosingRequestBase 
fillContentStream(SolrRequest request, Co
   // Read the contents
   entity = response.getEntity();
   respBody = entity.getContent();
-  Header ctHeader = response.getLastHeader("content-type");
-  String contentType;
-  if (ctHeader != null) {
-contentType = ctHeader.getValue();
-  } else {
-contentType = "";
+  String mimeType = null;
+  Charset charset = null;
+  String charsetName = null;
+
+  ContentType contentType = ContentType.get(entity);
+  if (contentType != null) {
+mimeType = contentType.getMimeType().trim().toLowerCase(Locale.ROOT);
+charset = contentType.getCharset();
+
+if (charset != null) {
+  charsetName = charset.name();
+}
   }

Review comment:
   I would recommend setting charset to FALLBACK_CHARSET if it's null here, 
and setting the charsetName afterwards. There is potential for 
NullPointerExceptions below, such as on lines 626 and 634.
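
   A minimal sketch of the suggested null handling (hypothetical helper; 
`FALLBACK_CHARSET` is assumed to be UTF-8 as elsewhere in SolrJ, everything 
else here is illustrative rather than the actual `HttpSolrClient` code):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetFallback {
    // Assumed to mirror SolrJ's FALLBACK_CHARSET (UTF-8).
    static final Charset FALLBACK_CHARSET = StandardCharsets.UTF_8;

    // Hypothetical helper: default to the fallback when the response
    // declares no charset, so later consumers never see null.
    static Charset charsetOrFallback(Charset declared) {
        return declared != null ? declared : FALLBACK_CHARSET;
    }

    public static void main(String[] args) {
        // e.g. "Content-Type: application/json" with no charset parameter
        Charset effective = charsetOrFallback(null);
        System.out.println(effective.name()); // UTF-8
    }
}
```

   With this shape, `charsetName` can be derived once from a non-null charset 
instead of being null-checked at each use site.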





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Resolved] (SOLR-14460) Fix side effect of logging call in LogUpdateProcessorFactory

2020-05-06 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-14460.
---
Resolution: Invalid

So I took a nap and things became clearer. The intent of response.toLog is to 
get published once, and that's what happens.

I was all panicky fearing I'd messed up in the recent mass changes I did for 
LUCENE-7788, but this has been here all along.

> Fix side effect of logging call in LogUpdateProcessorFactory
> 
>
> Key: SOLR-14460
> URL: https://issues.apache.org/jira/browse/SOLR-14460
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 7.5
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> This section of code:
> {code:java}
>   if (log.isInfoEnabled()) {
> log.info(getLogStringAndClearRspToLog());
>   }
>   if (log.isWarnEnabled() && slowUpdateThresholdMillis >= 0) {
> final long elapsed = (long) req.getRequestTimer().getTime();
> if (elapsed >= slowUpdateThresholdMillis) {
>   log.warn("slow: " + getLogStringAndClearRspToLog());
> }
>   }
> {code}
> is wrong since getLogStringAndClearRspToLog() contains this bit:
> {code:java}
>   rsp.getToLog().clear();   // make it so SolrCore.exec won't log this 
> again
> {code}
> so the second call, if both are executed, has already cleared the rsp.toLog.
> Besides it's poor form to have this kind of side-effect in a logging call.
> I'll fix soon, now that I'm not in a panic thinking I introduced this with 
> the logging bits I did recently, this has been around since 7.5
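
A side-effect-free arrangement of the snippet above can be sketched as follows 
(hedged illustration only: the method and threshold names are taken from the 
snippet, while the string buffer is a stand-in for rsp.getToLog()). The idea is 
to invoke the clearing method once and reuse its result:

```java
public class LogOnce {
    // Stand-in for rsp.getToLog(): cleared exactly once per request.
    static StringBuilder toLog = new StringBuilder("webapp=/solr path=/update");
    static final long slowUpdateThresholdMillis = 100;

    static String getLogStringAndClearRspToLog() {
        String s = toLog.toString();
        toLog.setLength(0); // the clearing side effect runs only here
        return s;
    }

    public static void main(String[] args) {
        long elapsed = 250; // stand-in for req.getRequestTimer().getTime()
        // Compute the string a single time, then reuse it for both log
        // calls, so the second call can never observe a cleared buffer.
        String msg = getLogStringAndClearRspToLog();
        System.out.println(msg);
        if (slowUpdateThresholdMillis >= 0 && elapsed >= slowUpdateThresholdMillis) {
            System.out.println("slow: " + msg);
        }
    }
}
```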






[jira] [Created] (SOLR-14461) Replace commons-fileupload use with standard Servlet/Jetty

2020-05-06 Thread David Smiley (Jira)
David Smiley created SOLR-14461:
---

 Summary: Replace commons-fileupload use with standard Servlet/Jetty
 Key: SOLR-14461
 URL: https://issues.apache.org/jira/browse/SOLR-14461
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley
Assignee: David Smiley


Commons-fileupload had utility back in the day before the Servlet 3.0 spec but 
I think it's now obsolete.  I'd rather not maintain this dependency, which 
includes keeping it up to date from security vulnerabilities.

(I have work in-progress)






[GitHub] [lucene-solr] samuelgmartinez commented on a change in pull request #1480: SOLR-14456: Fix Content-Type header forwarding on compressed requests

2020-05-06 Thread GitBox


samuelgmartinez commented on a change in pull request #1480:
URL: https://github.com/apache/lucene-solr/pull/1480#discussion_r421106586



##
File path: 
solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java
##
@@ -568,12 +568,18 @@ private HttpEntityEnclosingRequestBase 
fillContentStream(SolrRequest request, Co
   // Read the contents
   entity = response.getEntity();
   respBody = entity.getContent();
-  Header ctHeader = response.getLastHeader("content-type");
-  String contentType;
-  if (ctHeader != null) {
-contentType = ctHeader.getValue();
-  } else {
-contentType = "";
+  String mimeType = null;
+  Charset charset = null;
+  String charsetName = null;
+
+  ContentType contentType = ContentType.get(entity);
+  if (contentType != null) {
+mimeType = contentType.getMimeType().trim().toLowerCase(Locale.ROOT);
+charset = contentType.getCharset();
+
+if (charset != null) {
+  charsetName = charset.name();
+}
   }

Review comment:
   I made the change already and was about to push it when I just wondered 
if it's the right decision. Here is my thinking: 
   * The 626 log line wouldn't cause an NPE but just concat `null`. I agree 
that it should not be like that and should log a proper value.
   * I'm afraid of taking the decision of the default encoding in this class, as 
'null' encoding for most operations would mean: "either default to 
Charset.defaultCharset() or take the decision yourself based on whatever you 
know you are parsing", especially regarding the `ResponseParser`s.
   
   For example, the Json parser is defaulting to UTF-8 when the encoding is 
passed as null, but it could be that any other parser defaults to JVM 
`file.encoding` or an XML parser defaults to `ISO-8859-1`. In the original PR I 
just replaced the hardcoded references to `UTF-8` to the properly referenced 
`FALLBACK_CHARSET`, while I kept the response's original encoding (whether it's 
null or not) usages; like the one in line 634, `processor.processResponse` is 
passing `null` in the original code also.
   
   Do you think that we can safely take the decision at this component layer 
(and hardcoded it to UTF-8) instead of delegating it to the `ResponseParser`s?
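
   The delegation argument above can be sketched like this (illustrative only: 
the `ResponseParser` interface here is a simplified stand-in for SolrJ's, and 
the per-parser defaults are hypothetical):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class ParserDefaults {
    // Simplified stand-in for SolrJ's ResponseParser: the transport layer
    // passes the declared charset through, possibly null.
    interface ResponseParser {
        String decode(byte[] body, Charset declared);
    }

    // Each parser applies its own default when no charset was declared,
    // rather than the HTTP client deciding for all of them.
    static final ResponseParser JSON = (body, cs) ->
        new String(body, cs != null ? cs : StandardCharsets.UTF_8);      // JSON's usual default
    static final ResponseParser XML = (body, cs) ->
        new String(body, cs != null ? cs : StandardCharsets.ISO_8859_1); // a parser might choose differently

    public static void main(String[] args) {
        byte[] body = "ok".getBytes(StandardCharsets.UTF_8);
        // null charset: each parser decides its own fallback.
        System.out.println(JSON.decode(body, null));
        System.out.println(XML.decode(body, null));
    }
}
```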










[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-06 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101206#comment-17101206
 ] 

Erick Erickson commented on SOLR-11934:
---

Why did the Zookeeper call get my attention? How about because my bifocals made 
me think I was on the line below? Or, I threw that one in to see if anyone was 
paying attention? Yeah, that's the ticket!

That's totally unnecessary to change, sorry for the confusion.

The thing I'm wondering about with HttpSolrCall is why there are so many 
admin/metrics calls. One of the reasons I'd _really_ like some more logs is to 
see if this is just an anomaly with this particular data set. For all I know, 
an external client is hammering Solr for dashboard reasons, in which case these 
are fine.

And about commits. Again, I'd really like some more logs as it's quite possible 
that this is a result of commits being fired externally, in which case there's 
no reason to change the level. Let's claim this is internally generated for a 
minute. Do metrics give you the information you'd need? Or are you looking for 
individual anomalies/spikes and aggregated information isn't useful? Although 
the fact that it's in the handler implies that it's coming in externally, in 
which case it should stay at INFO.

 

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something need 
> attention for instance. If I'm troubleshooting should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked. Is this 
> independent on the logging implementation used? The SLF4J and log4j seem a 
> bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks, even starting 
> now it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!
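Point <2> above — whether something like log.info("whatever {}", someObjectOrMethodCall) does unnecessary work — can be illustrated with a small sketch. This uses java.util.logging from the JDK, not Solr's actual SLF4J/log4j stack, and the expensive() helper is hypothetical; the point carries over: an eagerly built argument is evaluated even when the level is disabled, while a Supplier defers the work until the logger knows it will emit the record.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch of point <2> using java.util.logging (JDK),
// not Solr's actual SLF4J/log4j stack.
public class LazyLoggingSketch {
  static final Logger LOG = Logger.getLogger("sketch");
  static final AtomicInteger CALLS = new AtomicInteger();

  // Stands in for someObjectOrMethodCall: counts how often it is evaluated.
  static String expensive() {
    CALLS.incrementAndGet();
    return "expensive-value";
  }

  /** Returns {eagerEvaluations, lazyEvaluations} with FINE disabled. */
  static int[] demo() {
    LOG.setLevel(Level.INFO);   // FINE messages are discarded
    CALLS.set(0);

    // Eager: the argument is built before the logger can check the level.
    LOG.log(Level.FINE, "whatever " + expensive());
    int eager = CALLS.get();

    // Lazy: the Supplier runs only if FINE is enabled -- it never fires here.
    LOG.log(Level.FINE, () -> "whatever " + expensive());
    int lazy = CALLS.get() - eager;

    return new int[] { eager, lazy };
  }

  public static void main(String[] args) {
    int[] r = demo();
    System.out.println(r[0] + "," + r[1]); // 1,0
  }
}
```

SLF4J's {} placeholders avoid the string concatenation cost, but the argument expression itself is still evaluated at the call site; only a level guard or a lazy form avoids that.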



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101220#comment-17101220
 ] 

David Smiley commented on SOLR-11934:
-

Fair point about metrics; no need to log the particular timings I mentioned. As 
long as we log that a searcher was opened, we might as well say how long it took. 
Arguably the flush of index-writing buffers is too much of an internal detail: 
it is not in and of itself a visible thing, more an internal event.







[GitHub] [lucene-solr] HoustonPutman commented on a change in pull request #1480: SOLR-14456: Fix Content-Type header forwarding on compressed requests

2020-05-06 Thread GitBox


HoustonPutman commented on a change in pull request #1480:
URL: https://github.com/apache/lucene-solr/pull/1480#discussion_r421146348



##
File path: solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java
##
@@ -568,12 +568,18 @@ private HttpEntityEnclosingRequestBase fillContentStream(SolrRequest request, Co
   // Read the contents
   entity = response.getEntity();
   respBody = entity.getContent();
-  Header ctHeader = response.getLastHeader("content-type");
-  String contentType;
-  if (ctHeader != null) {
-    contentType = ctHeader.getValue();
-  } else {
-    contentType = "";
+  String mimeType = null;
+  Charset charset = null;
+  String charsetName = null;
+
+  ContentType contentType = ContentType.get(entity);
+  if (contentType != null) {
+    mimeType = contentType.getMimeType().trim().toLowerCase(Locale.ROOT);
+    charset = contentType.getCharset();
+
+    if (charset != null) {
+      charsetName = charset.name();
+    }
   }

Review comment:
   That's completely fair. We don't want to dictate the default for each 
response parser. If you fixed that logging line, I think that's enough. It'd be 
nice to have UTF-8 instead of `null`, though.
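For illustration, the header handling that the patch moves onto ContentType.get(entity) can be approximated with a stdlib-only sketch. The helper names here are hypothetical, and the UTF-8 fallback is the behavior suggested in this review comment, not necessarily what the patch itself does:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Locale;

// Hypothetical stdlib-only approximation of Content-Type parsing;
// the real patch uses Apache HttpClient's ContentType.get(entity).
public class ContentTypeSketch {
  /** "Application/JSON; charset=ISO-8859-1" -> "application/json" */
  static String mimeType(String header) {
    if (header == null) return "";
    int semi = header.indexOf(';');
    String mime = semi >= 0 ? header.substring(0, semi) : header;
    return mime.trim().toLowerCase(Locale.ROOT);
  }

  /** Charset from the header, defaulting to UTF-8 when absent. */
  static Charset charsetOrUtf8(String header) {
    if (header != null) {
      for (String part : header.split(";")) {
        String p = part.trim();
        if (p.regionMatches(true, 0, "charset=", 0, 8)) {
          return Charset.forName(p.substring(8).trim());
        }
      }
    }
    return StandardCharsets.UTF_8; // fall back instead of null
  }
}
```

Falling back to UTF-8 rather than null keeps downstream response parsers from having to handle a missing charset themselves.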








[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-06 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101305#comment-17101305
 ] 

Erick Erickson commented on SOLR-11934:
---

I was able to look at a few more logs and... it's ambiguous of course. For 
instance, I don't see _any_ commit_flush messages. I believe that's because 
there was no indexing going on.

Looking at the data set I posted earlier, some interesting things:

I checked locally, and a commit (triggered from autocommit settings that open a 
new searcher) logs the following:
 * SolrCore logging that it's registered a new searcher
 * SolrIndexWriter logging that it's calling setCommitData
 * QuerySenderListener with a message about "QuerySenderListener sending 
requests to..."
 * DirectUpdateHandler2 logging it's started and ended a commit_flush
 * SolrIndexSearcher telling us it's opened a new searcher

So the above run had somewhere in the neighborhood of 140K commits (whether 
external or internal I don't think matters) that generated on the order of 840K 
messages. Oh, none of the searcher opening messages tell us anything about how 
long it took to open the searcher; that would be good to add as part of another 
JIRA.

So by setting the log level to debug for all the messages relating to opening a 
searcher except one (I think the one in SolrIndexSearcher is the logical one, 
I'll beef it up a little) and setting the call in LogUpdateProcessorFactory to 
debug, I can drop the number of log messages from this kind of app by roughly 
2/3, which accomplishes the original intent.

Note that the original problem was the observation that, in an app that indexed 
one doc at a time and committed it (yeah, I know how bad that is and I told 
them so) the logging was verbose, which squares with the observations above.

This looks like an index-heavy kind of application with either external commits 
or short autocommit intervals.

I'd _really_ like to see a query-heavy type application. My first impulse when 
I thought of dropping logging the query to debug rather than INFO was NOO!. 
But with the slow query log would that be so bad? Tossing that out for 
discussion, my personal feeling is that analyzing how Solr is performing often 
depends so much on query response time that logging what we do now for the 
query results is a must.


[GitHub] [lucene-solr] dsmiley opened a new pull request #1490: SOLR-14461: Replace commons-fileupload with Jetty

2020-05-06 Thread GitBox


dsmiley opened a new pull request #1490:
URL: https://github.com/apache/lucene-solr/pull/1490


   https://issues.apache.org/jira/browse/SOLR-14461






[jira] [Commented] (SOLR-14461) Replace commons-fileupload use with standard Servlet/Jetty

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101383#comment-17101383
 ] 

ASF subversion and git services commented on SOLR-14461:


Commit c0f98940411da91ce9838dd6a417d7c81ba9e2a3 in lucene-solr's branch 
refs/heads/SOLR-14461-fileupload from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c0f9894 ]

SOLR-14461: Replace commons-fileupload with Jetty


> Replace commons-fileupload use with standard Servlet/Jetty
> --
>
> Key: SOLR-14461
> URL: https://issues.apache.org/jira/browse/SOLR-14461
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Commons-fileupload had utility back in the day before the Servlet 3.0 spec 
> but I think it's now obsolete.  I'd rather not maintain this dependency, 
> which includes keeping it up to date from security vulnerabilities.
> (I have work in-progress)






[jira] [Commented] (SOLR-14461) Replace commons-fileupload use with standard Servlet/Jetty

2020-05-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101389#comment-17101389
 ] 

David Smiley commented on SOLR-14461:
-

BTW, it was not my intention to push the feature branch here; I was playing 
around with IntelliJ's GUI for doing PRs.







[jira] [Commented] (SOLR-12320) Not all multi-part post requests should create tmp files.

2020-05-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101394#comment-17101394
 ] 

David Smiley commented on SOLR-12320:
-

I just worked on SOLR-14461 to replace commons-fileupload with the now-standard 
Servlet 3.0/Jetty implementation. One of the settings is a threshold that 
decides whether to write a file or use a memory buffer. I chose 1MB but please 
provide input there if you think it's not appropriate.

I chose to do the cleanup at the end of the request, taking inspiration from a 
mechanism in Jetty.  No more deleter thread.
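The memory-vs-file threshold described above can be sketched in isolation. This is a hypothetical stdlib-only helper, not the actual Jetty/Servlet 3.0 multipart code Solr now uses; the 1MB figure is the value mentioned in the comment:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the spill-to-disk threshold; the real implementation
// is Jetty's Servlet 3.0 multipart support, not this class.
public class ThresholdBufferSketch {
  static final int THRESHOLD = 1 << 20; // 1MB, the value chosen in SOLR-14461

  /** Returns true if the part stayed in memory, false if it spilled to a
   *  temp file (deleted here to mirror cleanup at the end of the request). */
  static boolean store(byte[] part) {
    if (part.length <= THRESHOLD) {
      ByteArrayOutputStream mem = new ByteArrayOutputStream(part.length);
      mem.write(part, 0, part.length);   // small part: memory buffer only
      return true;
    }
    try {
      Path tmp = Files.createTempFile("upload", ".tmp");
      Files.write(tmp, part);            // large part: temp file on disk
      Files.delete(tmp);                 // end-of-request cleanup, no deleter thread
      return false;
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```

Tying cleanup to the end of the request, as described above, removes the need for a background deleter thread entirely.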

> Not all multi-part post requests should create tmp files.
> -
>
> Key: SOLR-12320
> URL: https://issues.apache.org/jira/browse/SOLR-12320
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
>
> We create tmp files for multi-part posts because often they are uploaded 
> files for Solr cell or something but we also sometimes write params only or 
> params and updates as multi-part post. These should not create any tmp files.






[jira] [Assigned] (SOLR-12320) Not all multi-part post requests should create tmp files.

2020-05-06 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-12320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-12320:
---

Assignee: David Smiley  (was: Mark Miller)







[jira] [Commented] (SOLR-13612) Error 500 with update extract handler on Solr 7.4.0

2020-05-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101398#comment-17101398
 ] 

David Smiley commented on SOLR-13612:
-

This may or may not be an issue after SOLR-14461; please try when you get a 
chance. Still, there ought to be limits on such things, so I'd almost rather 
just close this issue as won't-fix.

> Error 500 with update extract handler on Solr 7.4.0
> ---
>
> Key: SOLR-13612
> URL: https://issues.apache.org/jira/browse/SOLR-13612
> Project: Solr
>  Issue Type: Bug
>  Components: UpdateRequestProcessors
>Affects Versions: 7.4
>Reporter: Julien Massiera
>Priority: Critical
>
> When sending a document via multipart POST update request, if a doc parameter 
> name contains too much chars, the POST method fails with a 500 code error and 
> one can see the following exception in the logs : 
> {code:java}
> ERROR 2019-06-20T09:43:41,089 (qtp1625082366-13) - 
> Solr|Solr|solr.servlet.HttpSolrCall|[c:FileShare s:shard1 r:core_node2 
> x:FileShare_shard1_replica_n1] o.a.s.s.HttpSolrCall 
> null:org.apache.commons.fileupload.FileUploadException: Header section has 
> more than 10240 bytes (maybe it is not properly terminated)
>     at 
> org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:362)
>     at 
> org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:115)
>     at 
> org.apache.solr.servlet.SolrRequestParsers$MultipartRequestParser.parseParamsAndFillStreams(SolrRequestParsers.java:602)
>     at 
> org.apache.solr.servlet.SolrRequestParsers$StandardRequestParser.parseParamsAndFillStreams(SolrRequestParsers.java:784)
>     at 
> org.apache.solr.servlet.SolrRequestParsers.parse(SolrRequestParsers.java:167)
>     at 
> org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:317)
>     at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
>     at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
>     at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
>     at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
>     at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>     at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>     at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>     at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>     at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>     at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>     at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>     at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>     at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>     at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>     at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>     at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>     at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>     at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>     at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>     at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>     at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>     at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>     at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>     at org.eclipse.jetty.server.Server.handle(Server.java:531)
>     at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
>     at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
>     at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
>     at 
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest

[jira] [Commented] (SOLR-14461) Replace commons-fileupload use with standard Servlet/Jetty

2020-05-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101412#comment-17101412
 ] 

Jan Høydahl commented on SOLR-14461:


I have done the same several times. When I think I am pushing to my fork, 
IntelliJ keeps pushing to asf git. Something not intuitive there :)



