[GitHub] [lucene-solr] dweiss commented on a change in pull request #1488: LUCENE-9321: Refactor renderJavadoc to allow relative links with multiple Gradle tasks

2020-05-07 Thread GitBox


dweiss commented on a change in pull request #1488:
URL: https://github.com/apache/lucene-solr/pull/1488#discussion_r421333692



##
File path: gradle/render-javadoc.gradle
##
@@ -57,20 +57,11 @@ configure(subprojects.findAll { it.path.startsWith(':lucene') && it.path != ':lu
   doLast {
     // fix for Java 11 Javadoc tool that cannot handle split packages between modules correctly (by removing all the packages which are part of lucene-core)
     // problem description: https://issues.apache.org/jira/browse/LUCENE-8738?focusedCommentId=16818106&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16818106
-    ant.local(name: "element-list-regex") // contains a regex for all package names which are in lucene-core's javadoc
-    ant.loadfile(property: "element-list-regex", srcFile: "${project(':lucene:core').tasks[name].outputDir}/element-list", encoding: "utf-8") {
-      filterchain {
-        tokenfilter(delimoutput: "|") {
-          replacestring(from: ".", to: "\\.")
-        }
-      }
-    }
-    ant.replaceregexp(
-        encoding: "UTF-8",
-        file: "${project.javadoc.destinationDir}/element-list",
-        byline: "true",
-        match: "^(\${element-list-regex})\$",
-        replace: "")
+    Set luceneCorePackages = file("${project(':lucene:core').tasks[name].outputDir}/element-list").readLines('UTF-8').toSet();
+    File elementFile = file("${outputDir}/element-list");
+    List elements = elementFile.readLines('UTF-8');
+    elements.removeAll { luceneCorePackages.contains(it) }
+    elementFile.write(elements.join('\n').concat('\n'), 'UTF-8');
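The Groovy snippet above is a set-difference over the two element-list files: drop every package that also appears in lucene-core's list. A minimal standalone Java sketch of the same idea (hypothetical class name, in-memory lists instead of the build's files):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Standalone sketch (not part of the Gradle build) of the snippet's logic:
// remove every package that also appears in lucene-core's element-list.
public class ElementListFilter {
    static List<String> removeCorePackages(List<String> elements, Set<String> corePackages) {
        List<String> remaining = new ArrayList<>(elements);
        remaining.removeIf(corePackages::contains); // mirrors elements.removeAll { ... }
        return remaining;
    }

    public static void main(String[] args) {
        Set<String> core = new HashSet<>(Arrays.asList(
            "org.apache.lucene.index", "org.apache.lucene.util"));
        List<String> elements = Arrays.asList(
            "org.apache.lucene.index", "org.apache.lucene.queries");
        // Only the package not owned by lucene-core survives.
        System.out.println(removeCorePackages(elements, core)); // [org.apache.lucene.queries]
    }
}
```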

Review comment:
   Nice!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] jpountz commented on a change in pull request #1482: LUCENE-7822: CodecUtil#checkFooter should throw a CorruptIndexException as the main exception.

2020-05-07 Thread GitBox


jpountz commented on a change in pull request #1482:
URL: https://github.com/apache/lucene-solr/pull/1482#discussion_r421333675



##
File path: lucene/core/src/java/org/apache/lucene/codecs/CodecUtil.java
##
@@ -448,24 +448,27 @@ public static void checkFooter(ChecksumIndexInput in, Throwable priorException)
       checkFooter(in);
     } else {
       try {
+        // If we have evidence of corruption then we return the corruption as the
+        // main exception and the prior exception gets suppressed. Otherwise we
+        // return the prior exception with a suppressed exception that notifies
+        // the user that checksums matched.
         long remaining = in.length() - in.getFilePointer();
         if (remaining < footerLength()) {
           // corruption caused us to read into the checksum footer already: we can't proceed
-          priorException.addSuppressed(new CorruptIndexException("checksum status indeterminate: remaining=" + remaining +
-                                                                 ", please run checkindex for more details", in));
+          throw new CorruptIndexException("checksum status indeterminate: remaining=" + remaining +
+                                          ", please run checkindex for more details", in);
         } else {
           // otherwise, skip any unread bytes.
           in.skipBytes(remaining - footerLength());

           // now check the footer
-          try {
-            long checksum = checkFooter(in);
-            priorException.addSuppressed(new CorruptIndexException("checksum passed (" + Long.toHexString(checksum) +
-                                                                   "). possibly transient resource issue, or a Lucene or JVM bug", in));
-          } catch (CorruptIndexException t) {
-            priorException.addSuppressed(t);
-          }
+          long checksum = checkFooter(in);
+          priorException.addSuppressed(new CorruptIndexException("checksum passed (" + Long.toHexString(checksum) +

Review comment:
   This case is interesting indeed; it could be either a Lucene bug or a silent corruption.
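The exception-priority rule the diff's comment describes can be sketched in isolation (hypothetical helper, not CodecUtil's actual code): evidence of corruption becomes the primary exception with the earlier failure attached as suppressed; absent corruption, the earlier failure stays primary.

```java
// Illustration of the exception-priority rule from the patch's comment.
// Not the real CodecUtil implementation.
public class ExceptionPriority {
    static Exception pickMain(Exception priorException, Exception corruption) {
        if (corruption != null) {
            // Evidence of corruption wins: report it as the main exception
            // and keep the earlier failure around for diagnostics.
            corruption.addSuppressed(priorException);
            return corruption;
        }
        // No corruption detected: the original failure remains primary.
        return priorException;
    }

    public static void main(String[] args) {
        Exception prior = new Exception("original failure");
        Exception corrupt = new Exception("checksum status indeterminate");
        Exception main = pickMain(prior, corrupt);
        System.out.println(main.getMessage());           // checksum status indeterminate
        System.out.println(main.getSuppressed().length); // 1
    }
}
```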








[GitHub] [lucene-solr] jpountz commented on a change in pull request #1473: LUCENE-9353: Move terms metadata to its own file.

2020-05-07 Thread GitBox


jpountz commented on a change in pull request #1473:
URL: https://github.com/apache/lucene-solr/pull/1473#discussion_r421336366



##
File path: lucene/core/src/java/org/apache/lucene/codecs/blocktree/BlockTreeTermsWriter.java
##
@@ -1060,36 +1052,35 @@ public void close() throws IOException {
       return;
     }
     closed = true;
-
+
+    final String metaName = IndexFileNames.segmentFileName(state.segmentInfo.name, state.segmentSuffix, BlockTreeTermsReader.TERMS_META_EXTENSION);
     boolean success = false;
-    try {
-
-      final long dirStart = termsOut.getFilePointer();
-      final long indexDirStart = indexOut.getFilePointer();
+    try (IndexOutput metaOut = state.directory.createOutput(metaName, state.context)) {
+      CodecUtil.writeIndexHeader(metaOut, BlockTreeTermsReader.TERMS_META_CODEC_NAME, BlockTreeTermsReader.VERSION_CURRENT,
+          state.segmentInfo.getId(), state.segmentSuffix);

-      termsOut.writeVInt(fields.size());
+      metaOut.writeVInt(fields.size());

Review comment:
   I prefer decoupling changes when possible.
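The `metaOut.writeVInt(fields.size())` call in the diff stores the field count as a variable-length integer. As a reference point, Lucene's vint format writes seven payload bits per byte, least-significant group first, with the high bit as a continuation flag; a minimal standalone sketch (not the DataOutput API itself):

```java
import java.io.ByteArrayOutputStream;

// Standalone sketch of the variable-length int encoding behind writeVInt:
// seven payload bits per byte, low bits first, high bit set on every byte
// except the last as a continuation flag.
public class VIntSketch {
    static byte[] writeVInt(int i) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((i & ~0x7F) != 0) {          // more than 7 bits remain
            out.write((i & 0x7F) | 0x80);   // emit low 7 bits, set continuation bit
            i >>>= 7;
        }
        out.write(i);                       // final byte, continuation bit clear
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Small counts (the common case for a per-segment field count) fit in one byte.
        System.out.println(writeVInt(5).length);   // 1
        System.out.println(writeVInt(300).length); // 2
    }
}
```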








[jira] [Commented] (SOLR-12865) Custom JSON parser's nested documents example does not work

2020-05-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-12865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101512#comment-17101512
 ] 

Jan Høydahl commented on SOLR-12865:


Have you tried the newest example for 8.x? 
https://lucene.apache.org/solr/guide/8_5/transforming-and-indexing-custom-json.html#indexing-nested-documents
 

It looks like the example has been heavily re-worked since 7.5.

> Custom JSON parser's nested documents example does not work
> ---
>
> Key: SOLR-12865
> URL: https://issues.apache.org/jira/browse/SOLR-12865
> Project: Solr
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 7.5
>Reporter: Alexandre Rafalovitch
>Priority: Major
>  Labels: json
>
> The only example we have for indexing nested JSON using the JSON parser does 
> not seem to work: 
> [https://lucene.apache.org/solr/guide/7_5/transforming-and-indexing-custom-json.html#indexing-nested-documents]
> Attempt 1, using default schemaless mode:
>  # bin/solr create -c json_basic
>  # Example command in V1 format (with core name switched to above)
>  # Indexing fails with: *"msg":"[doc=null] missing required field: id"*. My 
> guess is that this is because the URP chain does not apply to inner child records
> Attempt 2, using techproducts schema configuration:
>  # bin/solr create -c json_tp -d sample_techproducts_configs
>  # Same example command with new core
>  # Indexing fails with: *"msg":"Raw data can be stored only if split=/"* (due 
> to presence of srcField in the params.json)
> Attempt 3, continuing the above example, but taking out srcField 
> configuration:
>  # Update params.json to remove srcField
>  # Same example command
>  # It indexes (but not commits)
>  # curl http://localhost:8983/solr/json_tp/update/json -v -d '{commit:{}}'
>  # The core now contains only one document with auto-generated "id" and 
> "_version_" field (because we have mapUniqueKeyOnly in params.json)
> Attempt 4, removing more keys
>  # Update params.json to remove mapUniqueKeyOnly
>  # Same example command
>  # Indexing fails with: *"msg":"Document is missing mandatory uniqueKey 
> field: id"*
> There does not seem to be a way to index the nested JSON using the transformer 
> approach.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (LUCENE-9350) Don't cache automata on FuzzyQuery

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101530#comment-17101530
 ] 

ASF subversion and git services commented on LUCENE-9350:
-

Commit c6d4aeab3f9c7df028fdd77f9fb5c8c30e839482 in lucene-solr's branch 
refs/heads/master from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c6d4aea ]

LUCENE-9350: Don't hold references to large automata on FuzzyQuery (#1467)

LUCENE-9068 moved fuzzy automata construction into FuzzyQuery itself.
However, this has the nasty side-effect of blowing up query caches that
expect queries to be fairly small. This commit restores the previous
behaviour of caching the large automata on an AttributeSource shared
between segments, while making the construction a bit clearer by factoring
it out into a package-private FuzzyAutomatonBuilder.

> Don't cache automata on FuzzyQuery
> --
>
> Key: LUCENE-9350
> URL: https://issues.apache.org/jira/browse/LUCENE-9350
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> LUCENE-9068 moved construction of FuzzyQuery's automaton directly onto the 
> query itself.  However, as SOLR-14428 demonstrates, this ends up blowing up 
> query caches that assume query objects are fairly small when calculating 
> their memory usage.  We should move automaton construction back into 
> FuzzyTermsEnum, while keeping as much of the nice refactoring of LUCENE-9068 
> as possible.






[jira] [Commented] (LUCENE-9068) Build FuzzyQuery automata up-front

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101531#comment-17101531
 ] 

ASF subversion and git services commented on LUCENE-9068:
-

Commit c6d4aeab3f9c7df028fdd77f9fb5c8c30e839482 in lucene-solr's branch 
refs/heads/master from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c6d4aea ]

LUCENE-9350: Don't hold references to large automata on FuzzyQuery (#1467)

LUCENE-9068 moved fuzzy automata construction into FuzzyQuery itself.
However, this has the nasty side-effect of blowing up query caches that
expect queries to be fairly small. This commit restores the previous
behaviour of caching the large automata on an AttributeSource shared
between segments, while making the construction a bit clearer by factoring
it out into a package-private FuzzyAutomatonBuilder.

> Build FuzzyQuery automata up-front
> --
>
> Key: LUCENE-9068
> URL: https://issues.apache.org/jira/browse/LUCENE-9068
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.5
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> FuzzyQuery builds a set of levenshtein automata (one for each possible edit 
> distance) at rewrite time, and passes them between different TermsEnum 
> invocations using an attribute source.  This seems a bit needlessly 
> complicated, and also means that things like visiting a query end up building 
> the automata again.  We should instead build the automata at query 
> construction time, which is how AutomatonQuery does it.






[jira] [Commented] (LUCENE-9350) Don't cache automata on FuzzyQuery

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101536#comment-17101536
 ] 

ASF subversion and git services commented on LUCENE-9350:
-

Commit b68afeddbc9cc0c731077349b34b67695772e74a in lucene-solr's branch 
refs/heads/branch_8x from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b68afed ]

LUCENE-9350: Don't hold references to large automata on FuzzyQuery (#1467)

LUCENE-9068 moved fuzzy automata construction into FuzzyQuery itself.
However, this has the nasty side-effect of blowing up query caches that
expect queries to be fairly small. This commit restores the previous
behaviour of caching the large automata on an AttributeSource shared
between segments, while making the construction a bit clearer by factoring
it out into a package-private FuzzyAutomatonBuilder.


> Don't cache automata on FuzzyQuery
> --
>
> Key: LUCENE-9350
> URL: https://issues.apache.org/jira/browse/LUCENE-9350
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> LUCENE-9068 moved construction of FuzzyQuery's automaton directly onto the 
> query itself.  However, as SOLR-14428 demonstrates, this ends up blowing up 
> query caches that assume query objects are fairly small when calculating 
> their memory usage.  We should move automaton construction back into 
> FuzzyTermsEnum, while keeping as much of the nice refactoring of LUCENE-9068 
> as possible.






[jira] [Commented] (LUCENE-9350) Don't cache automata on FuzzyQuery

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101539#comment-17101539
 ] 

ASF subversion and git services commented on LUCENE-9350:
-

Commit e28663893ec7eb3919ceadb219fecaeb7e6d59c5 in lucene-solr's branch 
refs/heads/master from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e286638 ]

LUCENE-9350: Add changes entry


> Don't cache automata on FuzzyQuery
> --
>
> Key: LUCENE-9350
> URL: https://issues.apache.org/jira/browse/LUCENE-9350
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> LUCENE-9068 moved construction of FuzzyQuery's automaton directly onto the 
> query itself.  However, as SOLR-14428 demonstrates, this ends up blowing up 
> query caches that assume query objects are fairly small when calculating 
> their memory usage.  We should move automaton construction back into 
> FuzzyTermsEnum, while keeping as much of the nice refactoring of LUCENE-9068 
> as possible.






[jira] [Commented] (LUCENE-9068) Build FuzzyQuery automata up-front

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101537#comment-17101537
 ] 

ASF subversion and git services commented on LUCENE-9068:
-

Commit b68afeddbc9cc0c731077349b34b67695772e74a in lucene-solr's branch 
refs/heads/branch_8x from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b68afed ]

LUCENE-9350: Don't hold references to large automata on FuzzyQuery (#1467)

LUCENE-9068 moved fuzzy automata construction into FuzzyQuery itself.
However, this has the nasty side-effect of blowing up query caches that
expect queries to be fairly small. This commit restores the previous
behaviour of caching the large automata on an AttributeSource shared
between segments, while making the construction a bit clearer by factoring
it out into a package-private FuzzyAutomatonBuilder.


> Build FuzzyQuery automata up-front
> --
>
> Key: LUCENE-9068
> URL: https://issues.apache.org/jira/browse/LUCENE-9068
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.5
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> FuzzyQuery builds a set of levenshtein automata (one for each possible edit 
> distance) at rewrite time, and passes them between different TermsEnum 
> invocations using an attribute source.  This seems a bit needlessly 
> complicated, and also means that things like visiting a query end up building 
> the automata again.  We should instead build the automata at query 
> construction time, which is how AutomatonQuery does it.






[jira] [Commented] (LUCENE-9350) Don't cache automata on FuzzyQuery

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101538#comment-17101538
 ] 

ASF subversion and git services commented on LUCENE-9350:
-

Commit c2db05ca208a34dd5cf42e1b392b1a17f2751c73 in lucene-solr's branch 
refs/heads/branch_8x from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c2db05c ]

LUCENE-9350: Add changes entry


> Don't cache automata on FuzzyQuery
> --
>
> Key: LUCENE-9350
> URL: https://issues.apache.org/jira/browse/LUCENE-9350
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> LUCENE-9068 moved construction of FuzzyQuery's automaton directly onto the 
> query itself.  However, as SOLR-14428 demonstrates, this ends up blowing up 
> query caches that assume query objects are fairly small when calculating 
> their memory usage.  We should move automaton construction back into 
> FuzzyTermsEnum, while keeping as much of the nice refactoring of LUCENE-9068 
> as possible.






[jira] [Assigned] (SOLR-14428) FuzzyQuery has severe memory usage in 8.5

2020-05-07 Thread Alan Woodward (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward reassigned SOLR-14428:


Assignee: Alan Woodward  (was: Andrzej Bialecki)

> FuzzyQuery has severe memory usage in 8.5
> -
>
> Key: SOLR-14428
> URL: https://issues.apache.org/jira/browse/SOLR-14428
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.5, 8.5.1
>Reporter: Colvin Cowie
>Assignee: Alan Woodward
>Priority: Major
> Attachments: FuzzyHammer.java, SOLR-14428-WeakReferences.patch, 
> image-2020-04-23-09-18-06-070.png, image-2020-04-24-20-09-31-179.png, 
> screenshot-2.png, screenshot-3.png, screenshot-4.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I sent this to the mailing list
> I'm moving from 8.3.1 to 8.5.1, and started getting Out Of Memory Errors 
> while running our normal tests. After profiling it was clear that the 
> majority of the heap was allocated through FuzzyQuery.
> LUCENE-9068 moved construction of the automata from the FuzzyTermsEnum to the 
> FuzzyQuery's constructor.
> I created a little test ( [^FuzzyHammer.java] ) that fires off fuzzy queries 
> from random UUID strings for 5 minutes
> {code}
> FIELD_NAME + ":" + UUID.randomUUID().toString().replace("-", "") + "~2"
> {code}
> When running against vanilla Solr 8.3.1 and 8.4.1 there is no problem, while 
> the memory usage has increased drastically on 8.5.0 and 8.5.1.
> Comparison of heap usage while running the attached test against Solr 8.3.1 
> and 8.5.1 with a single (empty) shard and 4GB heap:
> !image-2020-04-23-09-18-06-070.png! 
> And with 4 shards on 8.4.1 and 8.5.0:
>  !screenshot-2.png! 
> I'm guessing that the memory might be being leaked if the FuzzyQuery objects 
> are referenced from the cache, while the FuzzyTermsEnum would not have been.
> Query Result Cache on 8.5.1:
>  !screenshot-3.png! 
> ~316mb in the cache
> QRC on 8.3.1
>  !screenshot-4.png! 
> <1mb
> With an empty cache, running this query 
> _field_s:e41848af85d24ac197c71db6888e17bc~2_ results in the following memory 
> allocation
> {noformat}
> 8.3.1: CACHE.searcher.queryResultCache.ramBytesUsed:  1520
> 8.5.1: CACHE.searcher.queryResultCache.ramBytesUsed:648855
> {noformat}
> ~1 gives 98253 and ~0 gives 6339 on 8.5.1. 8.3.1 is constant at 1520
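The ramBytesUsed numbers above can be turned into a toy model of the accounting problem (hypothetical classes, not Lucene's Accountable/LRUQueryCache API): a cache with a byte budget sized for ~1.5 KB entries fills up almost immediately once each cached query drags a large automaton along.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model (not Lucene's real query cache) of why large per-query state
// blows up a byte-budgeted cache. Entry sizes are the figures reported in
// the issue: ~1520 bytes per query on 8.3.1 vs ~648855 bytes on 8.5.1.
public class TinyRamCache {
    private final long maxBytes;
    private long usedBytes = 0;
    private final Map<String, Long> entries = new LinkedHashMap<>();

    TinyRamCache(long maxBytes) { this.maxBytes = maxBytes; }

    /** Returns true if the entry fit within the byte budget. */
    boolean put(String key, long entryBytes) {
        if (usedBytes + entryBytes > maxBytes) {
            return false; // would blow the budget; real caches evict instead
        }
        entries.put(key, entryBytes);
        usedBytes += entryBytes;
        return true;
    }

    int size() { return entries.size(); }

    public static void main(String[] args) {
        TinyRamCache cache = new TinyRamCache(1_000_000); // ~1 MB budget
        // 8.3.1-sized queries: hundreds fit comfortably.
        for (int i = 0; i < 500; i++) cache.put("small" + i, 1_520);
        System.out.println(cache.size()); // 500
        // One 8.5.1-sized fuzzy query no longer fits: 500*1520 + 648855 > 1e6.
        System.out.println(cache.put("fuzzy", 648_855)); // false
    }
}
```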






[jira] [Resolved] (SOLR-14428) FuzzyQuery has severe memory usage in 8.5

2020-05-07 Thread Alan Woodward (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-14428.
--
Fix Version/s: 8.6
   Resolution: Fixed

Fixed by LUCENE-9350

> FuzzyQuery has severe memory usage in 8.5
> -
>
> Key: SOLR-14428
> URL: https://issues.apache.org/jira/browse/SOLR-14428
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.5, 8.5.1
>Reporter: Colvin Cowie
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.6
>
> Attachments: FuzzyHammer.java, SOLR-14428-WeakReferences.patch, 
> image-2020-04-23-09-18-06-070.png, image-2020-04-24-20-09-31-179.png, 
> screenshot-2.png, screenshot-3.png, screenshot-4.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I sent this to the mailing list
> I'm moving from 8.3.1 to 8.5.1, and started getting Out Of Memory Errors 
> while running our normal tests. After profiling it was clear that the 
> majority of the heap was allocated through FuzzyQuery.
> LUCENE-9068 moved construction of the automata from the FuzzyTermsEnum to the 
> FuzzyQuery's constructor.
> I created a little test ( [^FuzzyHammer.java] ) that fires off fuzzy queries 
> from random UUID strings for 5 minutes
> {code}
> FIELD_NAME + ":" + UUID.randomUUID().toString().replace("-", "") + "~2"
> {code}
> When running against vanilla Solr 8.3.1 and 8.4.1 there is no problem, while 
> the memory usage has increased drastically on 8.5.0 and 8.5.1.
> Comparison of heap usage while running the attached test against Solr 8.3.1 
> and 8.5.1 with a single (empty) shard and 4GB heap:
> !image-2020-04-23-09-18-06-070.png! 
> And with 4 shards on 8.4.1 and 8.5.0:
>  !screenshot-2.png! 
> I'm guessing that the memory might be being leaked if the FuzzyQuery objects 
> are referenced from the cache, while the FuzzyTermsEnum would not have been.
> Query Result Cache on 8.5.1:
>  !screenshot-3.png! 
> ~316mb in the cache
> QRC on 8.3.1
>  !screenshot-4.png! 
> <1mb
> With an empty cache, running this query 
> _field_s:e41848af85d24ac197c71db6888e17bc~2_ results in the following memory 
> allocation
> {noformat}
> 8.3.1: CACHE.searcher.queryResultCache.ramBytesUsed:  1520
> 8.5.1: CACHE.searcher.queryResultCache.ramBytesUsed:648855
> {noformat}
> ~1 gives 98253 and ~0 gives 6339 on 8.5.1. 8.3.1 is constant at 1520






[jira] [Resolved] (LUCENE-9350) Don't cache automata on FuzzyQuery

2020-05-07 Thread Alan Woodward (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-9350.
---
Fix Version/s: 8.6
   Resolution: Fixed

> Don't cache automata on FuzzyQuery
> --
>
> Key: LUCENE-9350
> URL: https://issues.apache.org/jira/browse/LUCENE-9350
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.6
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> LUCENE-9068 moved construction of FuzzyQuery's automaton directly onto the 
> query itself.  However, as SOLR-14428 demonstrates, this ends up blowing up 
> query caches that assume query objects are fairly small when calculating 
> their memory usage.  We should move automaton construction back into 
> FuzzyTermsEnum, while keeping as much of the nice refactoring of LUCENE-9068 
> as possible.






[jira] [Created] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-07 Thread Mark Harwood (Jira)
Mark Harwood created LUCENE-9365:


 Summary: Fuzzy query has a false negative when prefix length == 
search term length 
 Key: LUCENE-9365
 URL: https://issues.apache.org/jira/browse/LUCENE-9365
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/query/scoring
Reporter: Mark Harwood


When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
with an edit distance of 1 and prefix length of 3.

In FuzzyQuery an automaton is created for the "suffix" part of the search 
string, which in this case is an empty string.

In this scenario, perhaps the FuzzyQuery should rewrite to a WildcardQuery of the 
following form:

{code:java}
searchString + "?" 
{code}

... where there is an appropriate number of ? characters according to the edit 
distance.
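Since '?' in a WildcardQuery matches exactly one character, the suggested rewrite needs one pattern per number of appended characters, from zero (the exact term) up to the edit distance. A hypothetical sketch of generating that pattern set (illustrative only, not a proposed Lucene implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the rewrite suggested in the issue: '?' matches
// exactly one character, so edit distance n yields n+1 wildcard patterns,
// one for each allowed number of trailing extra characters.
public class FuzzySuffixPatterns {
    static List<String> patterns(String searchString, int maxEdits) {
        List<String> result = new ArrayList<>();
        StringBuilder sb = new StringBuilder(searchString);
        result.add(sb.toString());              // exact match, zero extra chars
        for (int i = 0; i < maxEdits; i++) {
            sb.append('?');                     // allow one more trailing character
            result.add(sb.toString());
        }
        return result;
    }

    public static void main(String[] args) {
        // "bba" with edit distance 1 should match both "bba" and "bbab".
        System.out.println(patterns("bba", 1)); // [bba, bba?]
    }
}
```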






[jira] [Commented] (LUCENE-7822) IllegalArgumentException thrown instead of a CorruptIndexException

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101551#comment-17101551
 ] 

ASF subversion and git services commented on LUCENE-7822:
-

Commit 583499243a2176bef503859218fe6ba4ebb547d2 in lucene-solr's branch 
refs/heads/master from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5834992 ]

LUCENE-7822: CodecUtil#checkFooter should throw a CorruptIndexException as the 
main exception. (#1482)



> IllegalArgumentException thrown instead of a CorruptIndexException
> --
>
> Key: LUCENE-7822
> URL: https://issues.apache.org/jira/browse/LUCENE-7822
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.5.1
>Reporter: Martin Amirault
>Priority: Minor
> Attachments: LUCENE-7822.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Similarly to LUCENE-7592 , When an {{*.si}} file is corrupted on very 
> specific part an IllegalArgumentException is thrown instead of a 
> CorruptIndexException.
> StackTrace (Lucene 6.5.1):
> {code}
> java.lang.IllegalArgumentException: Illegal minor version: 12517381
>   at 
> __randomizedtesting.SeedInfo.seed([1FEB5987CFA44BE:B8755B5574F9F3BF]:0)
>   at org.apache.lucene.util.Version.(Version.java:385)
>   at org.apache.lucene.util.Version.(Version.java:371)
>   at org.apache.lucene.util.Version.fromBits(Version.java:353)
>   at 
> org.apache.lucene.codecs.lucene62.Lucene62SegmentInfoFormat.read(Lucene62SegmentInfoFormat.java:97)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:357)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:288)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:448)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:445)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:692)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:644)
>   at 
> org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:450)
>   at 
> org.apache.lucene.index.DirectoryReader.listCommits(DirectoryReader.java:260)
> {code}
> Simple fix would be to add IllegalArgumentException to the catch list at 
> {{org/apache/lucene/index/SegmentInfos.java:289}}
> Other variations for the stacktraces:
> {code}
> java.lang.IllegalArgumentException: invalid codec filename '_�.cfs', must 
> match: _[a-z0-9]+(_.*)?\..*
>   at 
> __randomizedtesting.SeedInfo.seed([8B3FDE317B8D634A:A8EE07E5EB4B0B13]:0)
>   at 
> org.apache.lucene.index.SegmentInfo.checkFileNames(SegmentInfo.java:270)
>   at org.apache.lucene.index.SegmentInfo.addFiles(SegmentInfo.java:252)
>   at org.apache.lucene.index.SegmentInfo.setFiles(SegmentInfo.java:246)
>   at 
> org.apache.lucene.codecs.lucene62.Lucene62SegmentInfoFormat.read(Lucene62SegmentInfoFormat.java:248)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:357)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:288)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:448)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:445)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:692)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:644)
>   at 
> org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:450)
>   at 
> org.apache.lucene.index.DirectoryReader.listCommits(DirectoryReader.java:260)
> {code}
> {code}
> java.lang.IllegalArgumentException: An SPI class of type 
> org.apache.lucene.codecs.Codec with name 'LucenI62' does not exist.  You need 
> to add the corresponding JAR file supporting this SPI to your classpath.  The 
> current classpath supports the following names: [Lucene62, Lucene50, 
> Lucene53, Lucene54, Lucene60]
>   at 
> __randomizedtesting.SeedInfo.seed([925DE160F7260F99:B026EB9373CB6368]:0)
>   at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:116)
>   at org.apache.lucene.codecs.Codec.forName(Codec.java:116)
>   at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:424)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:356)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:288)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:448)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:445)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:692)
>   at 
> org.a

[jira] [Resolved] (LUCENE-7822) IllegalArgumentException thrown instead of a CorruptIndexException

2020-05-07 Thread Adrien Grand (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7822.
--
Fix Version/s: 7.8
   Resolution: Fixed

> IllegalArgumentException thrown instead of a CorruptIndexException
> --
>
> Key: LUCENE-7822
> URL: https://issues.apache.org/jira/browse/LUCENE-7822
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.5.1
>Reporter: Martin Amirault
>Priority: Minor
> Fix For: 7.8
>
> Attachments: LUCENE-7822.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Similarly to LUCENE-7592 , When an {{*.si}} file is corrupted on very 
> specific part an IllegalArgumentException is thrown instead of a 
> CorruptIndexException.
> StackTrace (Lucene 6.5.1):
> {code}
> java.lang.IllegalArgumentException: Illegal minor version: 12517381
>   at 
> __randomizedtesting.SeedInfo.seed([1FEB5987CFA44BE:B8755B5574F9F3BF]:0)
>   at org.apache.lucene.util.Version.(Version.java:385)
>   at org.apache.lucene.util.Version.(Version.java:371)
>   at org.apache.lucene.util.Version.fromBits(Version.java:353)
>   at 
> org.apache.lucene.codecs.lucene62.Lucene62SegmentInfoFormat.read(Lucene62SegmentInfoFormat.java:97)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:357)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:288)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:448)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:445)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:692)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:644)
>   at 
> org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:450)
>   at 
> org.apache.lucene.index.DirectoryReader.listCommits(DirectoryReader.java:260)
> {code}
> Simple fix would be to add IllegalArgumentException to the catch list at 
> {{org/apache/lucene/index/SegmentInfos.java:289}}
> Other variations for the stacktraces:
> {code}
> java.lang.IllegalArgumentException: invalid codec filename '_�.cfs', must 
> match: _[a-z0-9]+(_.*)?\..*
>   at 
> __randomizedtesting.SeedInfo.seed([8B3FDE317B8D634A:A8EE07E5EB4B0B13]:0)
>   at 
> org.apache.lucene.index.SegmentInfo.checkFileNames(SegmentInfo.java:270)
>   at org.apache.lucene.index.SegmentInfo.addFiles(SegmentInfo.java:252)
>   at org.apache.lucene.index.SegmentInfo.setFiles(SegmentInfo.java:246)
>   at 
> org.apache.lucene.codecs.lucene62.Lucene62SegmentInfoFormat.read(Lucene62SegmentInfoFormat.java:248)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:357)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:288)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:448)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:445)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:692)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:644)
>   at 
> org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:450)
>   at 
> org.apache.lucene.index.DirectoryReader.listCommits(DirectoryReader.java:260)
> {code}
> {code}
> java.lang.IllegalArgumentException: An SPI class of type 
> org.apache.lucene.codecs.Codec with name 'LucenI62' does not exist.  You need 
> to add the corresponding JAR file supporting this SPI to your classpath.  The 
> current classpath supports the following names: [Lucene62, Lucene50, 
> Lucene53, Lucene54, Lucene60]
>   at 
> __randomizedtesting.SeedInfo.seed([925DE160F7260F99:B026EB9373CB6368]:0)
>   at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:116)
>   at org.apache.lucene.codecs.Codec.forName(Codec.java:116)
>   at org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:424)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:356)
>   at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:288)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:448)
>   at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:445)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:692)
>   at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:644)
>   at 
> org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:450)
>   at 
> org.apache.lucene.index.DirectoryReader.listCommits(DirectoryReader.java:260)
> {code}
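The fix suggested in the description (adding IllegalArgumentException to the catch list in SegmentInfos) can be sketched as follows. This is a hedged illustration, not the actual Lucene patch: the class and method names below are hypothetical stand-ins for the real SegmentInfos/Version code paths.

```java
// Hedged sketch (illustrative names, not Lucene's API): parsing code that may
// throw IllegalArgumentException on corrupt bytes is wrapped so that callers
// consistently see a corruption exception instead of an escaping IAE.
import java.io.IOException;

public class CorruptionWrapSketch {
    // Stand-in for org.apache.lucene.index.CorruptIndexException
    public static class CorruptIndexException extends IOException {
        public CorruptIndexException(String msg, String resource, Throwable cause) {
            super(msg + " (resource=" + resource + ")", cause);
        }
    }

    // Stand-in parser: rejects out-of-range minor versions, like the real
    // Version code, which throws IllegalArgumentException.
    static int parseMinorVersion(int minor) {
        if (minor < 0 || minor > 99) {
            throw new IllegalArgumentException("Illegal minor version: " + minor);
        }
        return minor;
    }

    // The suggested fix: add IllegalArgumentException to the catch list and
    // rethrow it as a corruption error.
    public static int readMinorVersion(int rawBits, String segmentFile) throws IOException {
        try {
            return parseMinorVersion(rawBits);
        } catch (IllegalArgumentException e) {
            throw new CorruptIndexException("value read from segment file is illegal", segmentFile, e);
        }
    }
}
```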



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[jira] [Created] (LUCENE-9366) Unused maxDoc parameter in DocValues.emptySortedNumeric()

2020-05-07 Thread Alan Woodward (Jira)
Alan Woodward created LUCENE-9366:
-

 Summary: Unused maxDoc parameter in DocValues.emptySortedNumeric()
 Key: LUCENE-9366
 URL: https://issues.apache.org/jira/browse/LUCENE-9366
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward


DocValues.emptySortedNumeric() currently takes a maxDoc parameter, which is 
unused.  We can just remove it, which simplifies a couple of call sites and 
will make LUCENE-9330 a bit tidier.






[GitHub] [lucene-solr] romseygeek opened a new pull request #1491: LUCENE-9366: Remove unused maxDoc parameter from DocValues.emptySortedNumeric()

2020-05-07 Thread GitBox


romseygeek opened a new pull request #1491:
URL: https://github.com/apache/lucene-solr/pull/1491


   






[GitHub] [lucene-solr] romseygeek commented on pull request #1491: LUCENE-9366: Remove unused maxDoc parameter from DocValues.emptySortedNumeric()

2020-05-07 Thread GitBox


romseygeek commented on pull request #1491:
URL: https://github.com/apache/lucene-solr/pull/1491#issuecomment-625202975


   > Is this change master-only or 8.x too?
   
   Given that it's a trivial change to make and in a fairly expert API, I think 
this is OK to backport to 8.x






[jira] [Commented] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-07 Thread Alan Woodward (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101579#comment-17101579
 ] 

Alan Woodward commented on LUCENE-9365:
---

> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form

+1, I think this works as a line of least surprise.

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.
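The rewrite proposed in the description can be illustrated outside Lucene with plain java.util.regex. This is only a sketch of the intended matching semantics under the stated assumption (prefix length equals term length); the names are hypothetical and the actual fix uses Lucene's query/automaton machinery, not java.util.regex.

```java
// Illustration of the proposed rewrite: when the fuzzy prefix covers the whole
// search string, a match with edit distance n reduces to "the search string
// followed by up to n extra characters". Hypothetical helper, not Lucene API.
import java.util.regex.Pattern;

public class FuzzySuffixSketch {
    public static Pattern rewrite(String searchString, int editDistance) {
        // e.g. "bba" with edit distance 1 -> matches "bba" and "bbaX"
        return Pattern.compile(Pattern.quote(searchString) + ".{0," + editDistance + "}");
    }
}
```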






[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-05-07 Thread Tomoko Uchida (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101589#comment-17101589
 ] 

Tomoko Uchida commented on LUCENE-9321:
---

I'm not certain whether it affects the discussion here, but there is one thing that is 
not very clear to me. (Sorry if I missed something.)

Once "documentation" task is done, I plan to work on "checkBrokenLink" task 
(equivalent of Ant's "check-broken-links" macro). For the purpose of the task, 
it will depend on "documentation" (since it checks the javadocs w/ relative 
paths generated by "documentation").

My question is: when should "checkBrokenLink" be invoked? If we want to 
include it in "precommit", "documentation" will also be (cascadingly) called 
every time people run precommit. If that is not preferred, we'd need to 
consider where the check belongs (should only Jenkins and the release manager 
care about it?).

 

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[GitHub] [lucene-solr] ErickErickson opened a new pull request #1492: SOLR-11934: Visit Solr logging, it's too noisy.

2020-05-07 Thread GitBox


ErickErickson opened a new pull request #1492:
URL: https://github.com/apache/lucene-solr/pull/1492


   Preliminary changes for discussion.
   






[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-05-07 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101598#comment-17101598
 ] 

Dawid Weiss commented on LUCENE-9321:
-

I'd say make it {{rootProject.check.dependsOn checkBrokenLinks}} - precommit is 
already lengthy. Ideally documentation would be a separate subproject so that 
it can have its own tasks and checks but I wouldn't want to touch the ant part 
as it's fragile? 


> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-05-07 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101604#comment-17101604
 ] 

Uwe Schindler commented on LUCENE-9321:
---

In my opinion, only the Smoke Tester on Jenkins should run the full 
documentation analysis. The likelihood that a single commit breaks this is not 
large. If we check that the links inside a single module are fine, I think it's 
OK for a precommit check. The global link checking mainly only fails when you 
do structural changes to the build (like adding new modules), and in that case 
you would run the whole build anyway.

For everyday use, I would make the precommit checks as fast as possible and let 
Jenkins + Smoketester do the rest. But that's just my opinion.

Sorry for not proceeding, very busy at the moment. Will try to get on that 
later. I need lunch first.

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-05-07 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101605#comment-17101605
 ] 

Uwe Schindler commented on LUCENE-9321:
---

bq. Ideally documentation would be a separate subproject so that it can have 
its own tasks and checks but I wouldn't want to touch the ant part as it's 
fragile?

That was also my idea. We can work on that later once Ant has been switched 
off. Then we can move the code I develop here to a separate module which 
produces the documentation folder in its own build dir (currently it's in 
lucene/build/documentation). But I agree it's too fragile; don't touch javadocs 
and documentation in Ant...

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-05-07 Thread Tomoko Uchida (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101631#comment-17101631
 ] 

Tomoko Uchida commented on LUCENE-9321:
---

Thanks, "check" task looks good to me for holding such project-global task. And 
I'd agree with precommit checks should run as fast as possible.

I think we'll be able to discuss it in details later if needed, just wanted to 
share my question (don't intend to hurry this issue at all, sorry).

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[GitHub] [lucene-solr] romseygeek commented on pull request #1485: LUCENE-9362: Fix rewriting check in ExpressionValueSource

2020-05-07 Thread GitBox


romseygeek commented on pull request #1485:
URL: https://github.com/apache/lucene-solr/pull/1485#issuecomment-625242791


   Thanks for this, the fix looks good and it's a sneaky bug.  Can you add a 
CHANGES entry for 8.6 as well, and I will merge?






[jira] [Commented] (LUCENE-9366) Unused maxDoc parameter in DocValues.emptySortedNumeric()

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101676#comment-17101676
 ] 

ASF subversion and git services commented on LUCENE-9366:
-

Commit d06294e6abac140a069ddc1b60cf563f38ae797f in lucene-solr's branch 
refs/heads/master from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d06294e ]

LUCENE-9366: Remove unused maxDoc parameter from DocValues.emptySortedNumeric() 
(#1491)



> Unused maxDoc parameter in DocValues.emptySortedNumeric()
> -
>
> Key: LUCENE-9366
> URL: https://issues.apache.org/jira/browse/LUCENE-9366
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> DocValues.emptySortedNumeric() currently takes a maxDoc parameter, which is 
> unused.  We can just remove it, which simplifies a couple of call sites and 
> will make LUCENE-9330 a bit tidier.






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-07 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101692#comment-17101692
 ] 

David Smiley commented on SOLR-11934:
-

Ugh.  Trade-offs.

I really value LogUpdateProcessorFactory; I insist we keep it at INFO by 
default.  It goes to great lengths to generate a single succinct log message 
that is useful.  If we keep anything related to indexing (includes commit), it 
needs to be this and I'm a big fan of it.  BTW if you set it to WARN threshold, 
I can see a small bug where it skips the URP altogether and thus also misses 
logging the slow queries at WARN.

I can live with losing the inner details of commit but need that one 
SolrIndexSearcher for opening it.  We agree on this I think.

Also I insist that by default, every query is logged. Let's also ensure that 
this log category is easily separated out so that it's easy to manipulate 
without messing with other logs in whatever class this happens to be in.

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something need 
> attention for instance. If I'm troubleshooting should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked. Is this 
> independent on the logging implementation used? The SLF4J and log4j seem a 
> bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks, even starting 
> now it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!
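Point 2 in the description (whether unnecessary objects are created by parameterized logging calls) can be demonstrated with a minimal stand-in logger. This is an assumed, simplified model rather than SLF4J or Log4j themselves: with the `{}` template style the message string is never built when the level is disabled, but the argument expression is still evaluated eagerly at the call site.

```java
// Minimal stand-in logger demonstrating the cost question from the issue:
// the template is never formatted when INFO is disabled, but the argument
// expression passed to info() has already been evaluated by then.
public class LoggingCostSketch {
    static boolean infoEnabled = false;
    static int evaluations = 0;

    static String expensiveDump() {
        evaluations++;                 // counts eager argument evaluation
        return "large object dump";
    }

    static void info(String template, Object arg) {
        if (!infoEnabled) {
            return;                    // message string is never built
        }
        System.out.println(template.replace("{}", String.valueOf(arg)));
    }

    public static void demo() {
        info("whatever {}", expensiveDump());  // arg evaluated despite INFO off
    }
}
```

Guarding with an is-enabled check (or a lambda-accepting API) avoids even the argument evaluation; the template style alone only avoids the string formatting.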






[jira] [Resolved] (LUCENE-9366) Unused maxDoc parameter in DocValues.emptySortedNumeric()

2020-05-07 Thread Alan Woodward (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-9366.
---
Fix Version/s: 8.6
   Resolution: Fixed

> Unused maxDoc parameter in DocValues.emptySortedNumeric()
> -
>
> Key: LUCENE-9366
> URL: https://issues.apache.org/jira/browse/LUCENE-9366
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.6
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> DocValues.emptySortedNumeric() currently takes a maxDoc parameter, which is 
> unused.  We can just remove it, which simplifies a couple of call sites and 
> will make LUCENE-9330 a bit tidier.






[jira] [Commented] (LUCENE-9366) Unused maxDoc parameter in DocValues.emptySortedNumeric()

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101693#comment-17101693
 ] 

ASF subversion and git services commented on LUCENE-9366:
-

Commit e84efa9a5e3284696019722d5dba51c2c4b8f3cb in lucene-solr's branch 
refs/heads/branch_8x from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e84efa9 ]

LUCENE-9366: Remove unused maxDoc parameter from DocValues.emptySortedNumeric() 
(#1491)


> Unused maxDoc parameter in DocValues.emptySortedNumeric()
> -
>
> Key: LUCENE-9366
> URL: https://issues.apache.org/jira/browse/LUCENE-9366
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> DocValues.emptySortedNumeric() currently takes a maxDoc parameter, which is 
> unused.  We can just remove it, which simplifies a couple of call sites and 
> will make LUCENE-9330 a bit tidier.






[GitHub] [lucene-solr] markharwood opened a new pull request #1493: Bug fix for false negatives produced by FuzzyQuery

2020-05-07 Thread GitBox


markharwood opened a new pull request #1493:
URL: https://github.com/apache/lucene-solr/pull/1493


   Fix for Jira issue [9365](https://issues.apache.org/jira/browse/LUCENE-9365) 
where a search for `abc` doesn't match doc `abcd` if prefix length = 3 and edit 
distance = 1.
   
   The solution is to rewrite the FuzzyQuery as a RegExpQuery given this 
condition.
   Added related test to TestFuzzyQuery.






[jira] [Commented] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-07 Thread Mark Harwood (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101694#comment-17101694
 ] 

Mark Harwood commented on LUCENE-9365:
--

I opened a [PR|https://github.com/apache/lucene-solr/pull/1493]

The fix required a RegExpQuery rather than a WildcardQuery because wildcard 
syntax doesn't support a variable number of changeable characters.

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.






[jira] [Comment Edited] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-07 Thread Mark Harwood (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101712#comment-17101712
 ] 

Mark Harwood edited comment on LUCENE-9365 at 5/7/20, 2:11 PM:
---

Fixed in 
https://github.com/apache/lucene-solr/commit/28e47549c8ba1a7c17ffe7d9e791e88983ef46c2

Thanks for the review, Alan


was (Author: markh):
Fixed in 
https://github.com/apache/lucene-solr/commit/28e47549c8ba1a7c17ffe7d9e791e88983ef46c2

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.






[jira] [Commented] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-07 Thread Mark Harwood (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101712#comment-17101712
 ] 

Mark Harwood commented on LUCENE-9365:
--

Fixed in 
https://github.com/apache/lucene-solr/commit/28e47549c8ba1a7c17ffe7d9e791e88983ef46c2

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.






[jira] [Created] (SOLR-14462) Autoscaling placement wrong with concurrent collection creations

2020-05-07 Thread Ilan Ginzburg (Jira)
Ilan Ginzburg created SOLR-14462:


 Summary: Autoscaling placement wrong with concurrent collection 
creations
 Key: SOLR-14462
 URL: https://issues.apache.org/jira/browse/SOLR-14462
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Affects Versions: master (9.0)
Reporter: Ilan Ginzburg
 Attachments: policylogs.txt

Under concurrent collection creation, wrong Autoscaling placement decisions can 
lead to severely unbalanced clusters.
 Sequential creation of the same collections is handled correctly and the 
cluster is balanced.

*TL;DR:* under high load, the way sessions that cache future changes to 
Zookeeper are managed causes placement decisions of multiple concurrent 
Collection API calls to ignore each other and be based on an identical “initial” 
cluster state, possibly leading to identical placement decisions and, as a 
consequence, cluster imbalance.

*Some context first* for those less familiar with how Autoscaling deals with 
cluster state change: a PolicyHelper.Session is created with a snapshot of the 
Zookeeper cluster state and is used to track already decided but not yet 
persisted to Zookeeper cluster state changes so that Collection API commands 
can make the right placement decisions.
 A Collection API command either uses an existing cached Session (that includes 
changes computed by previous command(s)) or creates a new Session initialized 
from the Zookeeper cluster state (i.e. with only state changes already 
persisted).
 When a Collection API command requires a Session - and one is needed for any 
cluster state update computation - if one exists but is currently in use, the 
command can wait up to 10 seconds. If the session becomes available, it is 
reused. Otherwise, a new one is created.
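The wait-then-create behavior can be sketched roughly as follows (class and method names are invented for illustration; the real logic lives in PolicyHelper and tracks more state):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Rough sketch: reuse the cached session if it is returned within the
// timeout, otherwise fall back to a fresh session built from ZooKeeper state.
public class SessionCacheSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition returned = lock.newCondition();
    private Object cachedSession;   // carries not-yet-persisted placement decisions
    private boolean inUse;

    public Object borrowOrCreate(long timeoutMs) {
        lock.lock();
        try {
            long nanos = TimeUnit.MILLISECONDS.toNanos(timeoutMs);
            while (inUse && nanos > 0) {
                try {
                    nanos = returned.awaitNanos(nanos); // wait for the current user
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
            if (inUse || cachedSession == null) {
                // Timed out: start from ZooKeeper state, ignoring any changes
                // still tracked only by the busy cached session.
                cachedSession = new Object();
            }
            inUse = true;
            return cachedSession;
        } finally {
            lock.unlock();
        }
    }

    public void giveBack() {
        lock.lock();
        try {
            inUse = false;
            returned.signalAll();
        } finally {
            lock.unlock();
        }
    }
}
```

Under load, many waiters time out at roughly the same moment and each builds its own "fresh" session, which is exactly the pile-up described later in this report.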

The Session lifecycle is as follows: it is created in COMPUTING state by a 
Collection API command and is initialized with a snapshot of cluster state from 
Zookeeper (does not require a Zookeeper read, this is running on Overseer that 
maintains a cache of cluster state). The command has exclusive access to the 
Session and can change the state of the Session. When the command is done 
changing the Session, the Session is “returned” and its state changes to 
EXECUTING while the command continues to run to persist the state to Zookeeper 
and interact with the nodes, but no longer interacts with the Session. Another 
command can then grab a Session in EXECUTING state, change its state to 
COMPUTING to compute new changes taking into account previous changes. When all 
commands having used the session have completed their work, the session is 
“released” and destroyed (at this stage, Zookeeper contains all the state 
changes that were computed using that Session).
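The lifecycle described above amounts to a small state machine; a simplified sketch (state names are taken from the description, the transition checks are invented for illustration):

```java
// COMPUTING: one command has exclusive access and is changing the session.
// EXECUTING: the command returned it; another command may grab it.
// RELEASED:  all users are done; the session is destroyed.
public class SessionLifecycleSketch {
    public enum State { COMPUTING, EXECUTING, RELEASED }

    private State state = State.COMPUTING; // created by a Collection API command

    public void returned() { transition(State.COMPUTING, State.EXECUTING); }
    public void grabbed()  { transition(State.EXECUTING, State.COMPUTING); }
    public void released() { transition(State.EXECUTING, State.RELEASED); }

    public State state() { return state; }

    private void transition(State expected, State next) {
        if (state != expected) {
            throw new IllegalStateException(state + " -> " + next);
        }
        state = next;
    }
}
```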

The issue arises when multiple Collection API commands are executed at once. A 
first Session is created and commands start using it one by one. In a simple 1 
shard 1 replica collection creation test run with 100 parallel Collection API 
requests (see debug logs from PolicyHelper in file policy.logs), this Session 
update phase (Session in COMPUTING status in SessionWrapper) takes about 
250-300ms (MacBook Pro).

This means that about 40 commands can run by using in turn the same Session (45 
in the sample run). The commands that have been waiting for too long time out 
after 10 seconds, more or less all at the same time (at the rate at which they 
have been received by the OverseerCollectionMessageHandler, approx one per 
100ms in the sample run) and most/all independently decide to create a new 
Session. These new Sessions are based on Zookeeper state; they might or might 
not include some of the changes from the first 40 commands (depending on 
whether those commands got their changes written to Zookeeper by the time of 
the 10-second timeout; a few might have made it, see below).

These new Sessions (54 sessions in addition to the initial one) are based on 
more or less the same state, so all remaining commands are making placement 
decisions that do not take into account each other (and likely not much of the 
first 44 placement decisions either).

The sample run whose relevant logs are attached led, for the 100 single-shard 
single-replica collection creations, to 82 collections on the Overseer node and 
5 and 13 collections on the two other nodes of a 3-node cluster. Given that 
the initial session was used 45 times (once initially then reused 44 times), 
one would have expected at least the first 45 collections to be evenly 
distributed, i.e. 15 replicas on each node. This was not the case, possibly a 
sign of other issues (other runs even ended up placing 0 replicas out of the 
100 on one of the nodes).

From the client perspective, HTTP admin collection CREATE requests averaged 
19.5 seconds each and lasted between 7 and 28 secon

[jira] [Created] (SOLR-14463) solr admin ui: zkstatus: For input string: "null"

2020-05-07 Thread Bernd Wahlen (Jira)
Bernd Wahlen created SOLR-14463:
---

 Summary: solr admin ui: zkstatus: For input string: "null"
 Key: SOLR-14463
 URL: https://issues.apache.org/jira/browse/SOLR-14463
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 8.5.1
Reporter: Bernd Wahlen


When I select Cloud - ZK Status in the Solr 8.5.1 web interface, I get:
For input string: "null"

This happens after upgrading the leader to ZooKeeper 3.6.1 (upgrading the 
followers before was not a problem). As far as I can tell, the configuration is 
working; only the status web page isn't.
I remember that there was a similar problem before with Solr/ZK version 
incompatibilities. According to the ZooKeeper documentation, 3.6.1 should be 
fully compatible with 3.5 clients.
Annoyingly, there is no official ZK 3.6.1 Docker container available, but I 
think it is easy to reproduce.
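The exception in the log below reduces to `Integer.parseInt` being handed the literal string "null" from the ZooKeeper 3.6 status response. A standalone reproduction, with a generic defensive guard (not the actual fix in ZookeeperStatusHandler):

```java
public class ParseNullSketch {
    // Treat the literal string "null" (and null itself) as a missing value
    // instead of letting parseInt throw NumberFormatException.
    static Integer parseOrNull(String raw) {
        if (raw == null || "null".equals(raw)) {
            return null;
        }
        return Integer.parseInt(raw);
    }

    public static void main(String[] args) {
        try {
            Integer.parseInt("null"); // what the handler effectively does
            throw new AssertionError("expected NumberFormatException");
        } catch (NumberFormatException expected) {
            // message: For input string: "null"
        }
        System.out.println(parseOrNull("null")); // missing value, no exception
        System.out.println(parseOrNull("42"));
    }
}
```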

and from solr.log file:

{code:java}
2020-05-07 15:58:33.194 ERROR (qtp1940055334-231) [   ] 
o.a.s.h.RequestHandlerBase java.lang.NumberFormatException: For input string: 
"null"
at 
java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.base/java.lang.Integer.parseInt(Integer.java:652)
at java.base/java.lang.Integer.parseInt(Integer.java:770)
at 
org.apache.solr.handler.admin.ZookeeperStatusHandler.getZkStatus(ZookeeperStatusHandler.java:116)
at 
org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:78)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:842)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:808)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:559)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:420)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:352)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1607)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1577)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
at 
org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:500)
at 
org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)

[jira] [Updated] (SOLR-14463) solr admin ui: zkstatus: For input string: "null" with zk 3.6.x

2020-05-07 Thread Bernd Wahlen (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bernd Wahlen updated SOLR-14463:

Summary: solr admin ui: zkstatus: For input string: "null" with zk 3.6.x  
(was: solr admin ui: zkstatus: For input string: "null")

> solr admin ui: zkstatus: For input string: "null" with zk 3.6.x
> ---
>
> Key: SOLR-14463
> URL: https://issues.apache.org/jira/browse/SOLR-14463
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.5.1
>Reporter: Bernd Wahlen
>Priority: Major
>
> When I select Cloud - ZK Status in the Solr 8.5.1 web interface, I get:
> For input string: "null"
> This happens after upgrading the leader to ZooKeeper 3.6.1 (upgrading the 
> followers before was not a problem). As far as I can tell, the configuration 
> is working; only the status web page isn't.
> I remember that there was a similar problem before with Solr/ZK version 
> incompatibilities. According to the ZooKeeper documentation, 3.6.1 should be 
> fully compatible with 3.5 clients.
> Annoyingly, there is no official ZK 3.6.1 Docker container available, but I 
> think it is easy to reproduce.
> and from solr.log file:
> {code:java}
> 2020-05-07 15:58:33.194 ERROR (qtp1940055334-231) [   ] 
> o.a.s.h.RequestHandlerBase java.lang.NumberFormatException: For input string: 
> "null"
> at 
> java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.base/java.lang.Integer.parseInt(Integer.java:652)
> at java.base/java.lang.Integer.parseInt(Integer.java:770)
> at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.getZkStatus(ZookeeperStatusHandler.java:116)
> at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:78)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:842)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:808)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:559)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:420)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:352)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1607)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1577)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
> at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at org.eclipse.jetty.server.Server.handle(Server.java:500)
> at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
> at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
> at 
> org.eclipse.jetty.

[jira] [Created] (LUCENE-9367) Using a queryText which results in zero tokens causes a query to be built as null

2020-05-07 Thread Tim Brier (Jira)
Tim Brier created LUCENE-9367:
-

 Summary: Using a queryText which results in zero tokens causes a 
query to be built as null
 Key: LUCENE-9367
 URL: https://issues.apache.org/jira/browse/LUCENE-9367
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 7.2.1
Reporter: Tim Brier


If a queryText produces zero tokens after being processed by an Analyzer, when 
you try to build a Query with it the result is null.

 

The following code reproduces this bug:
{code:java}
public class LuceneBug {
public Query buildQuery() throws IOException {
Analyzer analyzer = CustomAnalyzer.builder()
.withTokenizer(StandardTokenizerFactory.class)
.addTokenFilter(StopFilterFactory.class)
.build();

QueryBuilder queryBuilder = new QueryBuilder(analyzer);

String onlyStopWords = "the and it";
return queryBuilder.createPhraseQuery("AnyField", onlyStopWords);
}
}
{code}
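A caller-side guard avoids propagating the null. The sketch below imitates the behavior with a stand-in tokenizer rather than Lucene itself (the stop-word list, method names, and `<match-no-docs>` fallback marker are invented for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class NullQuerySketch {
    static final List<String> STOP_WORDS = Arrays.asList("the", "and", "it");

    // Stand-in for analysis + query building: returns null when every token
    // is removed, mirroring QueryBuilder.createPhraseQuery's behaviour.
    static String createPhraseQuery(String field, String text) {
        List<String> tokens = Arrays.stream(text.split("\\s+"))
                .filter(t -> !STOP_WORDS.contains(t))
                .collect(Collectors.toList());
        return tokens.isEmpty() ? null : field + ":\"" + String.join(" ", tokens) + "\"";
    }

    // Caller-side guard: fall back to a match-nothing marker instead of null.
    static String createPhraseQueryOrFallback(String field, String text) {
        String q = createPhraseQuery(field, text);
        return q != null ? q : "<match-no-docs>";
    }

    public static void main(String[] args) {
        System.out.println(createPhraseQuery("AnyField", "the and it"));           // null
        System.out.println(createPhraseQueryOrFallback("AnyField", "the and it")); // fallback
    }
}
```

In real Lucene code the fallback would typically be a MatchNoDocsQuery, or the clause would simply be skipped.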
 






[jira] [Comment Edited] (SOLR-13862) JDK 13+Shenandoah stability/recovery problems

2020-05-07 Thread Bernd Wahlen (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101716#comment-17101716
 ] 

Bernd Wahlen edited comment on SOLR-13862 at 5/7/20, 2:20 PM:
--

I am trying again with solr 8.5.1 + 14.0.1 adopt and the same Shenandoah+Memory 
settings. I will let you know if the problems are gone.


was (Author: bwahlen):
I am trying again with solr 8.5.1 + 14.0.1 adopt and the same Shenandoah+Memory 
settings. I will let you know the problems are gone.

> JDK 13+Shenandoah stability/recovery problems
> -
>
> Key: SOLR-13862
> URL: https://issues.apache.org/jira/browse/SOLR-13862
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.2
>Reporter: Bernd Wahlen
>Priority: Major
>
> after updating my cluster (CentOS 7.7, Solr 8.2, JDK 12) to JDK 13 (3 nodes, 
> 4 collections, 1 shard), everything ran well (with lower p95) for some hours. 
> Then 2 nodes (not the leader) went into recovery state with ~"Recovery failed: 
> Error opening new searcher". I tried a rolling restart of the cluster, but 
> recovery did not work. After I switched to JDK 11, recovery worked again. In 
> summary, JDK 11 and JDK 12 ran stable; JDK 13 did not.
> This is my solr.in.sh:
> GC_TUNE="-XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC"
>  SOLR_TIMEZONE="CET"
>  
> GC_LOG_OPTS="-Xlog:gc*:file=/var/log/solr/solr_gc.log:time:filecount=9,filesize=20M:safepoint"
> I also tried ADDREPLICA during my attempt to repair the cluster, which 
> caused Out of Memory on JDK 13 and worked after going back to JDK 11.






[jira] [Commented] (SOLR-13862) JDK 13+Shenandoah stability/recovery problems

2020-05-07 Thread Bernd Wahlen (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101716#comment-17101716
 ] 

Bernd Wahlen commented on SOLR-13862:
-

I am trying again with solr 8.5.1 + 14.0.1 adopt and the same Shenandoah+Memory 
settings. I will let you know if the problems are gone.

> JDK 13+Shenandoah stability/recovery problems
> -
>
> Key: SOLR-13862
> URL: https://issues.apache.org/jira/browse/SOLR-13862
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.2
>Reporter: Bernd Wahlen
>Priority: Major
>
> after updating my cluster (CentOS 7.7, Solr 8.2, JDK 12) to JDK 13 (3 nodes, 
> 4 collections, 1 shard), everything ran well (with lower p95) for some hours. 
> Then 2 nodes (not the leader) went into recovery state with ~"Recovery failed: 
> Error opening new searcher". I tried a rolling restart of the cluster, but 
> recovery did not work. After I switched to JDK 11, recovery worked again. In 
> summary, JDK 11 and JDK 12 ran stable; JDK 13 did not.
> This is my solr.in.sh:
> GC_TUNE="-XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC"
>  SOLR_TIMEZONE="CET"
>  
> GC_LOG_OPTS="-Xlog:gc*:file=/var/log/solr/solr_gc.log:time:filecount=9,filesize=20M:safepoint"
> I also tried ADDREPLICA during my attempt to repair the cluster, which 
> caused Out of Memory on JDK 13 and worked after going back to JDK 11.






[jira] [Updated] (LUCENE-9367) Using a queryText which results in zero tokens causes a query to be built as null

2020-05-07 Thread Tim Brier (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Brier updated LUCENE-9367:
--
Description: 
If a queryText produces zero tokens after being processed by an Analyzer, when 
you try to build a Query with it the result is null.

 

The following code reproduces this bug:
{code:java}
public class LuceneBug {
public Query buildQuery() throws IOException {
Analyzer analyzer = CustomAnalyzer.builder()
.withTokenizer(StandardTokenizerFactory.class)
.addTokenFilter(StopFilterFactory.class)
.build();

QueryBuilder queryBuilder = new QueryBuilder(analyzer);

String onlyStopWords = "the and it";
return queryBuilder.createPhraseQuery("AnyField", onlyStopWords);
}
}
{code}
 

  was:
If a queryText produces zero tokens after being processed by an Analyzer when 
you try to build a Query with it the result is null.

 

The following code reproduces this bug:
{code:java}
public class LuceneBug {
public Query buildQuery() throws IOException {
Analyzer analyzer = CustomAnalyzer.builder()
.withTokenizer(StandardTokenizerFactory.class)
.addTokenFilter(StopFilterFactory.class)
.build();

QueryBuilder queryBuilder = new QueryBuilder(analyzer);

String onlyStopWords = "the and it";
return queryBuilder.createPhraseQuery("AnyField", onlyStopWords);
}
}
{code}
 


> Using a queryText which results in zero tokens causes a query to be built as 
> null
> -
>
> Key: LUCENE-9367
> URL: https://issues.apache.org/jira/browse/LUCENE-9367
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.2.1
>Reporter: Tim Brier
>Priority: Major
>
> If a queryText produces zero tokens after being processed by an Analyzer, 
> when you try to build a Query with it the result is null.
>  
> The following code reproduces this bug:
> {code:java}
> public class LuceneBug {
> public Query buildQuery() throws IOException {
> Analyzer analyzer = CustomAnalyzer.builder()
> .withTokenizer(StandardTokenizerFactory.class)
> .addTokenFilter(StopFilterFactory.class)
> .build();
> QueryBuilder queryBuilder = new QueryBuilder(analyzer);
> String onlyStopWords = "the and it";
> return queryBuilder.createPhraseQuery("AnyField", onlyStopWords);
> }
> }
> {code}
>  






[jira] [Comment Edited] (SOLR-14357) solrj: using insecure namedCurves

2020-05-07 Thread Bernd Wahlen (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100542#comment-17100542
 ] 

Bernd Wahlen edited comment on SOLR-14357 at 5/7/20, 2:31 PM:
--

Update:
after i updated to AdoptOpenJDK (build 14.0.1+7) and my patched java.security 
file was overwritten with the jdk default accidentally, it still works without 
the error above. But i updated some other things in the meantime (mainly centos 
7.7->7.8 and solrj to 8.5.1). I will try to investigate how to reproduce later.

I think negotiation of algorithm fails only in specific jvm/solr/solrj 
combinations.

not working:  server: jdk 11.0.6+solr 8.4.1, client jdk 14.0.0+solrj 8.4.1
working: server: jdk11.0.7+solr8.5.1, client jdk 14.0.1+solrj 8.5.1


was (Author: bwahlen):
Update:
after i updated to AdoptOpenJDK (build 14.0.1+7) and my patched java.security 
file was overwritten with the jdk default accidentally, it still works without 
the error above. But i updated some other things in the meantime (mainly centos 
7.7->7.8 and solrj to 8.5.1). I will try to investigate how to reproduce later.

> solrj: using insecure namedCurves
> -
>
> Key: SOLR-14357
> URL: https://issues.apache.org/jira/browse/SOLR-14357
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bernd Wahlen
>Priority: Major
>
> I tried to run our backend with SolrJ 8.4.1 on JDK 14 and got the 
> following error:
> Caused by: java.lang.IllegalArgumentException: Error in security property. 
> Constraint unknown: c2tnb191v1
> After I removed all the X9.62 algorithms from the property 
> jdk.disabled.namedCurves in
> /usr/lib/jvm/java-14-openjdk-14.0.0.36-1.rolling.el7.x86_64/conf/security/java.security
> everything runs.
> This does not happen on staging (I think because there is only 1 Solr node, 
> so the LB client is not used).
> We do not set or change any SSL settings in solr.in.sh.
> I don't know how to fix that (default config?, Apache client settings?), but 
> I think using insecure algorithms may be a security risk and not only a 
> JDK 14 issue.






[jira] [Comment Edited] (SOLR-14357) solrj: using insecure namedCurves

2020-05-07 Thread Bernd Wahlen (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100542#comment-17100542
 ] 

Bernd Wahlen edited comment on SOLR-14357 at 5/7/20, 2:33 PM:
--

Update:
after i updated to AdoptOpenJDK (build 14.0.1+7) and my patched java.security 
file was overwritten with the jdk default accidentally, it still works without 
the error above. But i updated some other things in the meantime (mainly centos 
7.7->7.8 and solrj to 8.5.1). I will try to investigate how to reproduce later.

I think negotiation of algorithm fails only in specific jvm/solrj combinations.

not working:  server: jdk 11.0.6+solr 8.4.1, client jdk 14.0.0+solrj 8.4.1
working: server jdk 11.0.7+solr8.4.1, client jdk: 14.0.1+solrj8.5.1
working: server: jdk11.0.7+solr8.5.1, client jdk 14.0.1+solrj 8.5.1
working: server jdk 14.0.1+solr8.5.1, client jdk 14.0.1+solrj 8.5.1



was (Author: bwahlen):
Update:
after i updated to AdoptOpenJDK (build 14.0.1+7) and my patched java.security 
file was overwritten with the jdk default accidentally, it still works without 
the error above. But i updated some other things in the meantime (mainly centos 
7.7->7.8 and solrj to 8.5.1). I will try to investigate how to reproduce later.

I think negotiation of algorithm fails only in specific jvm/solr/solrj 
combinations.

not working:  server: jdk 11.0.6+solr 8.4.1, client jdk 14.0.0+solrj 8.4.1
working: server: jdk11.0.7+solr8.5.1, client jdk 14.0.1+solrj 8.5.1

> solrj: using insecure namedCurves
> -
>
> Key: SOLR-14357
> URL: https://issues.apache.org/jira/browse/SOLR-14357
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bernd Wahlen
>Priority: Major
>
> I tried to run our backend with SolrJ 8.4.1 on JDK 14 and got the 
> following error:
> Caused by: java.lang.IllegalArgumentException: Error in security property. 
> Constraint unknown: c2tnb191v1
> After I removed all the X9.62 algorithms from the property 
> jdk.disabled.namedCurves in
> /usr/lib/jvm/java-14-openjdk-14.0.0.36-1.rolling.el7.x86_64/conf/security/java.security
> everything runs.
> This does not happen on staging (I think because there is only 1 Solr node, 
> so the LB client is not used).
> We do not set or change any SSL settings in solr.in.sh.
> I don't know how to fix that (default config?, Apache client settings?), but 
> I think using insecure algorithms may be a security risk and not only a 
> JDK 14 issue.






[GitHub] [lucene-solr] markharwood opened a new pull request #1494: Backport of FuzzyQuery bugfix for issue 9365

2020-05-07 Thread GitBox


markharwood opened a new pull request #1494:
URL: https://github.com/apache/lucene-solr/pull/1494


   Backport of 28e47549c8ba1a7c17ffe7d9e791e88983ef46c2
   Fixes false negative in FuzzyQuery when prefix length == search string 
length.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Updated] (SOLR-14463) solr admin ui: zkstatus: For input string: "null" with zk 3.6.x

2020-05-07 Thread Bernd Wahlen (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bernd Wahlen updated SOLR-14463:

Description: 
When I select Cloud - ZK Status in the Solr 8.5.1 web interface, I get:
For input string: "null"

This happens after upgrading the leader to ZooKeeper 3.6.1 (upgrading the 
followers before was not a problem). As far as I can tell, the configuration is 
working; only the status web page isn't.
I remember that there was a similar problem before with Solr/ZK version 
incompatibilities. According to the ZooKeeper documentation, 3.6.1 should be 
fully compatible with 3.5 clients.
Annoyingly, there is no official ZK 3.6.1 Docker container available, but I 
think it is easy to reproduce.

and from solr.log file:

{code:java}
2020-05-07 15:58:33.194 ERROR (qtp1940055334-231) [   ] 
o.a.s.h.RequestHandlerBase java.lang.NumberFormatException: For input string: 
"null"
at 
java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.base/java.lang.Integer.parseInt(Integer.java:652)
at java.base/java.lang.Integer.parseInt(Integer.java:770)
at 
org.apache.solr.handler.admin.ZookeeperStatusHandler.getZkStatus(ZookeeperStatusHandler.java:116)
at 
org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:78)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:842)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:808)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:559)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:420)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:352)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1607)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1577)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
at 
org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:500)
at 
org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
at 
org.eclipse.jetty.u

[GitHub] [lucene-solr] emetsds commented on pull request #1485: LUCENE-9362: Fix rewriting check in ExpressionValueSource

2020-05-07 Thread GitBox


emetsds commented on pull request #1485:
URL: https://github.com/apache/lucene-solr/pull/1485#issuecomment-625307912


   Done.
   Thank you.






[jira] [Commented] (SOLR-14426) forbidden api error during precommit DateMathFunction

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101754#comment-17101754
 ] 

ASF subversion and git services commented on SOLR-14426:


Commit 31b350e8040cbe30c4e85b7fb82eab4b6afd81c7 in lucene-solr's branch 
refs/heads/master from Mike Drob
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=31b350e ]

SOLR-14426 Move auxiliary classes to nested classes (#1487)



> forbidden api error during precommit DateMathFunction
> -
>
> Key: SOLR-14426
> URL: https://issues.apache.org/jira/browse/SOLR-14426
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mike Drob
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When running `./gradlew precommit` I'll occasionally see
> {code}
> * What went wrong:
> Execution failed for task ':solr:contrib:analytics:forbiddenApisMain'.
> > de.thetaphi.forbiddenapis.ForbiddenApiException: Check for forbidden API 
> > calls failed while scanning class 
> > 'org.apache.solr.analytics.function.mapping.DateMathFunction' 
> > (DateMathFunction.java): java.lang.ClassNotFoundException: 
> > org.apache.solr.analytics.function.mapping.DateMathValueFunction (while 
> > looking up details about referenced class 
> > 'org.apache.solr.analytics.function.mapping.DateMathValueFunction')
> {code}
> `./gradlew clean` fixes this, but I don't understand what or why this 
> happens. Feels like a gradle issue?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Resolved] (SOLR-14426) forbidden api error during precommit DateMathFunction

2020-05-07 Thread Mike Drob (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob resolved SOLR-14426.
--
Fix Version/s: master (9.0)
 Assignee: Mike Drob
   Resolution: Fixed

resolving this for now, somebody please reopen if they see the issue again

> forbidden api error during precommit DateMathFunction
> -
>
> Key: SOLR-14426
> URL: https://issues.apache.org/jira/browse/SOLR-14426
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When running `./gradlew precommit` I'll occasionally see
> {code}
> * What went wrong:
> Execution failed for task ':solr:contrib:analytics:forbiddenApisMain'.
> > de.thetaphi.forbiddenapis.ForbiddenApiException: Check for forbidden API 
> > calls failed while scanning class 
> > 'org.apache.solr.analytics.function.mapping.DateMathFunction' 
> > (DateMathFunction.java): java.lang.ClassNotFoundException: 
> > org.apache.solr.analytics.function.mapping.DateMathValueFunction (while 
> > looking up details about referenced class 
> > 'org.apache.solr.analytics.function.mapping.DateMathValueFunction')
> {code}
> `./gradlew clean` fixes this, but I don't understand what or why this 
> happens. Feels like a gradle issue?






[GitHub] [lucene-solr] markharwood opened a new pull request #1495: Revert "Bugfix for FuzzyQuery false negative (#1493)"

2020-05-07 Thread GitBox


markharwood opened a new pull request #1495:
URL: https://github.com/apache/lucene-solr/pull/1495


   This reverts commit 28e47549c8ba1a7c17ffe7d9e791e88983ef46c2.
   The use of RegExpQuery as a fallback has to consider that the search string 
may contain characters which are illegal regex syntax and need escaping.
   
   Will rethink the approach.






[GitHub] [lucene-solr] markharwood commented on pull request #1494: Backport of FuzzyQuery bugfix for issue 9365

2020-05-07 Thread GitBox


markharwood commented on pull request #1494:
URL: https://github.com/apache/lucene-solr/pull/1494#issuecomment-625316015


   Aborting - there was a bug with this fix in master. Using RegExpQuery as a 
fallback is not appropriate because content needs escaping to be legal syntax. 
Will rework






[GitHub] [lucene-solr] romseygeek commented on pull request #1485: LUCENE-9362: Fix rewriting check in ExpressionValueSource

2020-05-07 Thread GitBox


romseygeek commented on pull request #1485:
URL: https://github.com/apache/lucene-solr/pull/1485#issuecomment-625319467


   Thank you!






[jira] [Comment Edited] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-07 Thread Mark Harwood (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101769#comment-17101769
 ] 

Mark Harwood edited comment on LUCENE-9365 at 5/7/20, 3:18 PM:
---

Ignore last comment - I'm reverting that change because we can't fall back to a 
RegExp query with the user's search string without escaping it. It may contain 
illegal search syntax. Will re-think the approach and open another PR.


was (Author: markh):
Ignore last comment - I've reverting that change because we can't fall back to 
a RegExp query with the user's search string without escaping it. It may 
contain illegal search syntax. Will re-think the approach and open another PR.

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.






[jira] [Commented] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-07 Thread Mark Harwood (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101769#comment-17101769
 ] 

Mark Harwood commented on LUCENE-9365:
--

Ignore last comment - I'm reverting that change because we can't fall back to 
a RegExp query with the user's search string without escaping it. It may 
contain illegal search syntax. Will re-think the approach and open another PR.

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.






[jira] [Commented] (LUCENE-9362) ExpressionValueSource has buggy rewrite method

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101768#comment-17101768
 ] 

ASF subversion and git services commented on LUCENE-9362:
-

Commit 4c408a5820970e77c4278e97017d37fe75ea8950 in lucene-solr's branch 
refs/heads/master from Dmitry Emets
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4c408a5 ]

LUCENE-9362: Fix rewriting check in ExpressionValueSource (#1485)



> ExpressionValueSource has buggy rewrite method
> --
>
> Key: LUCENE-9362
> URL: https://issues.apache.org/jira/browse/LUCENE-9362
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/expressions
>Reporter: Dmitry Emets
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ExpressionValueSource does not actually rewrite itself due to a small mistake 
> in the check of inner rewrites.
> {code:java}
> changed |= (rewritten[i] == variables[i]);
> {code}
> should be changed to
> {code:java}
> changed |= (rewritten[i] != variables[i]);
> {code}



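The one-character fix in the commit above can be illustrated with a plain-Java sketch of the rewrite-tracking pattern. `RewriteDemo` below is an illustrative stand-in, not Lucene's actual `ExpressionValueSource` (which rewrites `DoubleValuesSource` children); it shows why `==` inverted the change check:

```java
// Illustrative stand-in for the LUCENE-9362 pattern: a rewrite returns the
// same child instance when nothing changed and a new instance otherwise, so
// the parent must rebuild only when some child's identity differs.
class RewriteDemo {
    // Hypothetical child rewrite: negative values are "rewritten" to zero;
    // everything else is returned as the same instance.
    static Integer rewriteChild(Integer v) {
        return v < 0 ? Integer.valueOf(0) : v;
    }

    static Integer[] rewrite(Integer[] variables) {
        Integer[] rewritten = new Integer[variables.length];
        boolean changed = false;
        for (int i = 0; i < variables.length; i++) {
            rewritten[i] = rewriteChild(variables[i]);
            // The bug used '==' here, which flagged "changed" only for
            // untouched children; '!=' detects an actual rewrite.
            changed |= (rewritten[i] != variables[i]);
        }
        return changed ? rewritten : variables; // reuse original when unchanged
    }
}
```

With the buggy `==`, an array containing a rewritten child would be returned unchanged, which is exactly the "does not actually rewrite itself" symptom reported.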



[jira] [Commented] (LUCENE-9362) ExpressionValueSource has buggy rewrite method

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101782#comment-17101782
 ] 

ASF subversion and git services commented on LUCENE-9362:
-

Commit 98660c8e41013deed8ce90650675abcf6a5792be in lucene-solr's branch 
refs/heads/branch_8x from Dmitry Emets
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=98660c8 ]

LUCENE-9362: Fix rewriting check in ExpressionValueSource (#1485)



> ExpressionValueSource has buggy rewrite method
> --
>
> Key: LUCENE-9362
> URL: https://issues.apache.org/jira/browse/LUCENE-9362
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/expressions
>Reporter: Dmitry Emets
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> ExpressionValueSource does not actually rewrite itself due to a small mistake 
> in the check of inner rewrites.
> {code:java}
> changed |= (rewritten[i] == variables[i]);
> {code}
> should be changed to
> {code:java}
> changed |= (rewritten[i] != variables[i]);
> {code}






[jira] [Resolved] (LUCENE-9362) ExpressionValueSource has buggy rewrite method

2020-05-07 Thread Alan Woodward (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-9362.
---
Fix Version/s: 8.6
   Resolution: Fixed

> ExpressionValueSource has buggy rewrite method
> --
>
> Key: LUCENE-9362
> URL: https://issues.apache.org/jira/browse/LUCENE-9362
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/expressions
>Reporter: Dmitry Emets
>Priority: Minor
> Fix For: 8.6
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> ExpressionValueSource does not actually rewrite itself due to a small mistake 
> in the check of inner rewrites.
> {code:java}
> changed |= (rewritten[i] == variables[i]);
> {code}
> should be changed to
> {code:java}
> changed |= (rewritten[i] != variables[i]);
> {code}






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-07 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101784#comment-17101784
 ] 

Erick Erickson commented on SOLR-11934:
---

{quote}I can live with losing the inner details of commit but need that one 
SolrIndexSearcher for opening it. We agree on this I think.
{quote}
It Depends (tm) on how valuable warmup times would be. The log message in 
SolrIndexSearcher comes very early in the process, long before autowarming is 
done. The one in SolrCore comes after autowarming and can trivially include the 
autowarm time with each message. WDYT?

 
{quote}{quote}I really value LogUpdateProcessorFactory; I insist we keep it at 
INFO by default.
{quote}{quote}
I looked some more to see how much I disagreed and... well I don't disagree any 
more. Turns out that both the client app that started this Jira and the logs I 
have to evaluate are ill-behaved. The overwhelming number of updates are a 
single document, not to mention external commits and the like. So if people 
insist on following this pattern, they can bump the log level to WARN for 
LogUpdateProcessorFactory in the log4j2 config files. It won't be nearly as 
egregious if sane patterns are followed.

 
{quote} BTW if you set it to WARN threshold, I can see a small bug where it 
skips the URP altogether and thus also misses logging the slow queries at WARN.
{quote}
 

I don't quite follow. Are you talking about what's done in
{code:java}
LogUpdateProcessorFactory.getLogStringAndClearRspToLog()?{code}
where the response.toLog is cleared? That looks...confused, perhaps another 
JIRA? The response.toLog is referenced in the LogUpdateProcessorFactory and 
cleared in the method I just mentioned (if called). Then, back in SolrCore the 
information is (potentially) logged in requestLog and/or slowLog, both of which 
are bypassed if either of the calls in LogUpdateProcessorFactory are called 
since the response.toLog is cleared. Which means that slow logging and request 
logging aren't working at all. Which isn't true AFAIK, so I'd rather not try to 
address it in this JIRA.

 
{quote} Lets also ensure that this log category is easily separated out so that 
it's easy to manipulate without messing with other logs in whatever class this 
happens to be in.
{quote}
Please raise another Jira for that; I don't want too much feature creep 
here. I'm not quite sure what you mean by "this log category" here anyway, the 
requestLog?

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something need 
> attention for instance. If I'm troubleshooting should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked. Is this 
> independent on the logging implementation used? The SLF4J and log4j seem a 
> bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks, even starting 
> now it would probably span the 7.3 release.
> This will 

[jira] [Commented] (LUCENE-9367) Using a queryText which results in zero tokens causes a query to be built as null

2020-05-07 Thread Alan Woodward (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101786#comment-17101786
 ] 

Alan Woodward commented on LUCENE-9367:
---

`null` does seem like the wrong thing to do here - maybe we should be returning 
a MatchNoDocsQuery instead?

> Using a queryText which results in zero tokens causes a query to be built as 
> null
> -
>
> Key: LUCENE-9367
> URL: https://issues.apache.org/jira/browse/LUCENE-9367
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.2.1
>Reporter: Tim Brier
>Priority: Major
>
> If a queryText produces zero tokens after being processed by an Analyzer, 
> when you try to build a Query with it the result is null.
>  
> The following code reproduces this bug:
> {code:java}
> public class LuceneBug {
>     public Query buildQuery() throws IOException {
>         Analyzer analyzer = CustomAnalyzer.builder()
>             .withTokenizer(StandardTokenizerFactory.class)
>             .addTokenFilter(StopFilterFactory.class)
>             .build();
>         QueryBuilder queryBuilder = new QueryBuilder(analyzer);
>         String onlyStopWords = "the and it";
>         return queryBuilder.createPhraseQuery("AnyField", onlyStopWords);
>     }
> }
> {code}
>  



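The `MatchNoDocsQuery` fallback Alan suggests can be sketched in plain Java. The types below (`Query`, `MatchNoDocsQuery`, `PhraseQueryStub`) and the toy stop-word filter are illustrative stand-ins, not the real Lucene classes:

```java
// Plain-Java model of the suggested behavior: never return null from query
// construction; fall back to a match-nothing query when analysis leaves
// zero tokens (e.g. a stop-word-only phrase like "the and it").
class NullSafeQueries {
    interface Query {}
    static final class MatchNoDocsQuery implements Query {}
    static final class PhraseQueryStub implements Query {
        final String terms;
        PhraseQueryStub(String terms) { this.terms = terms; }
    }

    // Stand-in for the analyzer chain: drops a few English stop words.
    static String analyze(String text) {
        StringBuilder kept = new StringBuilder();
        for (String tok : text.split("\\s+")) {
            if (!java.util.Set.of("the", "and", "it").contains(tok)) {
                if (kept.length() > 0) kept.append(' ');
                kept.append(tok);
            }
        }
        return kept.toString();
    }

    static Query createPhraseQuery(String field, String text) {
        String analyzed = analyze(text);
        // The bug report's QueryBuilder returns null here; this sketch
        // returns a match-nothing query instead.
        return analyzed.isEmpty() ? new MatchNoDocsQuery() : new PhraseQueryStub(analyzed);
    }
}
```

Callers can apply the same null-to-`MatchNoDocsQuery` mapping around the real `QueryBuilder` today, before any library-side change lands.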



[jira] [Commented] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-07 Thread Mark Harwood (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101829#comment-17101829
 ] 

Mark Harwood commented on LUCENE-9365:
--

Wildcard and Regexp are both awkward solutions to this problem - they both come 
with a syntax whose characters must be escaped in any prefix string.

Perhaps another way of addressing the problem is to enhance PrefixQuery to 
allow the option of limiting the number of chars allowed after the prefix 
rather than the current unlimited number.

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.






[jira] [Commented] (SOLR-14462) Autoscaling placement wrong with concurrent collection creations

2020-05-07 Thread Ilan Ginzburg (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101830#comment-17101830
 ] 

Ilan Ginzburg commented on SOLR-14462:
--

Did a test with no wait before creating a new Session if the current cached 
session is COMPUTING. It doesn't work (it creates a new Session almost every 
time; I saw 99 sessions for 100 collection creations) because the cache can 
only hold a single session, which makes the sessions non-reusable.
Not waiting before creating a session implies being able to cache more than one.

Note the run was faster, with better throughput of creations per second. Max 
time was significantly lower; cluster imbalance was similar. Measurements below 
are HTTP request times seen from JMeter for creation of 1-shard, 1-replica 
collections.

Wait 10 seconds to see if session becomes available:

Avg 17879ms, min 7794, max 26063, 3.8/sec, 81/16/3 collections per node, 57 
Sessions created total.

 

Do not wait, create new session if cached one not available:

Avg 17721ms, min 7097, max 20951, 4.7/sec, 80/11/9 collections per node, 99 
Sessions created total.

> Autoscaling placement wrong with concurrent collection creations
> 
>
> Key: SOLR-14462
> URL: https://issues.apache.org/jira/browse/SOLR-14462
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: master (9.0)
>Reporter: Ilan Ginzburg
>Priority: Major
> Attachments: policylogs.txt
>
>
> Under concurrent collection creation, wrong Autoscaling placement decisions 
> can lead to severely unbalanced clusters.
>  Sequential creation of the same collections is handled correctly and the 
> cluster is balanced.
> *TL;DR;* under high load, the way sessions that cache future changes to 
> Zookeeper are managed cause placement decisions of multiple concurrent 
> Collection API calls to ignore each other, be based on identical “initial” 
> cluster state, possibly leading to identical placement decisions and as a 
> consequence cluster imbalance.
> *Some context first* for those less familiar with how Autoscaling deals with 
> cluster state change: a PolicyHelper.Session is created with a snapshot of 
> the Zookeeper cluster state and is used to track already decided but not yet 
> persisted to Zookeeper cluster state changes so that Collection API commands 
> can make the right placement decisions.
>  A Collection API command either uses an existing cached Session (that 
> includes changes computed by previous command(s)) or creates a new Session 
> initialized from the Zookeeper cluster state (i.e. with only state changes 
> already persisted).
>  When a Collection API command requires a Session - and one is needed for any 
> cluster state update computation - if one exists but is currently in use, the 
> command can wait up to 10 seconds. If the session becomes available, it is 
> reused. Otherwise, a new one is created.
> The Session lifecycle is as follows: it is created in COMPUTING state by a 
> Collection API command and is initialized with a snapshot of cluster state 
> from Zookeeper (does not require a Zookeeper read, this is running on 
> Overseer that maintains a cache of cluster state). The command has exclusive 
> access to the Session and can change the state of the Session. When the 
> command is done changing the Session, the Session is “returned” and its state 
> changes to EXECUTING while the command continues to run to persist the state 
> to Zookeeper and interact with the nodes, but no longer interacts with the 
> Session. Another command can then grab a Session in EXECUTING state, change 
> its state to COMPUTING to compute new changes taking into account previous 
> changes. When all commands having used the session have completed their work, 
> the session is “released” and destroyed (at this stage, Zookeeper contains 
> all the state changes that were computed using that Session).
> The issue arises when multiple Collection API commands are executed at once. 
> A first Session is created and commands start using it one by one. In a 
> simple 1 shard 1 replica collection creation test run with 100 parallel 
> Collection API requests (see debug logs from PolicyHelper in file 
> policy.logs), this Session update phase (Session in COMPUTING status in 
> SessionWrapper) takes about 250-300ms (MacBook Pro).
> This means that about 40 commands can run by using in turn the same Session 
> (45 in the sample run). The commands that have been waiting for too long time 
> out after 10 seconds, more or less all at the same time (at the rate at which 
> they have been received by the OverseerCollectionMessageHandler, approx one 
> per 100ms in the sample run) and most/all independen
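The single-slot session cache and the COMPUTING/EXECUTING lifecycle described above can be sketched as a minimal model. Names here are illustrative, not Solr's actual PolicyHelper API:

```java
// Minimal single-slot session cache mirroring the described lifecycle: a
// borrower waits up to a timeout while the cached session is COMPUTING,
// reuses it if it reaches EXECUTING (keeping prior placement decisions),
// and otherwise creates a fresh session -- the branch where concurrent
// commands end up computing placements from identical "initial" state.
class SessionCache {
    enum State { COMPUTING, EXECUTING }

    static final class Session {
        State state = State.COMPUTING;
        final int id;
        Session(int id) { this.id = id; }
    }

    private Session cached;
    private int created;

    synchronized Session borrow(long maxWaitMs) {
        long deadline = System.currentTimeMillis() + maxWaitMs;
        try {
            while (cached != null && cached.state == State.COMPUTING) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) break;   // timed out waiting for reuse
                wait(remaining);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (cached != null && cached.state == State.EXECUTING) {
            cached.state = State.COMPUTING;  // reuse, including prior changes
            return cached;
        }
        cached = new Session(++created);     // fresh snapshot of cluster state
        return cached;
    }

    synchronized void returned(Session s) {  // command finished computing
        s.state = State.EXECUTING;
        notifyAll();
    }

    synchronized int sessionsCreated() { return created; }
}
```

Sequential borrow/return cycles reuse one session; under contention, each timed-out borrower replaces the slot with a fresh session, which models the 99-sessions-for-100-creations result above.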

[jira] [Commented] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-07 Thread Alan Woodward (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101835#comment-17101835
 ] 

Alan Woodward commented on LUCENE-9365:
---

I think you can do it as a plain AutomatonQuery?  You can build the relevant 
Automaton using Automata.makeString() and Automata.makeAnyChar() and combine 
with Operations.concatenate().
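The language such an automaton accepts is the full search string followed by zero to maxEdits arbitrary characters. The predicate below mirrors that semantics in plain Java as a sketch (in Lucene itself this would be built from `Automata.makeString`/`Automata.makeAnyChar` combined with `Operations.concatenate`, as described above; this is not Lucene code):

```java
// Plain-Java model of the accepted language: the search string as a fixed
// prefix plus 0..maxEdits appended characters, counted by code point so a
// multi-byte (surrogate-pair) suffix counts as one character.
class PrefixPlusAnyChars {
    static boolean accepts(String candidate, String searchString, int maxEdits) {
        if (!candidate.startsWith(searchString)) {
            return false;
        }
        int extra = candidate.codePointCount(searchString.length(), candidate.length());
        return extra <= maxEdits;
    }
}
```

Under this semantics `bbab` is accepted for search string `bba` with maxEdits = 1, which is exactly the reported false negative.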

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.






[GitHub] [lucene-solr] markharwood opened a new pull request #1496: Bugfix for FuzzyQuery false negative when prefix length == search term length

2020-05-07 Thread GitBox


markharwood opened a new pull request #1496:
URL: https://github.com/apache/lucene-solr/pull/1496


   Fix for Jira issue LUCENE-9365, where a search for `abc` doesn't match doc `abcd` if 
prefix length = 3 and edit distance = 1.
   The fix is to rewrite the FuzzyQuery as a specialised form of Automaton 
query when prefix length == search string length.






[jira] [Commented] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-07 Thread Mark Harwood (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101861#comment-17101861
 ] 

Mark Harwood commented on LUCENE-9365:
--

Thanks for the suggestion. I have a new PR based on that here 
https://github.com/apache/lucene-solr/pull/1496

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.






[GitHub] [lucene-solr] madrob commented on a change in pull request #1496: Bugfix for FuzzyQuery false negative when prefix length == search term length

2020-05-07 Thread GitBox


madrob commented on a change in pull request #1496:
URL: https://github.com/apache/lucene-solr/pull/1496#discussion_r421656988



##
File path: lucene/core/src/java/org/apache/lucene/search/FuzzyQuery.java
##
@@ -99,7 +103,24 @@ public FuzzyQuery(Term term, int maxEdits, int prefixLength, int maxExpansions,
 this.prefixLength = prefixLength;
 this.transpositions = transpositions;
 this.maxExpansions = maxExpansions;
-setRewriteMethod(new MultiTermQuery.TopTermsBlendedFreqScoringRewrite(maxExpansions));
+if (term.text().length() == prefixLength) {
+  setRewriteAsPrefixedQuery();
+} else {
+  setRewriteMethod(new MultiTermQuery.TopTermsBlendedFreqScoringRewrite(maxExpansions));
+}
+  }
+
+  private void setRewriteAsPrefixedQuery() {

Review comment:
   Can we refactor this into a static inner class?

##
File path: lucene/core/src/test/org/apache/lucene/search/TestFuzzyQuery.java
##
@@ -225,6 +225,39 @@ public void testFuzziness() throws Exception {
 directory.close();
   }
   
+  public void testPrefixLengthEqualStringLength() throws Exception {
+Directory directory = newDirectory();
+RandomIndexWriter writer = new RandomIndexWriter(random(), directory);
+addDoc("b*a", writer);

Review comment:
   Can you add a test case with multi-byte strings as well?

##
File path: lucene/core/src/java/org/apache/lucene/search/FuzzyQuery.java
##
@@ -99,7 +103,24 @@ public FuzzyQuery(Term term, int maxEdits, int prefixLength, int maxExpansions,
 this.prefixLength = prefixLength;
 this.transpositions = transpositions;
 this.maxExpansions = maxExpansions;
-setRewriteMethod(new MultiTermQuery.TopTermsBlendedFreqScoringRewrite(maxExpansions));
+if (term.text().length() == prefixLength) {

Review comment:
   I think the problem is that we're checking this in `getTermsEnum` and 
potentially returning a `SingleTermsEnum`. Checking it here and modifying 
rewriteMethod feels inconsistent.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [lucene-solr] madrob edited a comment on pull request #1496: Bugfix for FuzzyQuery false negative when prefix length == search term length

2020-05-07 Thread GitBox


madrob edited a comment on pull request #1496:
URL: https://github.com/apache/lucene-solr/pull/1496#issuecomment-625381389


   Thanks for catching this, this feels significant enough of a bug fix to 
benefit from a JIRA issue, could you file one and link to this PR?






[GitHub] [lucene-solr] madrob commented on pull request #1496: Bugfix for FuzzyQuery false negative when prefix length == search term length

2020-05-07 Thread GitBox


madrob commented on pull request #1496:
URL: https://github.com/apache/lucene-solr/pull/1496#issuecomment-625381389


   Thanks for catching this, this feels significant enough of a bug fix to 
benefit from a JIRA issue, could you file one at issues.apache.org/jira and 
link to this PR?






[jira] [Created] (SOLR-14464) Loading data in solr from Neo4j Database

2020-05-07 Thread Narayana (Jira)
Narayana created SOLR-14464:
---

 Summary: Loading data in solr from Neo4j Database
 Key: SOLR-14464
 URL: https://issues.apache.org/jira/browse/SOLR-14464
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: query
Affects Versions: 7.5
Reporter: Narayana
 Attachments: Solr Log Error.docx, neo4j-data-config.xml, solrconfig.xml

Hi Team, I have set up my Neo4j data by importing it from a .csv file. Now I am 
trying to move the Neo4j data into my Solr database, but when I configure Solr 
to connect to Neo4j with the necessary credentials, it does not work.

Necessary parameters provided in my local Solr setup:

name=employee
neo4j-hostname=bolt://localhost:7687
neo4j-username=neo4j
neo4j-password=**
solr.data.dir=C:\\Users\\.\\MySolr\\solr-7.5.0\\solr-7.5.0\\server\\solr\\data

 

 

Attached is the query string present in neo4j-data-config.xml.

 

After executing the import query, I get the error shown in the attached "Solr 
Log Error" document.

 

My finding: a Unicode character is being added ({{​}}), the zero-width space 
U+200B, which seems to be introduced when the query string is forwarded.

 

I even tried adding the following at the beginning of the XML: "" and ""

Neither worked.

 

Please let me know if any more details are still required.
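One way to check for and remove the zero-width space before the query string is forwarded; `stripInvisible` and `containsZeroWidth` are invented helper names, shown only as a sketch:

```java
public class ZeroWidthStrip {
  // U+200B is the zero-width space found embedded in the forwarded query
  // string; U+FEFF (the byte-order mark) is a common companion culprit.
  static String stripInvisible(String s) {
    return s.replace("\u200B", "").replace("\uFEFF", "");
  }

  static boolean containsZeroWidth(String s) {
    return s.indexOf('\u200B') >= 0;
  }

  public static void main(String[] args) {
    String query = "MATCH\u200B (n) RETURN n"; // hypothetical query string
    System.out.println(containsZeroWidth(query)); // true
    System.out.println(stripInvisible(query));
  }
}
```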



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-07 Thread Michael McCandless (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101903#comment-17101903
 ] 

Michael McCandless commented on LUCENE-9365:


Maybe we should disallow {{prefix == term.text().length()}} for {{FuzzyQuery}}? 
 It is sort of strange to use {{FuzzyQuery}} in this way :)

Does {{FuzzyQuery}} even catch mis-use, e.g. {{prefix > term.text().length()}}?
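A sketch of the kind of argument check being asked about; this is hypothetical validation for illustration, not what FuzzyQuery currently does:

```java
public class PrefixArgCheck {
  // Hypothetical constructor-time validation: reject a prefix length longer
  // than the term itself. Equality is the edge case under discussion above.
  static void checkPrefix(String termText, int prefixLength) {
    if (prefixLength < 0) {
      throw new IllegalArgumentException("prefixLength cannot be negative");
    }
    if (prefixLength > termText.length()) {
      throw new IllegalArgumentException(
          "prefixLength (" + prefixLength + ") exceeds term length ("
              + termText.length() + ")");
    }
  }

  static boolean isValid(String termText, int prefixLength) {
    try {
      checkPrefix(termText, prefixLength);
      return true;
    } catch (IllegalArgumentException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(isValid("bba", 3)); // true: equal lengths allowed here
    System.out.println(isValid("bba", 4)); // false: clear mis-use
  }
}
```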

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.






[GitHub] [lucene-solr] markharwood commented on pull request #1496: Bugfix for FuzzyQuery false negative when prefix length == search term length

2020-05-07 Thread GitBox


markharwood commented on pull request #1496:
URL: https://github.com/apache/lucene-solr/pull/1496#issuecomment-625442123


   >Thanks for catching this, this feels significant enough of a bug fix to 
benefit from a JIRA issue, could you file one and link to this PR?
   
   Thanks for reviewing. The Jira issue is 
[here](https://issues.apache.org/jira/browse/LUCENE-9365)
   






[GitHub] [lucene-solr] dsmiley opened a new pull request #1497: SOLR-8394: /admin/luke didn't compute indexHeapUsageBytes

2020-05-07 Thread GitBox


dsmiley opened a new pull request #1497:
URL: https://github.com/apache/lucene-solr/pull/1497


   Needed to call FilterLeafReader.unwrap.
   
   CC @sigram 






[jira] [Commented] (SOLR-8394) Luke handler doesn't support FilterLeafReader

2020-05-07 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17101984#comment-17101984
 ] 

David Smiley commented on SOLR-8394:


I reworked it and submitted a PR.

[~ab], you may be interested. I wonder if this computation ought to also go 
onto {{SegmentsInfoRequestHandler}}?  These two request handlers overlap in 
purpose; they ought to each differentiate themselves in javadoc and cross-link 
to each other.

> Luke handler doesn't support FilterLeafReader
> -
>
> Key: SOLR-8394
> URL: https://issues.apache.org/jira/browse/SOLR-8394
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-8394.patch, SOLR-8394.patch, SOLR-8394.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When fetching index information, luke handler only looks at ramBytesUsed for 
> SegmentReader leaves. If these readers are wrapped in FilterLeafReader, no 
> RAM usage is returned.






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-07 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102019#comment-17102019
 ] 

David Smiley commented on SOLR-11934:
-

{quote}The one in SolrCore comes after autowarming and can trivially include 
the autowarm time with each message. WDYT?
{quote}
Agreed; keep the last one (SolrCore "Registered new searcher").  I wish it had 
timing info and ditched the too-detailed low-level Reader metadata; alas.  I 
could file an issue for that or, preferably, just share a little bit of code 
with you.

RE LogUpdateProcessorFactory and WARN

See getInstance() and notice that it only looks at the log level to skip the 
URP altogether.  It ought to check slowUpdateThresholdMillis >= 0 _as well_ but 
does not.  I suppose another Jira might be proper but it's not a big deal to do 
here if you want because it's minor; avoids the weight of our process.

RE the request log:  Nevermind; the current logs there are fine.  I just did a 
bit of experimenting to see.

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something need 
> attention for instance. If I'm troubleshooting should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked. Is this 
> independent on the logging implementation used? The SLF4J and log4j seem a 
> bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks, even starting 
> now it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!






[jira] [Created] (SOLR-14465) Reproducing seed in FuzzySearchTest

2020-05-07 Thread Erick Erickson (Jira)
Erick Erickson created SOLR-14465:
-

 Summary: Reproducing seed in FuzzySearchTest
 Key: SOLR-14465
 URL: https://issues.apache.org/jira/browse/SOLR-14465
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson


reproduce with: ant test -Dtestcase=FuzzySearchTest 
-Dtests.method=testTooComplex -Dtests.seed=B6A1F20B2D413B4 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=yue-Hant-HK 
-Dtests.timezone=America/Dawson -Dtests.asserts=true -Dtests.file.encoding=UTF-8

 

[~romseygeek] [~mdrob]  My guess is that changes in LUCENE-9350/LUCENE-9068 
changed the exception returned, but haven't looked very closely.


 [junit4] FAILURE 3.01s | FuzzySearchTest.testTooComplex <<<
 [junit4] > Throwable #1: junit.framework.AssertionFailedError: Unexpected 
exception type, expected RemoteSolrException but got 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:51202/solr/c1]
 [junit4] > at 
__randomizedtesting.SeedInfo.seed([B6A1F20B2D413B4:DDC85504AD0720FE]:0)
 [junit4] > at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2751)
 [junit4] > at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2739)
 [junit4] > at 
org.apache.solr.search.FuzzySearchTest.testTooComplex(FuzzySearchTest.java:64)
 [junit4] > at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 [junit4] > at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 [junit4] > at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 [junit4] > at java.base/java.lang.reflect.Method.invoke(Method.java:566)
 [junit4] > at java.base/java.lang.Thread.run(Thread.java:834)
 [junit4] > Caused by: org.apache.solr.client.solrj.SolrServerException: No 
live SolrServers available to handle this 
request:[http://127.0.0.1:51202/solr/c1]
 [junit4] > at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:345)
 [junit4] > at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1147)
 [junit4] > at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:910)
 [junit4] > at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:842)
 [junit4] > at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
 [junit4] > at 
org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1004)
 [junit4] > at 
org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1019)
 [junit4] > at 
org.apache.solr.search.FuzzySearchTest.lambda$testTooComplex$0(FuzzySearchTest.java:64)
 [junit4] > at 
org.apache.lucene.util.LuceneTestCase._expectThrows(LuceneTestCase.java:2869)
 [junit4] > at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2744)
 [junit4] > ... 41 more
 [junit4] > Caused by: 
org.apache.solr.client.solrj.impl.BaseHttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:51202/solr/c1: Term too complex: 
headquarters(在日米海軍横須賀基地司令部庁舎/旧横須賀鎮守府会議所・横須賀海軍艦船部)
 [junit4] > at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:665)
 [junit4] > at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:265)
 [junit4] > at org.apache.solr.client.solrj.impl.HttpSolrClient.request(






[jira] [Updated] (SOLR-14465) Reproducing seed in FuzzySearchTest

2020-05-07 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-14465:
--
Affects Version/s: master (9.0)

> Reproducing seed in FuzzySearchTest
> ---
>
> Key: SOLR-14465
> URL: https://issues.apache.org/jira/browse/SOLR-14465
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
>Reporter: Erick Erickson
>Priority: Major
>






[jira] [Updated] (SOLR-14465) Reproducing seed in FuzzySearchTest

2020-05-07 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-14465:
--
Component/s: Tests

> Reproducing seed in FuzzySearchTest
> ---
>
> Key: SOLR-14465
> URL: https://issues.apache.org/jira/browse/SOLR-14465
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: master (9.0)
>Reporter: Erick Erickson
>Priority: Major
>






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-07 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102025#comment-17102025
 ] 

Erick Erickson commented on SOLR-11934:
---

{quote} I wish it had timing info and ditched the too-detailed low
{quote}
I can make that happen. It's a bit ugly, but looks something like this:
{code}
if (log.isDebugEnabled()) {
  // This form dumps a bunch of messages like
  //  Uninverting(_6(9.0.0):C177290:[diagnostics={lucene.version=9.0.0, 
java.vm.version=11.0.5+10,
  //  java.version=11.0.5, timestamp=1588849987808, os=Mac OS X,
  //  java.vendor=AdoptOpenJDK, os.version=10.15.4, 
java.runtime.version=11.0.5+10,
  //  os.arch=x86_64, 
source=flush}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]
  //  :id=ce2v4okod0tsdz9wxrl628t9s)
  // which I've never found particularly useful.
  // I'll take this comment out before pushing, this comment is here 
for discussion.
  // nocommit
  log.debug("{} Registered new searcher {} autowarm time: {} ms", 
logid, newSearcher, newSearcher.getWarmupTime());
} else if (log.isInfoEnabled()) {
  log.info("{} Registered new searcher autowarm time: {} ms", logid, 
newSearcher.getWarmupTime());
}
{code}
Or just get rid of all the low-level stuff altogether? Personally, I've never 
really found the low-level stuff useful and would happily get rid of it 
altogether. Once upon a time the Uninverting message was a red flag indicating 
that a field should have docvalues, but that disappeared.

bq. See getInstance() and notice that...

Ah, I was looking in a totally different place, I'll add that in.

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Commented] (SOLR-14465) Reproducing seed in FuzzySearchTest

2020-05-07 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102027#comment-17102027
 ] 

Mike Drob commented on SOLR-14465:
--

Yea, I think I have an idea of what broke here, looking.

> Reproducing seed in FuzzySearchTest
> ---
>
> Key: SOLR-14465
> URL: https://issues.apache.org/jira/browse/SOLR-14465
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: master (9.0)
>Reporter: Erick Erickson
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14463) solr admin ui: zkstatus: For input string: "null" with zk 3.6.x

2020-05-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102047#comment-17102047
 ] 

Jan Høydahl commented on SOLR-14463:


Judging from the code, the response from the 'mntr' command had 
'zk_server_state=leader' but 'zk_followers=null'. The code assumes that the 
'zk_followers' value is always present on the leader. Could you by any chance 
report the raw output you get from the mntr command:
{code}
echo mntr | nc localhost 2181
{code}
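As an illustration only (a standalone sketch, not the actual ZookeeperStatusHandler code; class and method names here are made up), mntr output can be parsed defensively so that a missing or literal "null" value for zk_followers is treated as absent instead of being handed straight to Integer.parseInt:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.OptionalInt;

public class MntrParseSketch {
    // Parse the "key<TAB>value" lines emitted by ZooKeeper's mntr command.
    static Map<String, String> parseMntr(String raw) {
        Map<String, String> stats = new HashMap<>();
        for (String line : raw.split("\n")) {
            String[] kv = line.split("\\s+", 2);
            if (kv.length == 2) {
                stats.put(kv[0].trim(), kv[1].trim());
            }
        }
        return stats;
    }

    // Defensive numeric lookup: a ZooKeeper 3.6 leader may report "null" for
    // zk_followers, which would make a bare Integer.parseInt throw
    // NumberFormatException, as in the stack trace above.
    static OptionalInt intStat(Map<String, String> stats, String key) {
        String v = stats.get(key);
        if (v == null || v.isEmpty() || "null".equals(v)) {
            return OptionalInt.empty();
        }
        try {
            return OptionalInt.of(Integer.parseInt(v));
        } catch (NumberFormatException e) {
            return OptionalInt.empty();
        }
    }

    public static void main(String[] args) {
        String raw = "zk_server_state\tleader\nzk_followers\tnull\nzk_znode_count\t5";
        Map<String, String> stats = parseMntr(raw);
        System.out.println(intStat(stats, "zk_followers").isPresent());   // false
        System.out.println(intStat(stats, "zk_znode_count").getAsInt());  // 5
    }
}
```

A caller can then render "n/a" (or skip the field) when the OptionalInt is empty, rather than failing the whole status page.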

> solr admin ui: zkstatus: For input string: "null" with zk 3.6.x
> ---
>
> Key: SOLR-14463
> URL: https://issues.apache.org/jira/browse/SOLR-14463
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.5.1
>Reporter: Bernd Wahlen
>Priority: Major
>
> When I select Cloud - ZK Status in the Solr 8.5.1 web interface, I get:
> For input string: "null"
> This happened after upgrading the leader to ZooKeeper 3.6.1 (upgrading the 
> followers first was not a problem). As far as I can tell the configuration is 
> working; only the status page isn't.
> I remember a similar problem with Solr/ZooKeeper version incompatibilities 
> before. According to the ZooKeeper documentation, 3.6.1 should be fully 
> compatible with 3.5 clients.
> Annoyingly there is no official ZooKeeper 3.6.1 Docker container available, 
> but I think this is easy to reproduce.
> and from solr.log file:
> {code:java}
> 2020-05-07 15:58:33.194 ERROR (qtp1940055334-231) [   ] 
> o.a.s.h.RequestHandlerBase java.lang.NumberFormatException: For input string: 
> "null"
> at 
> java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.base/java.lang.Integer.parseInt(Integer.java:652)
> at java.base/java.lang.Integer.parseInt(Integer.java:770)
> at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.getZkStatus(ZookeeperStatusHandler.java:116)
> at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:78)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:842)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:808)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:559)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:420)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:352)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1607)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1577)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
> at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at org.eclipse.jetty.server.Server.handle(Server.java:500)
> at 
> org.eclipse.jetty.server.HttpChannel

[jira] [Assigned] (SOLR-14463) solr admin ui: zkstatus: For input string: "null" with zk 3.6.x

2020-05-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-14463:
--

Assignee: Jan Høydahl

> solr admin ui: zkstatus: For input string: "null" with zk 3.6.x
> ---
>
> Key: SOLR-14463
> URL: https://issues.apache.org/jira/browse/SOLR-14463
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.5.1
>Reporter: Bernd Wahlen
>Assignee: Jan Høydahl
>Priority: Major
>
> When I select Cloud - ZK Status in the Solr 8.5.1 web interface, I get:
> For input string: "null"
> This happened after upgrading the leader to ZooKeeper 3.6.1 (upgrading the 
> followers first was not a problem). As far as I can tell the configuration is 
> working; only the status page isn't.
> I remember a similar problem with Solr/ZooKeeper version incompatibilities 
> before. According to the ZooKeeper documentation, 3.6.1 should be fully 
> compatible with 3.5 clients.
> Annoyingly there is no official ZooKeeper 3.6.1 Docker container available, 
> but I think this is easy to reproduce.
> and from solr.log file:
> {code:java}
> 2020-05-07 15:58:33.194 ERROR (qtp1940055334-231) [   ] 
> o.a.s.h.RequestHandlerBase java.lang.NumberFormatException: For input string: 
> "null"
> at 
> java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.base/java.lang.Integer.parseInt(Integer.java:652)
> at java.base/java.lang.Integer.parseInt(Integer.java:770)
> at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.getZkStatus(ZookeeperStatusHandler.java:116)
> at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:78)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:842)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:808)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:559)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:420)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:352)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1607)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1577)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
> at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at org.eclipse.jetty.server.Server.handle(Server.java:500)
> at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
> at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270)
> at

[jira] [Updated] (LUCENE-9328) SortingGroupHead to reuse DocValues

2020-05-07 Thread Mikhail Khludnev (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-9328:
-
Attachment: LUCENE-9328.patch
Status: Patch Available  (was: Patch Available)

> SortingGroupHead to reuse DocValues
> ---
>
> Key: LUCENE-9328
> URL: https://issues.apache.org/jira/browse/LUCENE-9328
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/grouping
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Minor
> Attachments: LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, 
> LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, 
> LUCENE-9328.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> That's why 
> https://issues.apache.org/jira/browse/LUCENE-7701?focusedCommentId=17084365&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17084365






[jira] [Assigned] (SOLR-14465) Reproducing seed in FuzzySearchTest

2020-05-07 Thread Mike Drob (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob reassigned SOLR-14465:


Assignee: Mike Drob

> Reproducing seed in FuzzySearchTest
> ---
>
> Key: SOLR-14465
> URL: https://issues.apache.org/jira/browse/SOLR-14465
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: master (9.0)
>Reporter: Erick Erickson
>Assignee: Mike Drob
>Priority: Major
>
> reproduce with: ant test -Dtestcase=FuzzySearchTest 
> -Dtests.method=testTooComplex -Dtests.seed=B6A1F20B2D413B4 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=yue-Hant-HK -Dtests.timezone=America/Dawson 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>  
> [~romseygeek] [~mdrob]  My guess is that changes in LUCENE-9350/LUCENE-9068 
> changed the exception returned, but haven't looked very closely.
>  [junit4] FAILURE 3.01s | FuzzySearchTest.testTooComplex <<<
>  [junit4] > Throwable #1: junit.framework.AssertionFailedError: Unexpected 
> exception type, expected RemoteSolrException but got 
> org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
> available to handle this request:[http://127.0.0.1:51202/solr/c1]
>  [junit4] > at 
> __randomizedtesting.SeedInfo.seed([B6A1F20B2D413B4:DDC85504AD0720FE]:0)
>  [junit4] > at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2751)
>  [junit4] > at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2739)
>  [junit4] > at 
> org.apache.solr.search.FuzzySearchTest.testTooComplex(FuzzySearchTest.java:64)
>  [junit4] > at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  [junit4] > at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  [junit4] > at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  [junit4] > at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>  [junit4] > at java.base/java.lang.Thread.run(Thread.java:834)
>  [junit4] > Caused by: org.apache.solr.client.solrj.SolrServerException: No 
> live SolrServers available to handle this 
> request:[http://127.0.0.1:51202/solr/c1]
>  [junit4] > at 
> org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:345)
>  [junit4] > at 
> org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1147)
>  [junit4] > at 
> org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:910)
>  [junit4] > at 
> org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:842)
>  [junit4] > at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
>  [junit4] > at 
> org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1004)
>  [junit4] > at 
> org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1019)
>  [junit4] > at 
> org.apache.solr.search.FuzzySearchTest.lambda$testTooComplex$0(FuzzySearchTest.java:64)
>  [junit4] > at 
> org.apache.lucene.util.LuceneTestCase._expectThrows(LuceneTestCase.java:2869)
>  [junit4] > at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2744)
>  [junit4] > ... 41 more
>  [junit4] > Caused by: 
> org.apache.solr.client.solrj.impl.BaseHttpSolrClient$RemoteSolrException: 
> Error from server at http://127.0.0.1:51202/solr/c1: Term too complex: 
> headquarters(在日米海軍横須賀基地司令部庁舎/旧横須賀鎮守府会議所・横須賀海軍艦船部)
>  [junit4] > at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:665)
>  [junit4] > at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:265)
>  [junit4] > at org.apache.solr.client.solrj.impl.HttpSolrClient.request(






[jira] [Commented] (SOLR-14465) Reproducing seed in FuzzySearchTest

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102084#comment-17102084
 ] 

ASF subversion and git services commented on SOLR-14465:


Commit 03a60231e8e8570f47d3a64de5c6f03c26e7379a in lucene-solr's branch 
refs/heads/master from Mike Drob
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=03a6023 ]

SOLR-14465: Solr query handling code catches FuzzyTermsException

This reverts commit 7ea7ed72aca556f957a5de55911c852124db8715.


> Reproducing seed in FuzzySearchTest
> ---
>
> Key: SOLR-14465
> URL: https://issues.apache.org/jira/browse/SOLR-14465
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: master (9.0)
>Reporter: Erick Erickson
>Assignee: Mike Drob
>Priority: Major
>






[jira] [Created] (SOLR-14466) Upgrade log4j2 to latest release (2.13.2)

2020-05-07 Thread Erick Erickson (Jira)
Erick Erickson created SOLR-14466:
-

 Summary: Upgrade log4j2 to latest release (2.13.2)
 Key: SOLR-14466
 URL: https://issues.apache.org/jira/browse/SOLR-14466
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: logging
Reporter: Erick Erickson
Assignee: Erick Erickson


Before diving back into the occasional failures caused, apparently, by async 
logging not flushing all the buffered log messages as expected, I think it's 
worth upgrading log4j2.
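For context on the flushing behavior mentioned above, a generic log4j2 illustration (not Solr's actual log4j2.xml; names and values here are examples): async loggers buffer events, so messages near shutdown are only reliably written if the logging system is shut down cleanly and appenders flush:

```xml
<!-- Illustrative only; attribute values are examples, not Solr's configuration. -->
<Configuration shutdownHook="enable" shutdownTimeout="5000">
  <Appenders>
    <!-- immediateFlush trades some throughput for not losing tail-of-log messages -->
    <Console name="stdout" immediateFlush="true">
      <PatternLayout pattern="%d %p (%t) [%c] %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <!-- AsyncRoot buffers events; a clean shutdown drains the buffer -->
    <AsyncRoot level="info">
      <AppenderRef ref="stdout"/>
    </AsyncRoot>
  </Loggers>
</Configuration>
```

If buffered messages are dropped despite a configuration like this, a newer log4j2 release is a reasonable first thing to try before debugging further.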






[jira] [Commented] (SOLR-14465) Reproducing seed in FuzzySearchTest

2020-05-07 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102088#comment-17102088
 ] 

Mike Drob commented on SOLR-14465:
--

[~erickerickson] - have you seen any similar failures on branch_8x? It looks 
like that test doesn't exist in the older version.

> Reproducing seed in FuzzySearchTest
> ---
>
> Key: SOLR-14465
> URL: https://issues.apache.org/jira/browse/SOLR-14465
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: master (9.0)
>Reporter: Erick Erickson
>Assignee: Mike Drob
>Priority: Major
>






[jira] [Comment Edited] (SOLR-14465) Reproducing seed in FuzzySearchTest

2020-05-07 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102097#comment-17102097
 ] 

Erick Erickson edited comment on SOLR-14465 at 5/7/20, 11:23 PM:
-

[~mdrob] I just pulled the first one I saw from the dev list e-mails I get, 
didn't check whether it was master or branch. And I tested on master so likely 
it's not a problem on 8x.

BTW, thanks for hopping on it so quickly!


was (Author: erickerickson):
[~mdrob] I just pulled the first one I saw from the dev list e-mails I get, 
didn't check whether it was master or branch. And I tested on master so likely 
it's not a problem on 8x.

> Reproducing seed in FuzzySearchTest
> ---
>
> Key: SOLR-14465
> URL: https://issues.apache.org/jira/browse/SOLR-14465
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: master (9.0)
>Reporter: Erick Erickson
>Assignee: Mike Drob
>Priority: Major
>






[jira] [Commented] (SOLR-14465) Reproducing seed in FuzzySearchTest

2020-05-07 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102097#comment-17102097
 ] 

Erick Erickson commented on SOLR-14465:
---

[~mdrob] I just pulled the first one I saw from the dev list e-mails I get, 
didn't check whether it was master or branch. And I tested on master so likely 
it's not a problem on 8x.

> Reproducing seed in FuzzySearchTest
> ---
>
> Key: SOLR-14465
> URL: https://issues.apache.org/jira/browse/SOLR-14465
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: master (9.0)
>Reporter: Erick Erickson
>Assignee: Mike Drob
>Priority: Major
>
>  [junit4] > at 
> org.apache.lucene.util.LuceneTestCase._expectThrows(LuceneTestCase.java:2869)
>  [junit4] > at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2744)
>  [junit4] > ... 41 more
>  [junit4] > Caused by: 
> org.apache.solr.client.solrj.impl.BaseHttpSolrClient$RemoteSolrException: 
> Error from server at http://127.0.0.1:51202/solr/c1: Term too complex: 
> headquarters(在日米海軍横須賀基地司令部庁舎/旧横須賀鎮守府会議所・横須賀海軍艦船部)
>  [junit4] > at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:665)
>  [junit4] > at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:265)
>  [junit4] > at org.apache.solr.client.solrj.impl.HttpSolrClient.request(






[jira] [Created] (SOLR-14467) inconsistent server errors combining relatedness() with allBuckets:true

2020-05-07 Thread Chris M. Hostetter (Jira)
Chris M. Hostetter created SOLR-14467:
-

 Summary: inconsistent server errors combining relatedness() with 
allBuckets:true
 Key: SOLR-14467
 URL: https://issues.apache.org/jira/browse/SOLR-14467
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Reporter: Chris M. Hostetter


While working on randomized testing for SOLR-13132 I discovered a variety of 
different ways that JSON Faceting's "allBuckets" option can fail when combined 
with the "relatedness()" function.

I haven't found a trivial way to manually reproduce this, but I have been able to 
trigger the failures with a trivial patch to {{TestCloudJSONFacetSKG}}, which I 
will attach.

Based on the nature of the failures, it looks like it may have something to do 
with multiple segments of different sizes and/or resizing the SlotAccs?

The relatedness() function doesn't have many (any?) existing tests in place that 
leverage "allBuckets", so this is probably a bug that has always existed -- it's 
possible it may be excessively cumbersome to fix, and we might need/want to just 
document that incompatibility and add some code to try to detect if the user 
combines these options and, if so, fail with a 400 error?






[jira] [Updated] (SOLR-14467) inconsistent server errors combining relatedness() with allBuckets:true

2020-05-07 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter updated SOLR-14467:
--
Attachment: SOLR-14467_test.patch
Status: Open  (was: Open)


Examples of the various (root cause) failures that can occur with the attached 
test patch...

{noformat}
   [junit4]   2> 5351 INFO  (qtp10892224-67) [n:127.0.0.1:40385_solr 
c:org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection s:shard1 
r:core_node3 
x:org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection_shard1_replica_n1
 ] o.a.s.c.S.Request 
[org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection_shard1_replica_n1]
  webapp=/solr path=/select 
params={df=text&distrib=false&_facet_={}&fl=id&fl=score&shards.purpose=1064964&start=0&fsv=true&back=*:*&shard.url=http://127.0.0.1:40385/solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection_shard1_replica_n1/&rows=0&fore=(field_4_sds:17+OR+field_7_sds:49+OR+field_10_sds:23+OR+field_2_sdsS:7+OR+field_5_sdsS:31+OR+field_6_ss:20+OR+field_4_sds:43+OR+field_14_sdsS:61+OR+field_7_sds:34)&version=2&q=(field_3_ss:44+OR+field_14_sdsS:23+OR+field_13_sds:28+OR+field_6_ss:44)&json.facet={+processEmpty:+true,+facet_1+:+{+type:terms,+field:field_14_idsS,+limit:-1,+overrequest:0,+sort:+'skg+desc',+allBuckets:true,+refine:+true,+domain:+{+query:+'*:*'+},+facet:{+processEmpty:+true,+facet_2+:+{+type:terms,+field:field_8_idsS,+limit:67,+overrequest:0,+sort:+'index+asc',+allBuckets:true,+refine:+true,+domain:+{+query:+'*:*'+},+facet:{+processEmpty:+true,+skg+:+'relatedness($fore,$back)'}}+,skg+:+'relatedness($fore,$back)'}}+,facet_3+:+{+type:terms,+field:field_2_idsS,+limit:-1,+overrequest:0,+sort:+'skg+asc',+allBuckets:true,+refine:+true,+domain:+{+query:+'*:*'+},+facet:{+processEmpty:+true,+skg+:+'relatedness($fore,$back)'}}+}&omitHeader=false&NOW=1588893509649&isShard=true&wt=javabin}
 hits=12 status=500 QTime=56
   [junit4]   2> 5352 ERROR (qtp1523452543-62) [n:127.0.0.1:33775_solr 
c:org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection s:shard2 
r:core_node4 
x:org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection_shard2_replica_n2
 ] o.a.s.s.HttpSolrCall null:java.lang.ArrayIndexOutOfBoundsException: Index 
128 out of bounds for length 128
   [junit4]   2>at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.lambda$new$3(FacetFieldProcessorByHashDV.java:439)
   [junit4]   2>at 
org.apache.solr.search.facet.RelatednessAgg$SKGSlotAcc.processSlot(RelatednessAgg.java:189)
   [junit4]   2>at 
org.apache.solr.search.facet.RelatednessAgg$SKGSlotAcc.collect(RelatednessAgg.java:217)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetFieldProcessor$SpecialSlotAcc.collect(FacetFieldProcessor.java:772)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:289)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:430)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV$5.collect(FacetFieldProcessorByHashDV.java:389)
   [junit4]   2>at 
org.apache.solr.search.DocSetUtil.collectSortedDocSet(DocSetUtil.java:278)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectDocs(FacetFieldProcessorByHashDV.java:374)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:248)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:215)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:416)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:475)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:432)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:416)
   [junit4]   2>at 
org.apache.solr.search.facet.FacetModule.process(FacetModule.java:147)



   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestCloudJSONFacetSKG -Dtests.method=testRandom 
-Dtests.seed=58E72A6ADD982BE0 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=it-VA -Dtests.timezone=Etc/GMT-13 -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   0.11s | TestCloudJSONFacetSKG.testRandom <<<
   [junit4]> Throwable #1: java.lang.RuntimeException: init query failed: 
{main(q=(field_6_ss:29+OR+field_8_sdsS:9+OR+field_0_ss:36+OR+field_7_sds:15+OR+field_6_ss:54+OR+field_2_sdsS:50+OR+field_4_sds:47+OR+field_8_sdsS:21)&json.facet={+processEmpty:+true,+facet_1+

[jira] [Commented] (SOLR-13132) Improve JSON "terms" facet performance when sorted by relatedness

2020-05-07 Thread Chris M. Hostetter (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102143#comment-17102143
 ] 

Chris M. Hostetter commented on SOLR-13132:
---

I pushed a bunch of additional test improvements, but ran into SOLR-14467 along 
the way ... I've forgotten so much of how/why the code works the way it does 
(and I'm not sure that I ever really thought about {{allBuckets}} or looked at 
it that closely) that I'm not really sure what's going on there, so if you 
have any insights on ways to fix that I'd appreciate it ... but if we can't 
find a "quick fix" then we can punt on it -- since it's an existing bug (that's 
apparently existed from day 1 of {{relatedness()}}) there's no reason for 
fixing that to completely block this issue.

> Improve JSON "terms" facet performance when sorted by relatedness 
> --
>
> Key: SOLR-13132
> URL: https://issues.apache.org/jira/browse/SOLR-13132
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Affects Versions: 7.4, master (9.0)
>Reporter: Michael Gibney
>Priority: Major
> Attachments: SOLR-13132-with-cache-01.patch, 
> SOLR-13132-with-cache.patch, SOLR-13132.patch, SOLR-13132_testSweep.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When sorting buckets by {{relatedness}}, JSON "terms" facet must calculate 
> {{relatedness}} for every term. 
> The current implementation uses a standard uninverted approach (either 
> {{docValues}} or {{UnInvertedField}}) to get facet counts over the domain 
> base docSet, and then uses that initial pass as a pre-filter for a 
> second-pass, inverted approach of fetching docSets for each relevant term 
> (i.e., {{count > minCount}}?) and calculating intersection size of those sets 
> with the domain base docSet.
> Over high-cardinality fields, the overhead of per-term docSet creation and 
> set intersection operations increases request latency to the point where 
> relatedness sort may not be usable in practice (for my use case, even after 
> applying the patch for SOLR-13108, for a field with ~220k unique terms per 
> core, QTime for high-cardinality domain docSets were, e.g.: cardinality 
> 1816684=9000ms, cardinality 5032902=18000ms).
> The attached patch brings the above example QTimes down to a manageable 
> ~300ms and ~250ms respectively. The approach calculates uninverted facet 
> counts over domain base, foreground, and background docSets in parallel in a 
> single pass. This allows us to take advantage of the efficiencies built into 
> the standard uninverted {{FacetFieldProcessorByArray[DV|UIF]}}), and avoids 
> the per-term docSet creation and set intersection overhead.
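
Editorial note: the single-pass idea quoted above can be sketched as follows. 
This is an illustrative toy, not the attached patch; the doc-to-term ordinals 
and the three doc sets are invented data, and real code would iterate docValues 
rather than arrays and sets.

```java
import java.util.*;

public class SweepCountSketch {
    // One sweep over the uninverted term ordinals, counting into the base,
    // foreground and background accumulators at once; no per-term docSet is
    // ever materialized and no set intersections are computed.
    static int[][] sweepCounts(int[] termOfDoc, int numTerms,
                               Set<Integer> base, Set<Integer> fore, Set<Integer> back) {
        int[][] counts = new int[3][numTerms]; // rows: 0=base, 1=fore, 2=back
        for (int doc = 0; doc < termOfDoc.length; doc++) {
            int ord = termOfDoc[doc];
            if (base.contains(doc)) counts[0][ord]++;
            if (fore.contains(doc)) counts[1][ord]++;
            if (back.contains(doc)) counts[2][ord]++;
        }
        return counts;
    }

    public static void main(String[] args) {
        int[] termOfDoc = {0, 1, 0, 2, 1, 0}; // term ordinal per doc (invented)
        int[][] c = sweepCounts(termOfDoc, 3,
            Set.of(0, 1, 2, 3), Set.of(0, 2), Set.of(0, 1, 2, 3, 4, 5));
        System.out.println(Arrays.toString(c[0])); // base counts: prints "[2, 1, 1]"
        System.out.println(Arrays.toString(c[1])); // fore counts: prints "[2, 0, 0]"
    }
}
```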






[jira] [Commented] (LUCENE-9328) SortingGroupHead to reuse DocValues

2020-05-07 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102181#comment-17102181
 ] 

Lucene/Solr QA commented on LUCENE-9328:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 17s{color} 
| {color:red} lucene_grouping generated 1 new + 100 unchanged - 0 fixed = 101 
total (was 100) {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} grouping in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} join in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} queries in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} test-framework in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 30s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.TestGroupingSearch |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-9328 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13002346/LUCENE-9328.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 03a60231e8e |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
| javac | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/270/artifact/out/diff-compile-javac-lucene_grouping.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/270/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/270/testReport/ |
| modules | C: lucene/grouping lucene/join lucene/queries lucene/test-framework 
solr/core U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/270/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> SortingGroupHead to reuse DocValues
> ---
>
> Key: LUCENE-9328
> URL: https://issues.apache.org/jira/browse/LUCENE-9328
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/grouping
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Minor
> Attachments: LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, 
> LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, LUCENE-9328.patch, 
> LUCENE-9328.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> That's why 
> https://issues.apache.org/jira/browse/LUCENE-7701?focusedCommentId=17084365&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17084365






[jira] [Commented] (LUCENE-9363) TestIndexWriterDelete.testDeleteAllNoDeadLock failed on CI

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102259#comment-17102259
 ] 

ASF subversion and git services commented on LUCENE-9363:
-

Commit 9efbbd4142b417e7dadf4bd9794b3b4591a3e7b2 in lucene-solr's branch 
refs/heads/branch_8x from Simon Willnauer
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9efbbd4 ]

LUCENE-9363: Only assert for no merging segments when merges are disabled


> TestIndexWriterDelete.testDeleteAllNoDeadLock failed on CI
> --
>
> Key: LUCENE-9363
> URL: https://issues.apache.org/jira/browse/LUCENE-9363
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Simon Willnauer
>Priority: Major
>
> {noformat}
> [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=7C9973DFB2835976 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=ro-MD -Dtests.timezone=SystemV/AST4ADT -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.25s J2 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([7C9973DFB2835976:4F809CC13240E320]:0)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.abortMerges(IndexWriter.java:2497)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.deleteAll(IndexWriter.java:2424)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.deleteAll(RandomIndexWriter.java:373)
>[junit4]>  at 
> org.apache.lucene.index.TestIndexWriterDelete.testDeleteAllNoDeadLock(TestIndexWriterDelete.java:348)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:832)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/J2/temp/lucene.index.TestIndexWriterDelete_7C9973DFB2835976-001
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene84): 
> {field=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene84)),
>  city=Lucene84, 
> contents=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene84)),
>  id=Lucene84, value=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> content=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, 
> docValues:{dv=DocValuesFormat(name=Asserting)}, maxPointsInLeafNode=2033, 
> maxMBSortInHeap=6.1560152382287825, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@c9cd3c9),
>  locale=ro-MD, timezone=SystemV/AST4ADT
>[junit4]   2> NOTE: Linux 5.3.0-46-generic amd64/Oracle Corporation 15-ea 
> (64-bit)/cpus=16,threads=1,free=389663704,total=518979584
>[junit4]   2> NOTE: All tests run in this JVM: [TestIntroSorter, 
> TestNGramPhraseQuery, TestUpgradeIndexMergePolicy, TestBasicModelIne, 
> TestTragicIndexWriterDeadlock, TestSimpleExplanations, TestDirectory, 
> TestSegmentToThreadMapping, TestNIOFSDirectory, TestAxiomaticF3LOG, 
> TestIndexOptions, TestSpanNotQuery, TestBinaryDocument, 
> TestFixedLengthBytesRefArray, TestForUtil, TestParallelLeafReader, 
> TestDirectoryReader, TestBytesRef, TestLongRange, TestByteSlices, 
> TestRectangle2D, TestBufferedChecksum, TestIndexOrDocValuesQuery, 
> TestFilterDirectoryReader, TestSleepingLockWrapper, TestDocIdSetIterator, 
> TestMultiPhraseQuery, TestSearchWithThreads, TestPointQueries, 
> TestForceMergeForever, TestNewestSegment, TestCrash, TestIntSet, 
> Test4GBStoredFields, TestDirectPacked, TestLiveFieldValues, 
> TestTopDocsCollector, Test2BPositions, TestAtomicUpdate, 
> TestFSTDirectAddressing, TestXYPointQueries, TestLatLonLineShapeQueries, 
> TestStressAdvance, TestBasics, TestLucene50StoredFieldsFormat, 
> TestQueryRescorer, TestConstantScoreScorer, TestAssertions, TestDemo, 
> TestExternalCodecs, TestMergeSchedulerExternal, TestAnalyzerWrapper, 
> TestCachingTokenFilter, TestCharArrayMap, TestCharArraySet, TestCharFilter, 
> TestCharacterUtils, TestDelegatingAnalyzerWrapper, TestGraphTokenFilter, 
> TestGraphTokenizers, TestReusableStringReader, TestWordlistLoader, 
> TestStandardAnalyzer, TestBytesRefAttImpl, TestCharTermAttributeImpl, 
> TestCodecUtil, TestCompetitiveFreqNormAccumulator, TestFastCompressionMode, 
> TestFastD

[jira] [Commented] (LUCENE-9363) TestIndexWriterDelete.testDeleteAllNoDeadLock failed on CI

2020-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102258#comment-17102258
 ] 

ASF subversion and git services commented on LUCENE-9363:
-

Commit 30ba8de40af7d76b910aebf62263c890173dec45 in lucene-solr's branch 
refs/heads/master from Simon Willnauer
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=30ba8de ]

LUCENE-9363: Only assert for no merging segments when merges are disabled


> TestIndexWriterDelete.testDeleteAllNoDeadLock failed on CI
> --
>
> Key: LUCENE-9363
> URL: https://issues.apache.org/jira/browse/LUCENE-9363
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Simon Willnauer
>Priority: Major
>
> {noformat}
> [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=7C9973DFB2835976 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=ro-MD -Dtests.timezone=SystemV/AST4ADT -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.25s J2 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([7C9973DFB2835976:4F809CC13240E320]:0)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.abortMerges(IndexWriter.java:2497)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.deleteAll(IndexWriter.java:2424)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.deleteAll(RandomIndexWriter.java:373)
>[junit4]>  at 
> org.apache.lucene.index.TestIndexWriterDelete.testDeleteAllNoDeadLock(TestIndexWriterDelete.java:348)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:832)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/J2/temp/lucene.index.TestIndexWriterDelete_7C9973DFB2835976-001
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene84): 
> {field=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene84)),
>  city=Lucene84, 
> contents=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene84)),
>  id=Lucene84, value=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> content=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, 
> docValues:{dv=DocValuesFormat(name=Asserting)}, maxPointsInLeafNode=2033, 
> maxMBSortInHeap=6.1560152382287825, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@c9cd3c9),
>  locale=ro-MD, timezone=SystemV/AST4ADT
>[junit4]   2> NOTE: Linux 5.3.0-46-generic amd64/Oracle Corporation 15-ea 
> (64-bit)/cpus=16,threads=1,free=389663704,total=518979584
>[junit4]   2> NOTE: All tests run in this JVM: [TestIntroSorter, 
> TestNGramPhraseQuery, TestUpgradeIndexMergePolicy, TestBasicModelIne, 
> TestTragicIndexWriterDeadlock, TestSimpleExplanations, TestDirectory, 
> TestSegmentToThreadMapping, TestNIOFSDirectory, TestAxiomaticF3LOG, 
> TestIndexOptions, TestSpanNotQuery, TestBinaryDocument, 
> TestFixedLengthBytesRefArray, TestForUtil, TestParallelLeafReader, 
> TestDirectoryReader, TestBytesRef, TestLongRange, TestByteSlices, 
> TestRectangle2D, TestBufferedChecksum, TestIndexOrDocValuesQuery, 
> TestFilterDirectoryReader, TestSleepingLockWrapper, TestDocIdSetIterator, 
> TestMultiPhraseQuery, TestSearchWithThreads, TestPointQueries, 
> TestForceMergeForever, TestNewestSegment, TestCrash, TestIntSet, 
> Test4GBStoredFields, TestDirectPacked, TestLiveFieldValues, 
> TestTopDocsCollector, Test2BPositions, TestAtomicUpdate, 
> TestFSTDirectAddressing, TestXYPointQueries, TestLatLonLineShapeQueries, 
> TestStressAdvance, TestBasics, TestLucene50StoredFieldsFormat, 
> TestQueryRescorer, TestConstantScoreScorer, TestAssertions, TestDemo, 
> TestExternalCodecs, TestMergeSchedulerExternal, TestAnalyzerWrapper, 
> TestCachingTokenFilter, TestCharArrayMap, TestCharArraySet, TestCharFilter, 
> TestCharacterUtils, TestDelegatingAnalyzerWrapper, TestGraphTokenFilter, 
> TestGraphTokenizers, TestReusableStringReader, TestWordlistLoader, 
> TestStandardAnalyzer, TestBytesRefAttImpl, TestCharTermAttributeImpl, 
> TestCodecUtil, TestCompetitiveFreqNormAccumulator, TestFastCompressionMode, 
> TestFastDeco

[jira] [Resolved] (LUCENE-9363) TestIndexWriterDelete.testDeleteAllNoDeadLock failed on CI

2020-05-07 Thread Simon Willnauer (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-9363.
-
  Assignee: Simon Willnauer
Resolution: Fixed

> TestIndexWriterDelete.testDeleteAllNoDeadLock failed on CI
> --
>
> Key: LUCENE-9363
> URL: https://issues.apache.org/jira/browse/LUCENE-9363
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Major
>
> {noformat}
> [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=7C9973DFB2835976 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=ro-MD -Dtests.timezone=SystemV/AST4ADT -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.25s J2 | TestIndexWriterDelete.testDeleteAllNoDeadLock 
> <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([7C9973DFB2835976:4F809CC13240E320]:0)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.abortMerges(IndexWriter.java:2497)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.deleteAll(IndexWriter.java:2424)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.deleteAll(RandomIndexWriter.java:373)
>[junit4]>  at 
> org.apache.lucene.index.TestIndexWriterDelete.testDeleteAllNoDeadLock(TestIndexWriterDelete.java:348)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:832)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/J2/temp/lucene.index.TestIndexWriterDelete_7C9973DFB2835976-001
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene84): 
> {field=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene84)),
>  city=Lucene84, 
> contents=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene84)),
>  id=Lucene84, value=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> content=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, 
> docValues:{dv=DocValuesFormat(name=Asserting)}, maxPointsInLeafNode=2033, 
> maxMBSortInHeap=6.1560152382287825, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@c9cd3c9),
>  locale=ro-MD, timezone=SystemV/AST4ADT
>[junit4]   2> NOTE: Linux 5.3.0-46-generic amd64/Oracle Corporation 15-ea 
> (64-bit)/cpus=16,threads=1,free=389663704,total=518979584
>[junit4]   2> NOTE: All tests run in this JVM: [TestIntroSorter, 
> TestNGramPhraseQuery, TestUpgradeIndexMergePolicy, TestBasicModelIne, 
> TestTragicIndexWriterDeadlock, TestSimpleExplanations, TestDirectory, 
> TestSegmentToThreadMapping, TestNIOFSDirectory, TestAxiomaticF3LOG, 
> TestIndexOptions, TestSpanNotQuery, TestBinaryDocument, 
> TestFixedLengthBytesRefArray, TestForUtil, TestParallelLeafReader, 
> TestDirectoryReader, TestBytesRef, TestLongRange, TestByteSlices, 
> TestRectangle2D, TestBufferedChecksum, TestIndexOrDocValuesQuery, 
> TestFilterDirectoryReader, TestSleepingLockWrapper, TestDocIdSetIterator, 
> TestMultiPhraseQuery, TestSearchWithThreads, TestPointQueries, 
> TestForceMergeForever, TestNewestSegment, TestCrash, TestIntSet, 
> Test4GBStoredFields, TestDirectPacked, TestLiveFieldValues, 
> TestTopDocsCollector, Test2BPositions, TestAtomicUpdate, 
> TestFSTDirectAddressing, TestXYPointQueries, TestLatLonLineShapeQueries, 
> TestStressAdvance, TestBasics, TestLucene50StoredFieldsFormat, 
> TestQueryRescorer, TestConstantScoreScorer, TestAssertions, TestDemo, 
> TestExternalCodecs, TestMergeSchedulerExternal, TestAnalyzerWrapper, 
> TestCachingTokenFilter, TestCharArrayMap, TestCharArraySet, TestCharFilter, 
> TestCharacterUtils, TestDelegatingAnalyzerWrapper, TestGraphTokenFilter, 
> TestGraphTokenizers, TestReusableStringReader, TestWordlistLoader, 
> TestStandardAnalyzer, TestBytesRefAttImpl, TestCharTermAttributeImpl, 
> TestCodecUtil, TestCompetitiveFreqNormAccumulator, TestFastCompressionMode, 
> TestFastDecompressionMode, TestHighCompressionMode, 
> TestLucene60FieldInfoFormat, TestLucene60PointsFormat, 
> TestLucene70SegmentInfoFormat, TestDocument, TestDoubleRange, 
> TestFeatureDoubleValues, TestFeatureField, TestFeatureSort, TestField, 
> TestField

[GitHub] [lucene-solr] dsmiley opened a new pull request #1498: SOLR-14351: commitScheduler was missing MDC logging

2020-05-07 Thread GitBox


dsmiley opened a new pull request #1498:
URL: https://github.com/apache/lucene-solr/pull/1498


   Adding this to https://issues.apache.org/jira/browse/SOLR-14351 even though 
it's not a perfect fit.
   
   As an aside, I've seen logs missing MDC context from searcher warming 
(QuerySenderListener). There's another JIRA issue for that one, indirectly fixed 
via stacking SolrRequestInfo.
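
Editorial aside on why MDC entries go missing on scheduler threads: MDC is 
thread-local state, so it does not follow a task onto a pool thread unless it is 
captured at submit time and restored inside the task. The sketch below is 
generic, not the actual Solr patch; real code would use org.slf4j.MDC rather 
than this ThreadLocal stand-in.

```java
import java.util.*;

public class MdcPropagationSketch {
    // Minimal MDC stand-in: a per-thread map of logging context.
    static final ThreadLocal<Map<String, String>> CTX =
        ThreadLocal.withInitial(HashMap::new);

    // Capture the caller thread's context now, restore it on the worker thread.
    static Runnable wrap(Runnable task) {
        Map<String, String> snapshot = new HashMap<>(CTX.get());
        return () -> {
            CTX.set(new HashMap<>(snapshot));
            try {
                task.run();
            } finally {
                CTX.remove(); // avoid leaking context into the pooled thread
            }
        };
    }

    public static void main(String[] args) throws Exception {
        CTX.get().put("core", "c1");
        Runnable logging = () -> System.out.println("core=" + CTX.get().get("core"));
        Thread worker = new Thread(wrap(logging));
        worker.start();
        worker.join(); // prints "core=c1"; without wrap() it would print "core=null"
    }
}
```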



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org