[jira] [Resolved] (LUCENE-4510) when a test's heart beats it should also throw up (dump stack of all threads)

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-4510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-4510.
-
Resolution: Won't Fix

I don't think this is doable with Gradle's default runner.

> when a test's heart beats it should also throw up (dump stack of all threads)
> -
>
> Key: LUCENE-4510
> URL: https://issues.apache.org/jira/browse/LUCENE-4510
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Dawid Weiss
>Priority: Major
>
> We've had numerous cases where tests hung, but the "operator" of that
> particular Jenkins instance struggled to get a proper stack dump of all
> threads and, e.g., accidentally killed the process instead (it's rather awful
> that the same powerful tool, "kill", can be used both to get stack traces and
> to destroy the process...).
> Is there some way the test infra could do this for us, e.g. when it prints
> the HEARTBEAT message?
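For reference, a full stack dump of all live threads needs nothing beyond standard Java APIs, so a heartbeat hook could do it in-process. Below is a minimal, hypothetical sketch of what such a hook could print; the class and method names are illustrative and not part of the test framework:

{code:java}
import java.util.Map;

// Hypothetical sketch: print a stack dump of every live thread from inside the JVM,
// e.g. whenever the test runner emits its HEARTBEAT message.
public final class HeartbeatThreadDump {
  public static String dumpAllStacks() {
    StringBuilder out = new StringBuilder();
    for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
      Thread t = e.getKey();
      out.append("Thread[").append(t.getName())
         .append(", id=").append(t.getId())
         .append(", state=").append(t.getState())
         .append("]\n");
      for (StackTraceElement frame : e.getValue()) {
        out.append("    at ").append(frame).append('\n');
      }
      out.append('\n');
    }
    return out.toString();
  }

  public static void main(String[] args) {
    System.out.print(dumpAllStacks());
  }
}
{code}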






[jira] [Resolved] (LUCENE-5787) LuceneTestCase static leak checker interferes with Groovy unit tests

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-5787.
-
Resolution: Abandoned

> LuceneTestCase static leak checker interferes with Groovy unit tests
> 
>
> Key: LUCENE-5787
> URL: https://issues.apache.org/jira/browse/LUCENE-5787
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7, 4.8.1
> Environment: Maven 3.0.5
> JUnit 4.11
>Reporter: John Gibson
>Assignee: Dawid Weiss
>Priority: Major
>
> {{LuceneTestCase}}'s static memory leak checker can break Groovy subclasses.
> Specifically, Groovy classes have a synthetic static member variable of type
> {{org.codehaus.groovy.reflection.ClassInfo}}. If this variable grows too
> large, LTC will fail the test. Because the variable is added by the Groovy
> runtime rather than by the developer, there is no way for the developer to
> clear the field themselves.
> Also note that the static leak checker does not ignore memory held by soft or
> weak references. These should be ignored, because the memory retained by such
> fields will be reclaimed instead of triggering OutOfMemoryErrors.
> Note that because LTC is a base class for Solr's testing support classes,
> this also affects {{SolrTestCaseJ4}} and {{AbstractSolrTestCase}}.
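To illustrate the second point, a leak check could recognize soft/weak references while scanning static fields and simply skip them. The following is a generic, hypothetical sketch and not the actual rule used by LuceneTestCase; the sizeOf parameter stands in for whatever memory estimator the real checker uses:

{code:java}
import java.lang.ref.Reference;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.function.ToLongFunction;

// Hypothetical sketch of a static-field leak check that ignores soft/weak references.
public final class StaticLeakCheckSketch {
  /** Sums the estimated size of values held by non-primitive static fields of clazz. */
  public static long retainedByStatics(Class<?> clazz, ToLongFunction<Object> sizeOf)
      throws IllegalAccessException {
    long total = 0;
    for (Field f : clazz.getDeclaredFields()) {
      if (!Modifier.isStatic(f.getModifiers()) || f.getType().isPrimitive()) {
        continue; // only object-typed static fields can pin heap memory
      }
      f.setAccessible(true);
      Object value = f.get(null);
      if (value == null || value instanceof Reference) {
        // Soft/weak references are reclaimed before an OutOfMemoryError can happen,
        // so they should not count against the leak threshold.
        continue;
      }
      total += sizeOf.applyAsLong(value);
    }
    return total;
  }
}
{code}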






[jira] [Updated] (LUCENE-7477) ExternalRefSorter should use OfflineSorter's actual writer for writing the input file

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-7477:

Fix Version/s: (was: 7.0)

> ExternalRefSorter should use OfflineSorter's actual writer for writing the 
> input file
> -
>
> Key: LUCENE-7477
> URL: https://issues.apache.org/jira/browse/LUCENE-7477
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Attachments: LUCENE-7477.patch
>
>
> Consider this constructor in ExternalRefSorter:
> {code}
>   public ExternalRefSorter(OfflineSorter sorter) throws IOException {
>     this.sorter = sorter;
>     this.input = sorter.getDirectory().createTempOutput(sorter.getTempFileNamePrefix(), "RefSorterRaw", IOContext.DEFAULT);
>     this.writer = new OfflineSorter.ByteSequencesWriter(this.input);
>   }
> {code}
> The problem is that the initial input file is written with the default
> {{OfflineSorter.ByteSequencesWriter}}, but the {{OfflineSorter}} instance may
> be unable to read it if it overrides {{getReader}} to use something other
> than the default.
> While this works now, it should (I think) be cleaned up. It would probably be
> ideal to let {{OfflineSorter}} generate its own temporary file and just
> return the ByteSequencesWriter it chooses to use, so the above snippet would
> read:
> {code}
>   public ExternalRefSorter(OfflineSorter sorter) throws IOException {
>     this.sorter = sorter;
>     this.writer = sorter.newUnsortedPartition();
>   }
> {code}
> This could also be extended so that {{OfflineSorter}} is in charge of
> managing its own (sorted and unsorted) partitions. Then {{sort(String file)}}
> would simply become {{ByteSequenceIterator sort()}} (or even
> {{Stream sort()}}, as Stream is conveniently {{AutoCloseable}}). If we made
> {{OfflineSorter}} implement {{Closeable}}, it could also take care of
> cleaning up any resources it opens in the directory we pass to it. An
> additional bonus would be the ability to dodge the final internal merge(1):
> if we manage sorted and unsorted partitions, there are open possibilities of
> returning an iterator that dynamically merges from multiple partitions.
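For illustration only, here is a hedged sketch of what the proposed {{newUnsortedPartition()}} factory could look like if it lived on an OfflineSorter subclass. The method name comes from the issue text; everything else is an assumption rather than the actual Lucene API:

{code:java}
import java.io.IOException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexOutput;
import org.apache.lucene.util.OfflineSorter;

// Hypothetical sketch: the sorter creates and owns its unsorted input partition, so the
// caller never has to guess which ByteSequencesWriter matches the sorter's getReader().
class PartitionOwningSorter extends OfflineSorter {
  PartitionOwningSorter(Directory dir, String tempFileNamePrefix) throws IOException {
    super(dir, tempFileNamePrefix);
  }

  public ByteSequencesWriter newUnsortedPartition() throws IOException {
    IndexOutput unsorted =
        getDirectory().createTempOutput(getTempFileNamePrefix(), "RefSorterRaw", IOContext.DEFAULT);
    // A subclass that overrides getReader() would override this factory symmetrically,
    // keeping the partition's on-disk format and its reader in sync.
    return new ByteSequencesWriter(unsorted);
  }
}
{code}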






[jira] [Resolved] (LUCENE-8668) Various JVM failures on PhaseIdealLoop::split_up

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-8668.
-
Resolution: Won't Fix

> Various JVM failures on PhaseIdealLoop::split_up
> 
>
> Key: LUCENE-8668
> URL: https://issues.apache.org/jira/browse/LUCENE-8668
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>  Labels: jvm
> Attachments: hs_err_pid10534.log, replay_pid10534.log
>
>
> Shows up on Jenkins in various contexts and on various JVMs, but all on
> Uwe's Jenkins machine.
> Examples:
> {code}
>[junit4] #
>[junit4] #  SIGSEGV (0xb) at pc=0x7fe7a0b1a46c, pid=18527, tid=18552
>[junit4] #
>[junit4] # JRE version: OpenJDK Runtime Environment (12.0+23) (build 
> 12-ea+23)
>[junit4] # Java VM: OpenJDK 64-Bit Server VM (12-ea+23, mixed mode, 
> tiered, serial gc, linux-amd64)
>[junit4] # Problematic frame:
>[junit4] # V  [libjvm.so+0xce046c]  PhaseIdealLoop::split_up(Node*, Node*, 
> Node*) [clone .part.38]+0x47c
> {code}
> {code}
>[junit4] #
>[junit4] #  SIGSEGV (0xb) at pc=0x7f8a1fcf713c, pid=8792, tid=8822
>[junit4] #
>[junit4] # JRE version: OpenJDK Runtime Environment (11.0+28) (build 11+28)
>[junit4] # Java VM: OpenJDK 64-Bit Server VM (11+28, mixed mode, tiered, 
> parallel gc, linux-amd64)
>[junit4] # Problematic frame:
>[junit4] # V  [libjvm.so+0xd3e13c]  PhaseIdealLoop::split_up(Node*, Node*, 
> Node*) [clone .part.39]+0x47c
> {code}
> {code}
>[junit4] #
>[junit4] #  SIGSEGV (0xb) at pc=0x7f8cfcb0a13c, pid=27685, tid=27730
>[junit4] #
>[junit4] # JRE version: OpenJDK Runtime Environment (11.0+28) (build 11+28)
>[junit4] # Java VM: OpenJDK 64-Bit Server VM (11+28, mixed mode, tiered, 
> g1 gc, linux-amd64)
>[junit4] # Problematic frame:
>[junit4] # V  [libjvm.so+0xd3e13c]  PhaseIdealLoop::split_up(Node*, Node*, 
> Node*) [clone .part.39]+0x47c
> {code}
> {code}
>[junit4] #
>[junit4] #  SIGSEGV (0xb) at pc=0x7fad0dea1409, pid=10534, tid=10604
>[junit4] #
>[junit4] # JRE version: OpenJDK Runtime Environment (10.0.1+10) (build 
> 10.0.1+10)
>[junit4] # Java VM: OpenJDK 64-Bit Server VM (10.0.1+10, mixed mode, 
> tiered, compressed oops, concurrent mark sweep gc, linux-amd64)
>[junit4] # Problematic frame:
>[junit4] # V  [libjvm.so+0xc48409]  PhaseIdealLoop::split_up(Node*, Node*, 
> Node*) [clone .part.40]+0x619
> {code}






[jira] [Resolved] (LUCENE-9162) Make heap and other test-jvm overrides self-described (gradlew testOpts)

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-9162.
-
Resolution: Duplicate

Duplicate of LUCENE-9491

> Make heap and other test-jvm overrides self-described (gradlew testOpts)
> 
>
> Key: LUCENE-9162
> URL: https://issues.apache.org/jira/browse/LUCENE-9162
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
>







[jira] [Resolved] (SOLR-4665) lib configuration element should accept ant/maven-like glob patterns (**/*.jar)

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-4665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved SOLR-4665.
---
Resolution: Abandoned

> lib configuration element should accept ant/maven-like glob patterns 
> (**/*.jar)
> ---
>
> Key: SOLR-4665
> URL: https://issues.apache.org/jira/browse/SOLR-4665
> Project: Solr
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>







[jira] [Resolved] (SOLR-12852) NPE in ClusteringComponent

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved SOLR-12852.

Resolution: Cannot Reproduce

> NPE in ClusteringComponent
> --
>
> Key: SOLR-12852
> URL: https://issues.apache.org/jira/browse/SOLR-12852
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - Clustering
>Affects Versions: 7.5
>Reporter: Markus Jelsma
>Assignee: Dawid Weiss
>Priority: Major
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-12852.patch, capture-1.png
>
>
> Got this exception:
> {code}
>  o.a.s.s.HttpSolrCall null:java.lang.NullPointerException
> at 
> org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
> {code}
> with this config (copied from the reference guide, with the title and snippet
> parameters changed):
> {code:xml}
>   <searchComponent name="clustering" class="solr.clustering.ClusteringComponent">
>     <lst name="engine">
>       <str name="name">lingo</str>
>       <str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>
>     </lst>
>     <lst name="engine">
>       <str name="name">stc</str>
>       <str name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>
>     </lst>
>   </searchComponent>
>   <requestHandler name="/clustering" startup="lazy" class="solr.SearchHandler">
>     <lst name="defaults">
>       <bool name="clustering">true</bool>
>       <bool name="clustering.results">true</bool>
>       <str name="carrot.url">id</str>
>       <str name="carrot.title">title_nl</str>
>       <str name="carrot.snippet">content_nl</str>
>       <str name="rows">100</str>
>       <str name="fl">*,score</str>
>     </lst>
>     <arr name="last-components">
>       <str>clustering</str>
>     </arr>
>   </requestHandler>
> {code}
> using this query:
> http://localhost:8983/solr/collection/clustering?q=*:*
> All libraries are present and Solr no longer complains about missing classes;
> instead I got this.






[jira] [Assigned] (SOLR-14100) System properties cross test suite boundary

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss reassigned SOLR-14100:
--

Assignee: (was: Dawid Weiss)

> System properties cross test suite boundary
> ---
>
> Key: SOLR-14100
> URL: https://issues.apache.org/jira/browse/SOLR-14100
> Project: Solr
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Priority: Major
> Attachments: SOLR-14100_hack.patch, props-reads.log, props-writes.log
>
>
> At some point in time all system properties were saved and restored in the
> top test class. When the security manager was added as the default (a long
> time ago), this was turned off (because the rule couldn't read all properties
> then) and replaced with just a selected subset of properties to be checked
> (in LuceneTestCase). Sadly, Solr's security policy allows all properties to
> be written, and I bet this also leads to complex interactions between tests.
> We can allow read access to all properties at first, but all writable/
> modifiable properties should be identified and added to a top-level restore
> rule, along with a security manager policy that selectively enables them (so
> that we know they're saved and restored after each test).
> This is going to be a tedious task.
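A minimal sketch of the kind of top-level restore rule described here, assuming plain JUnit 4 and a hand-maintained list of writable property keys; the rule name and the idea of passing explicit keys are illustrative, not the existing Lucene/Solr test infrastructure:

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.junit.rules.ExternalResource;

// Hypothetical sketch: snapshot a known set of writable system properties before each
// test and restore them afterwards, so writes cannot leak across suite boundaries.
public class RestoreSystemPropertiesRule extends ExternalResource {
  private final String[] keys;
  private final Map<String, String> saved = new HashMap<>();

  public RestoreSystemPropertiesRule(String... keys) {
    this.keys = keys;
  }

  @Override
  protected void before() {
    for (String key : keys) {
      saved.put(key, System.getProperty(key)); // null marks "was unset"
    }
  }

  @Override
  protected void after() {
    for (Map.Entry<String, String> e : saved.entrySet()) {
      if (e.getValue() == null) {
        System.clearProperty(e.getKey());
      } else {
        System.setProperty(e.getKey(), e.getValue());
      }
    }
    saved.clear();
  }
}
{code}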






[jira] [Commented] (SOLR-14151) Make schema components load from packages

2020-09-29 Thread Anshum Gupta (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203744#comment-17203744
 ] 

Anshum Gupta commented on SOLR-14151:
-

Hey [~noble.paul], it would be really useful for everyone else looking at the
commits if you merged your commits with a comment that summarizes the change.

> Make schema components load from packages
> -
>
> Key: SOLR-14151
> URL: https://issues.apache.org/jira/browse/SOLR-14151
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
>  Labels: packagemanager
> Fix For: 8.7
>
>  Time Spent: 13.5h
>  Remaining Estimate: 0h
>
> Example:
> {code:xml}
>  
> 
>   
>generateNumberParts="0" catenateWords="0"
>   catenateNumbers="0" catenateAll="0"/>
>   
>   
> 
>   
> {code}
> * When a package is updated, the entire {{IndexSchema}} object is refreshed, 
> but the SolrCore object is not reloaded
> * Any component can be prefixed with the package name
> * The semantics of loading plugins remain the same as that of the components 
> in {{solrconfig.xml}}
> * Plugins can be registered using schema API






[jira] [Created] (LUCENE-9546) Configure Nori and Kuromoji generation lazily when java plugin is applied to the projects

2020-09-29 Thread Dawid Weiss (Jira)
Dawid Weiss created LUCENE-9546:
---

 Summary: Configure Nori and Kuromoji generation lazily when java 
plugin is applied to the projects
 Key: LUCENE-9546
 URL: https://issues.apache.org/jira/browse/LUCENE-9546
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss









[jira] [Created] (LUCENE-9547) Race condition in maven artifact configuration results in wrong group/ artifact name

2020-09-29 Thread Dawid Weiss (Jira)
Dawid Weiss created LUCENE-9547:
---

 Summary: Race condition in maven artifact configuration results in 
wrong group/ artifact name
 Key: LUCENE-9547
 URL: https://issues.apache.org/jira/browse/LUCENE-9547
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss









[jira] [Resolved] (LUCENE-9546) Configure Nori and Kuromoji generation lazily when java plugin is applied to the projects

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-9546.
-
Fix Version/s: master (9.0)
   Resolution: Fixed

> Configure Nori and Kuromoji generation lazily when java plugin is applied to 
> the projects
> -
>
> Key: LUCENE-9546
> URL: https://issues.apache.org/jira/browse/LUCENE-9546
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: master (9.0)
>
>







[jira] [Resolved] (LUCENE-9547) Race condition in maven artifact configuration results in wrong group/ artifact name

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-9547.
-
Fix Version/s: master (9.0)
   Resolution: Fixed

> Race condition in maven artifact configuration results in wrong group/ 
> artifact name
> 
>
> Key: LUCENE-9547
> URL: https://issues.apache.org/jira/browse/LUCENE-9547
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Priority: Trivial
> Fix For: master (9.0)
>
>







[jira] [Assigned] (LUCENE-9547) Race condition in maven artifact configuration results in wrong group/ artifact name

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss reassigned LUCENE-9547:
---

Assignee: Dawid Weiss

> Race condition in maven artifact configuration results in wrong group/ 
> artifact name
> 
>
> Key: LUCENE-9547
> URL: https://issues.apache.org/jira/browse/LUCENE-9547
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: master (9.0)
>
>







[GitHub] [lucene-solr] s1monw commented on a change in pull request #1925: Cleanup DWPT state handling

2020-09-29 Thread GitBox


s1monw commented on a change in pull request #1925:
URL: https://github.com/apache/lucene-solr/pull/1925#discussion_r496489937



##
File path: lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
##
@@ -157,25 +157,18 @@ private boolean updatePeaks(long delta) {
   }
 
   DocumentsWriterPerThread doAfterDocument(DocumentsWriterPerThread perThread, boolean isUpdate) {
-    final long delta = perThread.getCommitLastBytesUsedDelta();
+    final long delta = perThread.commitLastBytesUsed();
     synchronized (this) {
-      // we need to commit this under lock but calculate it outside of the lock to minimize the time this lock is held
-      // per document. The reason we update this under lock is that we mark DWPTs as pending without acquiring it's
-      // lock in #setFlushPending and this also reads the committed bytes and modifies the flush/activeBytes.
-      // In the future we can clean this up to be more intuitive.
-      perThread.commitLastBytesUsed(delta);
       try {
         /*
          * We need to differentiate here if we are pending since setFlushPending
          * moves the perThread memory to the flushBytes and we could be set to
          * pending during a delete
          */
-        if (perThread.isFlushPending()) {
-          flushBytes += delta;
-          assert updatePeaks(delta);
-        } else {
-          activeBytes += delta;
-          assert updatePeaks(delta);
+        activeBytes += delta;
+        assert updatePeaks(delta);
+        if (perThread.isFlushPending() == false) {
+          assert perThread.getState() == DocumentsWriterPerThread.State.ACTIVE : "expected ACTIVE state but was: " + perThread.getState();

Review comment:
   that makes sense to me.








[jira] [Created] (LUCENE-9548) Publish master (9.x) snapshots to https://repository.apache.org

2020-09-29 Thread Dawid Weiss (Jira)
Dawid Weiss created LUCENE-9548:
---

 Summary: Publish master (9.x) snapshots to 
https://repository.apache.org
 Key: LUCENE-9548
 URL: https://issues.apache.org/jira/browse/LUCENE-9548
 Project: Lucene - Core
  Issue Type: Task
Reporter: Dawid Weiss
Assignee: Dawid Weiss


We should start publishing snapshot JARs to Apache repositories. I'm not sure
how to set it all up with Gradle, but maybe there are other Apache projects
that use Gradle whose config we could peek at. Mostly it's about signing
artifacts (how to pass credentials for signing) and setting up the Nexus
deployment repository.






[jira] [Commented] (SOLR-14850) ExactStatsCache NullPointerException when shards.tolerant=true

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203767#comment-17203767
 ] 

ASF subversion and git services commented on SOLR-14850:


Commit c6b361df2de20dd191bad134d32b468ed5cfc14a in lucene-solr's branch 
refs/heads/branch_8_6 from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c6b361d ]

SOLR-14850: Fix ExactStatsCache NullPointerException when shards.tolerant=true.


> ExactStatsCache NullPointerException when shards.tolerant=true
> --
>
> Key: SOLR-14850
> URL: https://issues.apache.org/jira/browse/SOLR-14850
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 8.6.2
>Reporter: Yevhen Tienkaiev
>Assignee: Andrzej Bialecki
>Priority: Critical
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> All classes derived from *ExactStatsCache* fail if *shards.tolerant* is set
> to *true* and some shard is down.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:59)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.doMergeToGlobalStats(ExactStatsCache.java:104)
>   at 
> org.apache.solr.search.stats.StatsCache.mergeToGlobalStats(StatsCache.java:173)
>   at 
> org.apache.solr.handler.component.QueryComponent.updateStats(QueryComponent.java:713)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:630)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:605)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:457)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2606)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:812)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:588)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
>   at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at org.eclipse.jetty.server.Server.handle(Server.java:500)
>   at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
>   at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
>   at 
> org.eclipse.jetty.io.AbstractConnecti

[jira] [Updated] (SOLR-14850) ExactStatsCache NullPointerException when shards.tolerant=true

2020-09-29 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki updated SOLR-14850:

Fix Version/s: 8.6.3
   8.7

> ExactStatsCache NullPointerException when shards.tolerant=true
> --
>
> Key: SOLR-14850
> URL: https://issues.apache.org/jira/browse/SOLR-14850
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 8.6.2
>Reporter: Yevhen Tienkaiev
>Assignee: Andrzej Bialecki
>Priority: Critical
> Fix For: 8.7, 8.6.3
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> All classes derived from *ExactStatsCache* fail if *shards.tolerant* is set
> to *true* and some shard is down.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:59)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.doMergeToGlobalStats(ExactStatsCache.java:104)
>   at 
> org.apache.solr.search.stats.StatsCache.mergeToGlobalStats(StatsCache.java:173)
>   at 
> org.apache.solr.handler.component.QueryComponent.updateStats(QueryComponent.java:713)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:630)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:605)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:457)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2606)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:812)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:588)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
>   at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at org.eclipse.jetty.server.Server.handle(Server.java:500)
>   at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
>   at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>   at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
>   at 
> org.eclipse.jetty.util.thread.strategy.EatWha

[jira] [Commented] (SOLR-14850) ExactStatsCache NullPointerException when shards.tolerant=true

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203771#comment-17203771
 ] 

ASF subversion and git services commented on SOLR-14850:


Commit fafc71bc21c928d5c728cf3cfb4cef99eb8685ac in lucene-solr's branch 
refs/heads/branch_8_6 from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fafc71b ]

SOLR-14850: Correct the spelling in contributor's name.


> ExactStatsCache NullPointerException when shards.tolerant=true
> --
>
> Key: SOLR-14850
> URL: https://issues.apache.org/jira/browse/SOLR-14850
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 8.6.2
>Reporter: Yevhen Tienkaiev
>Assignee: Andrzej Bialecki
>Priority: Critical
> Fix For: 8.7, 8.6.3
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> All classes derived from *ExactStatsCache* fail if *shards.tolerant* is set
> to *true* and some shard is down.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:59)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.doMergeToGlobalStats(ExactStatsCache.java:104)
>   at 
> org.apache.solr.search.stats.StatsCache.mergeToGlobalStats(StatsCache.java:173)
>   at 
> org.apache.solr.handler.component.QueryComponent.updateStats(QueryComponent.java:713)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:630)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:605)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:457)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2606)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:812)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:588)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
>   at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at org.eclipse.jetty.server.Server.handle(Server.java:500)
>   at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
>   at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
>   at 
> org.eclipse.jetty.io.Abst

[jira] [Commented] (SOLR-14850) ExactStatsCache NullPointerException when shards.tolerant=true

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203773#comment-17203773
 ] 

ASF subversion and git services commented on SOLR-14850:


Commit be6d55ed74063f7e025579a5e1017e49ec2c0184 in lucene-solr's branch 
refs/heads/branch_8x from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=be6d55e ]

SOLR-14850: Correct the spelling in contributor's name.


> ExactStatsCache NullPointerException when shards.tolerant=true
> --
>
> Key: SOLR-14850
> URL: https://issues.apache.org/jira/browse/SOLR-14850
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 8.6.2
>Reporter: Yevhen Tienkaiev
>Assignee: Andrzej Bialecki
>Priority: Critical
> Fix For: 8.7, 8.6.3
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> All classes derived from *ExactStatsCache* fail if *shards.tolerant* is set
> to *true* and some shard is down.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:59)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.doMergeToGlobalStats(ExactStatsCache.java:104)
>   at 
> org.apache.solr.search.stats.StatsCache.mergeToGlobalStats(StatsCache.java:173)
>   at 
> org.apache.solr.handler.component.QueryComponent.updateStats(QueryComponent.java:713)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:630)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:605)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:457)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2606)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:812)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:588)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
>   at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at org.eclipse.jetty.server.Server.handle(Server.java:500)
>   at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
>   at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
>   at 
> org.eclipse.jetty.io.Abstr

[jira] [Commented] (SOLR-14850) ExactStatsCache NullPointerException when shards.tolerant=true

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203776#comment-17203776
 ] 

ASF subversion and git services commented on SOLR-14850:


Commit 8b329a09c215a5b5a744ec8575f19ea6c57ce5b6 in lucene-solr's branch 
refs/heads/master from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8b329a0 ]

SOLR-14850: Correct the spelling in contributor's name.


> ExactStatsCache NullPointerException when shards.tolerant=true
> --
>
> Key: SOLR-14850
> URL: https://issues.apache.org/jira/browse/SOLR-14850
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 8.6.2
>Reporter: Yevhen Tienkaiev
>Assignee: Andrzej Bialecki
>Priority: Critical
> Fix For: 8.7, 8.6.3
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> All classes derived from *ExactStatsCache* fail if *shards.tolerant* is set
> to *true* and some shard is down.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:59)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.doMergeToGlobalStats(ExactStatsCache.java:104)
>   at 
> org.apache.solr.search.stats.StatsCache.mergeToGlobalStats(StatsCache.java:173)
>   at 
> org.apache.solr.handler.component.QueryComponent.updateStats(QueryComponent.java:713)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:630)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:605)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:457)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2606)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:812)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:588)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
>   at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at org.eclipse.jetty.server.Server.handle(Server.java:500)
>   at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
>   at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
>   at 
> org.eclipse.jetty.io.Abstract

[jira] [Resolved] (SOLR-14850) ExactStatsCache NullPointerException when shards.tolerant=true

2020-09-29 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-14850.
-
Resolution: Fixed

> ExactStatsCache NullPointerException when shards.tolerant=true
> --
>
> Key: SOLR-14850
> URL: https://issues.apache.org/jira/browse/SOLR-14850
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 8.6.2
>Reporter: Yevhen Tienkaiev
>Assignee: Andrzej Bialecki
>Priority: Critical
> Fix For: 8.7, 8.6.3
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> All classes derived from *ExactStatsCache* fail if *shards.tolerant* is set
> to *true* and some shard is down.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:59)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.doMergeToGlobalStats(ExactStatsCache.java:104)
>   at 
> org.apache.solr.search.stats.StatsCache.mergeToGlobalStats(StatsCache.java:173)
>   at 
> org.apache.solr.handler.component.QueryComponent.updateStats(QueryComponent.java:713)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:630)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:605)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:457)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2606)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:812)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:588)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
>   at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at org.eclipse.jetty.server.Server.handle(Server.java:500)
>   at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
>   at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>   at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
>   at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhat

[jira] [Commented] (SOLR-14850) ExactStatsCache NullPointerException when shards.tolerant=true

2020-09-29 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203777#comment-17203777
 ] 

Andrzej Bialecki commented on SOLR-14850:
-

Done. I've merged this to {{branch_8_6}} so this will be included in the 
upcoming 8.6.3 release. Thank you!

> ExactStatsCache NullPointerException when shards.tolerant=true
> --
>
> Key: SOLR-14850
> URL: https://issues.apache.org/jira/browse/SOLR-14850
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 8.6.2
>Reporter: Yevhen Tienkaiev
>Assignee: Andrzej Bialecki
>Priority: Critical
> Fix For: 8.7, 8.6.3
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> All classes derived from *ExactStatsCache* fail if *shards.tolerant* is set
> to *true* and some shard is down.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:59)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.doMergeToGlobalStats(ExactStatsCache.java:104)
>   at 
> org.apache.solr.search.stats.StatsCache.mergeToGlobalStats(StatsCache.java:173)
>   at 
> org.apache.solr.handler.component.QueryComponent.updateStats(QueryComponent.java:713)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:630)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:605)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:457)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2606)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:812)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:588)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
>   at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at org.eclipse.jetty.server.Server.handle(Server.java:500)
>   at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
>   at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>   at org.eclip

[jira] [Commented] (LUCENE-9548) Publish master (9.x) snapshots to https://repository.apache.org

2020-09-29 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203780#comment-17203780
 ] 

Dawid Weiss commented on LUCENE-9548:
-

I have a patch for this but can't publish a snapshot (authorization error):
{code}
gradlew publishJarsPublicationToApacheSnapshotsRepository -PnexusUsername=dweiss -DnexusPassword=...
{code}

Do Apache snapshot repositories require a specific permission (or a separate
username/password) to publish snapshots?

> Publish master (9.x) snapshots to https://repository.apache.org
> ---
>
> Key: LUCENE-9548
> URL: https://issues.apache.org/jira/browse/LUCENE-9548
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
>
> We should start publishing snapshot JARs to Apache repositories. I'm not sure
> how to set it all up with Gradle, but maybe there are other Apache projects
> that use Gradle whose config we could peek at. Mostly it's about signing
> artifacts (how to pass credentials for signing) and setting up the Nexus
> deployment repository.






[GitHub] [lucene-solr] dweiss opened a new pull request #1929: LUCENE-9548: Apache repository publishing

2020-09-29 Thread GitBox


dweiss opened a new pull request #1929:
URL: https://github.com/apache/lucene-solr/pull/1929


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9546) Configure Nori and Kuromoji generation lazily when java plugin is applied to the projects

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203781#comment-17203781
 ] 

ASF subversion and git services commented on LUCENE-9546:
-

Commit 3ae0b506463d76701659206f1636d99c439e982b in lucene-solr's branch 
refs/heads/master from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3ae0b50 ]

LUCENE-9546: Configure Nori and Kuromoji generation lazily when java plugin is 
applied to the projects


> Configure Nori and Kuromoji generation lazily when java plugin is applied to 
> the projects
> -
>
> Key: LUCENE-9546
> URL: https://issues.apache.org/jira/browse/LUCENE-9546
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: master (9.0)
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9547) Race condition in maven artifact configuration results in wrong group/ artifact name

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203782#comment-17203782
 ] 

ASF subversion and git services commented on LUCENE-9547:
-

Commit 2b692ccb714fe000bacceb4a5bcd21d5ae51930d in lucene-solr's branch 
refs/heads/master from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2b692cc ]

LUCENE-9547: Race condition in maven artifact configuration results in wrong 
group/ artifact name


> Race condition in maven artifact configuration results in wrong group/ 
> artifact name
> 
>
> Key: LUCENE-9547
> URL: https://issues.apache.org/jira/browse/LUCENE-9547
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: master (9.0)
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9548) Publish master (9.x) snapshots to https://repository.apache.org

2020-09-29 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203787#comment-17203787
 ] 

Dawid Weiss commented on LUCENE-9548:
-

I'd love to hear from anybody who set up the current Maven artifact uploads about what 
magic is needed to upload to Apache Nexus. I think everything is ready - it's 
just the authorization bit that is missing before we can start publishing 
snapshots from master (which would be helpful for making Solr depend on Lucene 
snapshot packages rather than sources).

> Publish master (9.x) snapshots to https://repository.apache.org
> ---
>
> Key: LUCENE-9548
> URL: https://issues.apache.org/jira/browse/LUCENE-9548
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We should start publishing snapshot JARs to Apache repositories. I'm not sure 
> how to set it all up with gradle but maybe there are other Apache projects 
> that use gradle and we could peek at their config? Mostly it's about signing 
> artifacts (how to pass credentials for signing) and setting up Nexus 
> deployment repository.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9077) Gradle build

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-9077:

Description: 
This task focuses on providing gradle-based build equivalent for Lucene and 
Solr (on master branch). See notes below on why this respin is needed.

The code lives on the *gradle-master* branch. It is kept in sync with *master*. 
Try running the following to see an overview of helper guides concerning 
typical workflow, testing and ant-migration helpers:

gradlew :help

A list of items that need to be added or require work. If you'd like to work 
on any of these, please add your name to the list. Once you have a patch/ pull 
request let me (dweiss) know - I'll try to coordinate the merges.
 * (/) Apply forbiddenAPIs
 * (/) Generate hardware-aware gradle defaults for parallelism (count of 
workers and test JVMs).
 * (/) Fail the build if --tests filter is applied and no tests execute during 
the entire build (this allows for an empty set of filtered tests at single 
project level).
 * (/) Port other settings and randomizations from common-build.xml
 * (/) Configure security policy/ sandboxing for tests.
 * (/) test's console output on -Ptests.verbose=true
 * (/) add a :helpDeps explanation to how the dependency system works (palantir 
plugin, lockfile) and how to retrieve structured information about current 
dependencies of a given module (in a tree-like output).
 * (/) jar checksums, jar checksum computation and validation. This should be 
done without intermediate folders (directly on dependency sets).
 * (/) verify min. JVM version and exact gradle version on build startup to 
minimize odd build side-effects
 * (/) Repro-line for failed tests/ runs.
 * (/) add a top-level README note about building with gradle (and the required 
JVM).
 * (/) add an equivalent of 'validate-source-patterns' 
(check-source-patterns.groovy) to precommit.
 * (/) add an equivalent of 'rat-sources' to precommit.
 * (/) add an equivalent of 'check-example-lucene-match-version' (solr only) to 
precommit.
 * (/) javadoc compilation

Hard-to-implement stuff already investigated:
 * (/) (done)  -*Printing console output of failed tests.* There doesn't seem 
to be any way to do this in a reasonably efficient way. There are onOutput 
listeners but they're slow to operate and solr tests emit *tons* of output so 
it's an overkill.-
 * (!) (LUCENE-9120) *Tests working with security-debug logs or other JVM-early 
log output*. Gradle's test runner works by redirecting Java's stdout/ syserr so 
this just won't work. Perhaps we can spin the ant-based test runner for such 
corner-cases.

Of lesser importance:
 * (/) Add an equivalent of 'documentation-lint" to precommit.
 * (/) Do not require files to be committed before running precommit. (staged 
files are fine).
 * (/) add rendering of javadocs (gradlew javadoc)
 * (/) identify and port various "regenerate" tasks from ant builds (javacc, 
precompiled automata, etc.)
 * (/) Add Solr packaging for docs/* (see TODO in packaging/build.gradle; 
currently XSLT...)
 * I didn't bother adding Solr dist/test-framework to packaging (who'd use it 
from a binary distribution?)
 * (/) There is some python execution in check-broken-links and 
check-missing-javadocs, not sure if it's been ported
 * (/) Precommit doesn't catch unused imports
 * Attach javadocs to maven publications.
 * (/) Add test 'beasting' (rerunning the same suite multiple times). I'm 
afraid it'll be difficult to run it sensibly because gradle doesn't offer cwd 
separation for the forked test runners.
 * if you diff solr packaged distribution against ant-created distribution 
there are minor differences in library versions and some JARs are excluded/ 
moved around. I didn't try to force these as everything seems to work (tests, 
etc.) – perhaps these differences should  be fixed in the ant build instead.
 * (/) Fill in POM details in gradle/defaults-maven.gradle so that they reflect 
the previous content better (dependencies aside).
 * (/) Add any IDE integration layers that should be added (I use IntelliJ and 
it imports the project out of the box, without the need for any special tuning).

 

*{color:#ff}Note:{color}* this builds on the work done by Mark Miller and 
Cao Mạnh Đạt but also applies lessons learned from those two efforts:
 * *Do not try to do too many things at once*. If we deviate too far from 
master, the branch will be hard to merge.
 * *Do everything in baby-steps* and add small, independent build fragments 
replacing the old ant infrastructure.
 * *Try to engage people to run, test and contribute early*. It can't be a 
one-man effort. The more people understand and can contribute to the build, the 
more healthy it will be.

 

  was:
This task focuses on providing gradle-based build equivalent for Lucene and 
Solr (on master branch). See notes below on why this respin is needed.

The code lives on *g

[jira] [Resolved] (LUCENE-9077) Gradle build

2020-09-29 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-9077.
-
Resolution: Fixed

Gradle build is in place and replaced ant. Thank you for everyone who 
contributed to this effort and I hope it'll be beneficial to the future of the 
project.

> Gradle build
> 
>
> Key: LUCENE-9077
> URL: https://issues.apache.org/jira/browse/LUCENE-9077
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: LUCENE-9077-javadoc-locale-en-US.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> This task focuses on providing gradle-based build equivalent for Lucene and 
> Solr (on master branch). See notes below on why this respin is needed.
> The code lives on the *gradle-master* branch. It is kept in sync with *master*. 
> Try running the following to see an overview of helper guides concerning 
> typical workflow, testing and ant-migration helpers:
> gradlew :help
> A list of items that need to be added or require work. If you'd like to 
> work on any of these, please add your name to the list. Once you have a 
> patch/ pull request let me (dweiss) know - I'll try to coordinate the merges.
>  * (/) Apply forbiddenAPIs
>  * (/) Generate hardware-aware gradle defaults for parallelism (count of 
> workers and test JVMs).
>  * (/) Fail the build if --tests filter is applied and no tests execute 
> during the entire build (this allows for an empty set of filtered tests at 
> single project level).
>  * (/) Port other settings and randomizations from common-build.xml
>  * (/) Configure security policy/ sandboxing for tests.
>  * (/) test's console output on -Ptests.verbose=true
>  * (/) add a :helpDeps explanation to how the dependency system works 
> (palantir plugin, lockfile) and how to retrieve structured information about 
> current dependencies of a given module (in a tree-like output).
>  * (/) jar checksums, jar checksum computation and validation. This should be 
> done without intermediate folders (directly on dependency sets).
>  * (/) verify min. JVM version and exact gradle version on build startup to 
> minimize odd build side-effects
>  * (/) Repro-line for failed tests/ runs.
>  * (/) add a top-level README note about building with gradle (and the 
> required JVM).
>  * (/) add an equivalent of 'validate-source-patterns' 
> (check-source-patterns.groovy) to precommit.
>  * (/) add an equivalent of 'rat-sources' to precommit.
>  * (/) add an equivalent of 'check-example-lucene-match-version' (solr only) 
> to precommit.
>  * (/) javadoc compilation
> Hard-to-implement stuff already investigated:
>  * (/) (done)  -*Printing console output of failed tests.* There doesn't seem 
> to be any way to do this in a reasonably efficient way. There are onOutput 
> listeners but they're slow to operate and solr tests emit *tons* of output so 
> it's an overkill.-
>  * (!) (LUCENE-9120) *Tests working with security-debug logs or other 
> JVM-early log output*. Gradle's test runner works by redirecting Java's 
> stdout/ syserr so this just won't work. Perhaps we can spin the ant-based 
> test runner for such corner-cases.
> Of lesser importance:
>  * (/) Add an equivalent of 'documentation-lint" to precommit.
>  * (/) Do not require files to be committed before running precommit. (staged 
> files are fine).
>  * (/) add rendering of javadocs (gradlew javadoc)
>  * (/) identify and port various "regenerate" tasks from ant builds (javacc, 
> precompiled automata, etc.)
>  * (/) Add Solr packaging for docs/* (see TODO in packaging/build.gradle; 
> currently XSLT...)
>  * I didn't bother adding Solr dist/test-framework to packaging (who'd use it 
> from a binary distribution?)
>  * (/) There is some python execution in check-broken-links and 
> check-missing-javadocs, not sure if it's been ported
>  * (/) Precommit doesn't catch unused imports
>  * Attach javadocs to maven publications.
>  * (/) Add test 'beasting' (rerunning the same suite multiple times). I'm 
> afraid it'll be difficult to run it sensibly because gradle doesn't offer cwd 
> separation for the forked test runners.
>  * if you diff solr packaged distribution against ant-created distribution 
> there are minor differences in library versions and some JARs are excluded/ 
> moved around. I didn't try to force these as everything seems to work (tests, 
> etc.) – perhaps these differences should  be fixed in the ant build instead.
>  * (/) Fill in POM details in gradle/defaults-maven.gradle so that they 
> reflect the previous content better (dependencies aside).
>  * (/) Add any IDE integration layers that should be added (I use IntelliJ 
> and it imports the project out of the box, without the need for any s

[jira] [Commented] (SOLR-14497) Move the project to become Apache TLP

2020-09-29 Thread Ishan Chattopadhyaya (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203795#comment-17203795
 ] 

Ishan Chattopadhyaya commented on SOLR-14497:
-

bq. Are we still aiming to setup the Solr PMC before 9.0 so that Lucene can 
release 9.0 alone, followed by an independent Solr 9.0 release?

+1

How should I (or anyone just watching from the sidelines till now) proceed to 
help with this effort? Would working in parallel on the tasks in the 
"Preparation" section help?

> Move the project to become Apache TLP
> -
>
> Key: SOLR-14497
> URL: https://issues.apache.org/jira/browse/SOLR-14497
> Project: Solr
>  Issue Type: Task
>Reporter: Dawid Weiss
>Priority: Major
>
> This issue is about the process of moving Solr to become an Apache TLP.
> See 
> [https://cwiki.apache.org/confluence/display/LUCENE/Solr+TLP+needed+changes] 
> for a tabular view of possible changes.
> *TODO*: Add sub tasks.
> h4. Preparation
>  # Figure out formal steps to be taken with Apache Board to set up TLP 
> project for Solr.
>  # Figure out the initial committership, PMC members, chair.
>  # Separate the code (and build) from Lucene so that Solr can be built 
> independently. This applies to 8.x and master (9.x).
>  # Determine what happens with mailing lists (and their archives). Are new 
> mailing lists going to be set up or the existing ones aliased somehow?
>  # Determine what happens with CI and build servers.
>  # Determine how to split web site
>  # [add more here]
> h4. Execution
>  # Code: clone Lucene/Solr git to Solr TLP, add a TLP marking tag.
>  # Web Site: clone lucene/solr-site Git Repo, add a TLP marking tag
> Send information about any potential mailing list changes, etc. to 
> previous addresses.
>  # Add redirects from old Solr site/ javadoc/ any other addresses to TLP 
> locations.
>  # [add more here]
> h4. Post-transition
>  # Proceed with LUCENE-9375.
>  # [add more here]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14615) CPU Based Circuit Breaker

2020-09-29 Thread Atri Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atri Sharma resolved SOLR-14615.

Resolution: Fixed

> CPU Based Circuit Breaker
> -
>
> Key: SOLR-14615
> URL: https://issues.apache.org/jira/browse/SOLR-14615
> Project: Solr
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Assignee: Atri Sharma
>Priority: Major
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> We should have a circuit breaker that can monitor and trigger on CPU 
> utilization



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14497) Move the project to become Apache TLP

2020-09-29 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203798#comment-17203798
 ] 

Dawid Weiss commented on SOLR-14497:


I've started working on separating the builds on master.

> Move the project to become Apache TLP
> -
>
> Key: SOLR-14497
> URL: https://issues.apache.org/jira/browse/SOLR-14497
> Project: Solr
>  Issue Type: Task
>Reporter: Dawid Weiss
>Priority: Major
>
> This issue is about the process of moving Solr to become an Apache TLP.
> See 
> [https://cwiki.apache.org/confluence/display/LUCENE/Solr+TLP+needed+changes] 
> for a tabular view of possible changes.
> *TODO*: Add sub tasks.
> h4. Preparation
>  # Figure out formal steps to be taken with Apache Board to set up TLP 
> project for Solr.
>  # Figure out the initial committership, PMC members, chair.
>  # Separate the code (and build) from Lucene so that Solr can be built 
> independently. This applies to 8.x and master (9.x).
>  # Determine what happens with mailing lists (and their archives). Are new 
> mailing lists going to be set up or the existing ones aliased somehow?
>  # Determine what happens with CI and build servers.
>  # Determine how to split web site
>  # [add more here]
> h4. Execution
>  # Code: clone Lucene/Solr git to Solr TLP, add a TLP marking tag.
>  # Web Site: clone lucene/solr-site Git Repo, add a TLP marking tag
> Send information about any potential mailing list changes, etc. to 
> previous addresses.
>  # Add redirects from old Solr site/ javadoc/ any other addresses to TLP 
> locations.
>  # [add more here]
> h4. Post-transition
>  # Proceed with LUCENE-9375.
>  # [add more here]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14151) Make schema components load from packages

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203821#comment-17203821
 ] 

ASF subversion and git services commented on SOLR-14151:


Commit b8b6fdfbb984ad3aa3426594ae0e19eaa45492f7 in lucene-solr's branch 
refs/heads/branch_8x from noblepaul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b8b6fdf ]

Revert "SOLR-14151: cleanup"

This reverts commit fe655d3d3404cf13764d04d0b536991420d2a58e.


> Make schema components load from packages
> -
>
> Key: SOLR-14151
> URL: https://issues.apache.org/jira/browse/SOLR-14151
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
>  Labels: packagemanager
> Fix For: 8.7
>
>  Time Spent: 13.5h
>  Remaining Estimate: 0h
>
> Example:
> {code:xml}
>  
> 
>   
>generateNumberParts="0" catenateWords="0"
>   catenateNumbers="0" catenateAll="0"/>
>   
>   
> 
>   
> {code}
> * When a package is updated, the entire {{IndexSchema}} object is refreshed, 
> but the SolrCore object is not reloaded
> * Any component can be prefixed with the package name
> * The semantics of loading plugins remain the same as that of the components 
> in {{solrconfig.xml}}
> * Plugins can be registered using schema API



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14850) ExactStatsCache NullPointerException when shards.tolerant=true

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203819#comment-17203819
 ] 

ASF subversion and git services commented on SOLR-14850:


Commit ed1ca77deda824cdb6736df06402b732f5757ba4 in lucene-solr's branch 
refs/heads/branch_8x from noblepaul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ed1ca77 ]

Revert "SOLR-14850: Correct the spelling in contributor's name."

This reverts commit be6d55ed74063f7e025579a5e1017e49ec2c0184.

trying to fix an unwanted merge


> ExactStatsCache NullPointerException when shards.tolerant=true
> --
>
> Key: SOLR-14850
> URL: https://issues.apache.org/jira/browse/SOLR-14850
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 8.6.2
>Reporter: Yevhen Tienkaiev
>Assignee: Andrzej Bialecki
>Priority: Critical
> Fix For: 8.7, 8.6.3
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> All derived classes of *ExactStatsCache* fail if *shards.tolerant* is set to 
> *true* and some shard is down.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:59)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.doMergeToGlobalStats(ExactStatsCache.java:104)
>   at 
> org.apache.solr.search.stats.StatsCache.mergeToGlobalStats(StatsCache.java:173)
>   at 
> org.apache.solr.handler.component.QueryComponent.updateStats(QueryComponent.java:713)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:630)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:605)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:457)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2606)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:812)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:588)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
>   at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at org.eclipse.jetty.server.Server.handle(Server.java:500)
>   at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
>   at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
>   at 
> org.eclipse.jetty.

[jira] [Commented] (SOLR-14835) Solr 8.6.x log starts with "XmlConfiguration Ignored arg" warning from Jetty

2020-09-29 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203835#comment-17203835
 ] 

Andrzej Bialecki commented on SOLR-14835:
-

As far as I can tell this is a bogus warning - the instrumented threadpool is 
still created correctly and it reports its metrics to the {{solr.jetty}} 
registry as expected.

I found the following information from the Jetty project that confirms this:
 * One of the comments on [Issue 
4631|https://github.com/eclipse/jetty.project/issues/4631#issuecomment-593655806]
 explicitly says that the configuration is still being processed and it's just 
the order of arguments in the config that causes this bogus warning. Note that 
this order of arguments is actually the one that is recommended in the 
documentation and changing it (so that the {{}} section comes later) 
produces fatal errors.
 * this issue further links to [PR 
4632|https://github.com/eclipse/jetty.project/pull/4632] that fixes this bogus 
warning in Jetty 9.4.28.

So for the 8.6.3 release I think we should simply add a note in CHANGES.txt 
saying that this is a bogus warning that can be ignored. Also, before 8.7 we 
should upgrade Jetty to at least 9.4.28 where this warning has been fixed.

> Solr 8.6.x log starts with "XmlConfiguration Ignored arg" warning from Jetty
> 
>
> Key: SOLR-14835
> URL: https://issues.apache.org/jira/browse/SOLR-14835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.2
>Reporter: Colvin Cowie
>Assignee: Andrzej Bialecki
>Priority: Trivial
>
> After moving to 8.6.2 the first lines of the solr.log are
> {noformat}
> 2020-09-06 18:19:09.164 INFO  (main) [   ] o.e.j.u.log Logging initialized 
> @1197ms to org.eclipse.jetty.util.log.Slf4jLog
> 2020-09-06 18:19:09.226 WARN  (main) [   ] o.e.j.u.l.o.e.j.x.XmlConfiguration 
> Ignored arg: 
>  class="com.codahale.metrics.jetty9.InstrumentedQueuedThreadPool"> name="registry">
>  class="com.codahale.metrics.SharedMetricRegistries">solr.jetty
>   
>   
> {noformat}
> This config is declared here: 
> https://github.com/apache/lucene-solr/blob/5154b6008f54c9d096f5efe9ae347492c23dd780/solr/server/etc/jetty.xml#L33
>  and has been there for a long time, so I assume it's the bump in Jetty 
> version that's causing it now.
> I'm seeing this in 8.6.2, but I've not gone back to check other versions



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14408) Refactor MoreLikeThisHandler Implementation

2020-09-29 Thread Nazerke Seidan (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203836#comment-17203836
 ] 

Nazerke Seidan commented on SOLR-14408:
---

Hello [~alessandro.benedetti],

 

No worries :) Regarding the InterestingTerm class, I have browsed the codebase 
and couldn't find any similar class. I think it is OK.

 

I hope we can progress quickly with the merge. 

 

> Refactor MoreLikeThisHandler Implementation
> ---
>
> Key: SOLR-14408
> URL: https://issues.apache.org/jira/browse/SOLR-14408
> Project: Solr
>  Issue Type: Improvement
>  Components: MoreLikeThis
>Reporter: Nazerke Seidan
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The main goal of this refactoring is for readability and accessibility of 
> MoreLikeThisHandler class. Current MoreLikeThisHandler class consists of two 
> static subclasses and accessing them later in MoreLikeThisComponent.  I 
> propose to have them as separate public classes. 
> cc: [~abenedetti], as you have had the recent commit for MLT, what do you 
> think about this?  Anyway, the code is ready for review. 
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9548) Publish master (9.x) snapshots to https://repository.apache.org

2020-09-29 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203838#comment-17203838
 ] 

Uwe Schindler commented on LUCENE-9548:
---

The Apache Jenkins servers have the key to publish in their config. If Gradle 
reads the standard Maven settings file to publish, it should work. In the 
default Gradle properties file in the home dir there is also a token for 
artifact publishing.

I am away from my computer, but can check later. Just tell me if I can 
generally enable the artifact builds (2 more jobs in Jenkins), and what tasks I 
should execute. Reconfiguring is easy.

About Maven, I'd set it up as a standard Maven deployment, maybe make the 
repository ID configurable. I don't have much knowledge here.

Uwe

> Publish master (9.x) snapshots to https://repository.apache.org
> ---
>
> Key: LUCENE-9548
> URL: https://issues.apache.org/jira/browse/LUCENE-9548
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We should start publishing snapshot JARs to Apache repositories. I'm not sure 
> how to set it all up with gradle but maybe there are other Apache projects 
> that use gradle and we could peek at their config? Mostly it's about signing 
> artifacts (how to pass credentials for signing) and setting up Nexus 
> deployment repository.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-9548) Publish master (9.x) snapshots to https://repository.apache.org

2020-09-29 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203838#comment-17203838
 ] 

Uwe Schindler edited comment on LUCENE-9548 at 9/29/20, 10:16 AM:
--

The Apache Jenkins servers have the key to publish in their config. If Gradle 
reads the standard Maven settings file to publish, it should work. In the 
default Gradle properties file in the home dir there is also a token for 
artifact publishing.

I am away from my computer, but can check later. Just tell me if I can 
generally enable the artifact builds (2 more jobs in Jenkins), and what tasks I 
should execute. Reconfiguring is easy.

About Maven, I'd set it up as a standard Maven deployment, maybe make the 
repository ID configurable. I don't have much knowledge here.

Uwe


was (Author: thetaphi):
The Apache Jenkins Servers have the key to publish in their config. If Gradle 
reads the standard Maven Settings file to publish it should work..in the 
default Gradle Properties file in home for is also some token for donator 
publishing.

I am away from computer, but can check later. Just tell me if I can generally 
enable the Artifact builds (2 more jobs in Jenkins), and what's tasks I should 
execute. Reconfigure is easy.

About Maven, I'd set it up as standard Maven Deployment, maybe make the 
repository I'd configurable. I have not.much knowledge here.

Uwe

> Publish master (9.x) snapshots to https://repository.apache.org
> ---
>
> Key: LUCENE-9548
> URL: https://issues.apache.org/jira/browse/LUCENE-9548
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We should start publishing snapshot JARs to Apache repositories. I'm not sure 
> how to set it all up with gradle but maybe there are other Apache projects 
> that use gradle and we could peek at their config? Mostly it's about signing 
> artifacts (how to pass credentials for signing) and setting up Nexus 
> deployment repository.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9548) Publish master (9.x) snapshots to https://repository.apache.org

2020-09-29 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203839#comment-17203839
 ] 

Dawid Weiss commented on LUCENE-9548:
-

If you take a look at the PR, Uwe, you'll see the credentials are passed as 
properties to the authenticator. Gradle doesn't read Maven settings files, but 
it's Groovy, so you could implement any required logic in the build script.

The command in the above comment (automatically generated by convention) will 
work. I think we can add a shorter alias task (that depends on the above), but 
that can come later. Jenkins configs can also come later. I'd try to publish a 
snapshot from a local machine first to see that it works and then decide how to 
handle the CI configs.
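
As an aside, and purely as an illustrative sketch rather than code from the PR, this is the kind of logic a Groovy build script could use to fall back to credentials from a local Maven settings.xml; the server id and property names below are assumptions:

{code:groovy}
// Hypothetical sketch only: if -PnexusUsername/-PnexusPassword were not given,
// try to read them from ~/.m2/settings.xml (the server id is an assumption).
def settingsXml = new File(System.getProperty("user.home"), ".m2/settings.xml")
if (!project.hasProperty("nexusUsername") && settingsXml.exists()) {
  // Navigate <settings><servers><server> entries and pick the Apache snapshots one.
  def server = new XmlSlurper().parse(settingsXml).servers.server
      .find { it.id.text() == "apache.snapshots.https" }
  if (server) {
    project.ext.nexusUsername = server.username.text()
    project.ext.nexusPassword = server.password.text()
  }
}
{code}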

> Publish master (9.x) snapshots to https://repository.apache.org
> ---
>
> Key: LUCENE-9548
> URL: https://issues.apache.org/jira/browse/LUCENE-9548
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We should start publishing snapshot JARs to Apache repositories. I'm not sure 
> how to set it all up with gradle but maybe there are other Apache projects 
> that use gradle and we could peek at their config? Mostly it's about signing 
> artifacts (how to pass credentials for signing) and setting up Nexus 
> deployment repository.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14850) ExactStatsCache NullPointerException when shards.tolerant=true

2020-09-29 Thread Yevhen Tienkaiev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203850#comment-17203850
 ] 

Yevhen Tienkaiev commented on SOLR-14850:
-

[~ab] thank you for the support and for pushing this, much appreciated!

> ExactStatsCache NullPointerException when shards.tolerant=true
> --
>
> Key: SOLR-14850
> URL: https://issues.apache.org/jira/browse/SOLR-14850
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 8.6.2
>Reporter: Yevhen Tienkaiev
>Assignee: Andrzej Bialecki
>Priority: Critical
> Fix For: 8.7, 8.6.3
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> All derived classes of *ExactStatsCache* fail if *shards.tolerant* is set to 
> *true* and some shard is down.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:59)
>   at 
> org.apache.solr.search.stats.ExactStatsCache.doMergeToGlobalStats(ExactStatsCache.java:104)
>   at 
> org.apache.solr.search.stats.StatsCache.mergeToGlobalStats(StatsCache.java:173)
>   at 
> org.apache.solr.handler.component.QueryComponent.updateStats(QueryComponent.java:713)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:630)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:605)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:457)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2606)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:812)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:588)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
>   at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at org.eclipse.jetty.server.Server.handle(Server.java:500)
>   at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
>   at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>   at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndP

[jira] [Commented] (SOLR-14151) Make schema components load from packages

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203854#comment-17203854
 ] 

ASF subversion and git services commented on SOLR-14151:


Commit 69c65ebe27c2e2afaae342db14c00ea22dcdb5c3 in lucene-solr's branch 
refs/heads/branch_8x from Ishan Chattopadhyaya
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=69c65eb ]

SOLR-14151: Making the schema volatile

Also, fixing the mess that resulted out of cherry-picking master commit to 
branch_8x that didn't go well.


> Make schema components load from packages
> -
>
> Key: SOLR-14151
> URL: https://issues.apache.org/jira/browse/SOLR-14151
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
>  Labels: packagemanager
> Fix For: 8.7
>
>  Time Spent: 13.5h
>  Remaining Estimate: 0h
>
> Example:
> {code:xml}
>  
> 
>   
>generateNumberParts="0" catenateWords="0"
>   catenateNumbers="0" catenateAll="0"/>
>   
>   
> 
>   
> {code}
> * When a package is updated, the entire {{IndexSchema}} object is refreshed, 
> but the SolrCore object is not reloaded
> * Any component can be prefixed with the package name
> * The semantics of loading plugins remain the same as that of the components 
> in {{solrconfig.xml}}
> * Plugins can be registered using schema API



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14151) Make schema components load from packages

2020-09-29 Thread Ishan Chattopadhyaya (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203856#comment-17203856
 ] 

Ishan Chattopadhyaya commented on SOLR-14151:
-

I think we're good now. Thanks [~noble.paul], [~erickerickson] and [~tflobbe] 
and all who helped. This is a very important feature!

[~worleydl], can you please test your plugins once more and make sure they 
still work?

> Make schema components load from packages
> -
>
> Key: SOLR-14151
> URL: https://issues.apache.org/jira/browse/SOLR-14151
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
>  Labels: packagemanager
> Fix For: 8.7
>
>  Time Spent: 13.5h
>  Remaining Estimate: 0h
>
> Example:
> {code:xml}
>  
> 
>   
>generateNumberParts="0" catenateWords="0"
>   catenateNumbers="0" catenateAll="0"/>
>   
>   
> 
>   
> {code}
> * When a package is updated, the entire {{IndexSchema}} object is refreshed, 
> but the SolrCore object is not reloaded
> * Any component can be prefixed with the package name
> * The semantics of loading plugins remain the same as that of the components 
> in {{solrconfig.xml}}
> * Plugins can be registered using schema API



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9548) Publish master (9.x) snapshots to https://repository.apache.org

2020-09-29 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203858#comment-17203858
 ] 

Uwe Schindler commented on LUCENE-9548:
---

Do you have snapshot repo access, or should I look it up on Jenkins? The 
problem is that I don't have my private key here in the hospital.

> Publish master (9.x) snapshots to https://repository.apache.org
> ---
>
> Key: LUCENE-9548
> URL: https://issues.apache.org/jira/browse/LUCENE-9548
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We should start publishing snapshot JARs to Apache repositories. I'm not sure 
> how to set it all up with gradle but maybe there are other Apache projects 
> that use gradle and we could peek at their config? Mostly it's about signing 
> artifacts (how to pass credentials for signing) and setting up Nexus 
> deployment repository.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9548) Publish master (9.x) snapshots to https://repository.apache.org

2020-09-29 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203859#comment-17203859
 ] 

Dawid Weiss commented on LUCENE-9548:
-

Uwe - recover first. This can wait. I don't have snapshot repo access but it's 
really not super urgent, I have lots of things in the backlog to do anyway.

> Publish master (9.x) snapshots to https://repository.apache.org
> ---
>
> Key: LUCENE-9548
> URL: https://issues.apache.org/jira/browse/LUCENE-9548
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We should start publishing snapshot JARs to Apache repositories. I'm not sure 
> how to set it all up with gradle but maybe there are other Apache projects 
> that use gradle and we could peek at their config? Mostly it's about signing 
> artifacts (how to pass credentials for signing) and setting up Nexus 
> deployment repository.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14835) Solr 8.6.x log starts with "XmlConfiguration Ignored arg" warning from Jetty

2020-09-29 Thread Cassandra Targett (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203881#comment-17203881
 ] 

Cassandra Targett commented on SOLR-14835:
--

I linked SOLR-14844 which is the issue to upgrade Jetty to 9.4.31.

> Solr 8.6.x log starts with "XmlConfiguration Ignored arg" warning from Jetty
> 
>
> Key: SOLR-14835
> URL: https://issues.apache.org/jira/browse/SOLR-14835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.2
>Reporter: Colvin Cowie
>Assignee: Andrzej Bialecki
>Priority: Trivial
>
> After moving to 8.6.2 the first lines of the solr.log are
> {noformat}
> 2020-09-06 18:19:09.164 INFO  (main) [   ] o.e.j.u.log Logging initialized 
> @1197ms to org.eclipse.jetty.util.log.Slf4jLog
> 2020-09-06 18:19:09.226 WARN  (main) [   ] o.e.j.u.l.o.e.j.x.XmlConfiguration 
> Ignored arg: 
>  class="com.codahale.metrics.jetty9.InstrumentedQueuedThreadPool"> name="registry">
>  class="com.codahale.metrics.SharedMetricRegistries">solr.jetty
>   
>   
> {noformat}
> This config is declared here: 
> https://github.com/apache/lucene-solr/blob/5154b6008f54c9d096f5efe9ae347492c23dd780/solr/server/etc/jetty.xml#L33
>  and has been there for a long time, so I assume it's the bump in Jetty 
> version that's causing it now.
> I'm seeing this in 8.6.2, but I've not gone back to check other versions



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14898) Proxied/Forwarded requests to other nodes wind up getting duplicate response headers

2020-09-29 Thread Cassandra Targett (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-14898:
-
Fix Version/s: 8.6.3

> Proxied/Forwarded requests to other nodes wind up getting duplicate response 
> headers
> 
>
> Key: SOLR-14898
> URL: https://issues.apache.org/jira/browse/SOLR-14898
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.3
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14898.patch
>
>
> When Solr receives a request for a collection not hosted on the current node, 
> HttpSolrCall forwards/proxies that request - but the final response for the 
> client can include duplicate response headers - one header from the remote 
> node that ultimately handled the request, and a second copy of the header 
> added by the current node...
> {noformat}
> # create a simple 2 node cluster...
> $ ./bin/solr -e cloud -noprompt
> # ...
> $ curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=solo&numShards=1&nrtReplicas=1'
> # ...
> # node 8983 is the node currently hosting the only replica of the 'solo' 
> collection, and responds to requests directly...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:8983/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 169
> # node 7574 does not host a replica, and forwards requests for it to 8983
> # the response the client gets from 7574 has several security related headers 
> duplicated...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:7574/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 197
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] iverase commented on pull request #1907: LUCENE-9538: Detect polygon self-intersections in the Tessellator

2020-09-29 Thread GitBox


iverase commented on pull request #1907:
URL: https://github.com/apache/lucene-solr/pull/1907#issuecomment-700679293


   Hi @nknize, I would like to add this for the upcoming release. Do you think 
you will have time to review it?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] munendrasn merged pull request #1371: SOLR-14333: print readable version of CollapsedPostFilter query

2020-09-29 Thread GitBox


munendrasn merged pull request #1371:
URL: https://github.com/apache/lucene-solr/pull/1371


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4510) when a test's heart beats it should also throw up (dump stack of all threads)

2020-09-29 Thread Michael McCandless (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-4510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203943#comment-17203943
 ] 

Michael McCandless commented on LUCENE-4510:


Boo!

> when a test's heart beats it should also throw up (dump stack of all threads)
> -
>
> Key: LUCENE-4510
> URL: https://issues.apache.org/jira/browse/LUCENE-4510
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Dawid Weiss
>Priority: Major
>
> We've had numerous cases where tests were hung but the "operator" of that 
> particular Jenkins instance struggles to properly get a stack dump for all 
> threads and eg accidentally kills the process instead (rather awful that the 
> same powerful tool "kill" can be used to get stack traces and to destroy the 
> process...).
> Is there some way the test infra could do this for us, eg when it prints the 
> HEARTBEAT message?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] mikemccand commented on pull request #1912: LUCENE-9535: Try to do larger flushes.

2020-09-29 Thread GitBox


mikemccand commented on pull request #1912:
URL: https://github.com/apache/lucene-solr/pull/1912#issuecomment-700718570


   > > We are talking about assigning DWPT to incoming indexing thread, right?
   > 
   > Right.
   > 
   > The thing that makes me hesitate is that it would probably make fetching a 
DWPT slightly more costly, which could slow indexing down with very fast 
indexing chains like the 1kb wikipedia documents dataset we use for nightly 
benchmarks since this method is called under a lock.
   
   Let's see if benchmarks really notice that?  Net/net I think the change is a 
good idea.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14333) Implement toString() in CollapsingPostFilter

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203945#comment-17203945
 ] 

ASF subversion and git services commented on SOLR-14333:


Commit 1dba76c0d31b6f0294c1f257e5a1fc51a722b82f in lucene-solr's branch 
refs/heads/master from Guna Sekhar Dora Kovvuru
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1dba76c ]

SOLR-14333: Implement toString in Collapse filter  (#1371)



> Implement toString() in CollapsingPostFilter
> 
>
> Key: SOLR-14333
> URL: https://issues.apache.org/jira/browse/SOLR-14333
> Project: Solr
>  Issue Type: Improvement
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> {{toString()}} is not overridden in CollapsingPostFilter. Debug component 
> returns {{parsed_filter_queries}}, for multiple CollapsingPostFilter in 
> request, value in {{parsed_filter_queries}} is always 
> {{CollapsingPostFilter()}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] munendrasn merged pull request #1775: SOLR-14767 : fix long field parsing from string

2020-09-29 Thread GitBox


munendrasn merged pull request #1775:
URL: https://github.com/apache/lucene-solr/pull/1775


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14767) Adding document in XML format fails with NumberFormatException

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203948#comment-17203948
 ] 

ASF subversion and git services commented on SOLR-14767:


Commit 63f0b6b706dcbc8e92a8ff3ee8b81d6d6900aa67 in lucene-solr's branch 
refs/heads/master from Apoorv Bhawsar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=63f0b6b ]

SOLR-14767 : Fix NumberFormatException when int/long field value is floating 
num (#1775)
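
For context, a minimal illustrative sketch (not the committed patch) of the parsing problem: {{Long.parseLong("14.0")}} throws {{NumberFormatException}}, so a tolerant parse has to go through a decimal representation first:

{code:java}
import java.math.BigDecimal;

// Illustrative sketch only, not the committed change.
public class LenientLongParseSketch {
  static long parseLenient(String raw) {
    try {
      return Long.parseLong(raw);                   // plain integer strings
    } catch (NumberFormatException e) {
      // "14.0" parses to 14; a value with a non-zero fraction (e.g. "14.5")
      // still fails, but with ArithmeticException from longValueExact().
      return new BigDecimal(raw).longValueExact();
    }
  }

  public static void main(String[] args) {
    System.out.println(parseLenient("14.0")); // prints 14
  }
}
{code}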



> Adding document in XML format fails with NumberFormatException
> --
>
> Key: SOLR-14767
> URL: https://issues.apache.org/jira/browse/SOLR-14767
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Indexing the following document
> {code:java}
> <doc>
>   <field name="id">001</field>
>   <field name="stars_i">14.0</field>
> </doc>
> {code}
> fails with following stack trace
> {code:java}
> ERROR: [doc=001] Error adding field 'stars_i'='14.0' msg=For input string: 
> "14.0"
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:221)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:109)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:975)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:341)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:288)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:235)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessorFactory$RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:73)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.NestedUpdateProcessorFactory$NestedUpdateProcessor.processAdd(NestedUpdateProcessorFactory.java:79)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:259)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doVersionAdd(DistributedUpdateProcessor.java:498)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.lambda$versionAdd$0(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.VersionBucket.runWithLock(VersionBucket.java:50)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:225)
>   at 
> org.apache.solr.update.processor.DistributedZkUpdateProcessor.processAdd(DistributedZkUpdateProcessor.java:245)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:106)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:481)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProce

[GitHub] [lucene-solr] mikemccand commented on a change in pull request #1925: Cleanup DWPT state handling

2020-09-29 Thread GitBox


mikemccand commented on a change in pull request #1925:
URL: https://github.com/apache/lucene-solr/pull/1925#discussion_r496729477



##
File path: 
lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
##
@@ -157,49 +157,42 @@ private boolean updatePeaks(long delta) {
   }
 
   DocumentsWriterPerThread doAfterDocument(DocumentsWriterPerThread perThread, 
boolean isUpdate) {
-final long delta = perThread.getCommitLastBytesUsedDelta();
+final long delta = perThread.commitLastBytesUsed();
 synchronized (this) {
-  // we need to commit this under lock but calculate it outside of the 
lock to minimize the time this lock is held
-  // per document. The reason we update this under lock is that we mark 
DWPTs as pending without acquiring it's
-  // lock in #setFlushPending and this also reads the committed bytes and 
modifies the flush/activeBytes.
-  // In the future we can clean this up to be more intuitive.

Review comment:
   ;)

##
File path: lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java
##
@@ -446,7 +446,7 @@ long updateDocuments(final Iterable
   private boolean doFlush(DocumentsWriterPerThread flushingDWPT) throws 
IOException {
 boolean hasEvents = false;
 while (flushingDWPT != null) {
-  assert flushingDWPT.hasFlushed() == false;
+  assert flushingDWPT.getState() == 
DocumentsWriterPerThread.State.FLUSHING : "expected FLUSHING but was: " + 
flushingDWPT.getState();

Review comment:
   Maybe add a `boolean assertState(State)` to DWPT that does the above 
`assert` and then returns `true` so we can call it under `assert`?  Only 
downside is then we have double looking `assert`, i.e. `assert 
flushingDWPT.assertState(State.FLUSHING)`.  Looks like we are doing this in at 
least three places here ...
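
Roughly what the suggestion amounts to, as a sketch (the enum and class here are stand-ins, not the actual DocumentsWriterPerThread):

{code:java}
// Stand-in sketch of the suggested helper, not Lucene's actual DWPT code.
class WriterStateSketch {
  enum State { ACTIVE, FLUSH_PENDING, FLUSHING, FLUSHED }

  private volatile State state = State.ACTIVE;

  State getState() {
    return state;
  }

  // Always returns true so callers can write: assert writer.assertState(State.FLUSHING);
  boolean assertState(State expected) {
    assert state == expected : "expected " + expected + " but was: " + state;
    return true;
  }
}
{code}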

##
File path: 
lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
##
@@ -157,49 +157,42 @@ private boolean updatePeaks(long delta) {
   }
 
   DocumentsWriterPerThread doAfterDocument(DocumentsWriterPerThread perThread, 
boolean isUpdate) {
-final long delta = perThread.getCommitLastBytesUsedDelta();
+final long delta = perThread.commitLastBytesUsed();
 synchronized (this) {
-  // we need to commit this under lock but calculate it outside of the 
lock to minimize the time this lock is held
-  // per document. The reason we update this under lock is that we mark 
DWPTs as pending without acquiring it's
-  // lock in #setFlushPending and this also reads the committed bytes and 
modifies the flush/activeBytes.
-  // In the future we can clean this up to be more intuitive.
-  perThread.commitLastBytesUsed(delta);
   try {
 /*
  * We need to differentiate here if we are pending since 
setFlushPending
  * moves the perThread memory to the flushBytes and we could be set to
  * pending during a delete
  */
-if (perThread.isFlushPending()) {
-  flushBytes += delta;
-  assert updatePeaks(delta);
-} else {
-  activeBytes += delta;
-  assert updatePeaks(delta);
+activeBytes += delta;

Review comment:
   Hmm, why are we removing the conditional on "flush pending" here (and 
always updating `activeBytes`, never `flushBytes`)?  I guess `setFlushPending` 
will move all `activeBytes` over to `flushBytes`, so it is fine for us to 
always increment `activeBytes` here?
   
   Edit: hmm it looks like we are also removing `setFlushPending`'s moving of 
`activeBytes` to `flushBytes` ;)  So now I don't quite understand how we are 
changing the RAM accounting here.
   
   Edit again!: OK I see, we moved the RAM shifting into 
`checkoutFlushableWriter`, OK good!
   
   Should we fix (or maybe just remove) the above comment now?
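
As a generic illustration of the invariant being discussed (class and field names are stand-ins, not the actual DocumentsWriterFlushControl): per-writer bytes count as active until the writer is checked out for flushing, at which point the same amount moves to the flush pool exactly once:

{code:java}
// Generic stand-in sketch of the active/flush byte accounting, not Lucene's code.
class FlushAccountingSketch {
  private long activeBytes;
  private long flushBytes;

  // After each indexed document: the writer's RAM growth counts as active.
  synchronized void afterDocument(long delta) {
    activeBytes += delta;
  }

  // When a writer is selected for flushing: move its RAM to the flush pool once.
  synchronized void checkoutForFlush(long writerBytes) {
    activeBytes -= writerBytes;
    flushBytes += writerBytes;
  }

  // When the flush completes: release the flushed RAM.
  synchronized void afterFlush(long writerBytes) {
    flushBytes -= writerBytes;
  }
}
{code}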

##
File path: 
lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
##
@@ -295,27 +289,32 @@ public synchronized void waitForFlush() {
* {@link DocumentsWriterPerThread} must have indexed at least on Document 
and must not be
* already pending.
*/
-  public synchronized void setFlushPending(DocumentsWriterPerThread perThread) 
{
-assert !perThread.isFlushPending();
+  synchronized void setFlushPending(DocumentsWriterPerThread perThread) {
+assert !perThread.isFlushPending() : "state: " + perThread.getState();
 if (perThread.getNumDocsInRAM() > 0) {
-  perThread.setFlushPending(); // write access synced
-  final long bytes = perThread.getLastCommittedBytesUsed();
-  flushBytes += bytes;
-  activeBytes -= bytes;

Review comment:
   Hmm we are no longer moving `bytes` from `activeBytes` to `flushedBytes` 
here?  Are we (the caller here?) doing that elsewhere?

##
File path: 
lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThread.java
##
@@ -513,48 +515,21 @@ public String toString() {
 + nu

[jira] [Commented] (LUCENE-4510) when a test's heart beats it should also throw up (dump stack of all threads)

2020-09-29 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-4510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203955#comment-17203955
 ] 

Dawid Weiss commented on LUCENE-4510:
-

You can boo at gradle folks, Mike. :) I can't rule out the need for writing a 
custom test launcher (gradle-side) but until somebody does it we're stuck with 
what's available in the build system's infrastructure.


> when a test's heart beats it should also throw up (dump stack of all threads)
> -
>
> Key: LUCENE-4510
> URL: https://issues.apache.org/jira/browse/LUCENE-4510
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Dawid Weiss
>Priority: Major
>
> We've had numerous cases where tests were hung but the "operator" of that 
> particular Jenkins instance struggles to properly get a stack dump for all 
> threads and eg accidentally kills the process instead (rather awful that the 
> same powerful tool "kill" can be used to get stack traces and to destroy the 
> process...).
> Is there some way the test infra could do this for us, eg when it prints the 
> HEARTBEAT message?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9401) ComplexPhraseQuery's toString method always omits field name

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203956#comment-17203956
 ] 

ASF subversion and git services commented on LUCENE-9401:
-

Commit b9c7f50b6ece0bc2250482a8aa294ff876942ca2 in lucene-solr's branch 
refs/heads/master from Munendra S N
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b9c7f50 ]

LUCENE-9401: include field in the complex pharse query's toString
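
The convention involved, as an illustrative sketch (placeholder class and fields, not the committed patch): the field prefix is printed only when the query's field differs from the default field passed in:

{code:java}
// Placeholder sketch of the usual Query.toString(String field) convention;
// not ComplexPhraseQuery's actual implementation.
class FieldPrefixToStringSketch {
  private final String field;
  private final String phrase;

  FieldPrefixToStringSketch(String field, String phrase) {
    this.field = field;
    this.phrase = phrase;
  }

  public String toString(String defaultField) {
    StringBuilder sb = new StringBuilder();
    if (!field.equals(defaultField)) {
      // Only include the prefix when it differs from the caller's default field.
      sb.append(field).append(':');
    }
    return sb.append('"').append(phrase).append('"').toString();
  }
}
{code}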


> ComplexPhraseQuery's toString method always omits field name
> 
>
> Key: LUCENE-9401
> URL: https://issues.apache.org/jira/browse/LUCENE-9401
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 8.5.2
>Reporter: Thomas Hecker
>Assignee: Munendra S N
>Priority: Trivial
> Attachments: LUCENE-9401.patch, LUCENE-9401.patch
>
>
> The toString(String field) method in 
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery
> should only omit the field name if the query's field name is equal to the field 
> name that is passed as an argument.
> Instead, the query's field name is never included in the returned String.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-8343) zkcli.sh support for SSL enabled ZK communication

2020-09-29 Thread Andras Salamon (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203958#comment-17203958
 ] 

Andras Salamon commented on SOLR-8343:
--

I'm testing Solr 8.4.1 + ZooKeeper 3.5.5 with SSL, and {{zkcli.sh}} worked 
after I set the following:
{noformat}
export ZKCLI_JVM_FLAGS="-Dzookeeper.client.secure=true 
-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty 
-Dzookeeper.ssl.keyStore.location=/path/to/keystore.jks 
-Dzookeeper.ssl.keyStore.password=REDACTED 
-Dzookeeper.ssl.trustStore.location=/path/to/truststore.jks 
-Dzookeeper.ssl.trustStore.password=REDACTED"{noformat}
The command:
{noformat}
zkcli.sh -zkhost :2182/solr -cmd getfile /solr.xml /tmp/solr.xml{noformat}
2182 is the secure port of our ZooKeeper (2181 is the insecure one). I had to use 
the FQDN in zkhost; localhost does not work with SSL.

What does "native support" mean in this Jira description? There is a commented 
out section for ZK ACL settings? Should we add an other commented out section 
to help the usage? Or some new environment variables could be used here to 
setup ZKCLI_JVM_FLAGS?

> zkcli.sh support for SSL enabled ZK communication
> -
>
> Key: SOLR-8343
> URL: https://issues.apache.org/jira/browse/SOLR-8343
> Project: Solr
>  Issue Type: Sub-task
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Priority: Major
>
> If communicating with a secured ZooKeeper, {{zkcli.sh}} script should have 
> native support for specifying the needed configurations, ref 
> https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-8342) Easy setup of ZooKeeper SSL in bin/solr

2020-09-29 Thread Andras Salamon (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203962#comment-17203962
 ] 

Andras Salamon commented on SOLR-8342:
--

Let me revive this old Jira. Right now Solr is using ZooKeeper 3.6.2. I'm happy 
to test this change if it helps.

> Easy setup of ZooKeeper SSL in bin/solr
> ---
>
> Key: SOLR-8342
> URL: https://issues.apache.org/jira/browse/SOLR-8342
> Project: Solr
>  Issue Type: Sub-task
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Attachments: SOLR_8342.patch
>
>
> Start scripts should support configuring ZooKeeper SSL. See 
> https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] mikemccand commented on a change in pull request #1928: LUCENE-9444: Improve test coverage for TaxonomyFacetLabels

2020-09-29 Thread GitBox


mikemccand commented on a change in pull request #1928:
URL: https://github.com/apache/lucene-solr/pull/1928#discussion_r496742525



##
File path: lucene/facet/src/test/org/apache/lucene/facet/FacetTestCase.java
##
@@ -70,32 +70,51 @@ public Facets getTaxonomyFacetCounts(TaxonomyReader 
taxoReader, FacetsConfig con
*
* @param taxoReader {@link TaxonomyReader} used to read taxonomy during 
search. This instance is expected to be open for reading.
* @param fc {@link FacetsCollector} A collector with matching hits.
   * @return {@code List<List<FacetLabel>>} where outer list has one non-null 
entry per document
+   * @param dimension  facet dimension for which labels are requested. A null 
value fetches labels for all dimensions.
+   * @return {@code List<List<FacetLabel>>} where outer list has one non-null 
entry per document.
* and inner list contain all {@link FacetLabel} entries that belong to a 
document.
* @throws IOException when a low-level IO issue occurs.
*/
-  public List<List<FacetLabel>> getAllTaxonomyFacetLabels(TaxonomyReader 
taxoReader, FacetsCollector fc) throws IOException {
+  public List<List<FacetLabel>> getAllTaxonomyFacetLabels(String dimension, 
TaxonomyReader taxoReader, FacetsCollector fc) throws IOException {
 List<List<FacetLabel>> actualLabels = new ArrayList<>();
 TaxonomyFacetLabels taxoLabels = new TaxonomyFacetLabels(taxoReader, 
FacetsConfig.DEFAULT_INDEX_FIELD_NAME);
-
 for (MatchingDocs m : fc.getMatchingDocs()) {
   FacetLabelReader facetLabelReader = 
taxoLabels.getFacetLabelReader(m.context);
-
   DocIdSetIterator disi = m.bits.iterator();
   while (disi.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
-List<FacetLabel> facetLabels = new ArrayList<>();
-int docId = disi.docID();
-FacetLabel facetLabel = facetLabelReader.nextFacetLabel(docId);
-while (facetLabel != null) {
-  facetLabels.add(facetLabel);
-  facetLabel = facetLabelReader.nextFacetLabel(docId);
-}
-actualLabels.add(facetLabels);
+actualLabels.add(allFacetLabels(disi.docID(), dimension, 
facetLabelReader));
   }
 }
 return actualLabels;
   }
 
+  /**
+   * Utility method to get all facet labels for an input docId and dimension 
using the supplied
+   * {@link FacetLabelReader}.
+   *
+   * @param docId docId for which facet labels are needed.
+   * @param dimension Retain facet labels for supplied dimension only. A null 
value fetches all facet labels.
+   * @param facetLabelReader {@FacetLabelReader} instance use to get facet 
labels for input docId.
+   * @return {@code List<FacetLabel>} containing matching facet labels.
+   * @throws IOException when a low-level IO issue occurs while reading facet 
labels.
+   */
+  List<FacetLabel> allFacetLabels(int docId, String dimension, 
FacetLabelReader facetLabelReader) throws IOException {
+List<FacetLabel> facetLabels = new ArrayList<>();
+FacetLabel facetLabel;
+if (dimension != null) {
+  for (facetLabel = facetLabelReader.nextFacetLabel(docId, dimension); 
facetLabel != null; ){

Review comment:
   Missing space before `{`?

##
File path: 
lucene/facet/src/test/org/apache/lucene/facet/taxonomy/TestTaxonomyFacetCounts.java
##
@@ -696,10 +696,9 @@ public void testRandom() throws Exception {
   } else {
 expectedCounts[j].put(doc.dims[j], v.intValue() + 1);
   }
-
   // Add document facet labels
   facetLabels.add(new FacetLabel("dim" + j, doc.dims[j]));
-}
+ }

Review comment:
   Hmm remove this added space?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14333) Implement toString() in CollapsingPostFilter

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203964#comment-17203964
 ] 

ASF subversion and git services commented on SOLR-14333:


Commit abe0163b271a76c255e8aa31aaa7673207723d61 in lucene-solr's branch 
refs/heads/branch_8x from Guna Sekhar Dora Kovvuru
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=abe0163 ]

SOLR-14333: Implement toString in Collapse filter  (#1371)



> Implement toString() in CollapsingPostFilter
> 
>
> Key: SOLR-14333
> URL: https://issues.apache.org/jira/browse/SOLR-14333
> Project: Solr
>  Issue Type: Improvement
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> {{toString()}} is not overridden in CollapsingPostFilter. The debug component 
> returns {{parsed_filter_queries}}; for multiple CollapsingPostFilter instances in 
> a request, the value in {{parsed_filter_queries}} is always 
> {{CollapsingPostFilter()}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9401) ComplexPhraseQuery's toString method always omits field name

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203966#comment-17203966
 ] 

ASF subversion and git services commented on LUCENE-9401:
-

Commit 03a5391e511d060955701bfd75fb713bad0113ec in lucene-solr's branch 
refs/heads/branch_8x from Munendra S N
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=03a5391 ]

LUCENE-9401: include field in the complex pharse query's toString


> ComplexPhraseQuery's toString method always omits field name
> 
>
> Key: LUCENE-9401
> URL: https://issues.apache.org/jira/browse/LUCENE-9401
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 8.5.2
>Reporter: Thomas Hecker
>Assignee: Munendra S N
>Priority: Trivial
> Attachments: LUCENE-9401.patch, LUCENE-9401.patch
>
>
> The toString(String field) method in 
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery
> should only omit the field name if the query's field name is equal to the field 
> name that is passed as an argument.
> Instead, the query's field name is never included in the returned String.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14767) Adding document in XML format fails with NumberFormatException

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203965#comment-17203965
 ] 

ASF subversion and git services commented on SOLR-14767:


Commit b2028defdc035b6a55b9ce0abf9e049f60f05dc6 in lucene-solr's branch 
refs/heads/branch_8x from Apoorv Bhawsar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b2028de ]

SOLR-14767 : Fix NumberFormatException when int/long field value is floating 
num (#1775)



> Adding document in XML format fails with NumberFormatException
> --
>
> Key: SOLR-14767
> URL: https://issues.apache.org/jira/browse/SOLR-14767
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Indexing the following document
> {code:java}
> <doc>
>   <field name="id">001</field>
>   <field name="stars_i">14.0</field>
> </doc>
> {code}
> fails with following stack trace
> {code:java}
> ERROR: [doc=001] Error adding field 'stars_i'='14.0' msg=For input string: 
> "14.0"
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:221)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:109)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:975)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:341)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:288)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:235)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessorFactory$RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:73)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.NestedUpdateProcessorFactory$NestedUpdateProcessor.processAdd(NestedUpdateProcessorFactory.java:79)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:259)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doVersionAdd(DistributedUpdateProcessor.java:498)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.lambda$versionAdd$0(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.VersionBucket.runWithLock(VersionBucket.java:50)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:225)
>   at 
> org.apache.solr.update.processor.DistributedZkUpdateProcessor.processAdd(DistributedZkUpdateProcessor.java:245)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:106)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:481)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestPr

[jira] [Commented] (LUCENE-4510) when a test's heart beats it should also throw up (dump stack of all threads)

2020-09-29 Thread Michael McCandless (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-4510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203970#comment-17203970
 ] 

Michael McCandless commented on LUCENE-4510:


Thanks [~dweiss] :)

We always have a workaround for this – manually track down the JVM PIDs and 
{{kill -QUIT}} them, if we can log onto the CI build box in time.
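
For reference, an in-process alternative is possible with plain JDK APIs; a minimal sketch (independent of any test runner) of dumping all thread stacks from inside the JVM:

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Minimal sketch: print every live thread's stack from within the JVM,
// the kind of output a heartbeat hook could emit instead of an external kill -QUIT.
public class ThreadDumpSketch {
  public static void main(String[] args) {
    ThreadMXBean threads = ManagementFactory.getThreadMXBean();
    for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
      System.out.println('"' + info.getThreadName() + "\" " + info.getThreadState());
      for (StackTraceElement frame : info.getStackTrace()) {
        System.out.println("    at " + frame);
      }
      System.out.println();
    }
  }
}
{code}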

> when a test's heart beats it should also throw up (dump stack of all threads)
> -
>
> Key: LUCENE-4510
> URL: https://issues.apache.org/jira/browse/LUCENE-4510
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Dawid Weiss
>Priority: Major
>
> We've had numerous cases where tests were hung but the "operator" of that 
> particular Jenkins instance struggles to properly get a stack dump for all 
> threads and eg accidentally kills the process instead (rather awful that the 
> same powerful tool "kill" can be used to get stack traces and to destroy the 
> process...).
> Is there some way the test infra could do this for us, eg when it prints the 
> HEARTBEAT message?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14767) Adding document in XML format fails with NumberFormatException

2020-09-29 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-14767.
-
Fix Version/s: 8.7
   Resolution: Fixed

Thanks [~apoorvprecisely]

> Adding document in XML format fails with NumberFormatException
> --
>
> Key: SOLR-14767
> URL: https://issues.apache.org/jira/browse/SOLR-14767
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.7
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Indexing the following document
> {code:java}
> <doc>
>   <field name="id">001</field>
>   <field name="stars_i">14.0</field>
> </doc>
> {code}
> fails with following stack trace
> {code:java}
> ERROR: [doc=001] Error adding field 'stars_i'='14.0' msg=For input string: 
> "14.0"
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:221)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:109)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:975)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:341)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:288)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:235)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessorFactory$RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:73)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.NestedUpdateProcessorFactory$NestedUpdateProcessor.processAdd(NestedUpdateProcessorFactory.java:79)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:259)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doVersionAdd(DistributedUpdateProcessor.java:498)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.lambda$versionAdd$0(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.VersionBucket.runWithLock(VersionBucket.java:50)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:225)
>   at 
> org.apache.solr.update.processor.DistributedZkUpdateProcessor.processAdd(DistributedZkUpdateProcessor.java:245)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:106)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:481)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   a

[jira] [Updated] (LUCENE-9401) ComplexPhraseQuery's toString method always omits field name

2020-09-29 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated LUCENE-9401:
-
Fix Version/s: 8.7
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~tomhecker]

> ComplexPhraseQuery's toString method always omits field name
> 
>
> Key: LUCENE-9401
> URL: https://issues.apache.org/jira/browse/LUCENE-9401
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 8.5.2
>Reporter: Thomas Hecker
>Assignee: Munendra S N
>Priority: Trivial
> Fix For: 8.7
>
> Attachments: LUCENE-9401.patch, LUCENE-9401.patch
>
>
> The toString(String field) method in 
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery
> should only omit the field name if the query's field name is equal to the field 
> name that is passed as an argument.
> Instead, the query's field name is never included in the returned String.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14333) Implement toString() in CollapsingPostFilter

2020-09-29 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-14333.
-
Fix Version/s: 8.7
   Resolution: Fixed

Thanks [~kgsdora]

> Implement toString() in CollapsingPostFilter
> 
>
> Key: SOLR-14333
> URL: https://issues.apache.org/jira/browse/SOLR-14333
> Project: Solr
>  Issue Type: Improvement
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.7
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> {{toString()}} is not overridden in CollapsingPostFilter. The debug component 
> returns {{parsed_filter_queries}}; for multiple CollapsingPostFilter instances in 
> a request, the value in {{parsed_filter_queries}} is always 
> {{CollapsingPostFilter()}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14897) HttpSolrCall will forward a virtually unlimited number of times until ClusterState ZkWatcher is updated after collection delete

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203982#comment-17203982
 ] 

ASF subversion and git services commented on SOLR-14897:


Commit 3dcb19f88670b954e0956d0764853effedd923ef in lucene-solr's branch 
refs/heads/master from Munendra S N
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3dcb19f ]

SOLR-14897: limit no of forwarding for given request

* Irrespective of active or down replicas, restrict no of forwarding of request.
  Previously, this restriction was applied only if active is not found
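
A rough sketch of the change's effect (not the exact committed diff; names follow the snippet quoted below): consult the forward count before looking up a core URL at all, so the limit applies whether or not the local state still claims an active replica:

{code:java}
// Rough sketch only, mirroring the snippet in the issue description below.
if (queryParams.getInt(INTERNAL_REQUEST_COUNT, 0) > totalReplicas) {
  throw new SolrException(SolrException.ErrorCode.INVALID_STATE,
      "No active replicas found for collection: " + collectionName);
}
String coreUrl = getCoreUrl(collectionName, origCorename, clusterState,
    activeSlices, byCoreName, true);
if (coreUrl == null) {
  coreUrl = getCoreUrl(collectionName, origCorename, clusterState,
      activeSlices, byCoreName, false);
}
{code}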


> HttpSolrCall will forward a virtually unlimited number of times until 
> ClusterState ZkWatcher is updated after collection delete
> ---
>
> Key: SOLR-14897
> URL: https://issues.apache.org/jira/browse/SOLR-14897
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14897.patch
>
>
> While investigating the root cause of some SOLR-14896 related failures, I 
> have seen evidence that if a collection is deleted, but a client makes a 
> subsequent request for that collection _before_ the local ClusterState has 
> been updated to remove that DocCollection, HttpSolrCall will forward/proxy 
> that request a (virtually) unbounded number of times in a very short time 
> period - stopping only once the "cached" local DocCollection is updated 
> to indicate there are no active replicas.
> While HttpSolrCall does track & increment a {{_forwardedCount}} param on 
> every request it forwards, it doesn't consult that param unless/until it 
> finds a situation where the (local) DocCollection says there are no active 
> replicas.
> So if you have a collection XX with 4 total replicas on 4 diff nodes 
> (A,B,C,D), and you delete XX (triggering sequential core deletions on 
> A,B,C,D that fire successive ZkWatchers on various nodes to update the 
> collection state), a request for XX can bounce back and forth between nodes C 
> & D 20+ times until the ClusterState watcher fires on both of those nodes so 
> they finally realize that the {{_forwardedCount=20}} is more than the 0 active 
> replicas...
> In the below code snippet from HttpSolrCall, the first call to 
> {{getCoreUrl(...)}} is expected to return null if there are no active 
> replicas - but it uses the local cached DocCollection, which may _think_ 
> there is an active replica on another node, so it forwards the request to 
> that node - where the replica may have been deleted, so that node runs the 
> same code and may forward the request right back to the original node:
> {code:java}
> String coreUrl = getCoreUrl(collectionName, origCorename, clusterState,
> activeSlices, byCoreName, true);
> // Avoid getting into a recursive loop of requests being forwarded by
> // stopping forwarding and erroring out after (totalReplicas) forwards
> if (coreUrl == null) {
>   if (queryParams.getInt(INTERNAL_REQUEST_COUNT, 0) > totalReplicas){
> throw new SolrException(SolrException.ErrorCode.INVALID_STATE,
> "No active replicas found for collection: " + collectionName);
>   }
>   coreUrl = getCoreUrl(collectionName, origCorename, clusterState,
>   activeSlices, byCoreName, false);
> }
> {code}
> ..the check that is supposed to prevent a "recursive loop" is only consulted 
> once a situation arises where local ClusterState indicates there are no 
> active replicas - which seems to defeat the point of the forward check?  (at 
> which point, if the total number of replicas hasn't been exceeded, the code is 
> happy to forward the request to a coreUrl which the local ClusterState 
> indicates is _not_ active, which also seems to defeat the point?)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Assigned] (SOLR-14897) HttpSolrCall will forward a virtually unlimited number of times until ClusterState ZkWatcher is updated after collection delete

2020-09-29 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N reassigned SOLR-14897:
---

Assignee: Munendra S N

> HttpSolrCall will forward a virtually unlimited number of times until 
> ClusterState ZkWatcher is updated after collection delete
> ---
>
> Key: SOLR-14897
> URL: https://issues.apache.org/jira/browse/SOLR-14897
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Assignee: Munendra S N
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14897.patch
>
>
> While investigating the root cause of some SOLR-14896 related failures, I 
> have seen evidence that if a collection is deleted, but a client makes a 
> subsequent request for that collection _before_ the local ClusterState has 
> been updated to remove that DocCollection, HttpSolrCall will forward/proxy 
> that request a (virtually) unbounded number of times in a very short time 
> period - stopping only once the "cached" local DocCollection is updated 
> to indicate there are no active replicas.
> While HttpSolrCall does track & increment a {{_forwardedCount}} param on 
> every request it forwards, it doesn't consult that param unless/until it 
> finds a situation where the (local) DocCollection says there are no active 
> replicas.
> So if you have a collection XX with 4 total replicas on 4 diff nodes 
> (A,B,C,D), and you delete XX (triggering sequential core deletions on 
> A,B,C,D that fire successive ZkWatchers on various nodes to update the 
> collection state), a request for XX can bounce back and forth between nodes C 
> & D 20+ times until the ClusterState watcher fires on both of those nodes so 
> they finally realize that the {{_forwardedCount=20}} is more than the 0 active 
> replicas...
> In the below code snippet from HttpSolrCall, the first call to 
> {{getCoreUrl(...)}} is expected to return null if there are no active 
> replicas - but it uses the local cached DocCollection, which may _think_ 
> there is an active replica on another node, so it forwards the request to 
> that node - where the replica may have been deleted, so that node runs the 
> same code and may forward the request right back to the original node:
> {code:java}
> String coreUrl = getCoreUrl(collectionName, origCorename, clusterState,
> activeSlices, byCoreName, true);
> // Avoid getting into a recursive loop of requests being forwarded by
> // stopping forwarding and erroring out after (totalReplicas) forwards
> if (coreUrl == null) {
>   if (queryParams.getInt(INTERNAL_REQUEST_COUNT, 0) > totalReplicas){
> throw new SolrException(SolrException.ErrorCode.INVALID_STATE,
> "No active replicas found for collection: " + collectionName);
>   }
>   coreUrl = getCoreUrl(collectionName, origCorename, clusterState,
>   activeSlices, byCoreName, false);
> }
> {code}
> ..the check that is supposed to prevent a "recursive loop" is only consulted 
> once a situation arises where local ClusterState indicates there are no 
> active replicas - which seems to defeat the point of the forward check?  (at 
> which point, if the total number of replicas hasn't been exceeded, the code is 
> happy to forward the request to a coreUrl which the local ClusterState 
> indicates is _not_ active, which also seems to defeat the point?)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14898) Proxied/Forwarded requests to other nodes wind up getting duplicate response headers

2020-09-29 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203985#comment-17203985
 ] 

Munendra S N commented on SOLR-14898:
-

LGTM, +1 to committing this

{quote}writing a junit test proved (virtually) impossible due to 
SOLR-14903.{quote}
+1. In case we need to enable the new test without SOLR-14903, we might 
need to hardcode the headers 
[here|https://github.com/apache/lucene-solr/blob/3dcb19f88670b954e0956d0764853effedd923ef/solr/core/src/java/org/apache/solr/client/solrj/embedded/JettySolrRunner.java#L403]
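
For illustration, a generic servlet-filter sketch of hardcoding those headers (not tied to JettySolrRunner's actual wiring; header values taken from the issue description below):

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Illustrative sketch only: setHeader replaces any previously set value,
// so each of these headers appears once on the response at this point.
public class SecurityHeadersFilterSketch implements Filter {
  @Override
  public void init(FilterConfig config) {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    HttpServletResponse httpResp = (HttpServletResponse) resp;
    httpResp.setHeader("X-Content-Type-Options", "nosniff");
    httpResp.setHeader("X-Frame-Options", "SAMEORIGIN");
    httpResp.setHeader("X-XSS-Protection", "1; mode=block");
    chain.doFilter(req, resp);
  }

  @Override
  public void destroy() {}
}
{code}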


> Proxied/Forwarded requests to other nodes wind up getting duplicate response 
> headers
> 
>
> Key: SOLR-14898
> URL: https://issues.apache.org/jira/browse/SOLR-14898
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.3
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14898.patch
>
>
> When Solr receives a request for a collection not hosted on the current node, 
> HttpSolrCall forwards/proxies that request - but the final response for the 
> client can include duplicate response headers - one header from the remote 
> node that ultimately handled the request, and a second copy of the header 
> added by the current node...
> {noformat}
> # create a simple 2 node cluster...
> $ ./bin/solr -e cloud -noprompt
> # ...
> $ curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=solo&numShards=1&nrtReplicas=1'
> # ...
> # node 8983 is the node currently hosting the only replica of the 'solo' 
> collection, and responds to requests directly...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:8983/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 169
> # node 7574 does not host a replica, and forwards requests for it to 8983
> # the response the client gets from 7574 has several security related headers 
> duplicated...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:7574/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 197
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14897) HttpSolrCall will forward a virtually unlimited number of times until ClusterState ZkWatcher is updated after collection delete

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203987#comment-17203987
 ] 

ASF subversion and git services commented on SOLR-14897:


Commit ade0a8fb9a9a352847592bda621633b8a0d5f209 in lucene-solr's branch 
refs/heads/branch_8x from Munendra S N
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ade0a8f ]

SOLR-14897: limit no of forwarding for given request

* Irrespective of active or down replicas, restrict no of forwarding of request.
  Previously, this restriction was applied only if active is not found


> HttpSolrCall will forward a virtually unlimited number of times until 
> ClusterState ZkWatcher is updated after collection delete
> ---
>
> Key: SOLR-14897
> URL: https://issues.apache.org/jira/browse/SOLR-14897
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Assignee: Munendra S N
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14897.patch
>
>
> While investigating the root cause of some SOLR-14896 related failures, I 
> have seen evidence that if a collection is deleted, but a client makes a 
> subsequent request for that collection _before_ the local ClusterState has 
> been updated to remove that DocCollection, HttpSolrCall will forward/proxy 
> that request a (virtually) unbounded number of times in a very short time 
> period - stopping only once the "cached" local DocCollection is updated 
> to indicate there are no active replicas.
> While HttpSolrCall does track & increment a {{_forwardedCount}} param on 
> every request it forwards, it doesn't consult that param unless/until it 
> finds a situation where the (local) DocCollection says there are no active 
> replicas.
> So if you have a collection XX with 4 total replicas on 4 diff nodes 
> (A,B,C,D), and you delete XX (triggering sequential core deletions on 
> A,B,C,D that fire successive ZkWatchers on various nodes to update the 
> collection state), a request for XX can bounce back and forth between nodes C 
> & D 20+ times until the ClusterState watcher fires on both of those nodes so 
> they finally realize that the {{_forwardedCount=20}} is more than the 0 active 
> replicas...
> In the below code snippet from HttpSolrCall, the first call to 
> {{getCoreUrl(...)}} is expected to return null if there are no active 
> replicas - but it uses the local cached DocCollection, which may _think_ 
> there is an active replica on another node, so it forwards the request to 
> that node - where the replica may have been deleted, so that node runs the 
> same code and may forward the request right back to the original node:
> {code:java}
> String coreUrl = getCoreUrl(collectionName, origCorename, clusterState,
> activeSlices, byCoreName, true);
> // Avoid getting into a recursive loop of requests being forwarded by
> // stopping forwarding and erroring out after (totalReplicas) forwards
> if (coreUrl == null) {
>   if (queryParams.getInt(INTERNAL_REQUEST_COUNT, 0) > totalReplicas){
> throw new SolrException(SolrException.ErrorCode.INVALID_STATE,
> "No active replicas found for collection: " + collectionName);
>   }
>   coreUrl = getCoreUrl(collectionName, origCorename, clusterState,
>   activeSlices, byCoreName, false);
> }
> {code}
> ..the check that is supposed to prevent a "recursive loop" is only consulted 
> once a situation arises where local ClusterState indicates there are no 
> active replicas - which seems to defeat the point of the forward check?  (at 
> which point, if the total number of replicas hasn't been exceeded, the code is 
> happy to forward the request to a coreUrl which the local ClusterState 
> indicates is _not_ active, which also seems to defeat the point?)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14897) HttpSolrCall will forward a virtually unlimited number of times until ClusterState ZkWatcher is updated after collection delete

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203992#comment-17203992
 ] 

ASF subversion and git services commented on SOLR-14897:


Commit 47615df8615380f4aec4f3cae05d3fa8c9686923 in lucene-solr's branch 
refs/heads/branch_8_6 from Munendra S N
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=47615df ]

SOLR-14897: limit no of forwarding for given request

* Irrespective of active or down replicas, restrict no of forwarding of request.
  Previously, this restriction was applied only if active is not found


> HttpSolrCall will forward a virtually unlimited number of times until 
> ClusterState ZkWatcher is updated after collection delete
> ---
>
> Key: SOLR-14897
> URL: https://issues.apache.org/jira/browse/SOLR-14897
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Assignee: Munendra S N
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14897.patch
>
>
> While investigating the root cause of some SOLR-14896 related failures, I 
> have seen evidence that if a collection is deleted, but a client makes a 
> subsequent request for that collection _before_ the local ClusterState has 
> been updated to remove that DocCollection, HttpSolrCall will forward/proxy 
> that request a (virtually) unbounded number of times in a very short time 
> period - stopping only once the "cached" local DocCollection is updated 
> to indicate there are no active replicas.
> While HttpSolrCall does track & increment a {{_forwardedCount}} param on 
> every request it forwards, it doesn't consult that param unless/until it 
> finds a situation where the (local) DocCollection says there are no active 
> replicas.
> So if you have a collection XX with 4 total replicas on 4 diff nodes 
> (A,B,C,D), and you delete XX (triggering sequential core deletions on 
> A,B,C,D that fire successive ZkWatchers on various nodes to update the 
> collection state), a request for XX can bounce back and forth between nodes C 
> & D 20+ times until the ClusterState watcher fires on both of those nodes so 
> they finally realize that the {{_forwardedCount=20}} is more than the 0 active 
> replicas...
> In the below code snippet from HttpSolrCall, the first call to 
> {{getCoreUrl(...)}} is expected to return null if there are no active 
> replicas - but it uses the local cached DocCollection, which may _think_ 
> there is an active replica on another node, so it forwards the request to 
> that node - where the replica may have been deleted, so that node runs the 
> same code and may forward the request right back to the original node:
> {code:java}
> String coreUrl = getCoreUrl(collectionName, origCorename, clusterState,
> activeSlices, byCoreName, true);
> // Avoid getting into a recursive loop of requests being forwarded by
> // stopping forwarding and erroring out after (totalReplicas) forwards
> if (coreUrl == null) {
>   if (queryParams.getInt(INTERNAL_REQUEST_COUNT, 0) > totalReplicas){
> throw new SolrException(SolrException.ErrorCode.INVALID_STATE,
> "No active replicas found for collection: " + collectionName);
>   }
>   coreUrl = getCoreUrl(collectionName, origCorename, clusterState,
>   activeSlices, byCoreName, false);
> }
> {code}
> ..the check that is supposed to prevent a "recursive loop" is only consulted 
> once a situation arises where local ClusterState indicates there are no 
> active replicas - which seems to defeat the point of the forward check?  (at 
> which point, if the total number of replicas hasn't been exceeded, the code is 
> happy to forward the request to a coreUrl which the local ClusterState 
> indicates is _not_ active, which also seems to defeat the point?)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14897) HttpSolrCall will forward a virtually unlimited number of times until ClusterState ZkWatcher is updated after collection delete

2020-09-29 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-14897.
-
Resolution: Fixed

> HttpSolrCall will forward a virtually unlimited number of times until 
> ClusterState ZkWatcher is updated after collection delete
> ---
>
> Key: SOLR-14897
> URL: https://issues.apache.org/jira/browse/SOLR-14897
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Assignee: Munendra S N
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14897.patch
>
>
> While investigating the root cause of some SOLR-14896 related failures, I 
> have seen evidence that if a collection is deleted, but a client makes a 
> subsequent request for that collection _before_ the local ClusterState has 
> been updated to remove that DocCollection, HttpSolrCall will forward/proxy 
> that request a (virtually) unbounded number of times in a very short time 
> period - stopping only once the "cached" local DocCollection is updated 
> to indicate there are no active replicas.
> While HttpSolrCall does track & increment a {{_forwardedCount}} param on 
> every request it forwards, it doesn't consult that param unless/until it 
> finds a situation where the (local) DocCollection says there are no active 
> replicas.
> So if you have a collection XX with 4 total replicas on 4 diff nodes 
> (A,B,C,D), and you delete XX (triggering sequential core deletions on 
> A,B,C,D that fire successive ZkWatchers on various nodes to update the 
> collection state) a request for XX can bounce back and forth between nodes C 
> & D 20+ times until the ClusterState watcher fires on both of those nodes so 
> they finally realize that {{_forwardedCount=20}} is more than the 0 active 
> replicas...
> In the code snippet below from HttpSolrCall, the first call to 
> {{getCoreUrl(...)}} is expected to return null if there are no active 
> replicas - but it uses the local cached DocCollection, which may _think_ 
> there is an active replica on another node, so it forwards the request to 
> that node - where the replica may have been deleted, so that node runs the 
> same code and may forward the request right back to the original node
> {code:java}
> String coreUrl = getCoreUrl(collectionName, origCorename, clusterState,
> activeSlices, byCoreName, true);
> // Avoid getting into a recursive loop of requests being forwarded by
> // stopping forwarding and erroring out after (totalReplicas) forwards
> if (coreUrl == null) {
>   if (queryParams.getInt(INTERNAL_REQUEST_COUNT, 0) > totalReplicas){
> throw new SolrException(SolrException.ErrorCode.INVALID_STATE,
> "No active replicas found for collection: " + collectionName);
>   }
>   coreUrl = getCoreUrl(collectionName, origCorename, clusterState,
>   activeSlices, byCoreName, false);
> }
> {code}
> ...the check that is supposed to prevent a "recursive loop" is only consulted 
> once a situation arises where the local ClusterState indicates there are no 
> active replicas - which seems to defeat the point of the forward check? (at 
> which point, if the total number of replicas hasn't been exceeded, the code is 
> happy to forward the request to a coreUrl which the local ClusterState 
> indicates is _not_ active - which also seems to defeat the point?)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14767) Adding document in XML format fails with NumberFormatException

2020-09-29 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203999#comment-17203999
 ] 

David Smiley commented on SOLR-14767:
-

FWIW this looks a lot more like an "Improvement" than a "Bug".

> Adding document in XML format fails with NumberFormatException
> --
>
> Key: SOLR-14767
> URL: https://issues.apache.org/jira/browse/SOLR-14767
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.7
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Indexing the following document
> {code:xml}
> <doc>
>   <field name="id">001</field>
>   <field name="stars_i">14.0</field>
> </doc>
> {code}
> fails with the following stack trace
> {code:java}
> ERROR: [doc=001] Error adding field 'stars_i'='14.0' msg=For input string: 
> "14.0"
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:221)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:109)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:975)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:341)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:288)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:235)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessorFactory$RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:73)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.NestedUpdateProcessorFactory$NestedUpdateProcessor.processAdd(NestedUpdateProcessorFactory.java:79)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:259)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doVersionAdd(DistributedUpdateProcessor.java:498)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.lambda$versionAdd$0(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.VersionBucket.runWithLock(VersionBucket.java:50)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:225)
>   at 
> org.apache.solr.update.processor.DistributedZkUpdateProcessor.processAdd(DistributedZkUpdateProcessor.java:245)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:106)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:481)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processA

[jira] [Commented] (LUCENE-9077) Gradle build

2020-09-29 Thread Tomoko Uchida (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204001#comment-17204001
 ] 

Tomoko Uchida commented on LUCENE-9077:
---

🎉

> Gradle build
> 
>
> Key: LUCENE-9077
> URL: https://issues.apache.org/jira/browse/LUCENE-9077
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: LUCENE-9077-javadoc-locale-en-US.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> This task focuses on providing gradle-based build equivalent for Lucene and 
> Solr (on master branch). See notes below on why this respin is needed.
> The code lives on *gradle-master* branch. It is kept with sync with *master*. 
> Try running the following to see an overview of helper guides concerning 
> typical workflow, testing and ant-migration helpers:
> gradlew :help
> A list of items that needs to be added or requires work. If you'd like to 
> work on any of these, please add your name to the list. Once you have a 
> patch/ pull request let me (dweiss) know - I'll try to coordinate the merges.
>  * (/) Apply forbiddenAPIs
>  * (/) Generate hardware-aware gradle defaults for parallelism (count of 
> workers and test JVMs).
>  * (/) Fail the build if --tests filter is applied and no tests execute 
> during the entire build (this allows for an empty set of filtered tests at 
> single project level).
>  * (/) Port other settings and randomizations from common-build.xml
>  * (/) Configure security policy/ sandboxing for tests.
>  * (/) test's console output on -Ptests.verbose=true
>  * (/) add a :helpDeps explanation to how the dependency system works 
> (palantir plugin, lockfile) and how to retrieve structured information about 
> current dependencies of a given module (in a tree-like output).
>  * (/) jar checksums, jar checksum computation and validation. This should be 
> done without intermediate folders (directly on dependency sets).
>  * (/) verify min. JVM version and exact gradle version on build startup to 
> minimize odd build side-effects
>  * (/) Repro-line for failed tests/ runs.
>  * (/) add a top-level README note about building with gradle (and the 
> required JVM).
>  * (/) add an equivalent of 'validate-source-patterns' 
> (check-source-patterns.groovy) to precommit.
>  * (/) add an equivalent of 'rat-sources' to precommit.
>  * (/) add an equivalent of 'check-example-lucene-match-version' (solr only) 
> to precommit.
>  * (/) javadoc compilation
> Hard-to-implement stuff already investigated:
>  * (/) (done)  -*Printing console output of failed tests.* There doesn't seem 
> to be any way to do this in a reasonably efficient way. There are onOutput 
> listeners but they're slow to operate and solr tests emit *tons* of output so 
> it's an overkill.-
>  * (!) (LUCENE-9120) *Tests working with security-debug logs or other 
> JVM-early log output*. Gradle's test runner works by redirecting Java's 
> stdout/ syserr so this just won't work. Perhaps we can spin the ant-based 
> test runner for such corner-cases.
> Of lesser importance:
>  * (/) Add an equivalent of 'documentation-lint" to precommit.
>  * (/) Do not require files to be committed before running precommit. (staged 
> files are fine).
>  * (/) add rendering of javadocs (gradlew javadoc)
>  * (/) identify and port various "regenerate" tasks from ant builds (javacc, 
> precompiled automata, etc.)
>  * (/) Add Solr packaging for docs/* (see TODO in packaging/build.gradle; 
> currently XSLT...)
>  * I didn't bother adding Solr dist/test-framework to packaging (who'd use it 
> from a binary distribution?)
>  * (/) There is some python execution in check-broken-links and 
> check-missing-javadocs, not sure if it's been ported
>  * (/) Precommit doesn't catch unused imports
>  * Attach javadocs to maven publications.
>  * (/) Add test 'beasting' (rerunning the same suite multiple times). I'm 
> afraid it'll be difficult to run it sensibly because gradle doesn't offer cwd 
> separation for the forked test runners.
>  * if you diff solr packaged distribution against ant-created distribution 
> there are minor differences in library versions and some JARs are excluded/ 
> moved around. I didn't try to force these as everything seems to work (tests, 
> etc.) – perhaps these differences should  be fixed in the ant build instead.
>  * (/) Fill in POM details in gradle/defaults-maven.gradle so that they 
> reflect the previous content better (dependencies aside).
>  * (/) Add any IDE integration layers that should be added (I use IntelliJ 
> and it imports the project out of the box, without the need for any special 
> tuning).
>  
> *{color:#ff}Note:{color}* this builds on the work done by Mark Miller and 
> Cao Mạnh Đạt bu

[jira] [Commented] (SOLR-14767) Adding document in XML format fails with NumberFormatException

2020-09-29 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204006#comment-17204006
 ] 

Munendra S N commented on SOLR-14767:
-

This already works with atomic updates (updating an existing field value) but not 
when adding a new document with the field. Since the behavior differs, I thought 
it should probably be considered a bug. Let me know if you think otherwise; I 
will move it to an improvement
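
For illustration, a minimal sketch of the kind of lenient numeric coercion being 
discussed (an assumption about the general approach, not the committed change):
{code:java}
// Sketch only: accept a decimal string such as "14.0" for an integer field as
// long as it has no fractional part, instead of failing in Integer.parseInt.
static int parseIntLeniently(String raw) {
  try {
    return Integer.parseInt(raw.trim());       // plain integers: "14"
  } catch (NumberFormatException nfe) {
    double d = Double.parseDouble(raw.trim()); // decimal forms: "14.0"
    if (d != Math.rint(d)) {
      throw nfe;                               // "14.5" still fails: not a whole number
    }
    return (int) d;
  }
}
{code}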

> Adding document in XML format fails with NumberFormatException
> --
>
> Key: SOLR-14767
> URL: https://issues.apache.org/jira/browse/SOLR-14767
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.7
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Indexing the following document
> {code:xml}
> <doc>
>   <field name="id">001</field>
>   <field name="stars_i">14.0</field>
> </doc>
> {code}
> fails with the following stack trace
> {code:java}
> ERROR: [doc=001] Error adding field 'stars_i'='14.0' msg=For input string: 
> "14.0"
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:221)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:109)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:975)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:341)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:288)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:235)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessorFactory$RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:73)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.NestedUpdateProcessorFactory$NestedUpdateProcessor.processAdd(NestedUpdateProcessorFactory.java:79)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:259)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doVersionAdd(DistributedUpdateProcessor.java:498)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.lambda$versionAdd$0(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.VersionBucket.runWithLock(VersionBucket.java:50)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:225)
>   at 
> org.apache.solr.update.processor.DistributedZkUpdateProcessor.processAdd(DistributedZkUpdateProcessor.java:245)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:106)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:481)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55

[jira] [Comment Edited] (SOLR-14767) Adding document in XML format fails with NumberFormatException

2020-09-29 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204006#comment-17204006
 ] 

Munendra S N edited comment on SOLR-14767 at 9/29/20, 3:00 PM:
---

This already works with atomic updates (updating an existing field value) but not 
when adding a new document with the field. Since the behavior differs, I thought 
it should probably be considered a bug. Let me know if you think otherwise; I 
will move it to an improvement


was (Author: munendrasn):
This already works with atomic updates(updating existing field value) but not 
with adding new document with the field. Since, it had different behavior, I 
thought it should be probably consider as bug. Let me know if you think 
otherwise, will move it to improvement

> Adding document in XML format fails with NumberFormatException
> --
>
> Key: SOLR-14767
> URL: https://issues.apache.org/jira/browse/SOLR-14767
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.7
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Indexing the following document
> {code:xml}
> <doc>
>   <field name="id">001</field>
>   <field name="stars_i">14.0</field>
> </doc>
> {code}
> fails with the following stack trace
> {code:java}
> ERROR: [doc=001] Error adding field 'stars_i'='14.0' msg=For input string: 
> "14.0"
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:221)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:109)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:975)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:341)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:288)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:235)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessorFactory$RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:73)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.NestedUpdateProcessorFactory$NestedUpdateProcessor.processAdd(NestedUpdateProcessorFactory.java:79)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:259)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doVersionAdd(DistributedUpdateProcessor.java:498)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.lambda$versionAdd$0(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.VersionBucket.runWithLock(VersionBucket.java:50)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:225)
>   at 
> org.apache.solr.update.processor.DistributedZkUpdateProcessor.processAdd(DistributedZkUpdateProcessor.java:245)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:106)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:481)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apa

[jira] [Resolved] (LUCENE-9535) Investigate recent indexing slowdown for wikimedium documents

2020-09-29 Thread Adrien Grand (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-9535.
--
Fix Version/s: 8.7
   Resolution: Fixed

This looks fixed now, so I'm closing this issue. Thanks Mike, Robert and Simon 
for helping dig into this one.

> Investigate recent indexing slowdown for wikimedium documents
> -
>
> Key: LUCENE-9535
> URL: https://issues.apache.org/jira/browse/LUCENE-9535
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.7
>
> Attachments: cpu_profile.svg
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Nightly benchmarks report a ~10% slowdown for 1kB documents as of September 
> 9th: [http://people.apache.org/~mikemccand/lucenebench/indexing.html].
> On that day, we added stored fields in DWPT accounting (LUCENE-9511), so I 
> first thought this could be due to smaller flushed segments and more merging, 
> but I still wonder whether there's something else. The benchmark runs with 
> 8GB of heap, 2GB of RAM buffer and 36 indexing threads. So it's about 2GB/36 
> = 57MB of RAM buffer per thread in the worst-case scenario that all DWPTs get 
> full at the same time. Stored fields account for about 0.7MB of memory, or 1% 
> of the indexing buffer size. How can a 1% reduction of buffering capacity 
> explain a 10% indexing slowdown? I looked into this further by running 
> indexing benchmarks locally with 8 indexing threads and 128MB of indexing 
> buffer memory, which would make this issue even more apparent if the smaller 
> RAM buffer was the cause, but I'm not seeing a regression and actually I'm 
> seeing a similar number of flushes when I disabled memory accounting for stored 
> fields.
> I ran indexing under a profiler to see whether something else could cause 
> this slowdown, e.g. slow implementations of ramBytesUsed on stored fields 
> writers, but nothing surprising showed up and the profile looked just like I 
> would have expected.
> Another question I have is why the 4kB benchmark is not affected at all.
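
For reference, the arithmetic from the description written out (assuming the 
stated 2 GB RAM buffer, 36 indexing threads, and ~0.7 MB of stored-fields 
accounting):
{code:java}
double perThreadMb = 2048.0 / 36;            // ~56.9 MB per DWPT in the worst case
double storedFieldsMb = 0.7;                 // reported stored-fields accounting
double share = storedFieldsMb / perThreadMb; // ~0.012, i.e. roughly 1% of the buffer
{code}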



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] ctargett commented on pull request #1923: SOLR-14900: Reference Guide build cleanup/module upgrade

2020-09-29 Thread GitBox


ctargett commented on pull request #1923:
URL: https://github.com/apache/lucene-solr/pull/1923#issuecomment-700798217


   Let me build with the patch and see if I want to revise my thoughts on it. 
It will take me until sometime tomorrow or Thursday to have time to look at it, 
just FYI.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] msokolov opened a new pull request #1930: LUCENE-9322: add VectorValues to new Lucene90 codec

2020-09-29 Thread GitBox


msokolov opened a new pull request #1930:
URL: https://github.com/apache/lucene-solr/pull/1930


   This adds a floating-point vector format building on the designs in 
LUCENE-9004 and LUCENE-9322. This patch fully supports indexing and reading 
vectors with an iterator and a random-access API. Support for search based on 
an NSW graph implementation is intended to follow soon, but I wanted to include 
the vector APIs that I needed to get that working, even though they are not yet 
used here; e.g., it includes the definition of a scoring function and a 
nearest-neighbors search API, but no implementation of search yet. My intention 
is to keep the ANN implementation hidden, so graphs and other supporting data 
structures (e.g., we might want to support LSH or k-means clustering and so on) 
would be implementation details invoked by a configuration on the 
VectorField/VectorValues. At the moment you can specify a ScoringFunction, and 
it is implicit that NSW will be the result. In the future we could add another 
parameter to ScoringFunction and/or new functions to represent support for 
other algorithms.
   
   Some open questions: 
   
   1. Should this be Lucene 9.0 only? In this patch I added Lucene90 Codec. If 
we do this then it would be awkward to backport.
   2. It seems messy to have the ScoringFunction implementation in the main 
VectorValues interface API file. I'd appreciate any better suggestion for how 
to organize this.
   3. Vector scoring can return negative numbers. I'd like to have first-class 
support for dot-product distance (which can be negative) since that's what my 
consumers seem to have settled on. I don't think we need to be compatible with 
relevance scores, at least not directly in the KNN search API, but maybe we 
should? We could renormalize/convert from dot-product scores to a positive 
score with math in the output layer where we return the scores (a rough sketch 
of one such remapping follows this list). So far this is just a specification 
question, as there is no implementation of search yet.
   4. I think there is room for improvement in some of the data structures used 
to map docids to dense vector ordinals and back. I'd appreciate comments on 
that, but maybe we could revisit in a fast follow-on issue?
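
   On question 3, a minimal sketch of one possible remapping, assuming 
unit-length vectors (an illustration, not what this PR implements): for 
normalized vectors the dot product lies in [-1, 1], so a simple affine remap in 
the output layer yields a score in [0, 1].

{code:java}
static float dotProductScore(float[] query, float[] candidate) {
  float dot = 0f;
  for (int i = 0; i < query.length; i++) {
    dot += query[i] * candidate[i];
  }
  return (1f + dot) / 2f; // non-negative, order-preserving remap of the dot product
}
{code}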



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9322) Discussing a unified vectors format API

2020-09-29 Thread Michael Sokolov (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204091#comment-17204091
 ] 

Michael Sokolov commented on LUCENE-9322:
-

I posted a PR that builds on the discussion and the earlier PRs from 
[~jtibshirani] and [~tomoko], and would appreciate your review if you have time. 
Just to address some of the recent discussion here:

1. This is for dense vectors only. I think handling sparse vectors is 
potentially interesting, but would require a completely different approach, so 
I think should be done separately.
2. I would like to see if we can completely hide the ANN implementation behind 
the vector API, as Julie initially proposed, making the selection of an 
algorithm a simple parameter of VectorValues. In the soon-to-come NSW graph 
implementation I have in mind there is no new graph format, just another 
auxiliary index file inside the vector format. To that end, I included both L2 
and dot-product distances with the idea of maintaining something in the API 
that enables control over the underlying KNN implementation. E.g., we could have 
ScoreFunction overloaded with the graph algorithm? Maybe it's too much; I'd like 
feedback on this part.


> Discussing a unified vectors format API
> ---
>
> Key: LUCENE-9322
> URL: https://issues.apache.org/jira/browse/LUCENE-9322
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Julie Tibshirani
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Two different approximate nearest neighbor approaches are currently being 
> developed, one based on HNSW (LUCENE-9004) and another based on coarse 
> quantization ([#LUCENE-9136]). Each prototype proposes to add a new format to 
> handle vectors. In LUCENE-9136 we discussed the possibility of a unified API 
> that could support both approaches. The two ANN strategies give different 
> trade-offs in terms of speed, memory, and complexity, and it’s likely that 
> we’ll want to support both. Vector search is also an active research area, 
> and it would be great to be able to prototype and incorporate new approaches 
> without introducing more formats.
> To me it seems like a good time to begin discussing a unified API. The 
> prototype for coarse quantization 
> ([https://github.com/apache/lucene-solr/pull/1314]) could be ready to commit 
> soon (this depends on everyone's feedback of course). The approach is simple 
> and shows solid search performance, as seen 
> [here|https://github.com/apache/lucene-solr/pull/1314#issuecomment-608645326].
>  I think this API discussion is an important step in moving that 
> implementation forward.
> The goals of the API would be
> # Support for storing and retrieving individual float vectors.
> # Support for approximate nearest neighbor search -- given a query vector, 
> return the indexed vectors that are closest to it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9004) Approximate nearest vector search

2020-09-29 Thread Michael Sokolov (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204099#comment-17204099
 ] 

Michael Sokolov commented on LUCENE-9004:
-

Thanks [~mikemccand] - I posted a PR over in LUCENE-9322 that is just for 
adding vectors, no KNN search yet.

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found an approach based on navigating a graph that partially encodes the 
> nearest neighbor relation at multiple scales can provide accuracy > 95% (as 
> compared to exact nearest neighbor calculations) at a reasonable cost. This 
> issue will explore implementing HNSW (hierarchical navigable small-world) 
> graphs for the purpose of approximate nearest vector search (often referred 
> to as KNN or k-nearest-neighbor search).
> At a high level the way this algorithm works is this. First assume you have a 
> graph that has a partial encoding of the nearest neighbor relation, with some 
> short and some long-distance links. If this graph is built in the right way 
> (has the hierarchical navigable small world property), then you can 
> efficiently traverse it to find nearest neighbors (approximately) in log N 
> time where N is the number of nodes in the graph. I believe this idea was 
> pioneered in  [1]. The great insight in that paper is that if you use the 
> graph search algorithm to find the K nearest neighbors of a new document 
> while indexing, and then link those neighbors (undirectedly, ie both ways) to 
> the new document, then the graph that emerges will have the desired 
> properties.
> The implementation I propose for Lucene is as follows. We need two new data 
> structures to encode the vectors and the graph. We can encode vectors using a 
> light wrapper around {{BinaryDocValues}} (we also want to encode the vector 
> dimension and have efficient conversion from bytes to floats). For the graph 
> we can use {{SortedNumericDocValues}} where the values we encode are the 
> docids of the related documents. Encoding the interdocument relations using 
> docids directly will make it relatively fast to traverse the graph since we 
> won't need to lookup through an id-field indirection. This choice limits us 
> to building a graph-per-segment since it would be impractical to maintain a 
> global graph for the whole index in the face of segment merges. However 
> graph-per-segment is very natural at search time - we can traverse each 
> segment's graph independently and merge results as we do today for term-based 
> search.
> At index time, however, merging graphs is somewhat challenging. While 
> indexing we build a graph incrementally, performing searches to construct 
> links among neighbors. When merging segments we must construct a new graph 
> containing elements of all the merged segments. Ideally we would somehow 
> preserve the work done when building the initial graphs, but at least as a 
> start I'd propose we construct a new graph from scratch when merging. The 
> process is going to be  limited, at least initially, to graphs that can fit 
> in RAM since we require random access to the entire graph while constructing 
> it: In order to add links bidirectionally we must continually update existing 
> documents.
> I think we want to express this API to users as a single joint 
> {{KnnGraphField}} abstraction that joins together the vectors and the graph 
> as a single joint field type. Mostly it just looks like a vector-valued 
> field, but has this graph attached to it.
> I'll push a branch with my POC and would love to hear comments. It has many 
> nocommits, basic design is not really set, there is no Query implementation 
> and no integration with IndexSearcher, but it does work by some measure using 
> a standalone test class. I've tested with uniform random vectors and on my 
> laptop indexed 10K documents in around 10 seconds and searched them at 95% 
> recall (compared with exact nearest-neighbor baseline) at around 250 QPS. I 
> haven't made any attempt to use multithreaded search for this, but it is 
> amenable to per-segment concurrency.
> [1] 
> [https://www.semanticscholar.org/paper/Efficient-an
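
As a rough illustration of the traversal described above - a toy sketch with a 
hypothetical data layout, not the code on the branch:
{code:java}
import java.util.List;

// Start at an entry point and greedily move to whichever neighbor is closer to
// the query, stopping at a local minimum of the distance.
class GreedySearchSketch {
  static int approximateNearest(float[] query, float[][] vectors,
                                List<List<Integer>> neighbors, int entryPoint) {
    int current = entryPoint;
    double best = squaredL2(query, vectors[current]);
    boolean improved = true;
    while (improved) {
      improved = false;
      for (int candidate : neighbors.get(current)) {
        double d = squaredL2(query, vectors[candidate]);
        if (d < best) {
          best = d;
          current = candidate;
          improved = true;
        }
      }
    }
    return current; // the approximate nearest neighbor
  }

  static double squaredL2(float[] a, float[] b) {
    double sum = 0;
    for (int i = 0; i < a.length; i++) {
      double diff = a[i] - b[i];
      sum += diff * diff;
    }
    return sum;
  }
}
{code}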

[jira] [Commented] (SOLR-14898) Proxied/Forwarded requests to other nodes wind up getting duplicate response headers

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204107#comment-17204107
 ] 

ASF subversion and git services commented on SOLR-14898:


Commit 8c7502dfeb5bcc6c0d37f65220cd49f15efa0797 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8c7502d ]

SOLR-14898: Stop returning duplicate HTTP response headers when requests are 
forward to another node


> Proxied/Forwarded requests to other nodes wind up getting duplicate response 
> headers
> 
>
> Key: SOLR-14898
> URL: https://issues.apache.org/jira/browse/SOLR-14898
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.3
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14898.patch
>
>
> When Solr receives a request for a collection not hosted on the current node, 
> HttpSolrCall forwards/proxies that request - but the final response for the 
> client can include duplicate response headers - one header from the remote 
> node that ultimately handled the request, and a second copy of the header 
> added by the current node...
> {noformat}
> # create a simple 2 node cluster...
> $ ./bin/solr -e cloud -noprompt
> # ...
> $ curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=solo&numShards=1&nrtReplicas=1'
> # ...
> # node 8983 is the node currently hosting the only replica of the 'solo' 
> collection, and responds to requests directly...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:8983/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 169
> # node 7574 does not host a replica, and forwards requests for it to 8983
> # the response the client gets from 7574 has several security related headers 
> duplicated...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:7574/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 197
> {noformat}
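
For context, a minimal sketch of the general de-duplication idea (an assumption 
about the approach; the committed patch may differ):
{code:java}
import java.util.Map;
import javax.servlet.http.HttpServletResponse;

// When the proxying node copies headers from the remote node's response, skip
// any header name the local servlet response has already set, so the security
// headers added locally are not emitted a second time.
final class HeaderCopySketch {
  static void copyRemoteHeaders(Map<String, String> remoteHeaders, HttpServletResponse resp) {
    for (Map.Entry<String, String> header : remoteHeaders.entrySet()) {
      if (!resp.containsHeader(header.getKey())) {
        resp.addHeader(header.getKey(), header.getValue());
      }
    }
  }
}
{code}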



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-12987) Log deprecation warnings to separate log file

2020-09-29 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204113#comment-17204113
 ] 

David Smiley commented on SOLR-12987:
-

I wonder if this might even be expanded to warnings about suspicious misuse of 
Solr (not deprecations) -- e.g. rows > 1000
And/or maybe we should add headers to give tips.

CC [~mdrob] as I watch your Apachecon talk :-)


> Log deprecation warnings to separate log file
> -
>
> Key: SOLR-12987
> URL: https://issues.apache.org/jira/browse/SOLR-12987
> Project: Solr
>  Issue Type: New Feature
>  Components: logging
>Reporter: Jan Høydahl
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As discussed in solr-user list:
> {quote}When instructing people in what to do before upgrading to a new 
> version, we often tell them to check for deprecation log messages and fix 
> those before upgrading. Normally you'll see the most important logs as WARN 
> level in the Admin UI log tab just after startup and first use. But I'm 
> wondering if it also makes sense to introduce a separate 
> DeprecationLogger.log(foo) that is configured in log4j2.xml to log to a 
> separate logs/deprecation.log to make it easier to check this from the 
> command line. If the file is non-empty you have work to do :)
> {quote}
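
A minimal sketch of the helper described in the quote, assuming an slf4j-style 
logger as used elsewhere in Solr (hypothetical class, not an existing API):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Route every deprecation warning through one well-known logger name so that
// log4j2.xml can attach a dedicated appender writing to logs/deprecation.log.
public final class DeprecationLogger {
  private static final Logger LOG = LoggerFactory.getLogger("org.apache.solr.DEPRECATION");

  private DeprecationLogger() {}

  public static void log(String message) {
    LOG.warn(message);
  }
}
{code}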



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14898) Proxied/Forwarded requests to other nodes wind up getting duplicate response headers

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204116#comment-17204116
 ] 

ASF subversion and git services commented on SOLR-14898:


Commit 9b49512a11a27d5b0e6449b62c3910fbf119a73f in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9b49512 ]

SOLR-14898: Stop returning duplicate HTTP response headers when requests are 
forward to another node

(cherry picked from commit 8c7502dfeb5bcc6c0d37f65220cd49f15efa0797)


> Proxied/Forwarded requests to other nodes wind up getting duplicate response 
> headers
> 
>
> Key: SOLR-14898
> URL: https://issues.apache.org/jira/browse/SOLR-14898
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.3
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14898.patch
>
>
> When Solr receives a request for a collection not hosted on the current node, 
> HttpSolrCall forwards/proxies that request - but the final response for the 
> client can include duplicate response headers - one header from the remote 
> node that ultimately handled the request, and a second copy of the header 
> added by the current node...
> {noformat}
> # create a simple 2 node cluster...
> $ ./bin/solr -e cloud -noprompt
> # ...
> $ curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=solo&numShards=1&nrtReplicas=1'
> # ...
> # node 8983 is the node currently hosting the only replica of the 'solo' 
> collection, and responds to requests directly...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:8983/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 169
> # node 7574 does not host a replica, and forwards requests for it to 8983
> # the response the client gets from 7574 has several security related headers 
> duplicated...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:7574/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 197
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14898) Proxied/Forwarded requests to other nodes wind up getting duplicate response headers

2020-09-29 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204121#comment-17204121
 ] 

Andrzej Bialecki commented on SOLR-14898:
-

I manually tested the scenario that you described, both without and with the 
fix, and indeed I was able to reproduce the problem without the fix, and with 
the fix the problem is gone. So it LGTM, +1.

> Proxied/Forwarded requests to other nodes wind up getting duplicate response 
> headers
> 
>
> Key: SOLR-14898
> URL: https://issues.apache.org/jira/browse/SOLR-14898
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.3
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14898.patch
>
>
> When Solr receives a request for a collection not hosted on the current node, 
> HttpSolrCall forwards/proxies that request - but the final response for the 
> client can include duplicate response headers - one header from the remote 
> node that ultimately handled the request, and a second copy of the header 
> added by the current node...
> {noformat}
> # create a simple 2 node cluster...
> $ ./bin/solr -e cloud -noprompt
> # ...
> $ curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=solo&numShards=1&nrtReplicas=1'
> # ...
> # node 8983 is the node currently hosting the only replica of the 'solo' 
> collection, and responds to requests directly...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:8983/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 169
> # node 7574 does not host a replica, and forwards requests for it to 8983
> # the response the client gets from 7574 has several security related headers 
> duplicated...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:7574/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 197
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14898) Proxied/Forwarded requests to other nodes wind up getting duplicate response headers

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204123#comment-17204123
 ] 

ASF subversion and git services commented on SOLR-14898:


Commit e41b18b2fade52ba5853a3102031b1f98d762202 in lucene-solr's branch 
refs/heads/branch_8_6 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e41b18b ]

SOLR-14898: Stop returning duplicate HTTP response headers when requests are 
forward to another node

(cherry picked from commit 8c7502dfeb5bcc6c0d37f65220cd49f15efa0797)


> Proxied/Forwarded requests to other nodes wind up getting duplicate response 
> headers
> 
>
> Key: SOLR-14898
> URL: https://issues.apache.org/jira/browse/SOLR-14898
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.3
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14898.patch
>
>
> When Solr receives a request for a collection not hosted on the current node, 
> HttpSolrCall forwards/proxies that request - but the final response for the 
> client can include duplicate response headers - one header from the remote 
> node that ultimately handled the request, and a second copy of the header 
> added by the current node...
> {noformat}
> # create a simple 2 node cluster...
> $ ./bin/solr -e cloud -noprompt
> # ...
> $ curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=solo&numShards=1&nrtReplicas=1'
> # ...
> # node 8983 is the node currently hosting the only replica of the 'solo' 
> collection, and responds to requests directly...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:8983/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 169
> # node 7574 does not host a replica, and forwards requests for it to 8983
> # the response the client gets from 7574 has several security related headers 
> duplicated...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:7574/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 197
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14898) Proxied/Forwarded requests to other nodes wind up getting duplicate response headers

2020-09-29 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter updated SOLR-14898:
--
Affects Version/s: (was: 8.6.3)

> Proxied/Forwarded requests to other nodes wind up getting duplicate response 
> headers
> 
>
> Key: SOLR-14898
> URL: https://issues.apache.org/jira/browse/SOLR-14898
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14898.patch
>
>
> When Solr receives a request for a collection not hosted on the current node, 
> HttpSolrCall forwards/proxies that request - but the final response for the 
> client can include duplicate response headers - one header from the remote 
> node that ultimately handled the request, and a second copy of the header 
> added by the current node...
> {noformat}
> # create a simple 2 node cluster...
> $ ./bin/solr -e cloud -noprompt
> # ...
> $ curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=solo&numShards=1&nrtReplicas=1'
> # ...
> # node 8983 is the node currently hosting the only replica of the 'solo' 
> collection, and responds to requests directly...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:8983/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 169
> # node 7574 does not host a replica, and forwards requests for it to 8983
> # the response the client gets from 7574 has several security related headers 
> duplicated...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:7574/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 197
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14898) Proxied/Forwarded requests to other nodes wind up getting duplicate response headers

2020-09-29 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter resolved SOLR-14898.
---
Resolution: Fixed

bq. ... In case, if we need to enable the new test without SOLR-14903, we might 
need to hardcode headers here

I'm not (personally) willing to spend any time adding more "Fakery" to 
JettySolrRunner, but if that's something you're interested in working on I 
certainly have no objections.

Patch committed to master & 8x and backported to branch_8_6 for inclusion in 
8.6.3

> Proxied/Forwarded requests to other nodes wind up getting duplicate response 
> headers
> 
>
> Key: SOLR-14898
> URL: https://issues.apache.org/jira/browse/SOLR-14898
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6.3
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Blocker
> Fix For: 8.6.3
>
> Attachments: SOLR-14898.patch
>
>
> When Solr receives a request for a collection not hosted on the current node, 
> HttpSolrCall forwards/proxies that request - but the final response for the 
> client can include duplicate response headers - one header from the remote 
> node that ultimately handled the request, and a second copy of the header 
> added by the current node...
> {noformat}
> # create a simple 2 node cluster...
> $ ./bin/solr -e cloud -noprompt
> # ...
> $ curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=solo&numShards=1&nrtReplicas=1'
> # ...
> # node 8983 is the node currently hosting the only replica of the 'solo' 
> collection, and responds to requests directly...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:8983/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 169
> # node 7574 does not host a replica, and forwards requests for it to 8983
> # the response the client gets from 7574 has several security related headers 
> duplicated...
> #
> $ curl -S -s -D - -o /dev/null http://localhost:7574/solr/solo/select
> HTTP/1.1 200 OK
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Security-Policy: default-src 'none'; base-uri 'none'; connect-src 
> 'self'; form-action 'self'; font-src 'self'; frame-ancestors 'none'; img-src 
> 'self'; media-src 'self'; style-src 'self' 'unsafe-inline'; script-src 
> 'self'; worker-src 'self';
> X-Content-Type-Options: nosniff
> X-Frame-Options: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Content-Type: application/json;charset=utf-8
> Content-Length: 197
> {noformat}
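
For illustration, here is a minimal sketch of the general approach (this is *not* the 
committed SOLR-14898 patch; the class and method names are hypothetical): when the local 
node copies response headers from the remote node, it can skip any header it has already 
set itself, so the security headers shown above are emitted only once.

{code:java}
import java.util.List;
import java.util.Map;

import javax.servlet.http.HttpServletResponse;

// Hypothetical helper, not the actual HttpSolrCall change: copy headers from the
// proxied (remote) response onto the local servlet response, skipping any header
// name the local node has already added, so nothing is emitted twice.
public final class ProxyHeaderCopy {
  private ProxyHeaderCopy() {}

  public static void copyRemoteHeaders(Map<String, List<String>> remoteHeaders,
                                       HttpServletResponse localResponse) {
    for (Map.Entry<String, List<String>> e : remoteHeaders.entrySet()) {
      String name = e.getKey();
      if (name == null || localResponse.containsHeader(name)) {
        continue; // e.g. Content-Security-Policy already set locally -- don't duplicate it
      }
      for (String value : e.getValue()) {
        localResponse.addHeader(name, value);
      }
    }
  }
}
{code}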



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14896) jetty "Bad Message 400" / "Illegal character" responses to sporadic requests

2020-09-29 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter resolved SOLR-14896.
---
Fix Version/s: 8.6.3
   8.7
   master (9.0)
   Resolution: Fixed

With the changes made in SOLR-14897 + SOLR-14898, this issue shouldn't impact 
normal solr installations: it should now be (virtually) impossible for 
solr to generate headers that exceed jetty's "500: Response header too large" 
threshold and trigger the underlying jetty bug.

> jetty "Bad Message 400" / "Illegal character" responses to sporadic requests
> 
>
> Key: SOLR-14896
> URL: https://issues.apache.org/jira/browse/SOLR-14896
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Priority: Major
> Fix For: master (9.0), 8.7, 8.6.3
>
>
> I'm getting some offline reports from users who recently upgraded to solr 8.6 
> of errors from Jetty during HTTP header parsing that do not reproduce 
> reliably. These errors result in a jetty response that will typically look 
> something like...
> {noformat}
> HTTP/1.1 400 Illegal character VCHAR='='
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 70
> Connection: close
> Bad Message 400
> reason: Illegal character VCHAR='='
> {noformat}
> ...the exact "Illegal character" can vary. Based on some review of the 
> solr/jetty code bases these errors come from Jetty's HttpParser class when 
> unexpected character types are encountered while parsing the HTTP METHOD or 
> Header *NAMES* (VCHAR is a legal token type in Header _VALUES_)
> 
> These errors sometimes manifest in log files as RemoteSolrExceptions due to 
> intra-node communication, but I've been told they can also be returned 
> directly by solr/jetty when clients make requests – _even when the client is 
> not using SolrJ_ – suggesting that whatever is causing the "malformed" 
> request is not specifically a bug in solr/solrj – this seems like a jetty bug.
> I suspect that the underlying cause is jetty buffers being re-used/shared 
> between multiple requests, and that this is caused by / related to 
> [jetty issue#4936|https://github.com/eclipse/jetty.project/issues/4936] / aka 
> [jetty bug#564984|https://bugs.eclipse.org/bugs/show_bug.cgi?id=564984] .. 
> -BUT ... that issue suggests that the buffer re-use situation *ONLY* arises 
> due to a prior "500: Response header too large" error – but I have not been 
> able to confirm that that situation has ever existed on any of the affected 
> servers where this "400 Illegal character" error occurs.-



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14844) Upgrade Jetty to 9.4.31

2020-09-29 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter updated SOLR-14844:
--
Description: 
A CVE was found in Jetty 9.4.27-9.4.29 that has some security scanning tools 
raising red flags 
([https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17638]).

Here's the Jetty issue: [https://bugs.eclipse.org/bugs/show_bug.cgi?id=564984]. 
It's fixed in 9.4.30+, so we should upgrade to that for 8.7

-It has a simple mitigation (raise Jetty's responseHeaderSize to higher than 
requestHeaderSize), but I don't know how Solr uses Jetty well enough to a) know 
if this problem is even exploitable in Solr, or b) if the workaround suggested 
is even possible in Solr.-

In normal Solr installs, w/o jetty optimizations, this issue is largely 
mitigated in 8.6.3: see SOLR-14896 (and linked bug fixes) for details.

  was:
A CVE was found in Jetty 9.4.27-9.4.29 that has some security scanning tools 
raising red flags 
(https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17638).

Here's the Jetty issue: https://bugs.eclipse.org/bugs/show_bug.cgi?id=564984. 
It's fixed in 9.4.30+, so we should upgrade to that for 8.7

It has a simple mitigation (raise Jetty's responseHeaderSize to higher than 
requestHeaderSize), but I don't know how Solr uses Jetty well enough to a) know 
if this problem is even exploitable in Solr, or b) if the workaround suggested 
is even possible in Solr.


> Upgrade Jetty to 9.4.31
> ---
>
> Key: SOLR-14844
> URL: https://issues.apache.org/jira/browse/SOLR-14844
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.6
>Reporter: Cassandra Targett
>Assignee: Erick Erickson
>Priority: Major
>
> A CVE was found in Jetty 9.4.27-9.4.29 that has some security scanning tools 
> raising red flags 
> ([https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17638]).
> Here's the Jetty issue: 
> [https://bugs.eclipse.org/bugs/show_bug.cgi?id=564984]. It's fixed in 
> 9.4.30+, so we should upgrade to that for 8.7
> -It has a simple mitigation (raise Jetty's responseHeaderSize to higher than 
> requestHeaderSize), but I don't know how Solr uses Jetty well enough to a) 
> know if this problem is even exploitable in Solr, or b) if the workaround 
> suggested is even possible in Solr.-
> In normal Solr installs, w/o jetty optimizations, this issue is largely 
> mitigated in 8.6.3: see SOLR-14896 (and linked bug fixes) for details.
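
For reference, the (struck-through) mitigation above refers to two Jetty 
{{HttpConfiguration}} settings. A minimal embedded-Jetty sketch follows; this is *not* how 
Solr wires up Jetty (Solr's limits come from its bundled Jetty configuration files), and 
the port/sizes are arbitrary example values.

{code:java}
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

// Standalone sketch of the suggested mitigation: keep the response header
// limit above the request header limit.
public class HeaderSizeExample {
  public static void main(String[] args) throws Exception {
    Server server = new Server();

    HttpConfiguration config = new HttpConfiguration();
    config.setRequestHeaderSize(8 * 1024);   // 8 KB limit for incoming request headers
    config.setResponseHeaderSize(16 * 1024); // response limit deliberately larger, per the mitigation

    ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(config));
    connector.setPort(8983); // example port only
    server.addConnector(connector);

    server.start();
    server.join();
  }
}
{code}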



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] gus-asf opened a new pull request #1931: SOLR-14597 Advanced Query Parser (WIP)

2020-09-29 Thread GitBox


gus-asf opened a new pull request #1931:
URL: https://github.com/apache/lucene-solr/pull/1931


   still to do: 
   
- move lucene classes to lucene ticket
- Investigate why TestPackages doesn't fail now (did fail, I had fixed it, 
but new version doesn't need my fix, so need to understand why)
- Docs!
- Package?
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14151) Make schema components load from packages

2020-09-29 Thread Daniel Worley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204157#comment-17204157
 ] 

Daniel Worley commented on SOLR-14151:
--

[~ichattopadhyaya] looks good on branch_8x.

> Make schema components load from packages
> -
>
> Key: SOLR-14151
> URL: https://issues.apache.org/jira/browse/SOLR-14151
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
>  Labels: packagemanager
> Fix For: 8.7
>
>  Time Spent: 13.5h
>  Remaining Estimate: 0h
>
> Example:
> {code:xml}
>  
> 
>   
>generateNumberParts="0" catenateWords="0"
>   catenateNumbers="0" catenateAll="0"/>
>   
>   
> 
>   
> {code}
> * When a package is updated, the entire {{IndexSchema}} object is refreshed, 
> but the SolrCore object is not reloaded
> * Any component can be prefixed with the package name
> * The semantics of loading plugins remain the same as that of the components 
> in {{solrconfig.xml}}
> * Plugins can be registered using schema API



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Reopened] (SOLR-14767) Adding document in XML format fails with NumberFormatException

2020-09-29 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter reopened SOLR-14767:
---

This change has broken JsonLoaderTest, causing reliably reproducible jenkins 
failures...

Passes... 
{noformat}
hossman@slate:~/lucene/dev [j11] [*master] $ git co 
1dba76c0d31b6f0294c1f257e5a1fc51a722b82f
...
hossman@slate:~/lucene/dev [j11] [DIRTY:1dba76c0d31] $ ./gradlew 
:solr:core:test --tests "org.apache.solr.handler.JsonLoaderTest" -Ptests.jvms=6 
-Ptests.haltonfailure=false -Ptests.jvmargs='-XX:+UseCompressedOops 
-XX:+UseSerialGC' -Ptests.seed=B65FB0896CCB43ED -Ptests.multiplier=3 
-Ptests.badapples=false -Ptests.file.encoding=US-ASCII
...
{noformat}

Fails...
{noformat}
hossman@slate:~/lucene/dev [j11] [DIRTY:1dba76c0d31] $ git co 
63f0b6b706dcbc8e92a8ff3ee8b81d6d6900aa67
...
hossman@slate:~/lucene/dev [j11] [DIRTY:63f0b6b706d] $ ./gradlew 
:solr:core:test --tests "org.apache.solr.handler.JsonLoaderTest" -Ptests.jvms=6 
-Ptests.haltonfailure=false -Ptests.jvmargs='-XX:+UseCompressedOops 
-XX:+UseSerialGC' -Ptests.seed=B65FB0896CCB43ED -Ptests.multiplier=3 
-Ptests.badapples=false -Ptests.file.encoding=US-ASCII
...
org.apache.solr.handler.JsonLoaderTest > testAddBigIntegerValueToTrieField 
FAILED
junit.framework.AssertionFailedError: Expected exception SolrException but 
no exception was thrown
...
{noformat}
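
For context, the report quoted below boils down to Java's strict integer parsing of a 
float-formatted string; a trivial illustration (this is not Solr code):

{code:java}
// Illustration only: "14.0" is not parseable as an int, which is what surfaces
// as the NumberFormatException when 14.0 is sent to the stars_i field below.
public class ParseDemo {
  public static void main(String[] args) {
    try {
      Integer.parseInt("14.0");
    } catch (NumberFormatException expected) {
      System.out.println(expected.getMessage()); // For input string: "14.0"
    }
  }
}
{code}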

> Adding document in XML format fails with NumberFormatException
> --
>
> Key: SOLR-14767
> URL: https://issues.apache.org/jira/browse/SOLR-14767
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.7
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Indexing the following document
> {code:java}
> <doc>
>   <field name="id">001</field>
>   <field name="stars_i">14.0</field>
> </doc>
> {code}
> fails with the following stack trace
> {code:java}
> ERROR: [doc=001] Error adding field 'stars_i'='14.0' msg=For input string: 
> "14.0"
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:221)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:109)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:975)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:341)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:288)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:235)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessorFactory$RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:73)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.NestedUpdateProcessorFactory$NestedUpdateProcessor.processAdd(NestedUpdateProcessorFactory.java:79)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:259)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doVersionAdd(DistributedUpdateProcessor.java:498)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.lambda$versionAdd$0(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.VersionBucket.runWithLock(VersionBucket.java:50)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:225)
>   at 
> org.apache.solr.update.processor.DistributedZkUpdateProcessor.processAdd(DistributedZkUpdateProcessor.java:245)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:106)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:481)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.

[jira] [Created] (LUCENE-9549) gradle "Reproduce with" line doesn't fully quote all args

2020-09-29 Thread Chris M. Hostetter (Jira)
Chris M. Hostetter created LUCENE-9549:
--

 Summary: gradle "Reproduce with" line doesn't fully  quote all args
 Key: LUCENE-9549
 URL: https://issues.apache.org/jira/browse/LUCENE-9549
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Chris M. Hostetter


Recent jenkins builds include the following output...

{noformat}
Build Log:
[...truncated 1849 lines...]
ERROR: The following test(s) have failed:
  - org.apache.solr.handler.JsonLoaderTest.testAddBigIntegerValueToTrieField 
(:solr:core)
Test output: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/build/test-results/test/outputs/OUTPUT-org.apache.solr.ha
ndler.JsonLoaderTest.txt
Reproduce with: gradlew :solr:core:test --tests 
"org.apache.solr.handler.JsonLoaderTest" -Ptests.jvms=6 
-Ptests.haltonfailure=false -Ptests.jvmargs=-XX:+UseCompressedOops 
-XX:+UseSerialGC -Ptests.seed=B65FB0896CCB43ED -Ptests.multiplier=3 
-Ptests.badapples=false -Ptests.file.encoding=US-ASCII
{noformat}

Attempting to run that command as written produces the following output...

{noformat}
FAILURE: Build failed with an exception.

* What went wrong:
Problem configuring task :solr:core:test from command line.
> Unknown command-line option '-X'.
{noformat}

The explanation of that error is that {{\-XX:+UseSerialGC}} is supposed to be 
part of the {{\-Ptests.jvmargs}} param.

Ideally the gradle "Reproduce with" line should have looked like...

{noformat}
Reproduce with: gradlew :solr:core:test --tests 
"org.apache.solr.handler.JsonLoaderTest" -Ptests.jvms=6 
-Ptests.haltonfailure=false -Ptests.jvmargs='-XX:+UseCompressedOops 
-XX:+UseSerialGC' -Ptests.seed=B65FB0896CCB43ED -Ptests.multiplier=3 
-Ptests.badapples=false -Ptests.file.encoding=US-ASCII
{noformat}
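
A minimal sketch of the quoting behaviour being asked for (a hypothetical Java helper for 
illustration; the real "Reproduce with" line is assembled in the gradle build scripts): 
wrap any argument containing whitespace in quotes before joining, so the shell hands 
{{\-Ptests.jvmargs}} to gradle as a single value. Quoting the whole argument instead of 
only the value after '=' is equivalent as far as the shell is concerned.

{code:java}
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical helper, not the actual build logic: quote any argument that contains
// whitespace so that e.g. two JVM flags in -Ptests.jvmargs survive as one gradle property
// instead of the shell splitting them and gradle complaining about an unknown '-X...' option.
public final class ReproLine {
  private ReproLine() {}

  static String quoteIfNeeded(String arg) {
    return arg.matches(".*\\s.*") ? "'" + arg + "'" : arg;
  }

  public static String format(List<String> args) {
    return args.stream().map(ReproLine::quoteIfNeeded).collect(Collectors.joining(" "));
  }

  public static void main(String[] unused) {
    System.out.println(format(List.of(
        "gradlew", ":solr:core:test",
        "--tests", "org.apache.solr.handler.JsonLoaderTest",
        "-Ptests.jvmargs=-XX:+UseCompressedOops -XX:+UseSerialGC",
        "-Ptests.seed=B65FB0896CCB43ED")));
    // prints: ... '-Ptests.jvmargs=-XX:+UseCompressedOops -XX:+UseSerialGC' ...
  }
}
{code}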



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14767) Adding document in XML format fails with NumberFormatException

2020-09-29 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204177#comment-17204177
 ] 

Munendra S N commented on SOLR-14767:
-

Looking into this

> Adding document in XML format fails with NumberFormatException
> --
>
> Key: SOLR-14767
> URL: https://issues.apache.org/jira/browse/SOLR-14767
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.7
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Indexing the following document
> {code:java}
> <doc>
>   <field name="id">001</field>
>   <field name="stars_i">14.0</field>
> </doc>
> {code}
> fails with the following stack trace
> {code:java}
> ERROR: [doc=001] Error adding field 'stars_i'='14.0' msg=For input string: 
> "14.0"
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:221)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:109)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:975)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:341)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:288)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:235)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessorFactory$RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:73)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.NestedUpdateProcessorFactory$NestedUpdateProcessor.processAdd(NestedUpdateProcessorFactory.java:79)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:259)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doVersionAdd(DistributedUpdateProcessor.java:498)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.lambda$versionAdd$0(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.VersionBucket.runWithLock(VersionBucket.java:50)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:339)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:225)
>   at 
> org.apache.solr.update.processor.DistributedZkUpdateProcessor.processAdd(DistributedZkUpdateProcessor.java:245)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:106)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:481)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>   at

[jira] [Commented] (LUCENE-9549) gradle "Reproduce with" line doesn't fully quote all args

2020-09-29 Thread Chris M. Hostetter (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204179#comment-17204179
 ] 

Chris M. Hostetter commented on LUCENE-9549:


(I have no idea how hard/easy it is to make this change, nor do I think it's 
particularly hard to work around – mainly just filing this issue now to note 
the {{Unknown command-line option '-X'.}} output, since it may be confusing to 
novice gradle users who encounter it.)

> gradle "Reproduce with" line doesn't fully  quote all args
> --
>
> Key: LUCENE-9549
> URL: https://issues.apache.org/jira/browse/LUCENE-9549
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Chris M. Hostetter
>Priority: Major
>
> Recent jenkins builds include the following output...
> {noformat}
> Build Log:
> [...truncated 1849 lines...]
> ERROR: The following test(s) have failed:
>   - org.apache.solr.handler.JsonLoaderTest.testAddBigIntegerValueToTrieField 
> (:solr:core)
> Test output: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/build/test-results/test/outputs/OUTPUT-org.apache.solr.ha
> ndler.JsonLoaderTest.txt
> Reproduce with: gradlew :solr:core:test --tests 
> "org.apache.solr.handler.JsonLoaderTest" -Ptests.jvms=6 
> -Ptests.haltonfailure=false -Ptests.jvmargs=-XX:+UseCompressedOops 
> -XX:+UseSerialGC -Ptests.seed=B65FB0896CCB43ED -Ptests.multiplier=3 
> -Ptests.badapples=false -Ptests.file.encoding=US-ASCII
> {noformat}
> Attempting to run that command as written produces the following output...
> {noformat}
> FAILURE: Build failed with an exception.
> * What went wrong:
> Problem configuring task :solr:core:test from command line.
> > Unknown command-line option '-X'.
> {noformat}
> The explanation of that error is that {{\-XX:+UseSerialGC}} is supposed to be 
> part of the {{\-Ptests.jvmargs}} param.
> Ideally the gradle "Reproduce with" line should have looked like...
> {noformat}
> Reproduce with: gradlew :solr:core:test --tests 
> "org.apache.solr.handler.JsonLoaderTest" -Ptests.jvms=6 
> -Ptests.haltonfailure=false -Ptests.jvmargs='-XX:+UseCompressedOops 
> -XX:+UseSerialGC' -Ptests.seed=B65FB0896CCB43ED -Ptests.multiplier=3 
> -Ptests.badapples=false -Ptests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14889) improve templated variable escaping in ref-guide _config.yml

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204181#comment-17204181
 ] 

ASF subversion and git services commented on SOLR-14889:


Commit 52183dfbf6407fe939bc5ccb01cb74dabca44334 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=52183df ]

SOLR-14889: improve templated variable escaping in ref-guide _config.yml


> improve templated variable escaping in ref-guide _config.yml
> 
>
> Key: SOLR-14889
> URL: https://issues.apache.org/jira/browse/SOLR-14889
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-14889.patch, SOLR-14889.patch, SOLR-14889.patch, 
> SOLR-14889.patch, SOLR-14889.patch
>
>
> SOLR-14824 ran into windows failures when we switched from using a hardcoded 
> "relative" path to the solrRootPath to using groovy/project variables to get 
> the path.  The reason for the failures was that the path is used as a 
> variable templated into {{_config.yml.template}} to build the {{_config.yml}} 
> file, but on windows the path separator of '\' was being parsed by 
> jekyll/YAML as a string escape character.
> (This wasn't a problem we ran into before, even on windows, prior to the 
> SOLR-14824 changes, because the hardcoded relative path only used '/' 
> delimiters, which (j)ruby was happy to work with, even on windows.)
> As Uwe pointed out when hotfixing this...
> {quote}Problem was that backslashes are used to escape strings, but windows 
> paths also have those. Fix was to add StringEscapeUtils, but I don't like 
> this too much. Maybe we find a better solution to make special characters in 
> those properties escaped correctly when used in strings inside templates.
> {quote}
> ...the current fix of using {{StringEscapeUtils.escapeJava}} -- only for this 
> one variable -- doesn't really protect other variables that might have 
> special characters in them down the road, and while "escapeJava" works ok for 
> the "\" issue, it isn't necessarily consistent with all YAML escapes, which 
> could lead to even weirder bugs/confusion down the road.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14889) improve templated variable escaping in ref-guide _config.yml

2020-09-29 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter resolved SOLR-14889.
---
Fix Version/s: master (9.0)
   Resolution: Fixed

> improve templated variable escaping in ref-guide _config.yml
> 
>
> Key: SOLR-14889
> URL: https://issues.apache.org/jira/browse/SOLR-14889
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: SOLR-14889.patch, SOLR-14889.patch, SOLR-14889.patch, 
> SOLR-14889.patch, SOLR-14889.patch
>
>
> SOLR-14824 ran into windows failures when we switched from using a hardcoded 
> "relative" path to the solrRootPath to using groovy/project variables to get 
> the path.  The reason for the failures was that the path is used as a 
> variable templated into {{_config.yml.template}} to build the {{_config.yml}} 
> file, but on windows the path separator of '\' was being parsed by 
> jekyll/YAML as a string escape character.
> (This wasn't a problem we ran into before, even on windows, prior to the 
> SOLR-14824 changes, because the hardcoded relative path only used '/' 
> delimiters, which (j)ruby was happy to work with, even on windows.)
> As Uwe pointed out when hotfixing this...
> {quote}Problem was that backslashes are used to escape strings, but windows 
> paths also have those. Fix was to add StringEscapeUtils, but I don't like 
> this too much. Maybe we find a better solution to make special characters in 
> those properties escaped correctly when used in strings inside templates.
> {quote}
> ...the current fix of using {{StringEscapeUtils.escapeJava}} -- only for this 
> one variable -- doesn't really protect other variables that might have 
> special characters in them down the road, and while "escapeJava" works ok for 
> the "\" issue, it isn't necessarily consistent with all YAML escapes, which 
> could lead to even weirder bugs/confusion down the road.
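
As an illustration of the escaping problem described above (this is not the ref-guide 
build code, and the path is an invented example): a Windows path dropped verbatim into a 
double-quoted YAML scalar is mangled because YAML treats '\' as an escape introducer, 
while doubling the backslashes or using a single-quoted scalar keeps it intact.

{code:java}
// Illustration only: three ways the solr-root-path value could be rendered into _config.yml.
public class YamlEscapeDemo {
  public static void main(String[] args) {
    String solrRootPath = "C:\\Users\\jenkins\\workspace\\lucene-solr"; // invented example path

    // Broken: inside a double-quoted YAML scalar the backslashes act as escape introducers.
    System.out.println("solr-root-path: \"" + solrRootPath + "\"");

    // YAML-safe: double every backslash inside the double-quoted scalar.
    System.out.println("solr-root-path: \"" + solrRootPath.replace("\\", "\\\\") + "\"");

    // Also YAML-safe: single-quoted scalars treat backslashes literally.
    System.out.println("solr-root-path: '" + solrRootPath + "'");
  }
}
{code}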



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] muse-dev[bot] commented on a change in pull request #1931: SOLR-14597 Advanced Query Parser (WIP)

2020-09-29 Thread GitBox


muse-dev[bot] commented on a change in pull request #1931:
URL: https://github.com/apache/lucene-solr/pull/1931#discussion_r496945715



##
File path: 
solr/core/src/java/org/apache/solr/analysis/TokenAnalyzerFilterFactory.java
##
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.analysis;
+
+import java.lang.invoke.MethodHandles;
+import java.util.Map;
+
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.analysis.TokenFilterFactory;
+import org.apache.lucene.analysis.TokenStream;
+import org.apache.solr.request.SolrRequestInfo;
+import org.apache.solr.schema.FieldType;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.schema.SchemaAware;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Provides a filter that will analyze tokens with the analyzer from an 
arbitrary field type.
+ *
+ * 
+ * 
+ *   
+ * 
+ * 
+ * 
+ * 
+ *   
+ * 
+ *
+ * Note that a configuration such as above may interfere with multi-word 
synonyms
+ *
+ *  @since 8.4
+ *  @lucene.spi {@value #NAME}
+ */
+public class TokenAnalyzerFilterFactory extends TokenFilterFactory implements 
SchemaAware {
+  private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+  /** SPI name */
+  public static final String NAME = "tokenAnalyzer";
+
+  private IndexSchema schema;
+  private String asType;
+  private boolean preserveType;
+
+  /**
+   * Initialize this factory via a set of key-value pairs.
+   */
+  public TokenAnalyzerFilterFactory(Map<String,String> args) {
+super(args);
+asType = super.require(args, "asType");
+preserveType = super.getBoolean(args,"preserveType",false);
+  }
+
+  private synchronized void lazyInit() {
+if (schema != null) {
+  return;
+}
+// todo: it would be *awfully* nice if there were a better way to get
+//  added to the schema aware list.
+
+// holy hackery Batman! ... I know Robin, but we have no choice.
+// We need to be aware of the Toker's movements!
+synchronized (this) {
+  // synch to avoid missing an inform occurring between these two 
statements.
+  schema = 
SolrRequestInfo.getRequestInfo().getReq().getCore().getLatestSchema();
+  schema.registerAware(this);
+}
+  }
+
+  private Analyzer acquireAnalyzer(IndexSchema latestSchema) {
+FieldType fieldType = latestSchema.getFieldTypeByName(asType);
+if (fieldType == null) {
+  throw new RuntimeException("Could not find field type " + asType + " 
check that field type is defined and " +
+  "your spelling in 'asType' matches the field's 'name' property");
+}
+// meh, but I see no better way... suggestions welcome
+boolean forQuery = 
SolrRequestInfo.getRequestInfo().getReq().getParams().getParams("q") != null;

Review comment:
   *NULL_DEREFERENCE:*  object returned by `getRequestInfo()` could be null 
and is dereferenced at line 93.
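
   A hedged sketch of one way to guard the flagged dereference (illustration only, not 
part of this PR): `SolrRequestInfo.getRequestInfo()` returns null when no request is in 
flight, so the check can degrade to "not a query".

       // Hypothetical null-safe variant of the flagged line:
       SolrRequestInfo info = SolrRequestInfo.getRequestInfo();
       boolean forQuery = info != null
           && info.getReq() != null
           && info.getReq().getParams().getParams("q") != null;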

##
File path: 
solr/core/src/java/org/apache/solr/analysis/TokenAnalyzerFilterFactory.java
##
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the Lice
