[jira] [Commented] (SOLR-14296) Update netty to 4.1.47

2020-03-18 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061480#comment-17061480
 ] 

Dawid Weiss commented on SOLR-14296:


{quote}Curiously, the gradle version was 4.1.45, not sure how it got there
{quote}
Palantir's plugin will try to consolidate version numbers across all modules 
and dependencies. At least one dependency probably required this version. You 
can inspect the dependency graph and see why a certain version was chosen 
(see help/dependencies.txt).
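For example, Gradle's built-in dependencyInsight task answers the "why this 
version" question directly (the module path and configuration name below are 
illustrative; run it from a lucene-solr checkout):

```
# Show which dependencies pulled in netty and why a version was selected.
./gradlew :solr:core:dependencyInsight \
    --dependency io.netty \
    --configuration runtimeClasspath
```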

> Update netty to 4.1.47
> --
>
> Key: SOLR-14296
> URL: https://issues.apache.org/jira/browse/SOLR-14296
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andras Salamon
>Priority: Minor
> Fix For: 8.6
>
> Attachments: SOLR-14296-01.patch
>
>
> There are two CVEs against the current netty version:
> [https://nvd.nist.gov/vuln/detail/CVE-2019-20444]
>  [https://nvd.nist.gov/vuln/detail/CVE-2019-20445]
> Although Solr is not affected, it would still be good to update netty.
> The first non-affected netty version is 4.1.45, but during the update I've 
> found a netty bug ( [https://github.com/netty/netty/issues/10017] ), so it's 
> better to update to 4.1.46.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9281) Retire SPIClassIterator from master because Java 9+ uses different mechanism to load services when module system is used

2020-03-18 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061488#comment-17061488
 ] 

Dawid Weiss commented on LUCENE-9281:
-

I see two possibilities:

1) A cleaner and more reasonable fix would be to have no-arg public 
constructors on SPI classes and a single-call initialize(Map<>) method (any 
subsequent call throwing an exception). Sure, it would make things a bit uglier 
(no final fields for variables extracted from the map). Maybe it could be 
engineered to be a bit nicer: those factory classes could have a final field of 
the kind of {{OneTimeSupplier options = new 
OneTimeSupplier((argsMap) -> new ClassOptions<>(argsMap))}} and this could 
ensure the options are initialized once, with get() still being fairly fast.

2) A hacky way is to create a thread-local supplier ({{OptionsSupplier.get()}} 
returning a {{Map<>}}) which would be set up temporarily for the thread 
enumerating the SPI. Then the public constructor in factory classes can extract 
argsMap from this thread local:

{{Foo() { this(OptionsSupplier.get()); }}}

Neither is particularly pretty.
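A minimal sketch of the OneTimeSupplier idea from option 1 (a hypothetical 
class, assuming a String-keyed args map; not an actual Lucene API):

```java
import java.util.Map;
import java.util.function.Function;

// Sketch: the SPI framework calls init(argsMap) exactly once; afterwards
// get() is a plain field read, and a second init() call throws.
final class OneTimeSupplier<T> {
  private final Function<Map<String, String>, T> factory;
  private volatile T value;

  OneTimeSupplier(Function<Map<String, String>, T> factory) {
    this.factory = factory;
  }

  synchronized void init(Map<String, String> argsMap) {
    if (value != null) {
      throw new IllegalStateException("already initialized");
    }
    value = factory.apply(argsMap);
  }

  T get() {
    T v = value;
    if (v == null) {
      throw new IllegalStateException("not initialized");
    }
    return v;
  }
}
```

This keeps a final field in the factory class while still deferring the 
args-map extraction to a single, guarded initialization call.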

> Retire SPIClassIterator from master because Java 9+ uses different mechanism 
> to load services when module system is used
> 
>
> Key: LUCENE-9281
> URL: https://issues.apache.org/jira/browse/LUCENE-9281
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: master (9.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
>
> We currently have our own implementation of the service loader standard (SPI) 
> for several reasons:
> (1) In some older JDKs the order of the classpath was not respected, and this 
> led to the wrong order of codecs implementing the same SPI name. This caused 
> tests to sometimes use the wrong class (we had this in Lucene 4 where we had 
> a test-only read/write Lucene3 codec that was listed before the read-only 
> one). That's no longer an issue; the order of loading does not matter. In 
> addition, Java now does everything correctly.
> (2) In Analysis, we require SPI classes to have a constructor taking args (a 
> Map of params in our case). We also extract the NAME from a static field. 
> The standard service loader does not support this; it tries to instantiate 
> the class with the default ctor.
> With Java 9+, the ServiceLoader now has a stream() method that allows 
> filtering and preprocessing classes: 
> https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html#stream()
> This allows us to use the new interface and just get the loaded class (which 
> may come from module-info.class or a conventional SPI file): 
> https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.Provider.html#type()
> This change allows us to convert Lucene to modules listing all SPIs in the 
> module-info.java.
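A sketch of what that Java 9+ path could look like (the service interface and 
registration logic below are placeholders, not the actual Lucene code): 
stream() exposes each provider's Class via Provider.type() without 
instantiating it, so we can construct it ourselves through a Map-taking 
constructor.

```java
import java.util.Map;
import java.util.ServiceLoader;

// Sketch: enumerate SPI implementations with ServiceLoader.stream(), get the
// Class (from module-info or a META-INF/services file) without the default
// ctor being invoked, and instantiate via a Map-taking constructor.
class SpiScan {
  interface AnalysisFactory {}  // placeholder service interface

  static void loadAll(Map<String, String> args) {
    ServiceLoader.load(AnalysisFactory.class).stream()
        .map(ServiceLoader.Provider::type)  // the Class, no instantiation yet
        .forEach(clazz -> {
          try {
            AnalysisFactory f =
                clazz.getConstructor(Map.class).newInstance(args);
            // ... register f under its NAME here ...
          } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException(
                clazz + " lacks a public Map-taking constructor", e);
          }
        });
  }
}
```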






[jira] [Commented] (SOLR-14341) Move a collection's configSet name to state.json

2020-03-18 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061521#comment-17061521
 ] 

Jan Høydahl commented on SOLR-14341:


+1 While ZK allows a znode to both have data and children, perhaps we as a 
project should stick to the more traditional approach of avoiding that. As far 
as I know, this is the only place we add data to a "folder".

> Move a collection's configSet name to state.json
> 
>
> Key: SOLR-14341
> URL: https://issues.apache.org/jira/browse/SOLR-14341
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Major
>
> It's a bit odd that a collection's state.json knows everything about a 
> collection except for perhaps the most important pointer -- the configSet 
> name.  Presently the configSet name is retrieved via 
> {{ZkStateReader.getConfigName(collectionName)}} which looks at the zk path 
> {{/collections/collectionName}} (an intermediate node) interpreted as a 
> trivial JSON object.  Combining the configSet name into state.json is simpler 
> and also more efficient since many calls to grab the configset name _already_ 
> need the state.json (via a DocCollection object).
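Concretely (an illustrative znode layout, not the exact schema), the proposal 
moves the pointer like so:

```
Before:
  /collections/myColl            -> {"configName":"myConf"}   (data on an intermediate znode)
  /collections/myColl/state.json -> {"myColl": { ... }}

After:
  /collections/myColl            -> (no data)
  /collections/myColl/state.json -> {"myColl": {"configName":"myConf", ... }}
```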






[jira] [Created] (LUCENE-9282) Surround query parser's Query instances accumulate clause count on rewrite() causing TooManyBasicQueries and hashCode/equals changes

2020-03-18 Thread Dawid Weiss (Jira)
Dawid Weiss created LUCENE-9282:
---

 Summary: Surround query parser's Query instances accumulate clause 
count on rewrite() causing TooManyBasicQueries and hashCode/equals changes
 Key: LUCENE-9282
 URL: https://issues.apache.org/jira/browse/LUCENE-9282
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss


I was surprised to discover that queries produced by the surround query parser 
(span queries) behave in a non-deterministic way over multiple calls to 
IndexSearcher.search() methods.

The problem is that SQP produces classes referencing an internal 
BasicQueryFactory that accumulates primitive clause count on rewrite, throwing 
TooManyBasicQueries if a given threshold is exceeded. This is fine but leads to 
an odd situation in which a loop like this:

{code:java}
Query q = QueryParser.parse("...");
for (int i = 0; i < 1000; i++) { // repeated searches with the same Query
  indexSearcher.search(q, 10);
}
{code}

would execute the query successfully up until the threshold is reached, and 
only after that throw an exception. What's even weirder, the hashCode/equals 
of q change over time, violating the Query class contract:

https://github.com/apache/lucene-solr/blob/fbd05167f455e3ce2b2ead50336e2b9c2521cd6c/lucene/queryparser/src/java/org/apache/lucene/queryparser/surround/query/RewriteQuery.java#L67

and:

https://github.com/apache/lucene-solr/blob/fbd05167f455e3ce2b2ead50336e2b9c2521cd6c/lucene/queryparser/src/java/org/apache/lucene/queryparser/surround/query/BasicQueryFactory.java#L76-L91

This seems like a bug to me, but I wanted to make sure: is this behavior 
relied on anywhere?

My take on fixing this would be to pass an independent query counter inside 
rewrite so that the original Query itself remains immutable (including hash 
code and equals) and rewrite can be called any number of times (always with the 
same result).






[jira] [Created] (LUCENE-9283) DelimitedBoostTokenFilter can fail testRandomChains

2020-03-18 Thread Alan Woodward (Jira)
Alan Woodward created LUCENE-9283:
-

 Summary: DelimitedBoostTokenFilter can fail testRandomChains
 Key: LUCENE-9283
 URL: https://issues.apache.org/jira/browse/LUCENE-9283
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Alan Woodward
Assignee: Alan Woodward


DelimitedBoostTokenFilter expects tokens of the form `token` or `token|number` 
and throws a NumberFormatException if the `number` part can't be parsed.  This 
can cause test failures when we build random chains and throw random data 
through them.

We can either exclude DelimitedBoostTokenFilter when building a random 
analyzer, or add a flag to ignore badly-formed tokens. I lean towards doing the 
former, as I don't really want to make leniency the default here.






[jira] [Commented] (LUCENE-9283) DelimitedBoostTokenFilter can fail testRandomChains

2020-03-18 Thread Alan Woodward (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061547#comment-17061547
 ] 

Alan Woodward commented on LUCENE-9283:
---

{code}
20:39:23[junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
20:39:23[junit4]   2> TEST FAIL: useCharFilter=false 
text='\u3121\u3121\u312f\u3111\u3104\u3116 \u6412  \ud847\udd35\ud85d\ude23  
tatd lcdqpn - ve v my \ud800\udd9a\ud800\uddcc  imie jzi \ufbf2\u0128 
\u034e\u035c\u0368  nlocx  wklihk'
20:39:23[junit4]   2> Exception from random analyzer: 
20:39:23[junit4]   2> charfilters=
20:39:23[junit4]   2> tokenizer=
20:39:23[junit4]   2>   org.apache.lucene.analysis.core.LetterTokenizer()
20:39:23[junit4]   2> filters=
20:39:23[junit4]   2>   
org.apache.lucene.analysis.standard.ClassicFilter(ValidatingTokenFilter@244ccec3
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1)
20:39:23[junit4]   2>   
Conditional:org.apache.lucene.analysis.cz.CzechStemFilter(OneTimeWrapper@2ae1194a
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false)
20:39:23[junit4]   2>   
org.apache.lucene.analysis.boost.DelimitedBoostTokenFilter(ValidatingTokenFilter@7590871e
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false,boost=1.0,
 ?)
20:39:23[junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestRandomChains -Dtests.method=testRandomChainsWithLargeStrings 
-Dtests.seed=ACB1BAA709A1F2B2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ur-IN -Dtests.timezone=PNT -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
20:39:23[junit4] ERROR   0.03s J0 | 
TestRandomChains.testRandomChainsWithLargeStrings <<<
20:39:23[junit4]> Throwable #1: java.lang.NumberFormatException: For 
input string: "Ĩ"
20:39:23[junit4]>   at 
__randomizedtesting.SeedInfo.seed([ACB1BAA709A1F2B2:C6EA05B650EFD241]:0)
20:39:23[junit4]>   at 
java.base/jdk.internal.math.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2054)
20:39:23[junit4]>   at 
java.base/jdk.internal.math.FloatingDecimal.parseFloat(FloatingDecimal.java:122)
20:39:23[junit4]>   at 
java.base/java.lang.Float.parseFloat(Float.java:455)
20:39:23[junit4]>   at 
org.apache.lucene.analysis.boost.DelimitedBoostTokenFilter.incrementToken(DelimitedBoostTokenFilter.java:52)
20:39:23[junit4]>   at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:77)
20:39:23[junit4]>   at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:716)
20:39:23[junit4]>   at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:630)
20:39:23[junit4]>   at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:558)
20:39:23[junit4]>   at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:899)
20:39:23[junit4]>   at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
20:39:23[junit4]>   at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
20:39:23[junit4]>   at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
20:39:23[junit4]>   at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
20:39:23[junit4]>   at 
java.base/java.lang.Thread.run(Thread.java:834)
20:39:23[junit4]   2> NOTE: test params are: codec=Asserting(Lucene84): 
{dummy=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
maxPointsInLeafNode=1932, maxMBSortInHeap=6.861957584994992, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@e215be6),
 locale=ur-IN, timezone=PNT
{code}

> DelimitedBoostTokenFilter can fail testRandomChains
> ---
>
> Key: LUCENE-9283
> URL: https://issues.apache.org/jira/browse/LUCENE-9283
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
>
> DelimitedBoostTokenFilter expects tokens of the form `token` or 
> `token|number` and throws a NumberFormatException if the `number` part can't 
> be parsed.  This can cause test failures when we build random chains and 
> throw random data through them.
> We can either exclude DelimitedBoostTokenFilter when building a random 
> analyzer, or add a flag to ignore badly-formed tokens. I lean towards doing 
> the former, as I don't really want to make leniency the default here.

[GitHub] [lucene-solr] jimczi commented on a change in pull request #1316: LUCENE-8929 parallel early termination in TopFieldCollector using minmin score

2020-03-18 Thread GitBox
jimczi commented on a change in pull request #1316: LUCENE-8929 parallel early 
termination in TopFieldCollector using minmin score
URL: https://github.com/apache/lucene-solr/pull/1316#discussion_r394218121
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/search/ParallelSortedCollector.java
 ##
 @@ -0,0 +1,612 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.PriorityQueue;
+
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.search.FieldValueHitQueue.Entry;
+import org.apache.lucene.search.TotalHits.Relation;
+
+/**
+ * A {@link Collector} for results sorted by field, optimized for early 
termination in
+ * the case where the {@link Sort} matches the index and the search is 
executed in parallel,
+ * using multiple threads.
+ *
+ * @lucene.experimental
+ */
+abstract class ParallelSortedCollector extends TopDocsCollector {
+
+  private static final ScoreDoc[] EMPTY_SCOREDOCS = new ScoreDoc[0];
+
+  final int numHits;
+  final Sort sort;
+  final HitsThresholdChecker hitsThresholdChecker;
+  final FieldComparator firstComparator;
+
+  // the current local minimum competitive score already propagated to the 
underlying scorer
+  float minCompetitiveScore;
+
+  // Enables global early termination with concurrent threads using minimum 
competitive scores and
+  // collected counts of all segments
+  final MaxScoreTerminator maxScoreTerminator;
+
+  final int numComparators;
+  FieldValueHitQueue.Entry bottom = null;
+  boolean queueFull;
+  int docBase;
+  final boolean needsScores;
+  final ScoreMode scoreMode;
+
+  // Declaring the constructor private prevents extending this class by anyone
+  // else. Note that the class cannot be final since it's extended by the
+  // internal versions. If someone will define a constructor with any other
+  // visibility, then anyone will be able to extend the class, which is not 
what
+  // we want.
+  private ParallelSortedCollector(FieldValueHitQueue pq, int numHits, 
Sort sort,
+  HitsThresholdChecker hitsThresholdChecker, 
boolean needsScores,
+  MaxScoreTerminator maxScoreTerminator) {
+super(pq);
+this.needsScores = needsScores;
+this.numHits = numHits;
+this.sort = sort;
+this.hitsThresholdChecker = hitsThresholdChecker;
+this.maxScoreTerminator = maxScoreTerminator;
+numComparators = pq.getComparators().length;
+firstComparator = pq.getComparators()[0];
+scoreMode = needsScores ? ScoreMode.COMPLETE : 
ScoreMode.COMPLETE_NO_SCORES;
+  }
+
+  private abstract class TopFieldLeafCollector implements LeafCollector {
+
+final LeafFieldComparator comparator;
+final int firstReverseMul;
+final int reverseMul;
+final LeafReaderContext context;
+final MaxScoreTerminator.LeafState leafTerminationState;
+
+private double score;
+Scorable scorer;
+
+TopFieldLeafCollector(FieldValueHitQueue queue, LeafReaderContext 
context) throws IOException {
+  LeafFieldComparator[] comparators = queue.getComparators(context);
+  firstReverseMul = queue.reverseMul[0];
+  if (comparators.length == 1) {
+this.reverseMul = queue.reverseMul[0];
+this.comparator = comparators[0];
+  } else {
+this.reverseMul = 1;
+this.comparator = new MultiLeafFieldComparator(comparators, 
queue.reverseMul);
+  }
+  this.context = context;
+  leafTerminationState = maxScoreTerminator.addLeafState();
+}
+
+void countHit() {
+  ++totalHits;
+  // TODO: replace hitsThresholdChecker with something simpler
+  hitsThresholdChecker.incrementHitCount();
+}
+
+void collectHitIfCompetitive(int doc) throws IOException {
+  if (reverseMul * comparator.compareBottom(doc) > 0) {
+comparator.copy(bottom.slot, doc);
+score = getComparatorValue(bottom.slot);
+//System.out.printf("leaf=%d doc=%d score=%f\n", context.ord, 
context.docBase + doc, score);
+updateBottom(doc);
+comparator.setBottom(bo

[GitHub] [lucene-solr] jimczi commented on a change in pull request #1316: LUCENE-8929 parallel early termination in TopFieldCollector using minmin score

2020-03-18 Thread GitBox
jimczi commented on a change in pull request #1316: LUCENE-8929 parallel early 
termination in TopFieldCollector using minmin score
URL: https://github.com/apache/lucene-solr/pull/1316#discussion_r394213103
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/search/ParallelSortedCollector.java
 ##
 @@ -0,0 +1,612 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.PriorityQueue;
+
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.search.FieldValueHitQueue.Entry;
+import org.apache.lucene.search.TotalHits.Relation;
+
+/**
+ * A {@link Collector} for results sorted by field, optimized for early 
termination in
+ * the case where the {@link Sort} matches the index and the search is 
executed in parallel,
+ * using multiple threads.
+ *
+ * @lucene.experimental
+ */
 
 Review comment:
   I wonder why it should be reserved for parallel collection? Sorry, I wasn't 
clear, but I thought that specializing collectors for a sorted index could open 
the door for more optimizations. For instance, the priority queue is not needed 
since the index is already sorted, so a lot of comparisons could be saved.
   Regarding the usage of the `MaxScoreTerminator`, I think it would be simpler 
if the logic to track leaf state remained in the `LeafCollector`. We have to 
track the global bottom value, but I don't understand why `MaxScoreTerminator` 
also handles leaf states. This is similar to the `MaxScoreAccumulator`, so you 
could apply the termination logic directly in the collector:
   
if (hitsThresholdChecker.isThresholdReached()
    && Double.compare(maxScoreTerminator.getCurrent(), leafTerminationState) > 0) {
  // should terminate early
}
   
   Checking the current bottom score does not require extensive 
synchronization, so the interval at which the bottom value is checked is more 
about avoiding extra comparisons than avoiding thread contention.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] jimczi commented on a change in pull request #1316: LUCENE-8929 parallel early termination in TopFieldCollector using minmin score

2020-03-18 Thread GitBox
jimczi commented on a change in pull request #1316: LUCENE-8929 parallel early 
termination in TopFieldCollector using minmin score
URL: https://github.com/apache/lucene-solr/pull/1316#discussion_r394217885
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/search/ParallelSortedCollector.java
 ##
 @@ -0,0 +1,612 @@

[GitHub] [lucene-solr] jimczi commented on a change in pull request #1316: LUCENE-8929 parallel early termination in TopFieldCollector using minmin score

2020-03-18 Thread GitBox
jimczi commented on a change in pull request #1316: LUCENE-8929 parallel early 
termination in TopFieldCollector using minmin score
URL: https://github.com/apache/lucene-solr/pull/1316#discussion_r394194718
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/search/ParallelSortedCollector.java
 ##
 @@ -0,0 +1,612 @@

[GitHub] [lucene-solr] jimczi commented on a change in pull request #1316: LUCENE-8929 parallel early termination in TopFieldCollector using minmin score

2020-03-18 Thread GitBox
jimczi commented on a change in pull request #1316: LUCENE-8929 parallel early 
termination in TopFieldCollector using minmin score
URL: https://github.com/apache/lucene-solr/pull/1316#discussion_r394214819
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/search/ParallelSortedCollector.java
 ##
 @@ -0,0 +1,612 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.PriorityQueue;
+
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.search.FieldValueHitQueue.Entry;
+import org.apache.lucene.search.TotalHits.Relation;
+
+/**
+ * A {@link Collector} for results sorted by field, optimized for early 
termination in
+ * the case where the {@link Sort} matches the index and the search is 
executed in parallel,
+ * using multiple threads.
+ *
+ * @lucene.experimental
+ */
+abstract class ParallelSortedCollector extends TopDocsCollector {
+
+  private static final ScoreDoc[] EMPTY_SCOREDOCS = new ScoreDoc[0];
+
+  final int numHits;
+  final Sort sort;
+  final HitsThresholdChecker hitsThresholdChecker;
+  final FieldComparator firstComparator;
+
+  // the current local minimum competitive score already propagated to the 
underlying scorer
+  float minCompetitiveScore;
+
+  // Enables global early termination with concurrent threads using minimum 
competitive scores and
+  // collected counts of all segments
+  final MaxScoreTerminator maxScoreTerminator;
+
+  final int numComparators;
+  FieldValueHitQueue.Entry bottom = null;
+  boolean queueFull;
+  int docBase;
+  final boolean needsScores;
+  final ScoreMode scoreMode;
+
+  // Declaring the constructor private prevents extending this class by anyone
+  // else. Note that the class cannot be final since it's extended by the
+  // internal versions. If someone will define a constructor with any other
+  // visibility, then anyone will be able to extend the class, which is not 
what
+  // we want.
+  private ParallelSortedCollector(FieldValueHitQueue pq, int numHits, 
Sort sort,
+  HitsThresholdChecker hitsThresholdChecker, 
boolean needsScores,
+  MaxScoreTerminator maxScoreTerminator) {
+super(pq);
+this.needsScores = needsScores;
+this.numHits = numHits;
+this.sort = sort;
+this.hitsThresholdChecker = hitsThresholdChecker;
+this.maxScoreTerminator = maxScoreTerminator;
+numComparators = pq.getComparators().length;
+firstComparator = pq.getComparators()[0];
+scoreMode = needsScores ? ScoreMode.COMPLETE : 
ScoreMode.COMPLETE_NO_SCORES;
+  }
+
+  private abstract class TopFieldLeafCollector implements LeafCollector {
+
+final LeafFieldComparator comparator;
+final int firstReverseMul;
+final int reverseMul;
+final LeafReaderContext context;
+final MaxScoreTerminator.LeafState leafTerminationState;
+
+private double score;
+Scorable scorer;
+
+TopFieldLeafCollector(FieldValueHitQueue queue, LeafReaderContext 
context) throws IOException {
+  LeafFieldComparator[] comparators = queue.getComparators(context);
+  firstReverseMul = queue.reverseMul[0];
+  if (comparators.length == 1) {
+this.reverseMul = queue.reverseMul[0];
+this.comparator = comparators[0];
+  } else {
+this.reverseMul = 1;
+this.comparator = new MultiLeafFieldComparator(comparators, 
queue.reverseMul);
+  }
+  this.context = context;
+  leafTerminationState = maxScoreTerminator.addLeafState();
+}
+
+void countHit() {
+  ++totalHits;
+  // TODO: replace hitsThresholdChecker with something simpler
+  hitsThresholdChecker.incrementHitCount();
+}
+
+void collectHitIfCompetitive(int doc) throws IOException {
+  if (reverseMul * comparator.compareBottom(doc) > 0) {
+comparator.copy(bottom.slot, doc);
+score = getComparatorValue(bottom.slot);
+//System.out.printf("leaf=%d doc=%d score=%f\n", context.ord, 
context.docBase + doc, score);
+updateBottom(doc);
+comparator.setBottom(bo

[GitHub] [lucene-solr] jimczi commented on a change in pull request #1316: LUCENE-8929 parallel early termination in TopFieldCollector using minmin score

2020-03-18 Thread GitBox
jimczi commented on a change in pull request #1316: LUCENE-8929 parallel early 
termination in TopFieldCollector using minmin score
URL: https://github.com/apache/lucene-solr/pull/1316#discussion_r394215894
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/search/ParallelSortedCollector.java
 ##
 @@ -0,0 +1,612 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.PriorityQueue;
+
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.search.FieldValueHitQueue.Entry;
+import org.apache.lucene.search.TotalHits.Relation;
+
+/**
+ * A {@link Collector} for results sorted by field, optimized for early 
termination in
+ * the case where the {@link Sort} matches the index and the search is 
executed in parallel,
+ * using multiple threads.
+ *
+ * @lucene.experimental
+ */
+abstract class ParallelSortedCollector extends TopDocsCollector {
+
+  private static final ScoreDoc[] EMPTY_SCOREDOCS = new ScoreDoc[0];
+
+  final int numHits;
+  final Sort sort;
+  final HitsThresholdChecker hitsThresholdChecker;
+  final FieldComparator firstComparator;
+
+  // the current local minimum competitive score already propagated to the 
underlying scorer
+  float minCompetitiveScore;
+
+  // Enables global early termination with concurrent threads using minimum 
competitive scores and
+  // collected counts of all segments
+  final MaxScoreTerminator maxScoreTerminator;
+
+  final int numComparators;
+  FieldValueHitQueue.Entry bottom = null;
+  boolean queueFull;
+  int docBase;
+  final boolean needsScores;
+  final ScoreMode scoreMode;
+
+  // Declaring the constructor private prevents extending this class by anyone
+  // else. Note that the class cannot be final since it's extended by the
+  // internal versions. If someone will define a constructor with any other
+  // visibility, then anyone will be able to extend the class, which is not 
what
+  // we want.
+  private ParallelSortedCollector(FieldValueHitQueue pq, int numHits, 
Sort sort,
+  HitsThresholdChecker hitsThresholdChecker, 
boolean needsScores,
+  MaxScoreTerminator maxScoreTerminator) {
+super(pq);
+this.needsScores = needsScores;
+this.numHits = numHits;
+this.sort = sort;
+this.hitsThresholdChecker = hitsThresholdChecker;
+this.maxScoreTerminator = maxScoreTerminator;
+numComparators = pq.getComparators().length;
+firstComparator = pq.getComparators()[0];
+scoreMode = needsScores ? ScoreMode.COMPLETE : 
ScoreMode.COMPLETE_NO_SCORES;
+  }
+
+  private abstract class TopFieldLeafCollector implements LeafCollector {
+
+final LeafFieldComparator comparator;
+final int firstReverseMul;
+final int reverseMul;
+final LeafReaderContext context;
+final MaxScoreTerminator.LeafState leafTerminationState;
+
+private double score;
+Scorable scorer;
+
+TopFieldLeafCollector(FieldValueHitQueue queue, LeafReaderContext 
context) throws IOException {
+  LeafFieldComparator[] comparators = queue.getComparators(context);
+  firstReverseMul = queue.reverseMul[0];
+  if (comparators.length == 1) {
+this.reverseMul = queue.reverseMul[0];
+this.comparator = comparators[0];
+  } else {
+this.reverseMul = 1;
+this.comparator = new MultiLeafFieldComparator(comparators, 
queue.reverseMul);
+  }
+  this.context = context;
+  leafTerminationState = maxScoreTerminator.addLeafState();
+}
+
+void countHit() {
+  ++totalHits;
+  // TODO: replace hitsThresholdChecker with something simpler
+  hitsThresholdChecker.incrementHitCount();
+}
+
+void collectHitIfCompetitive(int doc) throws IOException {
+  if (reverseMul * comparator.compareBottom(doc) > 0) {
 
 Review comment:
   If we specialize for a parallel sorted index, we should get rid of this 
comparison? A linked list should also be enough to keep track of the top N 
per leaf?


This is an automated message from the Apache Git Service.

[jira] [Created] (SOLR-14346) Solr fails to check zombie server in LbHttpSolrClient

2020-03-18 Thread maoxiajun (Jira)
maoxiajun created SOLR-14346:


 Summary: Solr fails to check zombie server in LbHttpSolrClient
 Key: SOLR-14346
 URL: https://issues.apache.org/jira/browse/SOLR-14346
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
Affects Versions: 8.3.1
 Environment: solr: 8.3.1

java: 1.8.0_191

os: centos 7
Reporter: maoxiajun


solr version: 8.3.1

steps to reproduce:

step1: configure LbHttpSolrClient with ['http://host1:8983/solr', 'http://host2:8983/solr']

step2: shutdown host2, and keep query going

step3: start host2, wait for 1 minute or more, and we'll find host2 not 
receiving any query request



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14346) Solr fails to check zombie server in LbHttpSolrClient

2020-03-18 Thread maoxiajun (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maoxiajun updated SOLR-14346:
-
Description: 
solr version: 8.3.1

steps to reproduce:

step1: configure LbHttpSolrClient with ['http://host1:8983/solr', 'http://host2:8983/solr']

step2: shutdown host2, and keep query going

step3: start host2, wait for 1 minute or more, but we'll find host2 not 
receiving any query request

  was:
solr version: 8.3.1

steps to reproduce:

step1: configure LbHttpSolrClient with ['http://host1:8983/solr', 'http://host2:8983/solr']

step2: shutdown host2, and keep query going

step3: start host2, wait for 1 minute or more, and we'll find host2 not 
receiving any query request


> Solr fails to check zombie server in LbHttpSolrClient
> -
>
> Key: SOLR-14346
> URL: https://issues.apache.org/jira/browse/SOLR-14346
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 8.3.1
> Environment: solr: 8.3.1
> java: 1.8.0_191
> os: centos 7
>Reporter: maoxiajun
>Priority: Major
>
> solr version: 8.3.1
> steps to reproduce:
> step1: configure LbHttpSolrClient with ['http://host1:8983/solr', 'http://host2:8983/solr']
> step2: shutdown host2, and keep query going
> step3: start host2, wait for 1 minute or more, but we'll find host2 not 
> receiving any query request



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14346) Solr fails to check zombie server in LbHttpSolrClient

2020-03-18 Thread maoxiajun (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maoxiajun updated SOLR-14346:
-
Environment: 
solr: 8.3.1 (not cloud mode)

java: 1.8.0_191

os: centos 7

  was:
solr: 8.3.1

java: 1.8.0_191

os: centos 7


> Solr fails to check zombie server in LbHttpSolrClient
> -
>
> Key: SOLR-14346
> URL: https://issues.apache.org/jira/browse/SOLR-14346
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 8.3.1
> Environment: solr: 8.3.1 (not cloud mode)
> java: 1.8.0_191
> os: centos 7
>Reporter: maoxiajun
>Priority: Major
>
> solr version: 8.3.1
> steps to reproduce:
> step1: configure LbHttpSolrClient with ['http://host1:8983/solr', 'http://host2:8983/solr']
> step2: shutdown host2, and keep query going
> step3: start host2, wait for 1 minute or more, but we'll find host2 not 
> receiving any query request



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14346) Solr fails to check zombie server in LbHttpSolrClient

2020-03-18 Thread maoxiajun (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061644#comment-17061644
 ] 

maoxiajun commented on SOLR-14346:
--

Found that method *checkAZombieServer* in *LBSolrClient* is not working correctly

> Solr fails to check zombie server in LbHttpSolrClient
> -
>
> Key: SOLR-14346
> URL: https://issues.apache.org/jira/browse/SOLR-14346
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 8.3.1
> Environment: solr: 8.3.1 (not cloud mode)
> java: 1.8.0_191
> os: centos 7
>Reporter: maoxiajun
>Priority: Major
>
> solr version: 8.3.1
> steps to reproduce:
> step1: configure LbHttpSolrClient with ['http://host1:8983/solr', 'http://host2:8983/solr']
> step2: shutdown host2, and keep query going
> step3: start host2, wait for 1 minute or more, but we'll find host2 not 
> receiving any query request



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-14346) Solr fails to check zombie server in LbHttpSolrClient

2020-03-18 Thread maoxiajun (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061644#comment-17061644
 ] 

maoxiajun edited comment on SOLR-14346 at 3/18/20, 11:31 AM:
-

It seems method *checkAZombieServer* in *LBSolrClient* is not working correctly


was (Author: tiaotiaoba):
find out method *checkAZombieServer* in *LBSolrClient* not working correctly

> Solr fails to check zombie server in LbHttpSolrClient
> -
>
> Key: SOLR-14346
> URL: https://issues.apache.org/jira/browse/SOLR-14346
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 8.3.1
> Environment: solr: 8.3.1 (not cloud mode)
> java: 1.8.0_191
> os: centos 7
>Reporter: maoxiajun
>Priority: Major
>
> solr version: 8.3.1
> steps to reproduce:
> step1: configure LbHttpSolrClient with ['http://host1:8983/solr', 'http://host2:8983/solr']
> step2: shutdown host2, and keep query going
> step3: start host2, wait for 1 minute or more, but we'll find host2 not 
> receiving any query request



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9281) Retire SPIClassIterator from master because Java 9+ uses different mechanism to load services when module system is used

2020-03-18 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061651#comment-17061651
 ] 

Uwe Schindler commented on LUCENE-9281:
---

Another option:
3) We can also simply add the no-arg ctor to all analysis factories and let it 
throw UOE. It is just there to make the service provider interface work. The 
UOE thrower can be added as the default ctor in the abstract 
AbstractAnalysisFactory, which gives a useful message that the class can only be 
initialized through our factories. All subclasses must just declare it (which 
makes it ugly, as ctors are not inherited). The ctor is never called (as 
stated in the spec) unless you call ServiceLoader.Provider#get(), not even as a 
side effect of ServiceLoader.Provider#type().
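To make option 3 concrete, here is a self-contained sketch of the pattern; `AbstractAnalysisFactorySketch` and `MyFilterFactorySketch` are hypothetical stand-ins for illustration, not Lucene's real classes:

```java
import java.util.Map;

// Minimal stand-in for the abstract factory; only the ctor pattern matters here.
abstract class AbstractAnalysisFactorySketch {
  /** No-arg ctor, present only to satisfy the service-provider contract; always throws. */
  protected AbstractAnalysisFactorySketch() {
    throw defaultInitializer();
  }

  /** Normal ctor used by the factory methods. */
  protected AbstractAnalysisFactorySketch(Map<String, String> args) {}

  protected static RuntimeException defaultInitializer() {
    return new UnsupportedOperationException(
        "Analysis factories cannot be instantiated without arguments.");
  }
}

// Hypothetical subclass: ctors are not inherited, so the no-arg ctor must be
// re-declared. Throwing defaultInitializer() directly (instead of only calling
// super()) satisfies the compiler's definite-assignment check for the final field.
class MyFilterFactorySketch extends AbstractAnalysisFactorySketch {
  final String pattern;

  public MyFilterFactorySketch() {
    throw defaultInitializer(); // the implicit super() already throws at runtime
  }

  public MyFilterFactorySketch(Map<String, String> args) {
    super(args);
    this.pattern = args.get("pattern");
  }
}

public class ThrowingCtorDemo {
  public static void main(String[] argv) {
    try {
      new MyFilterFactorySketch();
      throw new AssertionError("no-arg ctor should have thrown");
    } catch (UnsupportedOperationException expected) {
      // the failure mode a ServiceLoader.Provider#get() caller would see
    }
    MyFilterFactorySketch f = new MyFilterFactorySketch(Map.of("pattern", "\\w+"));
    System.out.println("pattern=" + f.pattern);
  }
}
```

The key detail is in the subclass no-arg ctor: with only super() in the body the compiler rejects the class because the final field might be unassigned, while an unconditional throw ends the ctor abruptly and sidesteps definite assignment.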

> Retire SPIClassIterator from master because Java 9+ uses different mechanism 
> to load services when module system is used
> 
>
> Key: LUCENE-9281
> URL: https://issues.apache.org/jira/browse/LUCENE-9281
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: master (9.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
>
> We currently have our own implementation of the service loader standard (SPI) 
> for several reasons:
> (1) In some older JDKs the order of the classpath was not respected, and this led 
> to the wrong order of codecs implementing the same SPI name. This caused tests to 
> sometimes use the wrong class (we had this in Lucene 4, where we had a test-only 
> read/write Lucene3 codec that was listed before the read-only one). That's no 
> longer an issue; the order of loading does not matter. In addition, Java now 
> does everything correctly.
> (2) In Analysis, we require SPI classes to have a constructor taking args (a 
> Map of params in our case). We also extract the NAME from a static field. The 
> standard service loader does not support this; it tries to instantiate the 
> class with the default ctor.
> With Java 9+, the ServiceLoader now has a stream() method that allows us to 
> filter and preprocess classes: 
> https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html#stream()
> This allows us to use the new interface and just get the loaded class (which 
> may come from module-info.class or a conventional SPI file): 
> https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.Provider.html#type()
> This change allows us to convert Lucene to modules listing all SPIs in the 
> module-info.java.
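For reference, a minimal, self-contained sketch of the stream()/Provider#type() usage described above; it loads the JDK's own CharsetProvider SPI purely as a stand-in for Lucene's factory interfaces:

```java
import java.nio.charset.spi.CharsetProvider;
import java.util.List;
import java.util.ServiceLoader;
import java.util.stream.Collectors;

public class SpiStreamSketch {
  public static void main(String[] args) {
    // Provider#type() returns the implementation class without invoking its
    // constructor, so classes whose only useful ctor takes arguments (like
    // Lucene's analysis factories) can still be discovered and inspected.
    List<String> providerClassNames = ServiceLoader.load(CharsetProvider.class)
        .stream()
        .map(p -> p.type().getName())
        .collect(Collectors.toList());
    // The list may well be empty on a stock JDK; what matters is that no
    // provider constructor ran while building it.
    System.out.println("provider classes: " + providerClassNames);
  }
}
```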



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-9281) Retire SPIClassIterator from master because Java 9+ uses different mechanism to load services when module system is used

2020-03-18 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061651#comment-17061651
 ] 

Uwe Schindler edited comment on LUCENE-9281 at 3/18/20, 11:44 AM:
--

Another option:
3) We can also simply add the no-arg ctor to all analysis factories and let it 
throw UOE. It is just there to make the service provider interface work. The 
UOE thrower can be added as the default ctor in the abstract 
AbstractAnalysisFactory, which gives a useful message that the class can only be 
initialized through our factories. All subclasses must just declare it (which 
makes it ugly, as ctors are not inherited). The ctor is never called (as 
stated in the spec) unless you call ServiceLoader.Provider#get(), not even as a 
side effect of ServiceLoader.Provider#type().

UPDATE: I checked and this works, but if you implement the subclass ctor, calling 
super() is not enough if you have final fields (because the compiler does not 
know that super() always throws an exception). So I added a static pattern:

{code:java}
  /**
   * This default ctor is required to be implemented by all subclasses.
   * All subclasses should just call {@code throw defaultInitializer();}
   */
  protected AbstractAnalysisFactory() {
    throw defaultInitializer();
  }

  /**
   * Helper method to be called from the default constructor of all subclasses
   * to make the ServiceProvider interface happy.
   * @see #AbstractAnalysisFactory()
   */
  protected static RuntimeException defaultInitializer() {
    return new UnsupportedOperationException("Analysis factories cannot be instantiated without arguments. "
        + "Use factory methods for Tokenizers, CharFilters and TokenFilters.");
  }
{code}


was (Author: thetaphi):
Another option
3) We can also simply add the no-arg ctor to all analysis factories and let it 
throw UoE. It is just there to make the ServiceProviderInterface working. The 
UOE thrower can be added as default ctor in the abstract 
AbstractAnalysisFactory, which gives a useful message that this can only be 
initialized through our factories. All subclasses must just declare it (which 
makes it ugly, as ctors are not inherited). The ctor is never called (that 
stated in the spec) unless you call ServiceLoader.Provider#get(), not even as 
side effect of ServiceLoader.Provider#type().

> Retire SPIClassIterator from master because Java 9+ uses different mechanism 
> to load services when module system is used
> 
>
> Key: LUCENE-9281
> URL: https://issues.apache.org/jira/browse/LUCENE-9281
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: master (9.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
>
> We currently have our own implementation of the service loader standard (SPI) 
> for several reasons:
> (1) In some older JDKs the order of the classpath was not respected, and this led 
> to the wrong order of codecs implementing the same SPI name. This caused tests to 
> sometimes use the wrong class (we had this in Lucene 4, where we had a test-only 
> read/write Lucene3 codec that was listed before the read-only one). That's no 
> longer an issue; the order of loading does not matter. In addition, Java now 
> does everything correctly.
> (2) In Analysis, we require SPI classes to have a constructor taking args (a 
> Map of params in our case). We also extract the NAME from a static field. The 
> standard service loader does not support this; it tries to instantiate the 
> class with the default ctor.
> With Java 9+, the ServiceLoader now has a stream() method that allows us to 
> filter and preprocess classes: 
> https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html#stream()
> This allows us to use the new interface and just get the loaded class (which 
> may come from module-info.class or a conventional SPI file): 
> https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.Provider.html#type()
> This change allows us to convert Lucene to modules listing all SPIs in the 
> module-info.java.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-14346) Solr fails to check zombie server in LbHttpSolrClient

2020-03-18 Thread maoxiajun (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061644#comment-17061644
 ] 

maoxiajun edited comment on SOLR-14346 at 3/18/20, 12:11 PM:
-

It seems method *checkAZombieServer* in *LBSolrClient* is not working correctly; 
it calls *QueryRequest.process()* without a core name


was (Author: tiaotiaoba):
seems method *checkAZombieServer* in *LBSolrClient* not working correctly

> Solr fails to check zombie server in LbHttpSolrClient
> -
>
> Key: SOLR-14346
> URL: https://issues.apache.org/jira/browse/SOLR-14346
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 8.3.1
> Environment: solr: 8.3.1 (not cloud mode)
> java: 1.8.0_191
> os: centos 7
>Reporter: maoxiajun
>Priority: Major
>
> solr version: 8.3.1
> steps to reproduce:
> step1: configure LbHttpSolrClient with ['http://host1:8983/solr', 'http://host2:8983/solr']
> step2: shutdown host2, and keep query going
> step3: start host2, wait for 1 minute or more, but we'll find host2 not 
> receiving any query request



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] uschindler opened a new pull request #1360: LUCENE-9281: First mockup of SPIClassIterator retirement

2020-03-18 Thread GitBox
uschindler opened a new pull request #1360: LUCENE-9281: First mockup of 
SPIClassIterator retirement
URL: https://github.com/apache/lucene-solr/pull/1360
 
 
   See https://issues.apache.org/jira/browse/LUCENE-9281 for more details:
   
   We currently have our own implementation of the service loader standard 
(SPI) for several reasons:
   
   (1) In some older JDKs the order of the classpath was not respected, and this 
led to the wrong order of codecs implementing the same SPI name. This caused tests 
to sometimes use the wrong class (we had this in Lucene 4, where we had a test-only 
read/write Lucene3 codec that was listed before the read-only one). That's no 
longer an issue; the order of loading does not matter. In addition, Java now 
does everything correctly.
   
   (2) In Analysis, we require SPI classes to have a constructor taking args (a 
Map of params in our case). We also extract the NAME from a static field. The 
standard service loader does not support this; it tries to instantiate the 
class with the default ctor.
   
   With Java 9+, the ServiceLoader now has a stream() method that allows us to 
filter and preprocess classes: 
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html#stream()
   This allows us to use the new interface and just get the loaded class (which 
may come from module-info.class or a conventional SPI file): 
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.Provider.html#type()
   
   This change allows us to convert Lucene to modules listing all SPIs in the 
module-info.java.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9281) Retire SPIClassIterator from master because Java 9+ uses different mechanism to load services when module system is used

2020-03-18 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061670#comment-17061670
 ] 

Uwe Schindler commented on LUCENE-9281:
---

I opened a Github Pull Request using my idea. I did not yet implement it for 
all Factory subclasses (this will be a quite huge patch), so this is for a 
quick review only, to discuss ideas: 
https://github.com/apache/lucene-solr/pull/1360

> Retire SPIClassIterator from master because Java 9+ uses different mechanism 
> to load services when module system is used
> 
>
> Key: LUCENE-9281
> URL: https://issues.apache.org/jira/browse/LUCENE-9281
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: master (9.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We currently have our own implementation of the service loader standard (SPI) 
> for several reasons:
> (1) In some older JDKs the order of the classpath was not respected, and this led 
> to the wrong order of codecs implementing the same SPI name. This caused tests to 
> sometimes use the wrong class (we had this in Lucene 4, where we had a test-only 
> read/write Lucene3 codec that was listed before the read-only one). That's no 
> longer an issue; the order of loading does not matter. In addition, Java now 
> does everything correctly.
> (2) In Analysis, we require SPI classes to have a constructor taking args (a 
> Map of params in our case). We also extract the NAME from a static field. The 
> standard service loader does not support this; it tries to instantiate the 
> class with the default ctor.
> With Java 9+, the ServiceLoader now has a stream() method that allows us to 
> filter and preprocess classes: 
> https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html#stream()
> This allows us to use the new interface and just get the loaded class (which 
> may come from module-info.class or a conventional SPI file): 
> https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.Provider.html#type()
> This change allows us to convert Lucene to modules listing all SPIs in the 
> module-info.java.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14256) Remove HashDocSet

2020-03-18 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061673#comment-17061673
 ] 

ASF subversion and git services commented on SOLR-14256:


Commit 4fd96bedc27adff61f3487539adbd67011181b90 in lucene-solr's branch 
refs/heads/master from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4fd96be ]

SOLR-14256: replaced EMPTY with empty() to fix deadlock


> Remove HashDocSet
> -
>
> Key: SOLR-14256
> URL: https://issues.apache.org/jira/browse/SOLR-14256
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This particular DocSet is only used in places where we need to convert 
> SortedIntDocSet in particular to a DocSet that is fast for random access.  
> Once such a conversion happens, it's only used to test some docs for presence 
> and it could be another interface.  DocSet has kind of a large-ish API 
> surface area to implement.  Since we only need to test docs, we could use 
> Bits interface (having only 2 methods) backed by an off-the-shelf primitive 
> long hash set on our classpath.  Perhaps a new method on DocSet: getBits() or 
> DocSetUtil.getBits(DocSet).
> In addition to removing complexity unto itself, this improvement is required 
> by SOLR-14185 because it wants to be able to produce a DocIdSetIterator slice 
> directly from the DocSet but HashDocSet can't do that without sorting first.
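The Bits-backed replacement described above can be sketched as follows (an illustrative standalone model, not Solr's actual code; the method name and the use of java.util.HashSet in place of an off-the-shelf primitive hash set are assumptions):

```java
import java.util.HashSet;
import java.util.Set;

public class BitsDocSetSketch {
    // Lucene's Bits interface has exactly two methods: get(index) and length().
    interface Bits {
        boolean get(int index);
        int length();
    }

    // Wrap a sorted doc-id array in a hash-backed Bits view for O(1)
    // membership tests; a real implementation would use a primitive hash set.
    static Bits randomAccess(int[] sortedDocs, int maxDoc) {
        Set<Integer> set = new HashSet<>();
        for (int d : sortedDocs) set.add(d);
        return new Bits() {
            public boolean get(int index) { return set.contains(index); }
            public int length() { return maxDoc; }
        };
    }

    public static void main(String[] args) {
        Bits bits = randomAccess(new int[] {3, 17, 42}, 100);
        System.out.println(bits.get(17) + " " + bits.get(18)); // true false
    }
}
```

The original SortedIntDocSet stays the iteration-order source of truth; the Bits view only answers presence queries.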






[GitHub] [lucene-solr] dsmiley commented on issue #1358: SOLR-14256: replaced EMPTY with empty() to fix deadlock

2020-03-18 Thread GitBox
dsmiley commented on issue #1358: SOLR-14256: replaced EMPTY with empty() to 
fix deadlock
URL: https://github.com/apache/lucene-solr/pull/1358#issuecomment-600595629
 
 
   Merged: 4fd96bedc27adff61f3487539adbd67011181b90


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [lucene-solr] dsmiley closed pull request #1358: SOLR-14256: replaced EMPTY with empty() to fix deadlock

2020-03-18 Thread GitBox
dsmiley closed pull request #1358: SOLR-14256: replaced EMPTY with empty() to 
fix deadlock
URL: https://github.com/apache/lucene-solr/pull/1358
 
 
   






[jira] [Resolved] (SOLR-14256) Remove HashDocSet

2020-03-18 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-14256.
-
Resolution: Fixed

> Remove HashDocSet
> -
>
> Key: SOLR-14256
> URL: https://issues.apache.org/jira/browse/SOLR-14256
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This particular DocSet is only used in places where we need to convert 
> SortedIntDocSet in particular to a DocSet that is fast for random access.  
> Once such a conversion happens, it's only used to test some docs for presence 
> and it could be another interface.  DocSet has kind of a large-ish API 
> surface area to implement.  Since we only need to test docs, we could use 
> Bits interface (having only 2 methods) backed by an off-the-shelf primitive 
> long hash set on our classpath.  Perhaps a new method on DocSet: getBits() or 
> DocSetUtil.getBits(DocSet).
> In addition to removing complexity unto itself, this improvement is required 
> by SOLR-14185 because it wants to be able to produce a DocIdSetIterator slice 
> directly from the DocSet but HashDocSet can't do that without sorting first.






[jira] [Commented] (SOLR-14296) Update netty to 4.1.47

2020-03-18 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061675#comment-17061675
 ] 

Erick Erickson commented on SOLR-14296:
---

Thanks [~dweiss], that's a good thing to know about. Once I cleared out all
the old references to netty 4.1.45 and regenerated versions.lock, 4.1.45
disappeared, so I didn't pursue it any further.

> Update netty to 4.1.47
> --
>
> Key: SOLR-14296
> URL: https://issues.apache.org/jira/browse/SOLR-14296
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andras Salamon
>Priority: Minor
> Fix For: 8.6
>
> Attachments: SOLR-14296-01.patch
>
>
> There are two CVEs against the current netty version:
> [https://nvd.nist.gov/vuln/detail/CVE-2019-20444]
>  [https://nvd.nist.gov/vuln/detail/CVE-2019-20445]
> Although Solr is not affected, it would still be good to update Netty.
> The first unaffected Netty version is 4.1.45, but during the update I
> found a Netty bug ([https://github.com/netty/netty/issues/10017]), so it's
> better to update to 4.1.46.






[GitHub] [lucene-solr] s1monw opened a new pull request #1361: LUCENE-8118: Throw exception if DWPT grows beyond its maximum ram limit

2020-03-18 Thread GitBox
s1monw opened a new pull request #1361: LUCENE-8118: Throw exception if DWPT 
grows beyond its maximum ram limit
URL: https://github.com/apache/lucene-solr/pull/1361
 
 
   Today, if you add documents that cause a single DWPT to grow beyond its 
maximum RAM buffer size, we just keep on indexing. If a user misuses our 
IndexWriter#addDocuments API and provides a very large iterable, we will 
run into an `ArrayIndexOutOfBoundsException` down the road and abort the 
IndexWriter. This change is not bulletproof, but a best effort to ensure that 
we fail with a better exception message and don't abort the IndexWriter.
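The kind of best-effort guard described here can be sketched as follows (hypothetical names and numbers; the real check lives inside Lucene's DocumentsWriterPerThread and differs in detail):

```java
public class RamLimitGuardSketch {
    // Hard per-writer buffer cap, analogous to a DWPT's maximum RAM buffer.
    static final long HARD_LIMIT_BYTES = 1024;

    private long bytesUsed = 0;

    // Fail fast with a clear message instead of indexing on until an
    // ArrayIndexOutOfBoundsException aborts the whole writer.
    void addDocument(long docBytes) {
        if (bytesUsed + docBytes > HARD_LIMIT_BYTES) {
            throw new IllegalArgumentException(
                "RAM buffer would grow beyond the maximum allowed size ("
                    + HARD_LIMIT_BYTES + " bytes); split the document batch");
        }
        bytesUsed += docBytes;
    }

    public static void main(String[] args) {
        RamLimitGuardSketch w = new RamLimitGuardSketch();
        w.addDocument(512);
        try {
            w.addDocument(1024); // would exceed the cap: rejected, writer survives
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The caught exception is recoverable by the caller, unlike an aborting exception that tears down the IndexWriter.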






[jira] [Commented] (LUCENE-9281) Retire SPIClassIterator from master because Java 9+ uses different mechanism to load services when module system is used

2020-03-18 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061679#comment-17061679
 ] 

Uwe Schindler commented on LUCENE-9281:
---

FYI, I moved the AbstractAnalysisFactor#lookupSPIName(Class) to the 
AnalysisSPILoader because it better belongs there. I can revert this if needed. 
[~tomoko]?

> Retire SPIClassIterator from master because Java 9+ uses different mechanism 
> to load services when module system is used
> 
>
> Key: LUCENE-9281
> URL: https://issues.apache.org/jira/browse/LUCENE-9281
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: master (9.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We currently have our own implementation of the service loader standard (SPI)
> for several reasons:
> (1) In some older JDKs the classpath order was not respected, and this led to
> the wrong order of codecs implementing the same SPI name. This caused tests to
> sometimes use the wrong class (we had this in Lucene 4, where we had a test-only
> read/write Lucene3 codec that was listed before the read-only one). That's no
> longer an issue: the order of loading does not matter, and Java now does
> everything correctly.
> (2) In Analysis, we require SPI classes to have a constructor taking args (a
> Map of params in our case). We also extract the NAME from a static field.
> The standard service loader does not support this; it tries to instantiate the
> class with the default ctor.
> With Java 9+, the ServiceLoader now has a stream() method that allows
> providers to be filtered and preprocessed:
> https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html#stream()
> This allows us to use the new interface and just get the loaded class (which 
> may come from module-info.class or a conventional SPI file): 
> https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.Provider.html#type()
> This change allows us to convert Lucene to modules listing all SPIs in the 
> module-info.java.
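The stream-based loading described above can be sketched like this (a self-contained illustration; `AnalysisFactory` is a hypothetical stand-in for Lucene's factory SPIs, and no providers are registered in this sketch):

```java
import java.util.List;
import java.util.ServiceLoader;
import java.util.stream.Collectors;

public class SpiStreamSketch {
    // Hypothetical SPI; Lucene's analysis factories take a Map of params in
    // their constructor, so they cannot be created via the default ctor.
    public interface AnalysisFactory {}

    // ServiceLoader.stream() (Java 9+) yields Provider handles, letting us
    // read the implementation class via Provider::type without instantiating
    // it -- which is what reading NAME from a static field requires.
    public static List<Class<? extends AnalysisFactory>> factoryClasses() {
        return ServiceLoader.load(AnalysisFactory.class).stream()
            .map(ServiceLoader.Provider::type)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // No META-INF/services entry or module-info 'provides' clause exists
        // here, so the list is empty.
        System.out.println(factoryClasses()); // prints []
    }
}
```

The returned Class objects can then be instantiated reflectively with the Map-taking constructor, exactly as the custom SPIClassIterator did.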






[jira] [Commented] (SOLR-14343) Properly set initCapacity in NamedList

2020-03-18 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061682#comment-17061682
 ] 

Lucene/Solr QA commented on SOLR-14343:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
42s{color} | {color:green} solrj in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-14343 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12996980/SOLR-14343.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 7f37a55a8c0 |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/718/testReport/ |
| modules | C: solr/solrj U: solr/solrj |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/718/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Properly set initCapacity in NamedList
> --
>
> Key: SOLR-14343
> URL: https://issues.apache.org/jira/browse/SOLR-14343
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Minor
> Attachments: SOLR-14343.patch
>
>
> In {{NamedList(Map)}}, the backing list is initialised to map.size() instead
> of 2 times the size (names and values are stored flattened in one list).
> There are a few other instances where the initial capacity could be set
> correctly.
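Why the capacity should be doubled can be seen in a simplified model of the flattened name/value storage (a sketch, not Solr's actual NamedList implementation):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FlatPairListSketch {
    // NamedList-style storage: [name0, value0, name1, value1, ...], so a map
    // of n entries needs a backing list sized for 2 * n elements.
    static List<Object> fromMap(Map<String, ?> map) {
        List<Object> flat = new ArrayList<>(map.size() << 1); // not map.size()
        for (Map.Entry<String, ?> e : map.entrySet()) {
            flat.add(e.getKey());
            flat.add(e.getValue());
        }
        return flat;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new LinkedHashMap<>();
        m.put("numFound", 10);
        m.put("start", 0);
        System.out.println(fromMap(m)); // prints [numFound, 10, start, 0]
    }
}
```

Sizing the list at map.size() forces a grow-and-copy halfway through the fill; 2 * size avoids it.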






[jira] [Created] (SOLR-14347) Autoscaling placement wrong due to incorrect LockLevel for collection modifications

2020-03-18 Thread Andrzej Bialecki (Jira)
Andrzej Bialecki created SOLR-14347:
---

 Summary: Autoscaling placement wrong due to incorrect LockLevel 
for collection modifications
 Key: SOLR-14347
 URL: https://issues.apache.org/jira/browse/SOLR-14347
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Affects Versions: 8.5
Reporter: Andrzej Bialecki
Assignee: Andrzej Bialecki


Steps to reproduce:
 * create a cluster of a few nodes (tested with 7 nodes)
 * define per-collection policies that distribute replicas exclusively on 
different nodes per policy
 * concurrently create a few collections, each using a different policy
 * resulting replica placement will be seriously wrong, causing many policy 
violations

Running the same scenario but instead creating collections sequentially results 
in no violations.

I suspect this is caused by incorrect locking level for all collection 
operations (as defined in {{CollectionParams.CollectionAction}}) that create 
new replica placements - i.e. CREATE, ADDREPLICA, MOVEREPLICA, DELETENODE, 
REPLACENODE, SPLITSHARD, RESTORE, REINDEXCOLLECTION. All of these operations 
use the policy engine to create new replica placements, and as a result they 
change the cluster state. However, currently these operations are locked (in 
{{OverseerCollectionMessageHandler.lockTask}} ) using {{LockLevel.COLLECTION}}. 
In practice this means that the lock is held only for the particular collection 
that is being modified.

A straightforward fix for this issue is to change the locking level to CLUSTER 
(and I confirm this fixes the scenario described above). However, this 
effectively serializes all collection operations listed above, which will 
result in general slow-down of all collection operations.
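A greatly simplified, deterministic sketch of why the per-collection lock is insufficient (a hypothetical model, not the policy engine): two creates that each snapshot node loads before either placement lands will both pick the same least-loaded node, while a cluster-level lock serializes them:

```java
import java.util.Arrays;

public class LockLevelSketch {
    // replicasPerNode is the shared cluster state consulted for placement.
    static int pickLeastLoaded(int[] replicasPerNode) {
        int best = 0;
        for (int i = 1; i < replicasPerNode.length; i++) {
            if (replicasPerNode[i] < replicasPerNode[best]) best = i;
        }
        return best;
    }

    // Interleaved: both ops snapshot loads BEFORE either placement lands,
    // as can happen when each op only holds its own collection's lock.
    static int[] interleavedPlacements(int[] loads) {
        int a = pickLeastLoaded(loads.clone()); // op A reads a snapshot
        int b = pickLeastLoaded(loads.clone()); // op B reads a stale snapshot
        loads[a]++; loads[b]++;
        return new int[] {a, b};
    }

    // Serialized (cluster-level lock): op B sees op A's placement.
    static int[] serializedPlacements(int[] loads) {
        int a = pickLeastLoaded(loads); loads[a]++;
        int b = pickLeastLoaded(loads); loads[b]++;
        return new int[] {a, b};
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(interleavedPlacements(new int[] {0, 0}))); // [0, 0]
        System.out.println(Arrays.toString(serializedPlacements(new int[] {0, 0})));  // [0, 1]
    }
}
```

The interleaved run places both replicas on node 0 (a violation under an exclusive-placement policy); the serialized run spreads them.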






[jira] [Updated] (SOLR-14347) Autoscaling placement wrong due to incorrect LockLevel for collection modifications

2020-03-18 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki updated SOLR-14347:

Attachment: SOLR-14347.patch

> Autoscaling placement wrong due to incorrect LockLevel for collection 
> modifications
> ---
>
> Key: SOLR-14347
> URL: https://issues.apache.org/jira/browse/SOLR-14347
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 8.5
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Attachments: SOLR-14347.patch
>
>
> Steps to reproduce:
>  * create a cluster of a few nodes (tested with 7 nodes)
>  * define per-collection policies that distribute replicas exclusively on 
> different nodes per policy
>  * concurrently create a few collections, each using a different policy
>  * resulting replica placement will be seriously wrong, causing many policy 
> violations
> Running the same scenario but instead creating collections sequentially 
> results in no violations.
> I suspect this is caused by incorrect locking level for all collection 
> operations (as defined in {{CollectionParams.CollectionAction}}) that create 
> new replica placements - i.e. CREATE, ADDREPLICA, MOVEREPLICA, DELETENODE, 
> REPLACENODE, SPLITSHARD, RESTORE, REINDEXCOLLECTION. All of these operations 
> use the policy engine to create new replica placements, and as a result they 
> change the cluster state. However, currently these operations are locked (in 
> {{OverseerCollectionMessageHandler.lockTask}} ) using 
> {{LockLevel.COLLECTION}}. In practice this means that the lock is held only 
> for the particular collection that is being modified.
> A straightforward fix for this issue is to change the locking level to 
> CLUSTER (and I confirm this fixes the scenario described above). However, 
> this effectively serializes all collection operations listed above, which 
> will result in general slow-down of all collection operations.






[jira] [Commented] (SOLR-14347) Autoscaling placement wrong due to incorrect LockLevel for collection modifications

2020-03-18 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061706#comment-17061706
 ] 

Andrzej Bialecki commented on SOLR-14347:
-

Patch that changes the lock level of affected collection commands from 
COLLECTION to CLUSTER. This patch fixes the scenario described above.

> Autoscaling placement wrong due to incorrect LockLevel for collection 
> modifications
> ---
>
> Key: SOLR-14347
> URL: https://issues.apache.org/jira/browse/SOLR-14347
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 8.5
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Attachments: SOLR-14347.patch
>
>
> Steps to reproduce:
>  * create a cluster of a few nodes (tested with 7 nodes)
>  * define per-collection policies that distribute replicas exclusively on 
> different nodes per policy
>  * concurrently create a few collections, each using a different policy
>  * resulting replica placement will be seriously wrong, causing many policy 
> violations
> Running the same scenario but instead creating collections sequentially 
> results in no violations.
> I suspect this is caused by incorrect locking level for all collection 
> operations (as defined in {{CollectionParams.CollectionAction}}) that create 
> new replica placements - i.e. CREATE, ADDREPLICA, MOVEREPLICA, DELETENODE, 
> REPLACENODE, SPLITSHARD, RESTORE, REINDEXCOLLECTION. All of these operations 
> use the policy engine to create new replica placements, and as a result they 
> change the cluster state. However, currently these operations are locked (in 
> {{OverseerCollectionMessageHandler.lockTask}} ) using 
> {{LockLevel.COLLECTION}}. In practice this means that the lock is held only 
> for the particular collection that is being modified.
> A straightforward fix for this issue is to change the locking level to 
> CLUSTER (and I confirm this fixes the scenario described above). However, 
> this effectively serializes all collection operations listed above, which 
> will result in general slow-down of all collection operations.






[jira] [Commented] (SOLR-14034) remove deprecated min_rf references

2020-03-18 Thread Rabi Kumar K C (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061718#comment-17061718
 ] 

Rabi Kumar K C commented on SOLR-14034:
---

Hi [~cpoerschke], I am new to Solr and want to work on this ticket. I found the 
usage of UpdateRequest.MIN_REPFACT in DistributedZkUpdateProcessor.java, 
HttpPartitionTest.java, BaseCloudSolrClient.java, and 
ReplicationFactorTest.java in the source code. Specifically, in 
ReplicationFactorTest.java MIN_REPFACT has been used in the following lines:
 * 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/ReplicationFactorTest.java#L264-268]
 * 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/ReplicationFactorTest.java#L480]
 * 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/ReplicationFactorTest.java#L489]

So my question was should I refactor the ReplicationFactorTest.java after 
removing MIN_REPFACT and keep tests for UpdateRequest.REPFACT or change the 
MIN_REPFACT into REPFACT?

Please do let me know your thoughts on this or if my question is not clear 
enough. Thank You

> remove deprecated min_rf references
> ---
>
> Key: SOLR-14034
> URL: https://issues.apache.org/jira/browse/SOLR-14034
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Priority: Blocker
> Fix For: master (9.0)
>
>
> * {{min_rf}} support was added under SOLR-5468 in version 4.9 
> (https://github.com/apache/lucene-solr/blob/releases/lucene-solr/4.9.0/solr/solrj/src/java/org/apache/solr/client/solrj/request/UpdateRequest.java#L50)
>  and deprecated under SOLR-12767 in version 7.6 
> (https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.6.0/solr/solrj/src/java/org/apache/solr/client/solrj/request/UpdateRequest.java#L57-L61)
> * http://lucene.apache.org/solr/7_6_0/changes/Changes.html and 
> https://lucene.apache.org/solr/guide/8_0/major-changes-in-solr-8.html#solr-7-6
>  both clearly mention the deprecation
> This ticket is to fully remove {{min_rf}} references in code, tests and 
> documentation.






[jira] [Comment Edited] (SOLR-14034) remove deprecated min_rf references

2020-03-18 Thread Rabi Kumar K C (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061718#comment-17061718
 ] 

Rabi Kumar K C edited comment on SOLR-14034 at 3/18/20, 2:00 PM:
-

Hi [~cpoerschke], I am new to Solr and want to work on this ticket. I found the 
usage of UpdateRequest.MIN_REPFACT in DistributedZkUpdateProcessor.java, 
HttpPartitionTest.java, BaseCloudSolrClient.java, and 
ReplicationFactorTest.java in the source code. Specifically, in 
ReplicationFactorTest.java MIN_REPFACT has been used in the following lines:
 * 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/ReplicationFactorTest.java#L264-268]
 * 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/ReplicationFactorTest.java#L480]
 * 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/ReplicationFactorTest.java#L489]

So my question is: after removing MIN_REPFACT, should I refactor
ReplicationFactorTest.java to keep tests for UpdateRequest.REPFACT, or change
MIN_REPFACT into REPFACT?

Please do let me know your thoughts on this or if my question is not clear 
enough. Thank You


was (Author: rabikumar.kc):
Hi [~cpoerschke] I am new to solr and want to work ontthis ticket. I found the 
usage of UpdateRequest.MIN_REPFACT in DistributedZkUpdateProcessor.java, 
HttpPartitionTest.java, BaseCloudSolrClient.java, and 
ReplicationFactorTest.java in the source code. Specifically, in 
ReplicationFactorTest.java MIN_REPFACT has been used in the following lines:
 * 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/ReplicationFactorTest.java#L264-268]
 * 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/ReplicationFactorTest.java#L480]
 * 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/ReplicationFactorTest.java#L489]

So my question was should I refactor the ReplicationFactorTest.java after 
removing MIN_REPFACT and keep tests for UpdateRequest.REPFACT or change the 
MIN_REPFACT into REPFACT?

Please do let me know your thoughts on this or if my question is not clear 
enough. Thank You

> remove deprecated min_rf references
> ---
>
> Key: SOLR-14034
> URL: https://issues.apache.org/jira/browse/SOLR-14034
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Priority: Blocker
> Fix For: master (9.0)
>
>
> * {{min_rf}} support was added under SOLR-5468 in version 4.9 
> (https://github.com/apache/lucene-solr/blob/releases/lucene-solr/4.9.0/solr/solrj/src/java/org/apache/solr/client/solrj/request/UpdateRequest.java#L50)
>  and deprecated under SOLR-12767 in version 7.6 
> (https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.6.0/solr/solrj/src/java/org/apache/solr/client/solrj/request/UpdateRequest.java#L57-L61)
> * http://lucene.apache.org/solr/7_6_0/changes/Changes.html and 
> https://lucene.apache.org/solr/guide/8_0/major-changes-in-solr-8.html#solr-7-6
>  both clearly mention the deprecation
> This ticket is to fully remove {{min_rf}} references in code, tests and 
> documentation.






[jira] [Commented] (SOLR-14340) ZkStateReader.readConfigName is doing too much work

2020-03-18 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061758#comment-17061758
 ] 

Erick Erickson commented on SOLR-14340:
---

[~dsmiley]  It's not an expensive test though. True, I threw it in there on a 
"while we're at it, let's check this too" basis.

I think a larger question is what the correct behavior should be. Upon 
reflection, I don't like the fact that in this weird case we don't return any 
information about this state; we just omit the collection altogether. The 
collection exists, but requires some sleuthing to figure out what state it's in.

So the important bit is that in this weird case we do something rational and 
return a clusterstatus that contains what makes sense. IIRC, we blew up before 
(NPE?) and returned nothing useful at all.

ClusterStatus.java[154] has this comment:

{color:#808080}// skip this collection because the configset's znode has been deleted
// which can happen during aggressive collection removal, see SOLR-10720{color}

and we skip adding the collection info entirely. Looking at 10720, I worry 
about breaking something else if we returned a partial status, say the 
collection with a configname of "WARNING_MISSING_CONFIGSET".

WDYT about adding an "errors" section to clusterstatus with a message like 
"Configset for collection XY not found, status for collection omitted"?

Then we could remove all the rest of the checks in readConfigName; I had no 
idea it'd be that expensive.  But I suspect that as we work with more and more 
collections, things like this will crop up.

I actually rather like the idea of an "errors" section to clusterstatus where 
we can put anything weird we discover...
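One possible shape for such an "errors" section (purely illustrative; the key names are made up and not an agreed-upon format):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ClusterStatusErrorsSketch {
    // Build a clusterstatus-like response, collecting per-collection problems
    // into an "errors" list instead of silently omitting the collection.
    static Map<String, Object> status(Map<String, String> collectionToConfigSet) {
        Map<String, Object> collections = new LinkedHashMap<>();
        List<String> errors = new ArrayList<>();
        for (Map.Entry<String, String> e : collectionToConfigSet.entrySet()) {
            if (e.getValue() == null) { // configset znode missing
                errors.add("Configset for collection " + e.getKey()
                    + " not found, status for collection omitted");
            } else {
                collections.put(e.getKey(), Map.of("configName", e.getValue()));
            }
        }
        Map<String, Object> response = new LinkedHashMap<>();
        response.put("collections", collections);
        if (!errors.isEmpty()) response.put("errors", errors);
        return response;
    }

    public static void main(String[] args) {
        Map<String, String> in = new LinkedHashMap<>();
        in.put("good", "conf1");
        in.put("orphaned", null); // aggressively-deleted configset
        System.out.println(status(in));
    }
}
```

Clients that ignore unknown keys keep working, while operators get an explicit record of the omission.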

> ZkStateReader.readConfigName is doing too much work
> ---
>
> Key: SOLR-14340
> URL: https://issues.apache.org/jira/browse/SOLR-14340
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
>
> ZkStateReader.readConfigName reads the configSet name given a collection name 
> parameter.  Simple.  It's doing too much work to do this though.  First it's 
> calling zkClient.exists() which is redundant given that we then call getData 
> will will detect if it doesn't exist.  Then we validate that the config path 
> exists.  But I don't think that should be verified on read, only on write.  
> This method is a hotspot for nodes with lots of cores, proven out with 
> profiling.
> Ideally the configSet name ought to be moved to state.json which is where all 
> the other collection metadata is and is thus already read by most spots that 
> want to read the configSet name.  That would speed things up further and 
> generally simplify things as well.  That can happen on a follow-on issue, I 
> think.
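The redundancy of exists() before getData() can be sketched with a stub (FakeZk is hypothetical; the real code talks to ZooKeeper through SolrZkClient, where getData likewise signals a missing node with an exception):

```java
import java.util.Map;
import java.util.NoSuchElementException;

public class ReadConfigNameSketch {
    // Stand-in for a ZooKeeper client: getData throws if the node is absent,
    // so a separate exists() round trip buys nothing on the read path.
    static class FakeZk {
        private final Map<String, String> nodes;
        FakeZk(Map<String, String> nodes) { this.nodes = nodes; }
        String getData(String path) {
            String data = nodes.get(path);
            if (data == null) throw new NoSuchElementException("no node: " + path);
            return data;
        }
    }

    // One call instead of exists() + getData(): absence surfaces as the
    // exception getData throws anyway. (The real method also parses the
    // configName out of the node's data; elided here.)
    static String readConfigName(FakeZk zk, String collection) {
        return zk.getData("/collections/" + collection);
    }

    public static void main(String[] args) {
        FakeZk zk = new FakeZk(Map.of("/collections/gettingstarted", "conf1"));
        System.out.println(readConfigName(zk, "gettingstarted")); // prints conf1
    }
}
```

Dropping the extra round trip matters precisely because this path is hit once per core on nodes hosting many cores.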






[jira] [Updated] (LUCENE-9281) Retire SPIClassIterator from master because Java 9+ uses different mechanism to load services when module system is used

2020-03-18 Thread Uwe Schindler (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-9281:
--
Description: 
We currently have our own implementation of the service loader standard (SPI) 
for several reasons:

(1) In some older JDKs the classpath order was not respected, and this led to 
the wrong order of codecs implementing the same SPI name. This caused tests to 
sometimes use the wrong class (we had this in Lucene 4, where we had a test-only 
read/write Lucene3 codec that was listed before the read-only one). That's no 
longer an issue: the order of loading does not matter, and Java now does 
everything correctly.

(2) In Analysis, we require SPI classes to have a constructor taking args (a 
Map of params in our case). We also extract the NAME from a static field. 
The standard service loader does not support this; it tries to instantiate the 
class with the default ctor.

With Java 9+, the ServiceLoader now has a stream() method that allows providers 
to be filtered and preprocessed: 
[https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html#stream()]
This allows us to use the new interface and just get the loaded class (which 
may come from module-info.class or a conventional SPI file): 
[https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.Provider.html#type()]

This change allows us to convert Lucene to modules listing all SPIs in the 
module-info.java.

  was:
We currently have our own implementation of the service loader standard (SPI) 
fo several reasons:

(1) In some older JDKs the order of classpath was not respected and this lead 
to wrong order of codecs implementing the same SPI name. This caused tests to 
sometimes use wrong class (we had this in Lucene 4 where we had a test-only 
read/write Lucene3 codec that was listed before the read-only one). That's no 
longer an issue, the order of loading does not matter. In addition, Java now 
does everything correct.

(2) In Analysis, we require SPI classes to have a constructor taking args (a 
Map of params in our case). We also extract the NAME from a static field. 
Standard service loader does not support this, it tries to instantiate the 
class with default ctor.

With Java 9+, the ServiceLoader now has a stream() method that allows to filter 
and preprocess classes: 
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html#stream()
This allows us to use the new interface and just get the loaded class (which 
may come from module-info.class or a conventional SPI file): 
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.Provider.html#type()

This change allows us to convert Lucene to modules listing all SPIs in the 
module-info.java.


> Retire SPIClassIterator from master because Java 9+ uses different mechanism 
> to load services when module system is used
> 
>
> Key: LUCENE-9281
> URL: https://issues.apache.org/jira/browse/LUCENE-9281
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: master (9.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We currently have our own implementation of the service loader standard (SPI) 
> for several reasons:
> (1) In some older JDKs the order of the classpath was not respected, and this led 
> to the wrong order of codecs implementing the same SPI name. This caused tests to 
> sometimes use the wrong class (we had this in Lucene 4, where we had a test-only 
> read/write Lucene3 codec that was listed before the read-only one). That's no 
> longer an issue: the order of loading does not matter. In addition, Java now 
> does everything correctly.
> (2) In Analysis, we require SPI classes to have a constructor taking args (a 
> Map of params in our case). We also extract the NAME from a static field. The 
> standard service loader does not support this; it tries to instantiate the 
> class with the default ctor.
> With Java 9+, the ServiceLoader now has a stream() method that allows us to 
> filter and preprocess classes: 
> [https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html#stream()]
> This allows us to use the new interface and just get the loaded class (which 
> may come from module-info.class or a conventional SPI file): 
> [https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.Provider.html#type()]
> This change allows us to convert Lucene to modules listing all SPIs in the 
> module-info.java.




[jira] [Commented] (SOLR-14345) Error messages are not properly propagated with non-default response parsers

2020-03-18 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061781#comment-17061781
 ] 

Munendra S N commented on SOLR-14345:
-

 [^SOLR-14345.patch] 
This handles both Http/Http2 clients

> Error messages are not properly propagated with non-default response parsers
> 
>
> Key: SOLR-14345
> URL: https://issues.apache.org/jira/browse/SOLR-14345
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-14345.patch, SOLR-14345.patch
>
>
> The default {{ResponseParser}} is {{BinaryResponseParser}}. When a non-default 
> response parser is specified in the request, the error message is not 
> propagated to the user. This happens in SolrCloud mode.
> I came across this problem when working on adding some tests which use 
> {{SolrTestCaseHS}}, but a similar problem exists with the SolrJ client.
> Also, the same problem exists in both HttpSolrClient and Http2SolrClient.
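A reduced model of the failure mode may help (the names below are illustrative, NOT SolrJ's actual API): the client reads the error message out of whatever structure the parser produced, so a parser that only maps the success payload leaves the caller with a generic message.

```java
import java.util.Map;

// Illustrative reduction of the bug: the error message survives only if
// the chosen parser maps the error body into the parsed structure.
class ErrorPropagationSketch {
    // stands in for the default parser, which surfaces the error body
    static final MiniParser DEFAULT = (status, body) ->
            status >= 400 ? Map.of("error", body) : Map.of("response", body);

    // stands in for a non-default parser that only models success payloads
    static final MiniParser NAIVE = (status, body) -> Map.of("response", body);

    static String errorMessage(MiniParser parser, int status, String body) {
        Object err = parser.parse(status, body).get("error");
        // what the user ends up seeing when the parser dropped the error body
        return err == null ? "Unknown error, status: " + status : err.toString();
    }

    public static void main(String[] args) {
        String body = "undefined field foo";
        System.out.println(errorMessage(DEFAULT, 400, body)); // undefined field foo
        System.out.println(errorMessage(NAIVE, 400, body));   // Unknown error, status: 400
    }
}

// toy parser abstraction, not SolrJ's ResponseParser
interface MiniParser {
    Map<String, Object> parse(int status, String body);
}
```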






[jira] [Updated] (SOLR-14345) Error messages are not properly propagated with non-default response parsers

2020-03-18 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-14345:

Attachment: SOLR-14345.patch

> Error messages are not properly propagated with non-default response parsers
> 
>
> Key: SOLR-14345
> URL: https://issues.apache.org/jira/browse/SOLR-14345
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-14345.patch, SOLR-14345.patch
>
>
> The default {{ResponseParser}} is {{BinaryResponseParser}}. When a non-default 
> response parser is specified in the request, the error message is not 
> propagated to the user. This happens in SolrCloud mode.
> I came across this problem when working on adding some tests which use 
> {{SolrTestCaseHS}}, but a similar problem exists with the SolrJ client.
> Also, the same problem exists in both HttpSolrClient and Http2SolrClient.






[jira] [Updated] (SOLR-14345) Error messages are not properly propagated with non-default response parsers

2020-03-18 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-14345:

Status: Patch Available  (was: Open)

> Error messages are not properly propagated with non-default response parsers
> 
>
> Key: SOLR-14345
> URL: https://issues.apache.org/jira/browse/SOLR-14345
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-14345.patch, SOLR-14345.patch
>
>
> The default {{ResponseParser}} is {{BinaryResponseParser}}. When a non-default 
> response parser is specified in the request, the error message is not 
> propagated to the user. This happens in SolrCloud mode.
> I came across this problem when working on adding some tests which use 
> {{SolrTestCaseHS}}, but a similar problem exists with the SolrJ client.
> Also, the same problem exists in both HttpSolrClient and Http2SolrClient.






[jira] [Assigned] (SOLR-14345) Error messages are not properly propagated with non-default response parsers

2020-03-18 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N reassigned SOLR-14345:
---

Assignee: Munendra S N

> Error messages are not properly propagated with non-default response parsers
> 
>
> Key: SOLR-14345
> URL: https://issues.apache.org/jira/browse/SOLR-14345
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Attachments: SOLR-14345.patch, SOLR-14345.patch
>
>
> The default {{ResponseParser}} is {{BinaryResponseParser}}. When a non-default 
> response parser is specified in the request, the error message is not 
> propagated to the user. This happens in SolrCloud mode.
> I came across this problem when working on adding some tests which use 
> {{SolrTestCaseHS}}, but a similar problem exists with the SolrJ client.
> Also, the same problem exists in both HttpSolrClient and Http2SolrClient.






[jira] [Created] (SOLR-14348) Split TestJsonFacets to multiple Test Classes

2020-03-18 Thread Munendra S N (Jira)
Munendra S N created SOLR-14348:
---

 Summary: Split TestJsonFacets to multiple Test Classes
 Key: SOLR-14348
 URL: https://issues.apache.org/jira/browse/SOLR-14348
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Munendra S N


{{TestJsonFacets}} uses parameterized testing: it runs each test for each 
facet.method. There are error cases which don't actually need this. Also, 
facet.method is applicable only to term facets.
There are a few range facet tests which run repeatedly without any change 
(facet.method has no effect on them). Also, splitting would help when we 
introduce facet.method for range facets, which would be different from term facets.
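A toy model of the cost (illustrative only; this is not JUnit's API, and the parameter values are not the suite's actual list): a class-level parameter multiplies every test method, including tests on which the parameter has no effect.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy model of class-level parameterization: every registered test runs
// once per facet.method value, even if the test ignores the parameter.
class ParamSuiteSketch {
    // illustrative parameter values, not the suite's real list
    static final List<String> FACET_METHODS = List.of("dv", "uif", "enum");

    static List<String> runAll(Map<String, Consumer<String>> tests) {
        List<String> executed = new ArrayList<>();
        for (String method : FACET_METHODS) {
            for (Map.Entry<String, Consumer<String>> test : tests.entrySet()) {
                test.getValue().accept(method); // run with this facet.method
                executed.add(test.getKey() + "[" + method + "]");
            }
        }
        return executed;
    }

    public static void main(String[] args) {
        // one test that uses the parameter, one error test that ignores it
        List<String> runs = runAll(Map.<String, Consumer<String>>of(
                "testTermFacet", m -> {},
                "testBadRequestError", m -> {}));
        System.out.println(runs.size()); // 6 runs for 2 tests
    }
}
```

Moving the error tests into their own unparameterized class removes the redundant runs without losing coverage.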






[jira] [Updated] (SOLR-14348) Split TestJsonFacets to multiple Test Classes

2020-03-18 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-14348:

Attachment: SOLR-14348.patch

> Split TestJsonFacets to multiple Test Classes
> -
>
> Key: SOLR-14348
> URL: https://issues.apache.org/jira/browse/SOLR-14348
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-14348.patch
>
>
> {{TestJsonFacets}} uses parameterized testing: it runs each test for each 
> facet.method. There are error cases which don't actually need this. Also, 
> facet.method is applicable only to term facets.
> There are a few range facet tests which run repeatedly without any change 
> (facet.method has no effect on them). Also, splitting would help when we 
> introduce facet.method for range facets, which would be different from term facets.






[jira] [Commented] (SOLR-14348) Split TestJsonFacets to multiple Test Classes

2020-03-18 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061787#comment-17061787
 ] 

Munendra S N commented on SOLR-14348:
-

 [^SOLR-14348.patch] 
This patch splits {{TestJsonFacets}} into 3 classes: a trimmed-down {{TestJsonFacets}}, 
{{TestJsonFacetErrors}}, and {{TestJsonRangeFacet}}.
* Range facets also cover the distributed case now. To cover the distributed case for 
errors, SOLR-14345 needs to be resolved (without it, it is difficult to verify the 
error message).
[~mkhl] could you please review?

> Split TestJsonFacets to multiple Test Classes
> -
>
> Key: SOLR-14348
> URL: https://issues.apache.org/jira/browse/SOLR-14348
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-14348.patch
>
>
> {{TestJsonFacets}} uses parameterized testing: it runs each test for each 
> facet.method. There are error cases which don't actually need this. Also, 
> facet.method is applicable only to term facets.
> There are a few range facet tests which run repeatedly without any change 
> (facet.method has no effect on them). Also, splitting would help when we 
> introduce facet.method for range facets, which would be different from term facets.






[jira] [Updated] (SOLR-14348) Split TestJsonFacets to multiple Test Classes

2020-03-18 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-14348:

Status: Patch Available  (was: Open)

> Split TestJsonFacets to multiple Test Classes
> -
>
> Key: SOLR-14348
> URL: https://issues.apache.org/jira/browse/SOLR-14348
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-14348.patch
>
>
> {{TestJsonFacets}} uses parameterized testing: it runs each test for each 
> facet.method. There are error cases which don't actually need this. Also, 
> facet.method is applicable only to term facets.
> There are a few range facet tests which run repeatedly without any change 
> (facet.method has no effect on them). Also, splitting would help when we 
> introduce facet.method for range facets, which would be different from term facets.






[jira] [Commented] (LUCENE-9281) Retire SPIClassIterator from master because Java 9+ uses different mechanism to load services when module system is used

2020-03-18 Thread Tomoko Uchida (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061797#comment-17061797
 ] 

Tomoko Uchida commented on LUCENE-9281:
---

I am fine with the changes.

> Retire SPIClassIterator from master because Java 9+ uses different mechanism 
> to load services when module system is used
> 
>
> Key: LUCENE-9281
> URL: https://issues.apache.org/jira/browse/LUCENE-9281
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: master (9.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We currently have our own implementation of the service loader standard (SPI) 
> for several reasons:
> (1) In some older JDKs the order of the classpath was not respected, and this led 
> to the wrong order of codecs implementing the same SPI name. This caused tests to 
> sometimes use the wrong class (we had this in Lucene 4, where we had a test-only 
> read/write Lucene3 codec that was listed before the read-only one). That's no 
> longer an issue: the order of loading does not matter. In addition, Java now 
> does everything correctly.
> (2) In Analysis, we require SPI classes to have a constructor taking args (a 
> Map of params in our case). We also extract the NAME from a static field. The 
> standard service loader does not support this; it tries to instantiate the 
> class with the default ctor.
> With Java 9+, the ServiceLoader now has a stream() method that allows us to 
> filter and preprocess classes: 
> [https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html#stream()]
> This allows us to use the new interface and just get the loaded class (which 
> may come from module-info.class or a conventional SPI file): 
> [https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.Provider.html#type()]
> This change allows us to convert Lucene to modules listing all SPIs in the 
> module-info.java.






[GitHub] [lucene-solr] ravowlga123 opened a new pull request #1362: SOLR-13768 Remove requiresub parameter in JWTAuthPlugin

2020-03-18 Thread GitBox
ravowlga123 opened a new pull request #1362: SOLR-13768 Remove requiresub 
parameter in JWTAuthPlugin
URL: https://github.com/apache/lucene-solr/pull/1362
 
 
   # Description
   As per ticket [SOLR-13768](https://jira.apache.org/jira/browse/SOLR-13768) 
we need to remove support for the deprecated 'requireSub' parameter in JWTAuthPlugin.
   
   # Solution
   Removed the parameter PARAM_REQUIRE_SUBJECT used in JWTAuthPlugin.
   
   # Tests
   Ran the full test suite using ant test.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [x] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [x] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [x] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [x] I have developed this patch against the `master` branch.
   - [x] I have run `ant precommit` and the appropriate test suite.
   - [ ] I have added tests for my changes.
   - [ ] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (LUCENE-9283) DelimitedBoostTokenFilter can fail testRandomChains

2020-03-18 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061878#comment-17061878
 ] 

David Smiley commented on LUCENE-9283:
--

Yes, remove it in TestRandomChains, for consistency with the choice made for 
DelimitedTermFrequencyTokenFilter, which is super-similar and already excluded.

> DelimitedBoostTokenFilter can fail testRandomChains
> ---
>
> Key: LUCENE-9283
> URL: https://issues.apache.org/jira/browse/LUCENE-9283
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
>
> DelimitedBoostTokenFilter expects tokens of the form `token` or 
> `token|number` and throws a NumberFormatException if the `number` part can't 
> be parsed.  This can cause test failures when we build random chains and 
> throw random data through them.
> We can either exclude DelimitedBoostTokenFilter when building a random 
> analyzer, or add a flag to ignore badly-formed tokens. I lean towards doing 
> the former, as I don't really want to make leniency the default here.
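The failing parse can be sketched as follows (a minimal stand-in, not the filter's actual implementation): anything non-numeric after the delimiter throws NumberFormatException, which is exactly what randomly generated token text can produce.

```java
// Minimal sketch of delimited-boost parsing: a token is either "term" or
// "term|number"; a malformed number part throws NumberFormatException.
class DelimitedBoostSketch {
    static float parseBoost(String token, char delimiter) {
        int i = token.indexOf(delimiter);
        if (i < 0) {
            return 1.0f; // no delimiter: default boost
        }
        // may throw NumberFormatException on random data after the delimiter
        return Float.parseFloat(token.substring(i + 1));
    }

    public static void main(String[] args) {
        System.out.println(parseBoost("lucene", '|'));     // 1.0
        System.out.println(parseBoost("lucene|2.5", '|')); // 2.5
        try {
            parseBoost("lucene|x", '|');                   // random data after '|'
        } catch (NumberFormatException expected) {
            System.out.println("rejected: lucene|x");
        }
    }
}
```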






[jira] [Commented] (SOLR-13768) Remove support for deprecated 'requireSub' parameter in JWTAuthPlugin

2020-03-18 Thread Rabi Kumar K C (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061890#comment-17061890
 ] 

Rabi Kumar K C commented on SOLR-13768:
---

Hi [~janhoy] I was just going through the JWTAuthPlugin class, and it seems that 
PARAM_REQUIRE_SUBJECT is not marked as deprecated; however, if the value is set, a 
warning is logged saying that the parameter might be removed in a future release. 
I have removed this parameter and created a PR for this. Please let me know if 
the changes I made are not what was expected, so that I can make the correct 
changes.

> Remove support for deprecated 'requireSub' parameter in JWTAuthPlugin
> -
>
> Key: SOLR-13768
> URL: https://issues.apache.org/jira/browse/SOLR-13768
> Project: Solr
>  Issue Type: Improvement
>  Components: Authentication
>Reporter: Jan Høydahl
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Spinoff from SOLR-13734






[jira] [Commented] (LUCENE-9281) Retire SPIClassIterator from master because Java 9+ uses different mechanism to load services when module system is used

2020-03-18 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061927#comment-17061927
 ] 

Dawid Weiss commented on LUCENE-9281:
-

Yes, this looks good too.

> Retire SPIClassIterator from master because Java 9+ uses different mechanism 
> to load services when module system is used
> 
>
> Key: LUCENE-9281
> URL: https://issues.apache.org/jira/browse/LUCENE-9281
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: master (9.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We currently have our own implementation of the service loader standard (SPI) 
> for several reasons:
> (1) In some older JDKs the order of the classpath was not respected, and this led 
> to the wrong order of codecs implementing the same SPI name. This caused tests to 
> sometimes use the wrong class (we had this in Lucene 4, where we had a test-only 
> read/write Lucene3 codec that was listed before the read-only one). That's no 
> longer an issue: the order of loading does not matter. In addition, Java now 
> does everything correctly.
> (2) In Analysis, we require SPI classes to have a constructor taking args (a 
> Map of params in our case). We also extract the NAME from a static field. The 
> standard service loader does not support this; it tries to instantiate the 
> class with the default ctor.
> With Java 9+, the ServiceLoader now has a stream() method that allows us to 
> filter and preprocess classes: 
> [https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html#stream()]
> This allows us to use the new interface and just get the loaded class (which 
> may come from module-info.class or a conventional SPI file): 
> [https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.Provider.html#type()]
> This change allows us to convert Lucene to modules listing all SPIs in the 
> module-info.java.






[jira] [Comment Edited] (SOLR-13768) Remove support for deprecated 'requireSub' parameter in JWTAuthPlugin

2020-03-18 Thread Rabi Kumar K C (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061890#comment-17061890
 ] 

Rabi Kumar K C edited comment on SOLR-13768 at 3/18/20, 5:27 PM:
-

Hi [~janhoy] I was just going through the JWTAuthPlugin class, and it seems that 
PARAM_REQUIRE_SUBJECT is not marked as deprecated; however, if the value is set, a 
warning is logged saying that the parameter might be removed in a future release. 
I have removed this parameter and created a PR for this. Please let me know if 
the changes I made are not what was expected, so that I can make the correct 
changes.


was (Author: rabikumar.kc):
Hi [~janhoy] I was just going through JWTAuthPlugin class and it seems that 
PARAM_REQUIRE_SUBJECT is not marked as deprecated but if the value is set then 
message at warning level is logged saying that param_require_subject might be 
removed in the future releases. I have removed this parameter and created a PR 
fo this. Please do let me know if the made changes are not the expected one so 
that I could make the correct changes.

> Remove support for deprecated 'requireSub' parameter in JWTAuthPlugin
> -
>
> Key: SOLR-13768
> URL: https://issues.apache.org/jira/browse/SOLR-13768
> Project: Solr
>  Issue Type: Improvement
>  Components: Authentication
>Reporter: Jan Høydahl
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Spinoff from SOLR-13734






[jira] [Commented] (SOLR-14325) Core status could be improved to not require an IndexSearcher

2020-03-18 Thread Richard Goodman (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061939#comment-17061939
 ] 

Richard Goodman commented on SOLR-14325:


Hi David, 

we noticed the following when trying to recover some instances on a bigger 
cluster setup; because of that, we rolled back:
{code}
2020-03-17 13:27:38.288 INFO  (qtp511717113-1079) [   ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={wt=json} status=500 QTime=4821977
2020-03-17 13:27:38.289 ERROR (qtp511717113-1079) [   ] o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Error handling 'STATUS' action
    at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:363)
    at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
    at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
    at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:736)
    at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
    at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
    at org.eclipse.jetty.server.Server.handle(Server.java:502)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.file.NoSuchFileException: /data/solr/solrcloud-cluster0/data/a_collection_shard9_replica_n30/data/index.20200317120438926/_1eo70_6t1.liv
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun
{code}

[jira] [Commented] (SOLR-14347) Autoscaling placement wrong due to incorrect LockLevel for collection modifications

2020-03-18 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061971#comment-17061971
 ] 

Andrzej Bialecki commented on SOLR-14347:
-

This patch seems to fix SOLR-13884, too.

> Autoscaling placement wrong due to incorrect LockLevel for collection 
> modifications
> ---
>
> Key: SOLR-14347
> URL: https://issues.apache.org/jira/browse/SOLR-14347
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 8.5
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Attachments: SOLR-14347.patch
>
>
> Steps to reproduce:
>  * create a cluster of a few nodes (tested with 7 nodes)
>  * define per-collection policies that distribute replicas exclusively on 
> different nodes per policy
>  * concurrently create a few collections, each using a different policy
>  * resulting replica placement will be seriously wrong, causing many policy 
> violations
> Running the same scenario but instead creating collections sequentially 
> results in no violations.
> I suspect this is caused by incorrect locking level for all collection 
> operations (as defined in {{CollectionParams.CollectionAction}}) that create 
> new replica placements - i.e. CREATE, ADDREPLICA, MOVEREPLICA, DELETENODE, 
> REPLACENODE, SPLITSHARD, RESTORE, REINDEXCOLLECTION. All of these operations 
> use the policy engine to create new replica placements, and as a result they 
> change the cluster state. However, currently these operations are locked (in 
> {{OverseerCollectionMessageHandler.lockTask}} ) using 
> {{LockLevel.COLLECTION}}. In practice this means that the lock is held only 
> for the particular collection that is being modified.
> A straightforward fix for this issue is to change the locking level to 
> CLUSTER (and I confirm this fixes the scenario described above). However, 
> this effectively serializes all collection operations listed above, which 
> will result in general slow-down of all collection operations.
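The trade-off between the two lock levels can be sketched with a toy model (this is not Solr's actual lock tree, just an illustration of the contention pattern): at COLLECTION level each collection gets its own lock, so concurrent CREATEs can compute placements from the same stale cluster snapshot; at CLUSTER level everyone contends on one lock and placement is serialized.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Toy model of the two lock levels discussed above.
class LockLevelSketch {
    private static final ReentrantLock CLUSTER = new ReentrantLock();
    private static final Map<String, ReentrantLock> PER_COLLECTION = new ConcurrentHashMap<>();

    static ReentrantLock lockFor(String level, String collection) {
        if ("CLUSTER".equals(level)) {
            return CLUSTER; // one lock: CREATE on "a" blocks CREATE on "b"
        }
        // one lock per collection: CREATEs on "a" and "b" run concurrently,
        // each placing replicas against the same snapshot of the cluster
        return PER_COLLECTION.computeIfAbsent(collection, c -> new ReentrantLock());
    }

    public static void main(String[] args) {
        System.out.println(lockFor("CLUSTER", "a") == lockFor("CLUSTER", "b"));       // true
        System.out.println(lockFor("COLLECTION", "a") == lockFor("COLLECTION", "b")); // false
    }
}
```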






[jira] [Commented] (SOLR-13884) Concurrent collection creation leads to multiple replicas placed on same node

2020-03-18 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061970#comment-17061970
 ] 

Andrzej Bialecki commented on SOLR-13884:
-

See also SOLR-14347; this test passes when the fix from that issue is applied.

> Concurrent collection creation leads to multiple replicas placed on same node
> -
>
> Key: SOLR-13884
> URL: https://issues.apache.org/jira/browse/SOLR-13884
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When multiple collection creations are done concurrently with a 
> collection-level policy, multiple replicas of a single shard can end up on 
> the same node, violating the specified policy.
> This was observed on both 8.2 and master.






[jira] [Commented] (SOLR-14347) Autoscaling placement wrong due to incorrect LockLevel for collection modifications

2020-03-18 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062007#comment-17062007
 ] 

David Smiley commented on SOLR-14347:
-

I wonder if we should not take this approach and instead introduce a separate 
mechanism that is purely used for replica placement/balancing?  Such a 
mechanism might not be a lock; maybe it'd be some data structure that 
represents the target cluster shape we want to get to.  If it were held in 
ZooKeeper and atomically updated, we wouldn't even need the Overseer to be the 
single point of management of it.  I'm just spitballing here.

> Autoscaling placement wrong due to incorrect LockLevel for collection 
> modifications
> ---
>
> Key: SOLR-14347
> URL: https://issues.apache.org/jira/browse/SOLR-14347
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 8.5
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Attachments: SOLR-14347.patch
>
>
> Steps to reproduce:
>  * create a cluster of a few nodes (tested with 7 nodes)
>  * define per-collection policies that distribute replicas exclusively on 
> different nodes per policy
>  * concurrently create a few collections, each using a different policy
>  * resulting replica placement will be seriously wrong, causing many policy 
> violations
> Running the same scenario but instead creating collections sequentially 
> results in no violations.
> I suspect this is caused by incorrect locking level for all collection 
> operations (as defined in {{CollectionParams.CollectionAction}}) that create 
> new replica placements - i.e. CREATE, ADDREPLICA, MOVEREPLICA, DELETENODE, 
> REPLACENODE, SPLITSHARD, RESTORE, REINDEXCOLLECTION. All of these operations 
> use the policy engine to create new replica placements, and as a result they 
> change the cluster state. However, currently these operations are locked (in 
> {{OverseerCollectionMessageHandler.lockTask}} ) using 
> {{LockLevel.COLLECTION}}. In practice this means that the lock is held only 
> for the particular collection that is being modified.
> A straightforward fix for this issue is to change the locking level to 
> CLUSTER (and I confirm this fixes the scenario described above). However, 
> this effectively serializes all collection operations listed above, which 
> will result in general slow-down of all collection operations.






[jira] [Commented] (SOLR-14345) Error messages are not properly propagated with non-default response parsers

2020-03-18 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062010#comment-17062010
 ] 

Lucene/Solr QA commented on SOLR-14345:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m  7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m  7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 47m  
7s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
55s{color} | {color:green} solrj in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} test-framework in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-14345 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12997036/SOLR-14345.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 4fd96bedc27 |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/719/testReport/ |
| modules | C: solr/core solr/solrj solr/test-framework U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/719/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Error messages are not properly propagated with non-default response parsers
> 
>
> Key: SOLR-14345
> URL: https://issues.apache.org/jira/browse/SOLR-14345
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Attachments: SOLR-14345.patch, SOLR-14345.patch
>
>
> The default {{ResponseParser}} is {{BinaryResponseParser}}. When a non-default 
> response parser is specified in the request, the error message is not 
> properly propagated to the user. This happens in SolrCloud mode.
> I came across this problem when working on adding a test which uses 
> {{SolrTestCaseHS}}, but a similar problem exists with the SolrJ client.
> Also, the same problem exists in both HttpSolrClient and Http2SolrClient.






[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1359: SOLR-13101: Make dir hash computation optional and resilient

2020-03-18 Thread GitBox
mbwaheed commented on a change in pull request #1359: SOLR-13101: Make dir hash 
computation optional and resilient
URL: https://github.com/apache/lucene-solr/pull/1359#discussion_r394589235
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/store/blob/metadata/ServerSideMetadata.java
 ##
 @@ -214,23 +228,23 @@ public String getDirectoryHash() {
 
   /**
* Returns true if the contents of the directory passed into this method are identical to the contents of
-   * the directory of the Solr core of this instance, taken at instance creation time.
+   * the directory of the Solr core of this instance, taken at instance creation time. If the directory hash was not
+   * computed at instance creation time, then this returns false.
 
 Review comment:
  If the directory hash was not computed, I would throw IllegalStateException 
instead of returning false. This will help identify the programming error 
faster.
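As a sketch of that suggestion (a toy stand-in; the field and method names below are illustrative assumptions, not the PR's actual `ServerSideMetadata` code), the fail-fast variant could look like:

```java
// Toy stand-in for the PR's metadata class, showing the reviewer's
// suggestion: throw IllegalStateException when the directory hash was
// never computed, instead of silently returning false.
final class DirectoryMetadata {
    private final String directoryHash; // null => hash was not captured

    DirectoryMetadata(String directoryHash) {
        this.directoryHash = directoryHash;
    }

    boolean isSameDirectoryContent(String otherHash) {
        if (directoryHash == null) {
            // Failing fast surfaces the programming error at the call site.
            throw new IllegalStateException(
                "Directory hash was not computed at instance creation time");
        }
        return directoryHash.equals(otherHash);
    }
}
```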


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1359: SOLR-13101: Make dir hash computation optional and resilient

2020-03-18 Thread GitBox
mbwaheed commented on a change in pull request #1359: SOLR-13101: Make dir hash 
computation optional and resilient
URL: https://github.com/apache/lucene-solr/pull/1359#discussion_r394591134
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/store/blob/process/CorePullTask.java
 ##
 @@ -286,7 +286,7 @@ void pullCoreFromBlob(boolean isLeaderPulling) throws 
InterruptedException {
   }
 
   // Get local metadata + resolve with blob metadata. Given we're doing a 
pull, don't need to reserve commit point
-  ServerSideMetadata serverMetadata = new 
ServerSideMetadata(pullCoreInfo.getCoreName(), storeManager.getCoreContainer(), 
false);
+  ServerSideMetadata serverMetadata = new 
ServerSideMetadata(pullCoreInfo.getCoreName(), storeManager.getCoreContainer(), 
false, true);
 
 Review comment:
   Maybe add a short comment explaining the passed value of captureDirHash. 
Same comment for BlobStoreUtils.java and CorePusher.java.





[jira] [Commented] (SOLR-14325) Core status could be improved to not require an IndexSearcher

2020-03-18 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062022#comment-17062022
 ] 

David Smiley commented on SOLR-14325:
-

I think the central problem here is that we can't be sure the underlying files 
aren't going to be changed out from under us due to replication or restore.  
Grabbing an IndexSearcher effectively locks it for us but avoiding it leaves us 
vulnerable.  I'm not sure if there is a suitable lock we can grab. [~hossman] 
do you know? _If there was such a lock_, I suspect nonetheless the situation 
would be back to what you have now – waiting for many seconds if a replication 
is in progress.

Can you try calling getNewestSearcher(false) instead (and use try-finally to 
ensure you close it when done if non-null)?  I'm not sure if that might block 
for a long time.
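The release pattern suggested above, sketched with a toy stand-in for Solr's `org.apache.solr.util.RefCounted` contract (`get()` plus `decref()`; the `CoreStatus` wrapper and its behavior here are assumptions for illustration):

```java
// Toy stand-in for org.apache.solr.util.RefCounted, just enough to show
// the suggested pattern: null-check the result of getNewestSearcher(false),
// then release the reference with decref() in a finally block.
final class RefCounted<T> {
    private final T resource;
    private int refCount = 1;

    RefCounted(T resource) { this.resource = resource; }

    T get() { return resource; }

    synchronized void decref() { refCount--; }

    synchronized int getRefCount() { return refCount; }
}

final class CoreStatus {
    // May return null, e.g. while the core is recovering and no searcher
    // has ever been opened (hypothetical simplification).
    static RefCounted<String> getNewestSearcher(boolean openNew) {
        return openNew ? new RefCounted<>("searcher") : null;
    }

    static String indexInfo(RefCounted<String> searcherRef) {
        if (searcherRef == null) {
            return "no-searcher";     // skip indexInfo rather than block
        }
        try {
            return searcherRef.get(); // use the searcher's Directory here
        } finally {
            searcherRef.decref();     // always release the reference
        }
    }
}
```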

> Core status could be improved to not require an IndexSearcher
> -
>
> Key: SOLR-14325
> URL: https://issues.apache.org/jira/browse/SOLR-14325
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-14325.patch
>
>
> When the core status is told to request "indexInfo", it currently grabs the 
> SolrIndexSearcher but only to grab the Directory.  SolrCore.getIndexSize also 
> only requires the Directory.  By insisting on a SolrIndexSearcher, we 
> potentially block for awhile if the core is in recovery since there is no 
> SolrIndexSearcher.
> [https://lists.apache.org/thread.html/r076218c964e9bd6ed0a53133be9170c3cf36cc874c1b4652120db417%40%3Cdev.lucene.apache.org%3E]
> It'd be nice to have a solution that conditionally used the Directory of the 
> SolrIndexSearcher only if it's present so that we don't waste time creating 
> one either.






[jira] [Commented] (SOLR-14348) Split TestJsonFacets to multiple Test Classes

2020-03-18 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062078#comment-17062078
 ] 

Lucene/Solr QA commented on SOLR-14348:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 47s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.security.hadoop.TestDelegationWithHadoopAuth |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-14348 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12997037/SOLR-14348.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 4fd96bedc27 |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/720/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/720/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/720/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Split TestJsonFacets to multiple Test Classes
> -
>
> Key: SOLR-14348
> URL: https://issues.apache.org/jira/browse/SOLR-14348
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-14348.patch
>
>
> {{TestJsonFacets}} has parameterized testing. It runs each test for each 
> facet.method. There are error cases which don't actually need it. Also, 
> facet.method is applicable only to term facets.
> There are a few range facet tests which run repeatedly without 
> any change (facet.method has no effect). Also, splitting would help when we 
> introduce facet.method for range facets, which would differ from term facets.






[GitHub] [lucene-solr] mayya-sharipova commented on a change in pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents

2020-03-18 Thread GitBox
mayya-sharipova commented on a change in pull request #1351: LUCENE-9280: 
Collectors to skip noncompetitive documents
URL: https://github.com/apache/lucene-solr/pull/1351#discussion_r394656220
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/search/LongDocValuesPointComparator.java
 ##
 @@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import org.apache.lucene.document.LongPoint;
+import org.apache.lucene.index.DocValues;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.index.NumericDocValues;
+import org.apache.lucene.index.PointValues;
+import org.apache.lucene.util.DocIdSetBuilder;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import static 
org.apache.lucene.search.FieldComparator.IteratorSupplierComparator;
+
+public class LongDocValuesPointComparator extends IteratorSupplierComparator<Long> {
+private final String field;
+private final boolean reverse;
+private final long[] values;
+private long bottom;
+private long topValue;
+protected NumericDocValues docValues;
+private DocIdSetIterator iterator;
+private PointValues pointValues;
+private int maxDoc;
+private int maxDocVisited;
+
+public LongDocValuesPointComparator(String field, int numHits, boolean 
reverse) {
+this.field = field;
+this.reverse = reverse;
+this.values = new long[numHits];
+}
+
+private long getValueForDoc(int doc) throws IOException {
+if (docValues.advanceExact(doc)) {
+return docValues.longValue();
+} else {
+return 0L; // TODO: missing value
+}
+}
+
+@Override
+public int compare(int slot1, int slot2) {
+return Long.compare(values[slot1], values[slot2]);
+}
+
+@Override
+public void setTopValue(Long value) {
+topValue = value;
+}
+
+@Override
+public Long value(int slot) {
+return Long.valueOf(values[slot]);
+}
+
+@Override
+public LeafFieldComparator getLeafComparator(LeafReaderContext context) 
throws IOException {
+docValues = DocValues.getNumeric(context.reader(), field);
+iterator = docValues;
+pointValues = context.reader().getPointValues(field);
+maxDoc = context.reader().maxDoc();
+maxDocVisited = 0;
+return this;
+}
+
+@Override
+public void setBottom(int slot) {
+this.bottom = values[slot];
+}
+
+@Override
+public int compareBottom(int doc) throws IOException {
+return Long.compare(bottom, getValueForDoc(doc));
+}
+
+@Override
+public int compareTop(int doc) throws IOException {
+return Long.compare(topValue, getValueForDoc(doc));
+}
+
+@Override
+public void copy(int slot, int doc) throws IOException {
+maxDocVisited = doc;
+values[slot] = getValueForDoc(doc);
+}
+
+@Override
+public void setScorer(Scorable scorer) throws IOException {}
+
+public DocIdSetIterator iterator() {
+return iterator;
+}
+
+public void updateIterator() throws IOException {
+final byte[] maxValueAsBytes = new byte[Long.BYTES];
+final byte[] minValueAsBytes = new byte[Long.BYTES];
+if (reverse == false) {
+LongPoint.encodeDimension(bottom, maxValueAsBytes, 0);
+} else {
+LongPoint.encodeDimension(bottom, minValueAsBytes, 0);
+};
+
+DocIdSetBuilder result = new DocIdSetBuilder(maxDoc);
+PointValues.IntersectVisitor visitor = new 
PointValues.IntersectVisitor() {
+DocIdSetBuilder.BulkAdder adder;
+@Override
+public void grow(int count) {
+adder = result.grow(count);
+}
+
+@Override
+public void visit(int docID) {
+if (docID <= maxDocVisited) {
+return; // Already visited or skipped
+}
+adder.add(docID);
+}
+
+@Override
+public void visit(int docID, byte[] packedValue) {
+if (docID <= maxDocVisited) {
+return;

[GitHub] [lucene-solr] mayya-sharipova commented on a change in pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents

2020-03-18 Thread GitBox
mayya-sharipova commented on a change in pull request #1351: LUCENE-9280: 
Collectors to skip noncompetitive documents
URL: https://github.com/apache/lucene-solr/pull/1351#discussion_r394656108
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/search/LongDocValuesPointComparator.java
 ##
 @@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import org.apache.lucene.document.LongPoint;
+import org.apache.lucene.index.DocValues;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.index.NumericDocValues;
+import org.apache.lucene.index.PointValues;
+import org.apache.lucene.util.DocIdSetBuilder;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import static 
org.apache.lucene.search.FieldComparator.IteratorSupplierComparator;
+
+public class LongDocValuesPointComparator extends IteratorSupplierComparator<Long> {
+private final String field;
+private final boolean reverse;
+private final long[] values;
+private long bottom;
+private long topValue;
+protected NumericDocValues docValues;
+private DocIdSetIterator iterator;
+private PointValues pointValues;
+private int maxDoc;
+private int maxDocVisited;
+
+public LongDocValuesPointComparator(String field, int numHits, boolean 
reverse) {
+this.field = field;
+this.reverse = reverse;
+this.values = new long[numHits];
+}
+
+private long getValueForDoc(int doc) throws IOException {
+if (docValues.advanceExact(doc)) {
+return docValues.longValue();
+} else {
+return 0L; // TODO: missing value
+}
+}
+
+@Override
+public int compare(int slot1, int slot2) {
+return Long.compare(values[slot1], values[slot2]);
+}
+
+@Override
+public void setTopValue(Long value) {
+topValue = value;
+}
+
+@Override
+public Long value(int slot) {
+return Long.valueOf(values[slot]);
+}
+
+@Override
+public LeafFieldComparator getLeafComparator(LeafReaderContext context) 
throws IOException {
+docValues = DocValues.getNumeric(context.reader(), field);
+iterator = docValues;
+pointValues = context.reader().getPointValues(field);
+maxDoc = context.reader().maxDoc();
+maxDocVisited = 0;
+return this;
+}
+
+@Override
+public void setBottom(int slot) {
+this.bottom = values[slot];
+}
+
+@Override
+public int compareBottom(int doc) throws IOException {
+return Long.compare(bottom, getValueForDoc(doc));
+}
+
+@Override
+public int compareTop(int doc) throws IOException {
+return Long.compare(topValue, getValueForDoc(doc));
+}
+
+@Override
+public void copy(int slot, int doc) throws IOException {
+maxDocVisited = doc;
+values[slot] = getValueForDoc(doc);
+}
+
+@Override
+public void setScorer(Scorable scorer) throws IOException {}
+
+public DocIdSetIterator iterator() {
+return iterator;
+}
+
+public void updateIterator() throws IOException {
+final byte[] maxValueAsBytes = new byte[Long.BYTES];
+final byte[] minValueAsBytes = new byte[Long.BYTES];
+if (reverse == false) {
+LongPoint.encodeDimension(bottom, maxValueAsBytes, 0);
+} else {
+LongPoint.encodeDimension(bottom, minValueAsBytes, 0);
+};
+
 
 Review comment:
   Addressed in 6384b15





[GitHub] [lucene-solr] mayya-sharipova commented on a change in pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents

2020-03-18 Thread GitBox
mayya-sharipova commented on a change in pull request #1351: LUCENE-9280: 
Collectors to skip noncompetitive documents
URL: https://github.com/apache/lucene-solr/pull/1351#discussion_r394655989
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/search/LongDocValuesPointComparator.java
 ##
 @@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import org.apache.lucene.document.LongPoint;
+import org.apache.lucene.index.DocValues;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.index.NumericDocValues;
+import org.apache.lucene.index.PointValues;
+import org.apache.lucene.util.DocIdSetBuilder;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import static 
org.apache.lucene.search.FieldComparator.IteratorSupplierComparator;
+
+public class LongDocValuesPointComparator extends IteratorSupplierComparator<Long> {
+private final String field;
+private final boolean reverse;
+private final long[] values;
+private long bottom;
+private long topValue;
+protected NumericDocValues docValues;
+private DocIdSetIterator iterator;
+private PointValues pointValues;
+private int maxDoc;
+private int maxDocVisited;
+
+public LongDocValuesPointComparator(String field, int numHits, boolean 
reverse) {
+this.field = field;
+this.reverse = reverse;
+this.values = new long[numHits];
+}
+
+private long getValueForDoc(int doc) throws IOException {
+if (docValues.advanceExact(doc)) {
+return docValues.longValue();
+} else {
+return 0L; // TODO: missing value
+}
+}
+
+@Override
+public int compare(int slot1, int slot2) {
+return Long.compare(values[slot1], values[slot2]);
+}
+
+@Override
+public void setTopValue(Long value) {
+topValue = value;
+}
+
+@Override
+public Long value(int slot) {
+return Long.valueOf(values[slot]);
+}
+
+@Override
+public LeafFieldComparator getLeafComparator(LeafReaderContext context) 
throws IOException {
+docValues = DocValues.getNumeric(context.reader(), field);
+iterator = docValues;
+pointValues = context.reader().getPointValues(field);
+maxDoc = context.reader().maxDoc();
+maxDocVisited = 0;
+return this;
+}
+
+@Override
+public void setBottom(int slot) {
+this.bottom = values[slot];
+}
+
+@Override
+public int compareBottom(int doc) throws IOException {
+return Long.compare(bottom, getValueForDoc(doc));
+}
+
+@Override
+public int compareTop(int doc) throws IOException {
+return Long.compare(topValue, getValueForDoc(doc));
+}
+
+@Override
+public void copy(int slot, int doc) throws IOException {
+maxDocVisited = doc;
+values[slot] = getValueForDoc(doc);
+}
+
+@Override
+public void setScorer(Scorable scorer) throws IOException {}
+
+public DocIdSetIterator iterator() {
+return iterator;
+}
+
+public void updateIterator() throws IOException {
 
 Review comment:
   Addressed in 6384b15





[GitHub] [lucene-solr] mayya-sharipova commented on a change in pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents

2020-03-18 Thread GitBox
mayya-sharipova commented on a change in pull request #1351: LUCENE-9280: 
Collectors to skip noncompetitive documents
URL: https://github.com/apache/lucene-solr/pull/1351#discussion_r394659789
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/FieldComparator.java
 ##
 @@ -928,4 +928,9 @@ public int compareTop(int doc) throws IOException {
 @Override
 public void setScorer(Scorable scorer) {}
   }
+
+  public static abstract class IteratorSupplierComparator<T> extends FieldComparator<T> implements LeafFieldComparator {
+abstract DocIdSetIterator iterator();
+abstract void updateIterator() throws IOException;
 
 Review comment:
   Indeed it is more straightforward to just update an iterator in `setBottom` 
function of a comparator.
   
   But I was thinking it is better to have a special function for two reasons:
   1)  After updating an iterator, in `TopFieldCollector` we need to change 
   `totalHitsRelation = TotalHits.Relation.GREATER_THAN_OR_EQUAL_TO;`
   
   2)  we also need to check `hitsThresholdChecker.isThresholdReached()`, and 
passing a not strictly related object `hitsThresholdChecker` to a comparator's 
constructor doesn't look nice to me. 
   
   Please let me know if you think otherwise
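A minimal sketch of the collector-side flow described above, using toy types (the real `TopFieldCollector` wiring in the PR differs; the `Runnable` hook stands in for the comparator's `updateIterator()`): once the hits threshold is exceeded, the collector asks the comparator to update its iterator and downgrades the total-hit relation to a lower bound.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy sketch of the two collector-side effects the comment describes:
// after the hits threshold is exceeded, (1) tell the comparator to shrink
// its iterator to competitive docs and (2) report the total hit count as
// a lower bound rather than an exact value.
final class SkippingCollector {
    enum Relation { EQUAL_TO, GREATER_THAN_OR_EQUAL_TO }

    private final int hitsThreshold;
    private final AtomicInteger hits = new AtomicInteger();
    private final Runnable updateIterator; // stand-in for the comparator hook
    Relation totalHitsRelation = Relation.EQUAL_TO;

    SkippingCollector(int hitsThreshold, Runnable updateIterator) {
        this.hitsThreshold = hitsThreshold;
        this.updateIterator = updateIterator;
    }

    void collect(int doc) {
        hits.incrementAndGet();
        if (hits.get() > hitsThreshold) {
            updateIterator.run(); // comparator narrows its iterator
            totalHitsRelation = Relation.GREATER_THAN_OR_EQUAL_TO;
        }
    }
}
```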





[GitHub] [lucene-solr] mayya-sharipova commented on a change in pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents

2020-03-18 Thread GitBox
mayya-sharipova commented on a change in pull request #1351: LUCENE-9280: 
Collectors to skip noncompetitive documents
URL: https://github.com/apache/lucene-solr/pull/1351#discussion_r394663323
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/Weight.java
 ##
 @@ -249,6 +249,10 @@ static int scoreRange(LeafCollector collector, 
DocIdSetIterator iterator, TwoPha
  *  See https://issues.apache.org/jira/browse/LUCENE-5487";>LUCENE-5487 */
 static void scoreAll(LeafCollector collector, DocIdSetIterator iterator, 
TwoPhaseIterator twoPhase, Bits acceptDocs) throws IOException {
   if (twoPhase == null) {
+if (collector.iterator() != null) {
 
 Review comment:
   @jimczi Thanks for the initial review.   
   
   >  I wonder how this would look like if we build a conjunction from the main 
query and the collector iterator directly when building the weight (in 
IndexSearcher) 
   
   I could not think of any clever way  to do this in `IndexSearcher`,  I would 
appreciate your help if you can suggest any such way. I just redesigned 
`DefaultBulkScorer` to use a conjunction of a scorer's and collector's 
iterators.
   
   I looked at classes that override `BulkScorer` and many of them still refer 
to a default `BulkScorer`, and for those that don't such as `BooleanScorer` I 
found its logic to be too complex to understand and for me to combine with  a 
collector's iterator. 
   





[jira] [Created] (SOLR-14349) Upgrade to slf4j 2x when available and encourage using lambdas for expensive methods called in log statements.

2020-03-18 Thread Erick Erickson (Jira)
Erick Erickson created SOLR-14349:
-

 Summary: Upgrade to slf4j 2x when available and encourage using 
lambdas for expensive methods called in log statements.
 Key: SOLR-14349
 URL: https://issues.apache.org/jira/browse/SOLR-14349
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson


Log4j2 already supports lazily-evaluated lambda expressions, but slf4j won't 
support them until slf4j-2.0.0, see: 
https://jira.qos.ch/browse/SLF4J-371 
(thanks M. Drob for finding this!)

Why it matters:

A line like *log.trace("stuff {}", object.someMethod())* will execute 
object.someMethod() regardless of the log level, leading to unnecessary work. 
One example is SOLR-12353.

Uwe Schindler pointed out that in the log4j2 context, constructs like: 
*log.info("stuff {}", () -> object.someMethod(param))* are evaluated lazily, 
i.e. the *object.someMethod(param)* method is called if (and only if) the log 
level includes "INFO".

Until slf4j 2.0 comes out, we'll handle these expensive calls on a case-by-case 
basis. The simplest way is to wrap expensive calls in an if clause, e.g. 
*if (log.isTraceEnabled()) { log.trace(...) }*. Ugly, but it works. Once we can 
upgrade to slf4j 2.0, we should encourage using lambdas for any expensive method 
call that's part of a log message.
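The eager-vs-lazy difference can be sketched without any logging framework 
(`Supplier` stands in for the lambda support slf4j 2.0 would add; all names 
below are illustrative, not slf4j's API):

```java
import java.util.function.Supplier;

public class LazyLogDemo {
    static boolean traceEnabled = false;   // stand-in for the logger's level check
    static int expensiveCalls = 0;

    // Stand-in for the expensive object.someMethod() from the issue description
    static String expensive() {
        expensiveCalls++;
        return "details";
    }

    // Eager: the argument is evaluated by the caller before the level is checked
    static void traceEager(String msg, String arg) {
        if (traceEnabled) System.out.println(msg + arg);
    }

    // Lazy (log4j2-style): the Supplier is invoked only if the level is enabled
    static void traceLazy(String msg, Supplier<String> arg) {
        if (traceEnabled) System.out.println(msg + arg.get());
    }

    public static void main(String[] args) {
        traceEager("stuff ", expensive());      // expensive() runs even with TRACE off
        traceLazy("stuff ", () -> expensive()); // expensive() is skipped
        System.out.println(expensiveCalls);     // prints 1: the lazy form did no work
    }
}
```

The `if (log.isTraceEnabled())` guard achieves the same effect as the lazy form, just with more boilerplate at every call site.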



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9170) wagon-ssh Maven HTTPS issue

2020-03-18 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062095#comment-17062095
 ] 

Mike Drob commented on LUCENE-9170:
---

I've run into this as well on master. Instead of hardcoding the https path, we 
can reuse

{{url="${ivy_bootstrap_url1}"}}

> wagon-ssh Maven HTTPS issue
> ---
>
> Key: LUCENE-9170
> URL: https://issues.apache.org/jira/browse/LUCENE-9170
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.5
>
> Attachments: LUCENE-9170.patch, LUCENE-9170.patch
>
>
> When I do, from lucene/ in branch_8_4:
> ant -Dversion=8.4.2 generate-maven-artifacts 
> I see that wagon-ssh is being resolved from http://repo1.maven.org/maven2 
> instead of https equivalent. This is surprising to me, since I can't find the 
> http URL anywhere.
> Here's my log:
> https://paste.centos.org/view/be2d3f3f
> This is a critical issue since releases won't work without this.






[GitHub] [lucene-solr] jimczi commented on a change in pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents

2020-03-18 Thread GitBox
jimczi commented on a change in pull request #1351: LUCENE-9280: Collectors to 
skip noncompetitive documents
URL: https://github.com/apache/lucene-solr/pull/1351#discussion_r394677039
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/Weight.java
 ##
 @@ -201,20 +201,54 @@ public long cost() {
 @Override
 public int score(LeafCollector collector, Bits acceptDocs, int min, int 
max) throws IOException {
   collector.setScorer(scorer);
+  DocIdSetIterator scorerIterator = twoPhase == null? iterator: 
twoPhase.approximation();
+  DocIdSetIterator combinedIterator = collector.iterator() == null ? 
scorerIterator: combineScorerAndCollectorIterators(scorerIterator, collector);
   if (scorer.docID() == -1 && min == 0 && max == 
DocIdSetIterator.NO_MORE_DOCS) {
-scoreAll(collector, iterator, twoPhase, acceptDocs);
+scoreAll(collector, combinedIterator, twoPhase, acceptDocs);
 return DocIdSetIterator.NO_MORE_DOCS;
   } else {
 int doc = scorer.docID();
-if (doc < min) {
-  if (twoPhase == null) {
-doc = iterator.advance(min);
-  } else {
-doc = twoPhase.approximation().advance(min);
+if (doc < min) scorerIterator.advance(min);
 
 Review comment:
   ```suggestion
   if (doc < min) {
 doc = combinedIterator.advance(min);
   }
   ```
   ?





[GitHub] [lucene-solr] jimczi commented on a change in pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents

2020-03-18 Thread GitBox
jimczi commented on a change in pull request #1351: LUCENE-9280: Collectors to 
skip noncompetitive documents
URL: https://github.com/apache/lucene-solr/pull/1351#discussion_r394671963
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/Weight.java
 ##
 @@ -201,20 +201,54 @@ public long cost() {
 @Override
 public int score(LeafCollector collector, Bits acceptDocs, int min, int 
max) throws IOException {
   collector.setScorer(scorer);
+  DocIdSetIterator scorerIterator = twoPhase == null? iterator: 
twoPhase.approximation();
+  DocIdSetIterator combinedIterator = collector.iterator() == null ? 
scorerIterator: combineScorerAndCollectorIterators(scorerIterator, collector);
   if (scorer.docID() == -1 && min == 0 && max == 
DocIdSetIterator.NO_MORE_DOCS) {
-scoreAll(collector, iterator, twoPhase, acceptDocs);
+scoreAll(collector, combinedIterator, twoPhase, acceptDocs);
 return DocIdSetIterator.NO_MORE_DOCS;
   } else {
 int doc = scorer.docID();
-if (doc < min) {
-  if (twoPhase == null) {
-doc = iterator.advance(min);
-  } else {
-doc = twoPhase.approximation().advance(min);
+if (doc < min) scorerIterator.advance(min);
+return scoreRange(collector, combinedIterator, twoPhase, acceptDocs, 
doc, max);
+  }
+}
+
+// conjunction iterator between scorer's iterator and collector's iterator
 
 Review comment:
   you can replace this with `ConjunctionDISI#intersectIterators` ?
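   Lucene's `ConjunctionDISI` implements a leapfrog intersection over sorted doc-id iterators. As a rough stand-alone illustration of that technique (the `DocIds` class below is a toy stand-in, not Lucene's `DocIdSetIterator` API), the core advance-to-agreement loop looks like:

   ```java
   public class ConjunctionDemo {
       static final int NO_MORE_DOCS = Integer.MAX_VALUE;

       // Minimal doc-id iterator over a sorted array (toy stand-in for DocIdSetIterator)
       static class DocIds {
           final int[] docs; int i = -1;
           DocIds(int... docs) { this.docs = docs; }
           int docID() { return i < 0 ? -1 : i < docs.length ? docs[i] : NO_MORE_DOCS; }
           int nextDoc() { i++; return docID(); }
           int advance(int target) {        // move to the first doc >= target
               while (docID() < target) nextDoc();
               return docID();
           }
       }

       // Leapfrog: advance the lagging iterator until both agree on a doc
       static int nextMatch(DocIds a, DocIds b, int target) {
           int doc = a.advance(target);
           while (doc != NO_MORE_DOCS) {
               int other = b.advance(doc);
               if (other == doc) return doc;    // both iterators agree: a match
               doc = a.advance(other);          // otherwise catch up and retry
           }
           return NO_MORE_DOCS;
       }

       public static void main(String[] args) {
           DocIds scorer = new DocIds(1, 3, 5, 8, 13);
           DocIds collector = new DocIds(2, 3, 8, 21);
           StringBuilder hits = new StringBuilder();
           for (int d = nextMatch(scorer, collector, 0); d != NO_MORE_DOCS;
                d = nextMatch(scorer, collector, d + 1)) {
               hits.append(d).append(' ');
           }
           System.out.println(hits.toString().trim()); // prints "3 8"
       }
   }
   ```

   Reusing the existing conjunction utility instead of hand-rolling this loop avoids subtle bugs in the catch-up logic.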





[GitHub] [lucene-solr] jimczi commented on a change in pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents

2020-03-18 Thread GitBox
jimczi commented on a change in pull request #1351: LUCENE-9280: Collectors to 
skip noncompetitive documents
URL: https://github.com/apache/lucene-solr/pull/1351#discussion_r394676728
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/Weight.java
 ##
 @@ -223,44 +257,23 @@ public int score(LeafCollector collector, Bits 
acceptDocs, int min, int max) thr
  *  See https://issues.apache.org/jira/browse/LUCENE-5487";>LUCENE-5487 */
 static int scoreRange(LeafCollector collector, DocIdSetIterator iterator, 
TwoPhaseIterator twoPhase,
 Bits acceptDocs, int currentDoc, int end) throws IOException {
-  if (twoPhase == null) {
-while (currentDoc < end) {
-  if (acceptDocs == null || acceptDocs.get(currentDoc)) {
-collector.collect(currentDoc);
-  }
-  currentDoc = iterator.nextDoc();
-}
-return currentDoc;
-  } else {
-final DocIdSetIterator approximation = twoPhase.approximation();
-while (currentDoc < end) {
-  if ((acceptDocs == null || acceptDocs.get(currentDoc)) && 
twoPhase.matches()) {
-collector.collect(currentDoc);
-  }
-  currentDoc = approximation.nextDoc();
+  while (currentDoc < end) {
+if ((acceptDocs == null || acceptDocs.get(currentDoc)) && (twoPhase == 
null || twoPhase.matches())) {
+  collector.collect(currentDoc);
 }
-return currentDoc;
+currentDoc = iterator.nextDoc();
   }
+  return currentDoc;
 
 Review comment:
   this change is not required ? I see hotspot in the javadoc comment above so  
we shouldn't touch it if it's not  required ;).





[GitHub] [lucene-solr] jimczi commented on a change in pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents

2020-03-18 Thread GitBox
jimczi commented on a change in pull request #1351: LUCENE-9280: Collectors to 
skip noncompetitive documents
URL: https://github.com/apache/lucene-solr/pull/1351#discussion_r394683505
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/FieldComparator.java
 ##
 @@ -928,4 +928,9 @@ public int compareTop(int doc) throws IOException {
 @Override
 public void setScorer(Scorable scorer) {}
   }
+
+  public static abstract class IteratorSupplierComparator extends 
FieldComparator implements LeafFieldComparator {
+abstract DocIdSetIterator iterator();
+abstract void updateIterator() throws IOException;
 
 Review comment:
   For 1. we could set the totalHitsRelation when we reach the total hits 
threshold in the TOP_DOCS mode ? 
   For 2. I wonder if we could pass the hitsThresholdChecker to the 
LeafFieldComparator like we do for the scorer ?
   This way we can update the iterator internally when a new bottom is set or 
when `compareBottom` is called ?





[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1359: SOLR-13101: Make dir hash computation optional and resilient

2020-03-18 Thread GitBox
mbwaheed commented on a change in pull request #1359: SOLR-13101: Make dir hash 
computation optional and resilient
URL: https://github.com/apache/lucene-solr/pull/1359#discussion_r394707461
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/store/blob/metadata/ServerSideMetadata.java
 ##
 @@ -229,13 +232,18 @@ public String getDirectoryHash() {
   /**
* Returns true if the contents of the directory passed into 
this method is identical to the contents of
* the directory of the Solr core of this instance, taken at instance 
creation time. If the directory hash was not 
-   * computed at the instance creation time, then this returns 
false
+   * computed at the instance creation time, then we throw an 
IllegalStateException indicating a programming error.
*
* Passing in the Directory (expected to be the directory of the same core 
used during construction) because it seems
* safer than trying to get it again here...
+   * 
+   * @throw IllegalStateException is this instance was not created with a 
computed directoryHash 
 
 Review comment:
   typo is=if?





[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1359: SOLR-13101: Make dir hash computation optional and resilient

2020-03-18 Thread GitBox
mbwaheed commented on a change in pull request #1359: SOLR-13101: Make dir hash 
computation optional and resilient
URL: https://github.com/apache/lucene-solr/pull/1359#discussion_r394705034
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/store/blob/metadata/ServerSideMetadata.java
 ##
 @@ -212,9 +212,12 @@ public long getGeneration() {
   }
 
   /**
-   * @return Null if the directory hash was not computed for the given Core 
+   * @throw IllegalStateException is this instance was not created with a 
computed directoryHash 
 
 Review comment:
   typo: is=if?





[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1359: SOLR-13101: Make dir hash computation optional and resilient

2020-03-18 Thread GitBox
mbwaheed commented on a change in pull request #1359: SOLR-13101: Make dir hash 
computation optional and resilient
URL: https://github.com/apache/lucene-solr/pull/1359#discussion_r394707206
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/store/blob/metadata/ServerSideMetadata.java
 ##
 @@ -212,9 +212,12 @@ public long getGeneration() {
   }
 
   /**
-   * @return Null if the directory hash was not computed for the given Core 
+   * @throw IllegalStateException is this instance was not created with a 
computed directoryHash 
 
 Review comment:
   Actually I don't see this method being used any where. It'll be better to 
delete the method.





[GitHub] [lucene-solr] janhoy commented on issue #1362: SOLR-13768 Remove requiresub parameter in JWTAuthPlugin

2020-03-18 Thread GitBox
janhoy commented on issue #1362: SOLR-13768 Remove requiresub parameter in 
JWTAuthPlugin
URL: https://github.com/apache/lucene-solr/pull/1362#issuecomment-600921994
 
 
   Thanks for the contribution. I think your implementation is correct, and 
should be aimed at v9, i.e. master branch only.
   
   Please add a line to CHANGES.txt about the change, as well as a line in 
reference guide chapter `major-changes-in-solr-9.adoc` so those upgrading can 
prepare to no longer use the parameter.

