[jira] [Comment Edited] (SOLR-13765) Deadlock on Solr cloud request causing 'Too many open files' error

2020-03-13 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17053061#comment-17053061
 ] 

Munendra S N edited comment on SOLR-13765 at 3/13/20, 8:28 AM:
---

This issue is resolved in SOLR-13793 by limiting the number of forwarding 
requests to the total number of replicas. 


{quote}
Not sure what the purpose of finding an inactive node to handle request in 
HttpSolrCall.getRemoteCoreUrl but taking that out seems to fix the problem
{quote}
Context for this change is provided in SOLR-4553 and SOLR-13793
[~ichattopadhyaya] [~vincewu] should this issue be resolved?
This fix is also backported to the 7_7 branch, so if there is a 7.7.3 release 
it will contain the fix.
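For illustration, a minimal sketch of the SOLR-13793-style guard described above: cap the number of times a request may be forwarded at the total replica count, so two down replicas cannot bounce a request back and forth forever. All names here are invented for the sketch and are not Solr's actual code.

```java
import java.util.List;

public class ForwardGuardSketch {
    // In real Solr the hop count would travel with the request (for example
    // as an internal request parameter); here it is just an argument.
    static boolean mayForward(int hopsSoFar, List<String> replicaUrls) {
        // Allow at most one forward per replica: once every replica has been
        // tried, no remaining node can serve the request, so fail fast
        // instead of looping (and leaking sockets) forever.
        return hopsSoFar < replicaUrls.size();
    }

    public static void main(String[] args) {
        List<String> replicas = List.of("http://node1/solr", "http://node2/solr");
        System.out.println(mayForward(0, replicas)); // true: first forward is allowed
        System.out.println(mayForward(2, replicas)); // false: the loop is broken here
    }
}
```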


was (Author: munendrasn):
This issue is resolved in SOLR-13793 by limiting the number of forwarding 
request to total number of replicas. 


{code:java}
Not sure what the purpose of finding an inactive node to handle request in 
HttpSolrCall.getRemoteCoreUrl but taking that out seems to fix the problem
Context for this change is provided in SOLR-4553 and SOLR-13793
[~ichattopadhyaya] [~vincewu] should this issue be resolved?
This fix is also backported to 7_7. So, if there is 7.7.3 it will contain the 
fix

> Deadlock on Solr cloud request causing 'Too many open files' error
> --
>
> Key: SOLR-13765
> URL: https://issues.apache.org/jira/browse/SOLR-13765
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.7.2
>Reporter: Lei Wu
>Priority: Major
>
> Hi there,
> We are seeing an issue about a deadlock on Solr cloud requests. 
> Say we have a collection with one shard and two replicas for that shard. For 
> whatever reason the cluster appears to be active but each individual replica 
> is down. When a request comes in, Solr (replica 1) tries to find a remote 
> node (replica 2) to handle the request since the local core (replica 1) is 
> down, and when the other node (replica 2) receives the request it does the 
> same, forwarding the request back to the original node (replica 1). This 
> causes a deadlock and eventually uses up all the sockets, causing 
> 'Too many open files'.
> Not sure what the purpose of finding an inactive node to handle the request in 
> HttpSolrCall.getRemoteCoreUrl is, but taking that out seems to fix the problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] micpalmia commented on issue #1343: LUCENE-8103: Use two-phase iteration in Query- and DoubleValuesSource

2020-03-13 Thread GitBox
micpalmia commented on issue #1343: LUCENE-8103: Use two-phase iteration in 
Query- and DoubleValuesSource
URL: https://github.com/apache/lucene-solr/pull/1343#issuecomment-598668787
 
 
   Thank you for your time and help! Fwiw, I appreciate it a lot. I'll be to 
Buzzwords too, hoping it's actually gonna happen.
   
   `QueryValueSource.exists(int doc)` is probably looking more complicated than 
it could because it assumes docs can be requested out of order and because it 
keeps state on the availability of a scorer. I tried to make it tidier, but 
it's purely aesthetical.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] micpalmia edited a comment on issue #1343: LUCENE-8103: Use two-phase iteration in Query- and DoubleValuesSource

2020-03-13 Thread GitBox
micpalmia edited a comment on issue #1343: LUCENE-8103: Use two-phase iteration 
in Query- and DoubleValuesSource
URL: https://github.com/apache/lucene-solr/pull/1343#issuecomment-598668787
 
 
   Thank you for your time and help! Fwiw, I appreciate it a lot. I'll be at 
Buzzwords too, hoping it's actually gonna happen.
   
   `QueryValueSource.exists(int doc)` is probably looking more complicated than 
it could because it assumes docs can be requested out of order and because it 
keeps state on the availability of a scorer. I tried to make it tidier, but 
it's purely aesthetical.





[jira] [Resolved] (SOLR-14314) Solr does not response most of the update request some times

2020-03-13 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-14314.

Resolution: Invalid

Closing. Please take further questions to the 
[solr-u...@lucene.apache.org|mailto:solr-u...@lucene.apache.org] mailing list.

> Solr does not response most of the update request some times
> 
>
> Key: SOLR-14314
> URL: https://issues.apache.org/jira/browse/SOLR-14314
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Aaron Sun
>Priority: Critical
> Attachments: jstack_bad_state.log, solrlog.tar.gz, solrlog.tar.gz
>
>
> Solr version:
> {noformat}
> solr-spec
> 8.4.1
> solr-impl
> 8.4.1 832bf13dd9187095831caf69783179d41059d013 - ishan - 2020-01-10 13:40:28
> lucene-spec
> 8.4.1
> lucene-impl
> 8.4.1 832bf13dd9187095831caf69783179d41059d013 - ishan - 2020-01-10 13:35:00
> {noformat}
>  
> Java process:
> {noformat}
> java -Xms100G -Xmx200G -DSTOP.PORT=8078 -DSTOP.KEY=ardsolrstop 
> -Dsolr.solr.home=/ardome/solr -Djetty.port=8983 
> -Dsolr.log.dir=/var/ardendo/log -jar start.jar --module=http
> {noformat}
> Run on a powerful server with 32 cores, 265GB RAM.
> The problem is that from time to time it starts to get very slow to update 
> Solr documents, for example timing out after 30 minutes.
> Document size is around 20K~50K each; each HTTP request sent to /update is 
> around 4MB~10MB.
> The /update requests are issued by multiple processes.
> Some of the updates get a response, but the difference between "QTime" and 
> HTTP response time is big; in one example, QTime = 66s while the HTTP 
> response time is 2304s.
> According to jstack, many threads are in the BLOCKED state.
> The thread dump log is attached.
> Any hint would be appreciated, thanks!






[jira] [Updated] (SOLR-13264) IndexSizeTrigger aboveOp / belowOp properties not in valid properties

2020-03-13 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki updated SOLR-13264:

Summary: IndexSizeTrigger aboveOp / belowOp properties not in valid 
properties  (was: unexpected autoscaling set-trigger response)

> IndexSizeTrigger aboveOp / belowOp properties not in valid properties
> -
>
> Key: SOLR-13264
> URL: https://issues.apache.org/jira/browse/SOLR-13264
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Reporter: Christine Poerschke
>Assignee: Andrzej Bialecki
>Priority: Minor
> Attachments: SOLR-13264.patch, SOLR-13264.patch
>
>
> Steps to reproduce:
> {code}
> ./bin/solr start -cloud -noprompt
> ./bin/solr create -c demo -d _default -shards 1 -replicationFactor 1
> curl "http://localhost:8983/solr/admin/autoscaling" -d'
> {
>   "set-trigger" : {
> "name" : "index_size_trigger",
> "event" : "indexSize",
> "aboveDocs" : 12345,
> "aboveOp" : "SPLITSHARD",
> "enabled" : true,
> "actions" : [
>   {
> "name" : "compute_plan",
> "class": "solr.ComputePlanAction"
>   }
> ]
>   }
> }
> '
> ./bin/solr stop -all
> {code}
> The {{aboveOp}} is documented on 
> https://lucene.apache.org/solr/guide/7_6/solrcloud-autoscaling-triggers.html#index-size-trigger
>  and logically should be accepted (even though it is actually the default) 
> but unexpectedly an error message is returned {{"Error validating trigger 
> config index_size_trigger: 
> TriggerValidationException\{name=index_size_trigger, 
> details='\{aboveOp=unknown property\}'\}"}}.
> From a quick look it seems that in the {{IndexSizeTrigger}} constructor 
> additional values need to be passed to the {{TriggerUtils.validProperties}} 
> method, i.e. aboveOp, belowOp, and maybe others too, e.g. 
> aboveSize/belowSize/etc. Illustrative patch to follow. Thank you.
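As a toy illustration of the validation failure described above (invented names, not Solr's actual TriggerUtils API): a trigger that validates config keys against a whitelist rejects aboveOp until that key is registered as valid.

```java
import java.util.Map;
import java.util.Set;

public class TriggerValidationSketch {
    // Keys the trigger accepts. The reported bug is the moral equivalent of
    // "aboveOp"/"belowOp" missing from this set.
    static final Set<String> VALID = Set.of(
        "name", "event", "enabled", "actions",
        "aboveDocs", "belowDocs", "aboveOp", "belowOp");

    // Returns null when the config is valid, otherwise a message naming the
    // offending key, mimicking the TriggerValidationException details.
    static String validate(Map<String, Object> config) {
        for (String key : config.keySet()) {
            if (!VALID.contains(key)) {
                return key + "=unknown property";
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // With aboveOp registered in VALID, this config passes validation.
        System.out.println(validate(Map.of("aboveDocs", 12345, "aboveOp", "SPLITSHARD")));
        // prints null
    }
}
```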






[jira] [Commented] (SOLR-7893) Document ZooKeeper SSL support

2020-03-13 Thread Ryan Rockenbaugh (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058716#comment-17058716
 ] 

Ryan Rockenbaugh commented on SOLR-7893:


Also a related Zookeeper SSL issue for Solr:  SOLR-14027 Zookeeper Status Page 
does not work with SSL Enabled Zookeeper

> Document ZooKeeper SSL support
> --
>
> Key: SOLR-7893
> URL: https://issues.apache.org/jira/browse/SOLR-7893
> Project: Solr
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: ssl, zookeeper
>
> Once ZooKeeper supports SSL properly, Solr should start using it for all 
> communication. See comments in 
> https://cwiki.apache.org/confluence/display/solr/Enabling+SSL
> {quote}
> ZooKeeper does not support encrypted communication with clients like Solr.  
> There are several related JIRA tickets where SSL support is being 
> planned/worked on: ZOOKEEPER-235; ZOOKEEPER-236; ZOOKEEPER-733; and  
> ZOOKEEPER-1000.
> {quote}






[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1343: LUCENE-8103: Use two-phase iteration in Query- and DoubleValuesSource

2020-03-13 Thread GitBox
dsmiley commented on a change in pull request #1343: LUCENE-8103: Use two-phase 
iteration in Query- and DoubleValuesSource
URL: https://github.com/apache/lucene-solr/pull/1343#discussion_r39264
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/search/DoubleValuesSource.java
 ##
 @@ -604,18 +604,26 @@ public DoubleValues getValues(LeafReaderContext ctx, 
DoubleValues scores) throws
   Scorer scorer = weight.scorer(ctx);
   if (scorer == null)
 return DoubleValues.EMPTY;
-  DocIdSetIterator it = scorer.iterator();
+
   return new DoubleValues() {
+private final TwoPhaseIterator tpi = scorer.twoPhaseIterator();
+private final DocIdSetIterator disi = (tpi == null) ? 
scorer.iterator() : tpi.approximation();
+
+private int scorerDoc = -1;
+private boolean thisDocMatches = false;
+
 @Override
 public double doubleValue() throws IOException {
-  return scorer.score();
+  return thisDocMatches ? scorer.score() : Double.NaN;
 }
 
 @Override
 public boolean advanceExact(int doc) throws IOException {
-  if (it.docID() > doc)
-return false;
-  return it.docID() == doc || it.advance(doc) == doc;
+  if (scorerDoc < doc) {
+scorerDoc = disi.advance(doc);
+thisDocMatches = tpi==null || tpi.matches();
 
 Review comment:
   If scorerDoc is > "this" doc (the doc param), then we should not even compute 
tpi.matches(); it's potentially expensive wasted computation.  Maybe if 
thisDocMatches is an Object Boolean, we can differentiate an unknown state.
   
   Perhaps
   ```
   if (scorerDoc < doc) {
     scorerDoc = disi.advance(doc);
     thisDocMatches = null;
   }
   if (scorerDoc == doc) {
     if (thisDocMatches == null) {
       thisDocMatches = tpi == null || tpi.matches();
     }
     return thisDocMatches;
   } else {
     return thisDocMatches = false;
   }
   ```
   
   And I don't think we even need scorerDoc to be a field, since it's 
disi.docID(); a local variable is nice.
   Hmm; if I'm not mistaken, the very first check (if scorerDoc < doc) can be 
replaced with an assertion, since otherwise the caller is violating the API 
contract.
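   For readers outside the PR, here is a self-contained toy version of the pattern under discussion (an invented class, not Lucene's TwoPhaseIterator API): the cheap approximation advances first, and the potentially expensive matches() check runs lazily, only once the approximation has landed on the requested doc, with a nullable Boolean caching the "unknown" state as suggested above.

```java
import java.util.function.IntPredicate;

public class TwoPhaseSketch {
    private final int[] approxDocs;      // sorted docIDs the cheap approximation visits
    private final IntPredicate matches;  // the potentially expensive confirmation
    private int idx = -1;
    private int doc = -1;
    private Boolean docMatches;          // null = not yet computed for this doc

    TwoPhaseSketch(int[] approxDocs, IntPredicate matches) {
        this.approxDocs = approxDocs;
        this.matches = matches;
    }

    // Advance the approximation to the first doc >= target (cheap phase).
    int advance(int target) {
        while (++idx < approxDocs.length && approxDocs[idx] < target) {
            // skip candidates below the target
        }
        doc = idx < approxDocs.length ? approxDocs[idx] : Integer.MAX_VALUE;
        docMatches = null; // match status unknown until matches() is consulted
        return doc;
    }

    // True only if `target` is in the approximation AND the expensive check
    // confirms it; matches() is never called when the docs don't align.
    boolean advanceExact(int target) {
        if (doc < target) {
            advance(target);
        }
        if (doc != target) {
            return false;
        }
        if (docMatches == null) {
            docMatches = matches.test(doc); // lazy, computed at most once per doc
        }
        return docMatches;
    }

    public static void main(String[] args) {
        // Approximation visits docs 2, 5, 9; only odd docs truly match.
        TwoPhaseSketch values = new TwoPhaseSketch(new int[] {2, 5, 9}, d -> d % 2 == 1);
        System.out.println(values.advanceExact(5)); // true: in approximation and odd
        System.out.println(values.advanceExact(6)); // false: approximation is past 6
    }
}
```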





[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1343: LUCENE-8103: Use two-phase iteration in Query- and DoubleValuesSource

2020-03-13 Thread GitBox
dsmiley commented on a change in pull request #1343: LUCENE-8103: Use two-phase 
iteration in Query- and DoubleValuesSource
URL: https://github.com/apache/lucene-solr/pull/1343#discussion_r392228458
 
 

 ##
 File path: 
lucene/queries/src/java/org/apache/lucene/queries/function/valuesource/QueryValueSource.java
 ##
 @@ -156,42 +136,35 @@ public float floatVal(int doc) {
   @Override
   public boolean exists(int doc) {
 try {
+  if (noMatches) return false;
   if (doc < lastDocRequested) {
-if (noMatches) return false;
 scorer = weight.scorer(readerContext);
-scorerDoc = -1;
 if (scorer==null) {
   noMatches = true;
   return false;
 }
-it = scorer.iterator();
+tpi = scorer.twoPhaseIterator();
+it = tpi==null ? scorer.iterator() : tpi.approximation();
+scorerDoc = -1;
+thisDocMatches = false;
   }
   lastDocRequested = doc;
 
   if (scorerDoc < doc) {
 scorerDoc = it.advance(doc);
-  }
-
-  if (scorerDoc > doc) {
-// query doesn't match this document... either because we hit the
-// end, or because the next doc is after this doc.
-return false;
+thisDocMatches = tpi == null || tpi.matches();
 
 Review comment:
   Again, let's not compute thisDocMatches if scorerDoc doesn't even align.





[jira] [Created] (SOLR-14325) Core status could be improved to not require an IndexSearcher

2020-03-13 Thread David Smiley (Jira)
David Smiley created SOLR-14325:
---

 Summary: Core status could be improved to not require an 
IndexSearcher
 Key: SOLR-14325
 URL: https://issues.apache.org/jira/browse/SOLR-14325
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


When the core status is told to request "indexInfo", it currently grabs the 
SolrIndexSearcher, but only to get the Directory.  SolrCore.getIndexSize also 
only requires the Directory.  By insisting on a SolrIndexSearcher, we 
potentially block for a while if the core is in recovery, since there is no 
SolrIndexSearcher.

[https://lists.apache.org/thread.html/r076218c964e9bd6ed0a53133be9170c3cf36cc874c1b4652120db417%40%3Cdev.lucene.apache.org%3E]

It'd be nice to have a solution that conditionally uses the Directory of the 
SolrIndexSearcher only if one is present, so that we don't waste time creating 
one either.






[jira] [Reopened] (SOLR-13502) Investigate using something other than ZooKeeper's "4 letter words" for the admin UI status

2020-03-13 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reopened SOLR-13502:

  Assignee: (was: Erick Erickson)

I'm re-opening this, as I believe it is something that should be done, but it 
is not very urgent until people start migrating their ZKs to the NIO transport 
with SSL and disabling the old ports with 4lw. AdminServer is probably going to 
see more improvement, and perhaps support in the ZK Java client, so hopefully 
implementing support for it will be easier in a future version of ZK.

> Investigate using something other than ZooKeeper's "4 letter words" for the 
> admin UI status
> ---
>
> Key: SOLR-13502
> URL: https://issues.apache.org/jira/browse/SOLR-13502
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Priority: Major
>
> ZooKeeper 3.5.5 requires a whitelist of allowed "4 letter words". The only 
> place I see on a quick look at the Solr code where 4lws are used is in the 
> admin UI "ZK Status" link.
> In order to use the admin UI "ZK Status" link, users will have to modify 
> their zoo.cfg file with
> {code}
> 4lw.commands.whitelist=mntr,conf,ruok
> {code}
> This JIRA is to see if there are alternatives to using 4lw for the admin UI.
> This depends on SOLR-8346. If we find an alternative, we need to remove the 
> additions to the ref guide that mention changing zoo.cfg (just scan for 4lw 
> in all the .adoc files) and remove SolrZkServer.ZK_WHITELIST_PROPERTY and all 
> references to it (SolrZkServer and SolrTestCaseJ4).






[GitHub] [lucene-solr] madrob commented on issue #1298: SOLR-14289 Skip ZkChroot check when not necessary

2020-03-13 Thread GitBox
madrob commented on issue #1298: SOLR-14289 Skip ZkChroot check when not 
necessary
URL: https://github.com/apache/lucene-solr/pull/1298#issuecomment-598756692
 
 
   Switched to a boolean, and rebased to account for SOLR-14197 changes





[jira] [Created] (SOLR-14326) Number of tlog replicas off by one when restoring collections

2020-03-13 Thread John Wooden (Jira)
John Wooden created SOLR-14326:
--

 Summary: Number of tlog replicas off by one when restoring 
collections
 Key: SOLR-14326
 URL: https://issues.apache.org/jira/browse/SOLR-14326
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Backup/Restore
Affects Versions: 8.0, 7.7.2
Reporter: John Wooden


When making a request to restore a collection, the quantity of tlog replicas 
will always be off by one when restoring a collection that doesn't contain nrt 
replicas or when specifying the quantity of replicas in the request itself.
{quote}/admin/collections?action=RESTORE&name=&location=&collection=&tlogReplicas=1&pullReplicas=1
{quote}
Despite the backup AND/OR the request specifying 1 Tlog & 1 Pull replica, this 
request will create 2 Tlog replicas. On a 2-node cluster with 
maxShardsPerNode=1, the 1 pull replica is never created due to the excess tlog 
replica meeting the maxShardsPerNode limit.

 

This is due to a flawed comparison: an int counter for created tlog replicas 
is checked for being greater than zero; however, since that variable was 
initialized to 0 just prior, it will never be greater than zero. 
The fix is to compare the _desired_ number of tlog replicas (as is done for 
NRT) rather than the counter.

 
{code:java}
int createdNrtReplicas = 0, createdTlogReplicas = 0, createdPullReplicas = 0;

// We already created either a NRT or an TLOG replica as leader
if (numNrtReplicas > 0) {
  createdNrtReplicas++;
} else if (createdTlogReplicas > 0) {  // flawed: compares the just-zeroed counter
  createdTlogReplicas++;
}
{code}
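A minimal, hedged sketch of the fix direction (variable names follow the snippet above, with the surrounding restore logic simplified away): compare the desired tlog count, not the freshly-zeroed counter, when deciding whether the leader already accounts for a tlog replica.

```java
public class RestoreReplicaCountSketch {
    // Returns how many ADDITIONAL tlog replicas the restore should create,
    // given the desired counts. The leader already covers one replica.
    static int additionalTlogReplicas(int numNrtReplicas, int numTlogReplicas) {
        int createdNrtReplicas = 0, createdTlogReplicas = 0;

        // We already created either an NRT or a TLOG replica as leader.
        if (numNrtReplicas > 0) {
            createdNrtReplicas++;
        } else if (numTlogReplicas > 0) { // fixed: was createdTlogReplicas > 0
            createdTlogReplicas++;
        }
        return numTlogReplicas - createdTlogReplicas;
    }

    public static void main(String[] args) {
        // No NRT replicas, 1 tlog replica requested: the leader IS the tlog
        // replica, so no extra one should be created (the bug created one more).
        System.out.println(additionalTlogReplicas(0, 1)); // prints 0
    }
}
```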





[jira] [Commented] (SOLR-14284) Document that you can add a new stream function via add-expressible

2020-03-13 Thread David Eric Pugh (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058810#comment-17058810
 ] 

David Eric Pugh commented on SOLR-14284:


[~ctargett] any thoughts on adding this now? I was hoping to see this show up 
in 8.5, though maybe I missed my window?

> Document that you can add a new stream function via add-expressible
> ---
>
> Key: SOLR-14284
> URL: https://issues.apache.org/jira/browse/SOLR-14284
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 8.5
>Reporter: David Eric Pugh
>Priority: Minor
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> I confirmed that in Solr 8.5 you will be able to dynamically add a Stream 
> function (assuming the Jar is in the path) via the configset api:
> curl -X POST -H 'Content-type:application/json'  -d '{
>   "add-expressible": {
> "name": "dog",
> "class": "org.apache.solr.handler.CatStream"
>   }
> }' http://localhost:8983/solr/gettingstarted/config






[jira] [Commented] (SOLR-14073) Fix segment look ahead NPE in CollapsingQParserPlugin

2020-03-13 Thread Joel Bernstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058807#comment-17058807
 ] 

Joel Bernstein commented on SOLR-14073:
---

You can control the order of multiple collapses with the "cost" parameter. The 
cost parameter should be set to above 100 in all cases, but lower cost 
collapses will come before higher cost collapses:
{code:java}
fq={!collapse cost=200 field=a}&fq={!collapse cost=300 field=b}  {code}

> Fix segment look ahead NPE in CollapsingQParserPlugin
> -
>
> Key: SOLR-14073
> URL: https://issues.apache.org/jira/browse/SOLR-14073
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 8.5
>
> Attachments: SOLR-14073.patch, SOLR-14073.patch, SOLR-14073.patch
>
>
> The CollapsingQParserPlugin has a bug that if every segment is not visited 
> during the collect it throws an NPE. This causes the CollapsingQParserPlugin 
> to not work when used with any feature that short circuits the segments 
> during the collect. This includes using the CollapsingQParserPlugin twice in 
> the same query and the time limiting collector.






[GitHub] [lucene-solr] CBWallaby opened a new pull request #1348: SOLR-14326 Number of tlog replicas off by one when restoring collections

2020-03-13 Thread GitBox
CBWallaby opened a new pull request #1348: SOLR-14326 Number of tlog replicas 
off by one when restoring collections
URL: https://github.com/apache/lucene-solr/pull/1348
 
 
   https://issues.apache.org/jira/browse/SOLR-14326
   
   When making a request to restore a collection, the quantity of tlog replicas 
will always be off by one when restoring a collection that doesn't contain nrt 
replicas or when specifying the quantity of replicas in the request itself.
   
   This is due to a flawed comparison: an int counter for created tlog replicas 
is checked for being greater than zero; however, since that variable was 
initialized to 0 just prior, it will never be greater than zero. The fix is to 
compare the desired number of tlog replicas (as is done for NRT) rather than 
the counter.





[jira] [Commented] (LUCENE-9276) Consolidate DW(PT)#updateDocument and #updateDocuments

2020-03-13 Thread Michael McCandless (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058815#comment-17058815
 ] 

Michael McCandless commented on LUCENE-9276:


+1, nice simplification!

I tested indexing throughput using {{luceneutil}} against trunk and saw no 
measurable impact.

> Consolidate DW(PT)#updateDocument and #updateDocuments
> --
>
> Key: LUCENE-9276
> URL: https://issues.apache.org/jira/browse/LUCENE-9276
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (9.0), 8.5
>Reporter: Simon Willnauer
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> While I was working on another IW-related issue I made some changes to 
> DW#updateDocument but forgot DW#updateDocuments, which is annoying since the 
> code is 99% identical. The same applies to DWPT#updateDocument[s]. IMO this 
> is the wrong place to optimize in order to save one or two object creations. 
> Maybe we can remove this code duplication.






[jira] [Updated] (SOLR-14073) Fix segment look ahead NPE in CollapsingQParserPlugin

2020-03-13 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-14073:

Attachment: SOLR-14073-expand.patch

> Fix segment look ahead NPE in CollapsingQParserPlugin
> -
>
> Key: SOLR-14073
> URL: https://issues.apache.org/jira/browse/SOLR-14073
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 8.5
>
> Attachments: SOLR-14073-expand.patch, SOLR-14073.patch, 
> SOLR-14073.patch, SOLR-14073.patch
>
>
> The CollapsingQParserPlugin has a bug that if every segment is not visited 
> during the collect it throws an NPE. This causes the CollapsingQParserPlugin 
> to not work when used with any feature that short circuits the segments 
> during the collect. This includes using the CollapsingQParserPlugin twice in 
> the same query and the time limiting collector.






[jira] [Commented] (SOLR-14321) SolrJ with Kerberos docs have removed HttpClientUtil.setConfigurer

2020-03-13 Thread Kevin Risden (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058877#comment-17058877
 ] 

Kevin Risden commented on SOLR-14321:
-

Since this has to update old guide pages, I assume I would make the 
modification in git on the affected branches and then build and push based on 
this guide:

https://github.com/apache/lucene-solr/blob/master/solr/solr-ref-guide/src/meta-docs/publish.adoc

[~ctargett] does this sound correct?

> SolrJ with Kerberos docs have removed HttpClientUtil.setConfigurer 
> ---
>
> Key: SOLR-14321
> URL: https://issues.apache.org/jira/browse/SOLR-14321
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 8.0, 8.1, 8.2, 
> 8.3, 8.4
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
>
> https://lucene.apache.org/solr/guide/8_4/kerberos-authentication-plugin.html#using-solrj-with-a-kerberized-solr
> {code:java}
> HttpClientUtil.setConfigurer(new Krb5HttpClientConfigurer());
> {code}
> This was removed in Solr 7.0.0 with SOLR-4509 
> (https://github.com/apache/lucene-solr/commit/ce172acb8fec6c3bbb18837a4d640da6c5aad649#diff-ab69354d287d536ce35357f6023bafceL104).
>  This is briefly mentioned here: 
> https://lucene.apache.org/solr/7_0_0/changes/Changes.html#v7.0.0.upgrading_from_solr_6.x
> The replacement should be:
> {code:java}
> HttpClientUtil.setHttpClientBuilder(new Krb5HttpClientBuilder().getBuilder());
> {code}
> An example of this is: 
> https://github.com/apache/ranger/blob/master/security-admin/src/main/java/org/apache/ranger/solr/SolrMgr.java#L114






[jira] [Commented] (SOLR-14073) Fix segment look ahead NPE in CollapsingQParserPlugin

2020-03-13 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058883#comment-17058883
 ] 

Munendra S N commented on SOLR-14073:
-

 [^SOLR-14073-expand.patch] 
Cost controls the order of execution, and when it is greater than 100 the 
filter is executed as a postFilter.

My query was more related to how the expand component will behave in case of 
multiple collapse filters. It would pick only one collapse; the question is 
which collapse field would be picked:
https://github.com/apache/lucene-solr/blob/0f10b5f0424d592ed319e1b90d91f28c3a8625a9/solr/core/src/java/org/apache/solr/handler/component/ExpandComponent.java#L147
Based on this, the last collapse field is picked for expand, irrespective of 
the cost. Here, we could make a change such that the field is picked based on 
cost instead of order, or maybe, for trunk, support expand for all collapse 
fields (if possible).
I'm not sure if the above comes under this issue or not; I just wanted to 
share my thoughts.
Thanks for fixing the multiple collapse issue. Also, I'm going to resolve the 
related issues, as they look similar.
Sincere apologies if I wasn't clear in the initial comment.


> Fix segment look ahead NPE in CollapsingQParserPlugin
> -
>
> Key: SOLR-14073
> URL: https://issues.apache.org/jira/browse/SOLR-14073
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 8.5
>
> Attachments: SOLR-14073-expand.patch, SOLR-14073.patch, 
> SOLR-14073.patch, SOLR-14073.patch
>
>
> The CollapsingQParserPlugin has a bug that if every segment is not visited 
> during the collect it throws an NPE. This causes the CollapsingQParserPlugin 
> to not work when used with any feature that short circuits the segments 
> during the collect. This includes using the CollapsingQParserPlugin twice in 
> the same query and the time limiting collector.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-14073) Fix segment look ahead NPE in CollapsingQParserPlugin

2020-03-13 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058883#comment-17058883
 ] 

Munendra S N edited comment on SOLR-14073 at 3/13/20, 4:11 PM:
---

 [^SOLR-14073-expand.patch] (not a complete test: no assertions, as it was put 
together quickly; I ran it in debug mode to check the picked collapse field)
The cost would control the order of execution, and when it is greater than 100 it
would be executed as a postFilter.

My query was more related to how the expand component will behave in case of
multiple collapse filters. It would pick only one collapse. The question is
which collapse field would be picked.
{code:java}
https://github.com/apache/lucene-solr/blob/0f10b5f0424d592ed319e1b90d91f28c3a8625a9/solr/core/src/java/org/apache/solr/handler/component/ExpandComponent.java#L147
{code}
Based on this, the last collapse field is picked for expand, irrespective of the
cost. Here, we could change it so that the field is picked based on cost instead
of order, or maybe, for trunk, support expand for all collapse fields (if
possible).
I'm not sure whether the above falls under this issue or not; I just wanted to
share my thoughts.
Thanks for fixing the multiple-collapse issue. Also, I'm going to resolve the
related issues, as they look similar.
Sincere apologies if I wasn't clear in the initial comment.



was (Author: munendrasn):
 [^SOLR-14073-expand.patch] 
cost would control the order of execution and when it is greater than 100 it 
would  be executed as postFilter

My query was  more related to how expand component will behave in case of 
multiple collapse filters. It  would be pick the only one collapse. The 
question is which collapse field would  be picked
{code:java}
https://github.com/apache/lucene-solr/blob/0f10b5f0424d592ed319e1b90d91f28c3a8625a9/solr/core/src/java/org/apache/solr/handler/component/ExpandComponent.java#L147
{code}
Based on this, the last collapse field is picked for expand irrespective of the 
cost. Here, we can make change such that field is picked up based on cost 
instead of order or maybe for trunk support expand for all collapse fields(if 
possible)
I'm not sure if above things come under this issue or not, just wanted to share 
my  thoughts.
Thanks for fixing multiple collapse issue. Also, I'm going to resolve the 
related issues, as they look similar
Sincere apologies, if I wasn't clear in  the initial comment


> Fix segment look ahead NPE in CollapsingQParserPlugin
> -
>
> Key: SOLR-14073
> URL: https://issues.apache.org/jira/browse/SOLR-14073
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 8.5
>
> Attachments: SOLR-14073-expand.patch, SOLR-14073.patch, 
> SOLR-14073.patch, SOLR-14073.patch
>
>
> The CollapsingQParserPlugin has a bug that if every segment is not visited 
> during the collect it throws an NPE. This causes the CollapsingQParserPlugin 
> to not work when used with any feature that short circuits the segments 
> during the collect. This includes using the CollapsingQParserPlugin twice in 
> the same query and the time limiting collector.






[jira] [Commented] (SOLR-8306) Enhance ExpandComponent to allow expand.hits=0

2020-03-13 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058941#comment-17058941
 ] 

Lucene/Solr QA commented on SOLR-8306:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m  7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m  7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m  7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 17s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.search.CurrencyRangeFacetCloudTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-8306 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12996620/SOLR-8306.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 0f10b5f0424 |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/711/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/711/testReport/ |
| modules | C: solr solr/core U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/711/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Enhance ExpandComponent to allow expand.hits=0
> --
>
> Key: SOLR-8306
> URL: https://issues.apache.org/jira/browse/SOLR-8306
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.3.1
>Reporter: Marshall Sanders
>Priority: Minor
>  Labels: expand
> Fix For: 5.5
>
> Attachments: SOLR-8306.patch, SOLR-8306.patch, SOLR-8306.patch, 
> SOLR-8306_branch_5x@1715230.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This enhancement allows the ExpandComponent to allow expand.hits=0 for those 
> who don't want an expanded document returned and only want the numFound from 
> the expand section.
> This is useful for "See 54 more like this" use cases, but without the 
> performance hit of gathering an entire expanded document.






[GitHub] [lucene-solr] micpalmia commented on a change in pull request #1343: LUCENE-8103: Use two-phase iteration in Query- and DoubleValuesSource

2020-03-13 Thread GitBox
micpalmia commented on a change in pull request #1343: LUCENE-8103: Use 
two-phase iteration in Query- and DoubleValuesSource
URL: https://github.com/apache/lucene-solr/pull/1343#discussion_r392367810
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/DoubleValuesSource.java
 ##
 @@ -604,18 +604,26 @@ public DoubleValues getValues(LeafReaderContext ctx, DoubleValues scores) throws
       Scorer scorer = weight.scorer(ctx);
       if (scorer == null)
         return DoubleValues.EMPTY;
-      DocIdSetIterator it = scorer.iterator();
+
       return new DoubleValues() {
+        private final TwoPhaseIterator tpi = scorer.twoPhaseIterator();
+        private final DocIdSetIterator disi = (tpi == null) ? scorer.iterator() : tpi.approximation();
+
+        private int scorerDoc = -1;
+        private boolean thisDocMatches = false;
+
         @Override
         public double doubleValue() throws IOException {
-          return scorer.score();
+          return thisDocMatches ? scorer.score() : Double.NaN;
         }
 
         @Override
         public boolean advanceExact(int doc) throws IOException {
-          if (it.docID() > doc)
-            return false;
-          return it.docID() == doc || it.advance(doc) == doc;
+          if (scorerDoc < doc) {
+            scorerDoc = disi.advance(doc);
+            thisDocMatches = tpi==null || tpi.matches();
 
 Review comment:
   I've done as you suggested. I'm a bit concerned about the performance 
implications of the unboxing, though: this implementation transforms a Boolean 
into a boolean for every matching document. I can look at running performance 
tests if you think that would be important.
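The two-phase pattern under discussion can be sketched in isolation. This is an illustrative, self-contained toy (the class, the candidate docs, and the match set are hypothetical, not Lucene's actual iterators): an approximation iterator yields candidate docs cheaply, and a second `matches()` check confirms each candidate, exactly the shape `advanceExact` takes in the patch.

```java
import java.util.Set;

public class TwoPhaseSketch {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    // Candidate docs produced by the cheap approximation, and the subset that
    // truly matches once the (possibly expensive) second phase is applied.
    static final int[] APPROX = {2, 5, 9};
    static final Set<Integer> TRUE_MATCHES = Set.of(2, 9);

    int docID = -1;

    // Advance the approximation to the first candidate >= target.
    int advance(int target) {
        for (int d : APPROX) {
            if (d >= target) return docID = d;
        }
        return docID = NO_MORE_DOCS;
    }

    // Second phase: confirm the current candidate actually matches.
    boolean matches() {
        return TRUE_MATCHES.contains(docID);
    }

    // Same shape as the patched advanceExact: advance if behind, then confirm.
    boolean advanceExact(int doc) {
        if (docID < doc) advance(doc);
        return docID == doc && matches();
    }
}
```

Calls must be made in increasing doc order, mirroring the forward-only contract of the real iterators.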


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (SOLR-14321) SolrJ with Kerberos docs have removed HttpClientUtil.setConfigurer

2020-03-13 Thread Cassandra Targett (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058956#comment-17058956
 ] 

Cassandra Targett commented on SOLR-14321:
--

That page is actually published with the Ref Guide online: 
https://lucene.apache.org/solr/guide/8_4/how-to-contribute.html#ref-guide-publication-process.
 It's the same page, just packaged into that one during the build. Just FYI.

That is the process, but to date we have never re-published any guide. This 
isn't the first mistake we've found that impacts every previously published 
version. I'm not saying we can't or shouldn't re-publish, only that we never 
have. 

Before 8.2, the official Guide was actually the PDF, which required a VOTE. One 
reason we didn't re-publish was that those older versions would have required a 
new VOTE, etc., and it never seemed worth it. It's a valid question: if you plan 
on re-publishing 6.6 and all of 7.x and 8.x so far, do you need to generate a 
new PDF for the versions where it was the official version (and VOTE on them)?

One reason I wanted to drop the PDF (can't remember if I articulated it or not) 
was to make re-publishing simpler. Maybe you can make it easier on yourself and 
only re-publish the last couple of versions? Not sure how critical you think 
this error is, though, so it's probably up to you if you want to do all that.

> SolrJ with Kerberos docs have removed HttpClientUtil.setConfigurer 
> ---
>
> Key: SOLR-14321
> URL: https://issues.apache.org/jira/browse/SOLR-14321
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 8.0, 8.1, 8.2, 
> 8.3, 8.4
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
>
> https://lucene.apache.org/solr/guide/8_4/kerberos-authentication-plugin.html#using-solrj-with-a-kerberized-solr
> {code:java}
> HttpClientUtil.setConfigurer(new Krb5HttpClientConfigurer());
> {code}
> This was removed in Solr 7.0.0 with SOLR-4509 
> (https://github.com/apache/lucene-solr/commit/ce172acb8fec6c3bbb18837a4d640da6c5aad649#diff-ab69354d287d536ce35357f6023bafceL104).
>  This is briefly mentioned here: 
> https://lucene.apache.org/solr/7_0_0/changes/Changes.html#v7.0.0.upgrading_from_solr_6.x
> The replacement should be:
> {code:java}
> HttpClientUtil.setHttpClientBuilder(new Krb5HttpClientBuilder().getBuilder());
> {code}
> An example of this is: 
> https://github.com/apache/ranger/blob/master/security-admin/src/main/java/org/apache/ranger/solr/SolrMgr.java#L114






[jira] [Created] (SOLR-14327) Autoscaling trigger to perform replica rebalancing

2020-03-13 Thread Megan Carey (Jira)
Megan Carey created SOLR-14327:
--

 Summary: Autoscaling trigger to perform replica rebalancing
 Key: SOLR-14327
 URL: https://issues.apache.org/jira/browse/SOLR-14327
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling, SolrCloud
Affects Versions: 8.3
Reporter: Megan Carey


While the Autoscaling framework provides the ability to determine initial 
replica placement according to policies and preferences, it does not 
automatically redistribute replicas if they have become imbalanced.

I plan to build a scheduled trigger which evaluates the cluster state and 
executes replica moves to improve distribution of replicas on nodes across the 
cluster. This might incorporate the Suggestions API and apply the provided 
suggestions (TBD).






[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1343: LUCENE-8103: Use two-phase iteration in Query- and DoubleValuesSource

2020-03-13 Thread GitBox
dsmiley commented on a change in pull request #1343: LUCENE-8103: Use two-phase 
iteration in Query- and DoubleValuesSource
URL: https://github.com/apache/lucene-solr/pull/1343#discussion_r392381180
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/DoubleValuesSource.java
 ##
 @@ -608,22 +608,25 @@ public DoubleValues getValues(LeafReaderContext ctx, DoubleValues scores) throws
       return new DoubleValues() {
         private final TwoPhaseIterator tpi = scorer.twoPhaseIterator();
         private final DocIdSetIterator disi = (tpi == null) ? scorer.iterator() : tpi.approximation();
-
-        private int scorerDoc = -1;
-        private boolean thisDocMatches = false;
+        private Boolean thisDocMatches = false;
 
         @Override
         public double doubleValue() throws IOException {
-          return thisDocMatches ? scorer.score() : Double.NaN;
+          return (thisDocMatches != null && thisDocMatches) ? scorer.score() : Double.NaN;
         }
 
         @Override
         public boolean advanceExact(int doc) throws IOException {
-          if (scorerDoc < doc) {
-            scorerDoc = disi.advance(doc);
-            thisDocMatches = tpi==null || tpi.matches();
+          if (disi.docID() < doc) {
 Review comment:
   Have you determined if this condition can be replaced with an assert?  I bet 
tests pass.





[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1343: LUCENE-8103: Use two-phase iteration in Query- and DoubleValuesSource

2020-03-13 Thread GitBox
dsmiley commented on a change in pull request #1343: LUCENE-8103: Use two-phase 
iteration in Query- and DoubleValuesSource
URL: https://github.com/apache/lucene-solr/pull/1343#discussion_r392379518
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/DoubleValuesSource.java
 ##
 @@ -608,22 +608,25 @@ public DoubleValues getValues(LeafReaderContext ctx, DoubleValues scores) throws
       return new DoubleValues() {
         private final TwoPhaseIterator tpi = scorer.twoPhaseIterator();
         private final DocIdSetIterator disi = (tpi == null) ? scorer.iterator() : tpi.approximation();
-
-        private int scorerDoc = -1;
-        private boolean thisDocMatches = false;
+        private Boolean thisDocMatches = false;
 
         @Override
         public double doubleValue() throws IOException {
-          return thisDocMatches ? scorer.score() : Double.NaN;
+          return (thisDocMatches != null && thisDocMatches) ? scorer.score() : Double.NaN;
 
 Review comment:
   Just compare with Boolean.TRUE; one condition
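The suggestion folds the null check and the value check into one condition, since `Boolean.TRUE.equals(x)` is false for both `null` and `Boolean.FALSE`. A tiny illustration (the helper method and class name are hypothetical):

```java
public class BooleanTrueCheck {
    // Single-condition, null-safe replacement for
    // (thisDocMatches != null && thisDocMatches).
    static boolean matches(Boolean thisDocMatches) {
        return Boolean.TRUE.equals(thisDocMatches);
    }

    public static void main(String[] args) {
        System.out.println(matches(null));          // false
        System.out.println(matches(Boolean.FALSE)); // false
        System.out.println(matches(Boolean.TRUE));  // true
    }
}
```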





[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1343: LUCENE-8103: Use two-phase iteration in Query- and DoubleValuesSource

2020-03-13 Thread GitBox
dsmiley commented on a change in pull request #1343: LUCENE-8103: Use two-phase 
iteration in Query- and DoubleValuesSource
URL: https://github.com/apache/lucene-solr/pull/1343#discussion_r392387868
 
 

 ##
 File path: lucene/queries/src/java/org/apache/lucene/queries/function/valuesource/QueryValueSource.java
 ##
 @@ -156,42 +135,38 @@ public float floatVal(int doc) {
   @Override
   public boolean exists(int doc) {
     try {
-      if (doc < lastDocRequested) {
-        if (noMatches) return false;
+      if (noMatches) return false;
+      if (scorer == null) {
         scorer = weight.scorer(readerContext);
-        scorerDoc = -1;
-        if (scorer==null) {
+        if (scorer == null) {
           noMatches = true;
           return false;
         }
-        it = scorer.iterator();
+        tpi = scorer.twoPhaseIterator();
+        disi = tpi==null ? scorer.iterator() : tpi.approximation();
+        thisDocMatches = false;
       }
+      assert doc >= lastDocRequested : "values requested out of order; last=" + lastDocRequested + ", requested=" + doc;
 
 Review comment:
   Hmm; I wasn't sure about this but the more I look, it appears that the 
ValueSource API (here) also has a forward-only requirement.  A few years ago 
this was not the case.  So we can assume this and throw an exception.  See 
JoinDocFreqValueSource for an example.  I think this check you added here 
should be way up front at this method, thus clearly showing as a pre-condition, 
not down here.
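Moving the forward-only check up front, as a visible precondition, might look like the following sketch (the surrounding class is illustrative; only the field name `lastDocRequested` comes from the diff above, and throwing instead of asserting is one way to enforce the contract):

```java
public class ForwardOnly {
    private int lastDocRequested = -1;

    boolean exists(int doc) {
        // Precondition, stated first: the ValueSource API is assumed forward-only.
        if (doc < lastDocRequested) {
            throw new IllegalStateException("values requested out of order; last="
                    + lastDocRequested + ", requested=" + doc);
        }
        lastDocRequested = doc;
        // ... real scorer/matching logic would follow here ...
        return true;
    }
}
```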





[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1343: LUCENE-8103: Use two-phase iteration in Query- and DoubleValuesSource

2020-03-13 Thread GitBox
dsmiley commented on a change in pull request #1343: LUCENE-8103: Use two-phase 
iteration in Query- and DoubleValuesSource
URL: https://github.com/apache/lucene-solr/pull/1343#discussion_r392383623
 
 

 ##
 File path: lucene/queries/src/java/org/apache/lucene/queries/function/valuesource/QueryValueSource.java
 ##
 @@ -156,42 +135,38 @@ public float floatVal(int doc) {
   @Override
   public boolean exists(int doc) {
     try {
-      if (doc < lastDocRequested) {
-        if (noMatches) return false;
+      if (noMatches) return false;
+      if (scorer == null) {
         scorer = weight.scorer(readerContext);
-        scorerDoc = -1;
-        if (scorer==null) {
+        if (scorer == null) {
           noMatches = true;
           return false;
         }
-        it = scorer.iterator();
+        tpi = scorer.twoPhaseIterator();
+        disi = tpi==null ? scorer.iterator() : tpi.approximation();
+        thisDocMatches = false;
 
 Review comment:
   You should set thisDocMatches to null because at this point you do not know 
if there is a match.





[GitHub] [lucene-solr] dnhatn commented on a change in pull request #1346: LUCENE-9276: Use same code-path for updateDocuments and updateDocument

2020-03-13 Thread GitBox
dnhatn commented on a change in pull request #1346: LUCENE-9276: Use same 
code-path for updateDocuments and updateDocument
URL: https://github.com/apache/lucene-solr/pull/1346#discussion_r392396808
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
 ##
 @@ -1655,7 +1655,7 @@ private long updateDocument(final DocumentsWriterDeleteQueue.Node delNode,
     ensureOpen();
     boolean success = false;
     try {
-      final long seqNo = maybeProcessEvents(docWriter.updateDocument(doc, analyzer, delNode));
+      final long seqNo = maybeProcessEvents(docWriter.updateDocuments(List.of(doc), analyzer, delNode));
 
 Review comment:
   Can we just delegate this method to `IndexWriter#updateDocuments`?





[jira] [Commented] (SOLR-14321) SolrJ with Kerberos docs have removed HttpClientUtil.setConfigurer

2020-03-13 Thread Kevin Risden (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058980#comment-17058980
 ] 

Kevin Risden commented on SOLR-14321:
-

Thanks [~ctargett]. I just know that Google tends to index random versions and 
people get links to different versions, so I was thinking of fixing all the 
broken versions. Hmm, I do remember the PDF thing as well. 

{quote}I think it would be a valid question to wonder if you plan on 
re-publishing 6.6 and all of 7.x and 8.x so far, do you need to generate a new 
PDF for those versions where it was the official version (and VOTE on 
them)?{quote}

{quote}Maybe you can make it easier on yourself and only re-publish the last 
couple versions? Not sure how critical you think this error is, though, so it's 
probably up to you if you want to do all that.{quote}

Yeah, good points. Let me at least get the current version up to date, and then 
I'll probably start a dev thread on what I think we should do.

> SolrJ with Kerberos docs have removed HttpClientUtil.setConfigurer 
> ---
>
> Key: SOLR-14321
> URL: https://issues.apache.org/jira/browse/SOLR-14321
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 8.0, 8.1, 8.2, 
> 8.3, 8.4
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
>
> https://lucene.apache.org/solr/guide/8_4/kerberos-authentication-plugin.html#using-solrj-with-a-kerberized-solr
> {code:java}
> HttpClientUtil.setConfigurer(new Krb5HttpClientConfigurer());
> {code}
> This was removed in Solr 7.0.0 with SOLR-4509 
> (https://github.com/apache/lucene-solr/commit/ce172acb8fec6c3bbb18837a4d640da6c5aad649#diff-ab69354d287d536ce35357f6023bafceL104).
>  This is briefly mentioned here: 
> https://lucene.apache.org/solr/7_0_0/changes/Changes.html#v7.0.0.upgrading_from_solr_6.x
> The replacement should be:
> {code:java}
> HttpClientUtil.setHttpClientBuilder(new Krb5HttpClientBuilder().getBuilder());
> {code}
> An example of this is: 
> https://github.com/apache/ranger/blob/master/security-admin/src/main/java/org/apache/ranger/solr/SolrMgr.java#L114






[GitHub] [lucene-solr] madrob merged pull request #1298: SOLR-14289 Skip ZkChroot check when not necessary

2020-03-13 Thread GitBox
madrob merged pull request #1298: SOLR-14289 Skip ZkChroot check when not 
necessary
URL: https://github.com/apache/lucene-solr/pull/1298
 
 
   





[jira] [Resolved] (SOLR-14289) Solr may attempt to check Chroot after already having connected once

2020-03-13 Thread Mike Drob (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob resolved SOLR-14289.
--
Fix Version/s: master (9.0)
   Resolution: Fixed

> Solr may attempt to check Chroot after already having connected once
> 
>
> Key: SOLR-14289
> URL: https://issues.apache.org/jira/browse/SOLR-14289
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: Screen Shot 2020-02-26 at 2.56.14 PM.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> On server startup, we will attempt to load the solr.xml from zookeeper if we 
> have the right properties set, and then later when starting up the core 
> container will take time to verify (and create) the chroot even if it is the 
> same string that we already used before. We can likely skip the second 
> short-lived zookeeper connection to speed up our startup sequence a little 
> bit.
>  
> See this attached image from thread profiling during startup.
> !Screen Shot 2020-02-26 at 2.56.14 PM.png!






[GitHub] [lucene-solr] s1monw merged pull request #1346: LUCENE-9276: Use same code-path for updateDocuments and updateDocument

2020-03-13 Thread GitBox
s1monw merged pull request #1346: LUCENE-9276: Use same code-path for 
updateDocuments and updateDocument
URL: https://github.com/apache/lucene-solr/pull/1346
 
 
   





[GitHub] [lucene-solr] gerlowskija opened a new pull request #1349: Document sort param tiebreak logic

2020-03-13 Thread GitBox
gerlowskija opened a new pull request #1349: Document sort param tiebreak logic
URL: https://github.com/apache/lucene-solr/pull/1349
 
 
   # Description
   
   I was trying to explain Solr's not-completely-deterministic `sort` param 
resolution to a coworker this week, and realized that this behavior wasn't 
documented afaict.
   
   # Solution
   
   A short ref-guide change to mention the tie-break behavior.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [x] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [ ] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [x] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [x] I have developed this patch against the `master` branch.
   - [x] I have run `ant precommit` and the appropriate test suite.
   - [ ] I have added tests for my changes.
   - [x] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   





[GitHub] [lucene-solr] s1monw opened a new pull request #1350: Cleanup DWPT for readability

2020-03-13 Thread GitBox
s1monw opened a new pull request #1350: Cleanup DWPT for readability
URL: https://github.com/apache/lucene-solr/pull/1350
 
 
   DWPT had some complicated logic to account for failures etc.
   This change cleans up this logic and simplifies the document processing
   loop.





[GitHub] [lucene-solr] s1monw commented on issue #1350: Cleanup DWPT for readability

2020-03-13 Thread GitBox
s1monw commented on issue #1350: Cleanup DWPT for readability
URL: https://github.com/apache/lucene-solr/pull/1350#issuecomment-598890923
 
 
   @dnhatn wanna take a look?





[jira] [Updated] (SOLR-14312) Upgrade Zookeeper to 3.5.7

2020-03-13 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-14312:
--
Summary: Upgrade Zookeeper to 3.5.7  (was: Upgrade Zookeeper to 3.5.7 or 
3.6.0?)

> Upgrade Zookeeper to 3.5.7
> --
>
> Key: SOLR-14312
> URL: https://issues.apache.org/jira/browse/SOLR-14312
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> I started looking at upgrading ZK to 3.5.7, and meanwhile 3.6.0 was just 
> released. The 3.5.7 upgrade is pretty painless, just upgrade the version and 
> all tests pass. There have been some third-party CVEs and the like fixed, so 
> it seems worthwhile.
> But I've been working on this _very_ gradually while traveling, and meanwhile 
> 3.6.0 just came out. Should we:
> 1> upgrade to 3.5.7 now and then worry about 3.6.0 after it has some more 
> mileage
> 2> upgrade to 3.6.0 now
> 3> do nothing until 3.6.0 has some more mileage on it?
> <1> gets my vote



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (SOLR-14320) TestSQLHandler failures

2020-03-13 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17059052#comment-17059052
 ] 

Mike Drob commented on SOLR-14320:
--

I tried looking into this, and I think there may be a distributed deadlock happening here? Not sure.

Also, I noticed that there are a lot of very short-lived ZK connections getting created in {{SolrSchema.getTableMap}} - maybe it's possible to cache those, or even to reuse the existing connection that the Solr server has?
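The caching idea mentioned above could look roughly like the following sketch, which memoizes one client per connect string instead of opening a short-lived connection per call. `ZkClient` and `ZkClientCache` here are hypothetical stand-ins, not Solr's actual classes.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical stand-in for a ZooKeeper client; not Solr's actual API.
class ZkClient implements AutoCloseable {
    final String connectString;
    ZkClient(String connectString) { this.connectString = connectString; }
    @Override public void close() {}
}

// Cache one client per connect string instead of creating a short-lived
// connection on every call (the pattern the comment above suggests).
class ZkClientCache {
    private final ConcurrentMap<String, ZkClient> clients = new ConcurrentHashMap<>();

    ZkClient get(String connectString) {
        // computeIfAbsent creates the client at most once per key
        return clients.computeIfAbsent(connectString, ZkClient::new);
    }
}
```

Callers would obtain clients through the cache, so repeated lookups for the same ensemble reuse one instance rather than churning connections.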

> TestSQLHandler failures
> ---
>
> Key: SOLR-14320
> URL: https://issues.apache.org/jira/browse/SOLR-14320
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: MacOS 10.15.3
> OpenJDK 11.0.5
>Reporter: Mike Drob
>Assignee: Joel Bernstein
>Priority: Major
>
> I'm getting a reproducible failure locally with:
> {{gradlew clean :solr:core:test --tests 
> "org.apache.solr.handler.TestSQLHandler" -Ptests.seed=57C9372573E8FDEC}}
>  on current HEAD of master commit {{32a2076c605}}
> cc: [~jbernste]?






[GitHub] [lucene-solr] mayya-sharipova opened a new pull request #1351: Collectors to skip noncompetitive documents

2020-03-13 Thread GitBox
mayya-sharipova opened a new pull request #1351: Collectors to skip 
noncompetitive documents
URL: https://github.com/apache/lucene-solr/pull/1351
 
 
   Similar to how scorers can update their iterators to skip non-competitive
   documents, collectors and comparators should also provide and update
   iterators that allow them to skip non-competitive documents.
   
   This could be useful if we want to sort by some field.
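The core idea of skipping non-competitive documents can be illustrated with a small top-N collector: once the queue is full, a document whose sort value cannot beat the current worst entry is skipped. This is only a conceptual sketch, not Lucene's actual `Collector`/`FieldComparator` API or the draft PR's implementation.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Top-N by ascending sort value. Once full, docs whose value is not
// better than the current worst entry are non-competitive and skipped.
class TopNByValue {
    private final int n;
    // max-heap on value: the head is the current worst competitive value
    private final PriorityQueue<long[]> top =
        new PriorityQueue<>(Comparator.comparingLong((long[] e) -> e[1]).reversed());

    TopNByValue(int n) { this.n = n; }

    /** Returns true if the doc was collected, false if it was skipped. */
    boolean collect(int doc, long value) {
        if (top.size() < n) {
            top.add(new long[] {doc, value});
            return true;
        }
        if (value >= top.peek()[1]) {
            return false; // non-competitive: skip without collecting
        }
        top.poll(); // evict the current worst entry
        top.add(new long[] {doc, value});
        return true;
    }

    long worstValue() { return top.peek()[1]; }
}
```

In Lucene the same bound would be pushed down into the iterator so non-competitive docs are never even visited, rather than being rejected one by one as here.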





[GitHub] [lucene-solr] mayya-sharipova commented on issue #1351: Collectors to skip noncompetitive documents

2020-03-13 Thread GitBox
mayya-sharipova commented on issue #1351: Collectors to skip noncompetitive 
documents
URL: https://github.com/apache/lucene-solr/pull/1351#issuecomment-598953853
 
 
   @jimczi I have created a draft PR for comparators and collectors to skip 
non-competitive docs. Can you please have a look at it and see if we are happy 
with this approach?





[jira] [Commented] (SOLR-13807) Caching for term facet counts

2020-03-13 Thread Chris M. Hostetter (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17059139#comment-17059139
 ] 

Chris M. Hostetter commented on SOLR-13807:
---

Hey Michael, I've been offline all week, but hopefully i'll be able to start 
digging in and reviewing some of this more at some point next week.

as far as some of your process questions: I don't know if/how force-pushing 
affects things, but from a reviewing / "get more eyeballs" on things i really 
think it would be cleaner to have 2 discrete PRs that we can iterate from, and 
link those 2 PRs from each of the 2 distinct jiras, so we can keep the 
comments/discussion on each distinct, particularly since one or the other might 
attract more/less attention from folks who are more/less passionate about the 
new functionality and/or concerned about the internal change.

FWIW: I personally don't care about PRs (or any github "value add" 
functionality for that matter) at all – to me they are just patch files i can 
find at special URLs by adding ".patch" to the end.  i tell you this not to 
discourage you from using PRs (my anachronistic view on development shouldn't 
stop you from using the tools you're comfortable with, and other people in the 
community are – or at least claim to be – more likely to help review PRs than 
patches) but just to clarify that you certainly don't need to worry about 
trying to re-use the existing PR ... feel free to close it and open new ones 
for each of the distinct issues – the github-to-Jira bridge should pick them up.

 

> Caching for term facet counts
> -
>
> Key: SOLR-13807
> URL: https://issues.apache.org/jira/browse/SOLR-13807
> Project: Solr
>  Issue Type: New Feature
>  Components: Facet Module
>Affects Versions: master (9.0), 8.2
>Reporter: Michael Gibney
>Priority: Minor
> Attachments: SOLR-13807__SOLR-13132_test_stub.patch
>
>
> Solr does not have a facet count cache; so for _every_ request, term facets 
> are recalculated for _every_ (facet) field, by iterating over _every_ field 
> value for _every_ doc in the result domain, and incrementing the associated 
> count.
> As a result, subsequent requests end up redoing a lot of the same work, 
> including all associated object allocation, GC, etc. This situation could 
> benefit from integrated caching.
> Because of the domain-based, serial/iterative nature of term facet 
> calculation, latency is proportional to the size of the result domain. 
> Consequently, one common/clear manifestation of this issue is high latency 
> for faceting over an unrestricted domain (e.g., {{\*:\*}}), as might be 
> observed on a top-level landing page that exposes facets. This type of 
> "static" case is often mitigated by external (to Solr) caching, either with a 
> caching layer between Solr and a front-end application, or within a front-end 
> application, or even with a caching layer between the end user and a 
> front-end application.
> But in addition to the overhead of handling this caching elsewhere in the 
> stack (or, for a new user, even being aware of this as a potential issue to 
> mitigate), any external caching mitigation is really only appropriate for 
> relatively static cases like the "landing page" example described above. A 
> Solr-internal facet count cache (analogous to the {{filterCache}}) would 
> provide the following additional benefits:
>  # ease of use/out-of-the-box configuration to address a common performance 
> concern
>  # compact (specifically caching count arrays, without the extra baggage that 
> accompanies a naive external caching approach)
>  # NRT-friendly (could be implemented to be segment-aware)
>  # modular, capable of reusing the same cached values in conjunction with 
> variant requests over the same result domain (this would support common use 
> cases like paging, but also potentially more interesting direct uses of 
> facets). 
>  # could be used for distributed refinement (i.e., if facet counts over a 
> given domain are cached, a refinement request could simply look up the 
> ordinal value for each enumerated term and directly grab the count out of the 
> count array that was cached during the first phase of facet calculation)
>  # composable (e.g., in aggregate functions that calculate values based on 
> facet counts across different domains, like SKG/relatedness – see SOLR-13132)






[jira] [Commented] (SOLR-14012) Different data type for unique aggregation and countDistinct

2020-03-13 Thread Chris M. Hostetter (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17059141#comment-17059141
 ] 

Chris M. Hostetter commented on SOLR-14012:
---

[~munendrasn] -  at a glance it seems ok, but it looks like there's still an 
int based {{getCardinality()}} in {{UniqueAgg.BaseNumericAcc}} that could just 
be deleted and all callers replaced with {{getLong()}} ?

(If there's some reason why we need both, then i'd suggest refactoring 
{{getCardinality()}} to call {{getLong()}} and cast the result, and include a 
comment explaining what/why it exists.)

> Different data type for unique aggregation and countDistinct
> 
>
> Key: SOLR-14012
> URL: https://issues.apache.org/jira/browse/SOLR-14012
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Attachments: SOLR-14012.patch, SOLR-14012.patch
>
>
> countDistinct value is long but unique aggregation's value is either long or 
> int depending on shard count






[GitHub] [lucene-solr] andyvuong commented on a change in pull request #1317: SOLR-13101: Create metadataSuffix znode only at common shard creating api calls

2020-03-13 Thread GitBox
andyvuong commented on a change in pull request #1317: SOLR-13101: Create 
metadataSuffix znode only at common shard creating api calls
URL: https://github.com/apache/lucene-solr/pull/1317#discussion_r392544473
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/store/shared/metadata/SharedShardMetadataController.java
 ##
 @@ -146,20 +148,18 @@ public void cleanUpMetadataNodes(String collectionName, 
String shardName) {
 }
   }
 
-  private void createPersistentNodeIfNonExistent(String path, byte[] data) {
+  private void createPersistentNode(String path, byte[] data) {
 try {
   if (!stateManager.hasData(path)) {
-try {
-  stateManager.makePath(path, data, CreateMode.PERSISTENT, /* 
failOnExists */ false);
-} catch (AlreadyExistsException e) {
-  // it's okay if another beats us creating the node
-}
+stateManager.makePath(path, data, CreateMode.PERSISTENT, /* 
failOnExists */ true);
 
 Review comment:
   #makePath will also throw an AlreadyExistsException if the node exists and 
failOnExists=true
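The distinction the review comment draws (silently tolerating a lost creation race vs. failing on an existing node) can be sketched with a map standing in for the ZK tree. `FakeStateManager` and the exception used here are illustrative, not the real Solr state-manager API.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustration of the two creation modes discussed in the review,
// using a map as a stand-in for the ZooKeeper node tree.
class FakeStateManager {
    private final ConcurrentMap<String, byte[]> nodes = new ConcurrentHashMap<>();

    boolean hasData(String path) { return nodes.containsKey(path); }

    void makePath(String path, byte[] data, boolean failOnExists) {
        // putIfAbsent is atomic, like ZK's create: only one caller wins
        byte[] prev = nodes.putIfAbsent(path, data);
        if (prev != null && failOnExists) {
            // mirrors the AlreadyExistsException mentioned in the review
            throw new IllegalStateException("node already exists: " + path);
        }
    }
}
```

With `failOnExists=true` the second creator fails loudly even after a `hasData` check, because the check-then-create pair is not atomic; with `false`, losing the race is silently tolerated.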





[GitHub] [lucene-solr] mbwaheed commented on a change in pull request #1317: SOLR-13101: Create metadataSuffix znode only at common shard creating api calls

2020-03-13 Thread GitBox
mbwaheed commented on a change in pull request #1317: SOLR-13101: Create 
metadataSuffix znode only at common shard creating api calls
URL: https://github.com/apache/lucene-solr/pull/1317#discussion_r392545214
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/store/shared/metadata/SharedShardMetadataControllerTest.java
 ##
 @@ -77,10 +77,26 @@ public void cleanup() throws Exception {
*/
   @Test
   public void testSetupMetadataNode() throws Exception {
-shardMetadataController.ensureMetadataNodeExists(TEST_COLLECTION_NAME, 
TEST_SHARD_NAME);
+shardMetadataController.createMetadataNode(TEST_COLLECTION_NAME, 
TEST_SHARD_NAME);
 assertTrue(cluster.getZkClient().exists(metadataNodePath, false));
   }
   
+  /**
+   * Test if we try to create the same metadat anode twice, we faile
 
 Review comment:
   few typos.





[GitHub] [lucene-solr] dsmiley merged pull request #1332: SOLR-14254: Docs for text tagger: FST50 trade-off

2020-03-13 Thread GitBox
dsmiley merged pull request #1332: SOLR-14254: Docs for text tagger: FST50 
trade-off
URL: https://github.com/apache/lucene-solr/pull/1332
 
 
   





[jira] [Resolved] (SOLR-14254) Index backcompat break between 8.3.1 and 8.4.1

2020-03-13 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-14254.
-
Resolution: Won't Fix

I committed and pushed the doc improvements from the PR.  Oddly, there are no 
bot comments here, but the master commit hash is 30810b13dfed234c7c125423de6d1c9b565a01f9

> Index backcompat break between 8.3.1 and 8.4.1
> --
>
> Key: SOLR-14254
> URL: https://issues.apache.org/jira/browse/SOLR-14254
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jason Gerlowski
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> I believe I found a backcompat break between 8.4.1 and 8.3.1.
> I encountered this when a Solr 8.3.1 cluster was upgraded to 8.4.1.  On 8.4. 
> nodes, several collections had cores fail to come up with 
> {{CorruptIndexException}}:
> {code}
> 2020-02-10 20:58:26.136 ERROR 
> (coreContainerWorkExecutor-2-thread-1-processing-n:192.168.1.194:8983_solr) [ 
>   ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on startup 
> => org.apache.sol
> r.common.SolrException: Unable to create core 
> [testbackcompat_shard1_replica_n1]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1313)
> org.apache.solr.common.SolrException: Unable to create core 
> [testbackcompat_shard1_replica_n1]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1313)
>  ~[?:?]
> at 
> org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:788) 
> ~[?:?]
> at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202)
>  ~[metrics-core-4.0.5.jar:4.0.5]
> at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  ~[?:?]
> at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
> at org.apache.solr.core.SolrCore.(SolrCore.java:1072) ~[?:?]
> at org.apache.solr.core.SolrCore.(SolrCore.java:901) ~[?:?]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1292)
>  ~[?:?]
> ... 7 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
> at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2182) 
> ~[?:?]
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2302) 
> ~[?:?]
> at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1132) 
> ~[?:?]
> at org.apache.solr.core.SolrCore.(SolrCore.java:1013) ~[?:?]
> at org.apache.solr.core.SolrCore.(SolrCore.java:901) ~[?:?]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1292)
>  ~[?:?]
> ... 7 more
> Caused by: org.apache.lucene.index.CorruptIndexException: codec mismatch: 
> actual codec=Lucene50PostingsWriterDoc vs expected 
> codec=Lucene84PostingsWriterDoc 
> (resource=MMapIndexInput(path="/Users/jasongerlowski/run/solrdata/data/testbackcompat_shard1_replica_n1/data/index/_0_FST50_0.doc"))
> at 
> org.apache.lucene.codecs.CodecUtil.checkHeaderNoMagic(CodecUtil.java:208) 
> ~[?:?]
> at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:198) 
> ~[?:?]
> at 
> org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:255) ~[?:?]
> at 
> org.apache.lucene.codecs.lucene84.Lucene84PostingsReader.(Lucene84PostingsReader.java:82)
>  ~[?:?]
> at 
> org.apache.lucene.codecs.memory.FSTPostingsFormat.fieldsProducer(FSTPostingsFormat.java:66)
>  ~[?:?]
> at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:315)
>  ~[?:?]
> at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:395)
>  ~[?:?]
> at 
> org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:114)
>  ~[?:?]
> at 
> org.apache.lucene.index.SegmentReader.(SegmentReader.java:84) ~[?:?]
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:177)
>  ~[?:?]
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:219)
>  ~[?:?]
> at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:109)
>  ~[?:?]
> 

[jira] [Closed] (SOLR-14254) Index backcompat break between 8.3.1 and 8.4.1

2020-03-13 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-14254.
---

> Index backcompat break between 8.3.1 and 8.4.1
> --
>
> Key: SOLR-14254
> URL: https://issues.apache.org/jira/browse/SOLR-14254
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jason Gerlowski
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> I believe I found a backcompat break between 8.4.1 and 8.3.1.
> I encountered this when a Solr 8.3.1 cluster was upgraded to 8.4.1.  On 8.4. 
> nodes, several collections had cores fail to come up with 
> {{CorruptIndexException}}:
> {code}
> 2020-02-10 20:58:26.136 ERROR 
> (coreContainerWorkExecutor-2-thread-1-processing-n:192.168.1.194:8983_solr) [ 
>   ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on startup 
> => org.apache.sol
> r.common.SolrException: Unable to create core 
> [testbackcompat_shard1_replica_n1]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1313)
> org.apache.solr.common.SolrException: Unable to create core 
> [testbackcompat_shard1_replica_n1]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1313)
>  ~[?:?]
> at 
> org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:788) 
> ~[?:?]
> at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202)
>  ~[metrics-core-4.0.5.jar:4.0.5]
> at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  ~[?:?]
> at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
> at org.apache.solr.core.SolrCore.(SolrCore.java:1072) ~[?:?]
> at org.apache.solr.core.SolrCore.(SolrCore.java:901) ~[?:?]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1292)
>  ~[?:?]
> ... 7 more
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
> at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2182) 
> ~[?:?]
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2302) 
> ~[?:?]
> at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1132) 
> ~[?:?]
> at org.apache.solr.core.SolrCore.(SolrCore.java:1013) ~[?:?]
> at org.apache.solr.core.SolrCore.(SolrCore.java:901) ~[?:?]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1292)
>  ~[?:?]
> ... 7 more
> Caused by: org.apache.lucene.index.CorruptIndexException: codec mismatch: 
> actual codec=Lucene50PostingsWriterDoc vs expected 
> codec=Lucene84PostingsWriterDoc 
> (resource=MMapIndexInput(path="/Users/jasongerlowski/run/solrdata/data/testbackcompat_shard1_replica_n1/data/index/_0_FST50_0.doc"))
> at 
> org.apache.lucene.codecs.CodecUtil.checkHeaderNoMagic(CodecUtil.java:208) 
> ~[?:?]
> at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:198) 
> ~[?:?]
> at 
> org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:255) ~[?:?]
> at 
> org.apache.lucene.codecs.lucene84.Lucene84PostingsReader.(Lucene84PostingsReader.java:82)
>  ~[?:?]
> at 
> org.apache.lucene.codecs.memory.FSTPostingsFormat.fieldsProducer(FSTPostingsFormat.java:66)
>  ~[?:?]
> at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:315)
>  ~[?:?]
> at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:395)
>  ~[?:?]
> at 
> org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:114)
>  ~[?:?]
> at 
> org.apache.lucene.index.SegmentReader.(SegmentReader.java:84) ~[?:?]
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:177)
>  ~[?:?]
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:219)
>  ~[?:?]
> at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:109)
>  ~[?:?]
> at 
> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:526) ~[?:?]
> at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:116) ~[?:?]
>

[jira] [Created] (LUCENE-9278) Make javadoc folder structure follow Gradle project path

2020-03-13 Thread Tomoko Uchida (Jira)
Tomoko Uchida created LUCENE-9278:
-

 Summary: Make javadoc folder structure follow Gradle project path
 Key: LUCENE-9278
 URL: https://issues.apache.org/jira/browse/LUCENE-9278
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Reporter: Tomoko Uchida


Current javadoc folder structure is derived from Ant project name. e.g.:

[https://lucene.apache.org/core/8_4_1/analyzers-icu/index.html]
 [https://lucene.apache.org/solr/8_4_1/solr-solrj/index.html]

For the Gradle build, it should follow the Gradle project structure (path) instead 
of the Ant one, to keep things simple to manage [1]. Hence, it will look like this:

[https://lucene.apache.org/core/9_0_0/analysis/icu/index.html]
 [https://lucene.apache.org/solr/9_0_0/solr/solrj/index.html]

[1] The change was suggested in a conversation between Dawid Weiss and me on a 
GitHub PR: [https://github.com/apache/lucene-solr/pull/1304]






[jira] [Updated] (SOLR-14012) Different data type for unique aggregation and countDistinct

2020-03-13 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-14012:

Attachment: SOLR-14012.patch

> Different data type for unique aggregation and countDistinct
> 
>
> Key: SOLR-14012
> URL: https://issues.apache.org/jira/browse/SOLR-14012
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Attachments: SOLR-14012.patch, SOLR-14012.patch, SOLR-14012.patch
>
>
> countDistinct value is long but unique aggregation's value is either long or 
> int depending on shard count






[jira] [Commented] (SOLR-14012) Different data type for unique aggregation and countDistinct

2020-03-13 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17059193#comment-17059193
 ] 

Munendra S N commented on SOLR-14012:
-

 [^SOLR-14012.patch] 
I have renamed {{getLong}} to {{getNonShardValue}} to reflect its usage. I kept 
{{getCardinality}} as it is used by {{getShardValue}}, and its value is used to 
set the initial size of an {{ArrayList}}, which needs to be an {{int}}.
With the latest patch, {{getNonShardValue}} delegates to {{getCardinality}} and 
casts the result to long. I have also added javadocs explaining the 
{{getCardinality}} usage.
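The shape described above can be sketched as follows; the surrounding class is simplified for illustration and is not the real {{UniqueAgg}} code from the patch.

```java
// Sketch of the delegation described in the comment: the int-valued
// cardinality stays (callers need an int for sizing), while the
// non-shard accessor widens it to long for a consistent response type.
abstract class BaseNumericAcc {
    /**
     * int-valued count; still needed by getShardValue, which uses it
     * to set the initial size of an ArrayList (requires an int).
     */
    abstract int getCardinality(int slot);

    /**
     * Single-node (non-shard) value: delegate to getCardinality and
     * widen, so both paths report the same count with a long type.
     */
    long getNonShardValue(int slot) {
        return (long) getCardinality(slot);
    }
}
```

This keeps one source of truth for the count while letting the response type be {{long}} regardless of shard count.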

> Different data type for unique aggregation and countDistinct
> 
>
> Key: SOLR-14012
> URL: https://issues.apache.org/jira/browse/SOLR-14012
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Attachments: SOLR-14012.patch, SOLR-14012.patch, SOLR-14012.patch
>
>
> countDistinct value is long but unique aggregation's value is either long or 
> int depending on shard count






[GitHub] [lucene-solr] mocobeta opened a new pull request #1352: LUCENE-9278: Make javadoc folder structure follow Gradle project path

2020-03-13 Thread GitBox
mocobeta opened a new pull request #1352: LUCENE-9278: Make javadoc folder 
structure follow Gradle project path
URL: https://github.com/apache/lucene-solr/pull/1352
 
 
   
   
   # Description
   
   See https://issues.apache.org/jira/browse/LUCENE-9278
   
   
   





[GitHub] [lucene-solr] mocobeta commented on a change in pull request #1352: LUCENE-9278: Make javadoc folder structure follow Gradle project path

2020-03-13 Thread GitBox
mocobeta commented on a change in pull request #1352: LUCENE-9278: Make javadoc 
folder structure follow Gradle project path
URL: https://github.com/apache/lucene-solr/pull/1352#discussion_r392562400
 
 

 ##
 File path: gradle/render-javadoc.gradle
 ##
 @@ -17,11 +17,15 @@
 
 // generate javadocs by using Ant javadoc task
 
+// utility function to convert project path to document output dir
+// e.g.: ':lucene:analysis:common' => 'analysis/common'
+def pathToDocdir = { path -> path.split(':').drop(2).join('/') }
 
 Review comment:
   There might be a better way?
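For readers following the diff, the Groovy helper's behavior (`path.split(':').drop(2).join('/')`) can be rendered in Java to make the dropped segments explicit; `PathToDocDir` is just an illustrative name.

```java
import java.util.Arrays;

// Java rendering of the Groovy pathToDocdir helper in the diff.
// Splitting ":lucene:analysis:common" on ':' yields
// ["", "lucene", "analysis", "common"]; dropping the first two
// entries (the leading empty segment and the root project name)
// leaves the per-project document path.
class PathToDocDir {
    static String convert(String projectPath) {
        String[] parts = projectPath.split(":");
        return String.join("/", Arrays.copyOfRange(parts, 2, parts.length));
    }
}
```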





[GitHub] [lucene-solr] mocobeta commented on a change in pull request #1352: LUCENE-9278: Make javadoc folder structure follow Gradle project path

2020-03-13 Thread GitBox
mocobeta commented on a change in pull request #1352: LUCENE-9278: Make javadoc 
folder structure follow Gradle project path
URL: https://github.com/apache/lucene-solr/pull/1352#discussion_r392562835
 
 

 ##
 File path: gradle/render-javadoc.gradle
 ##
 @@ -294,35 +305,41 @@ configure(project(':solr:core')) {
   plugins.withType(JavaPlugin) {
 // specialized to ONLY depend on solrj
 renderJavadoc {
-  dependsOn ':solr:solrj:renderJavadoc'
-  linkHref += [ '../solr-solrj' ]
+  [':solr:solrj'].collect { path ->
+dependsOn "${path}:renderJavadoc"
+linkHref += [ "../" + pathToDocdir(path) ]
+  }
 }
   }
 }
 
 configure(subprojects.findAll { it.path.startsWith(':solr:contrib') }) {
   plugins.withType(JavaPlugin) {
 renderJavadoc {
-  dependsOn ':solr:solrj:renderJavadoc'
-  dependsOn ':solr:core:renderJavadoc'
-  linkHref += [ '../solr-solrj', '../solr-core' ]
+  [':solr:solrj', ':solr:core'].collect { path ->
+dependsOn "${path}:renderJavadoc"
+linkHref += [ "../../" + pathToDocdir(path) ]
+  }
 }
   }
 }
 
 configure(project(':solr:contrib:dataimporthandler-extras')) {
   plugins.withType(JavaPlugin) {
 renderJavadoc {
-  dependsOn ':solr:contrib:dataimporthandler:renderJavadoc'
-  linkHref += [ '../solr-dataimporthandler' ]
+  [':solr:contrib:dataimporthandler'].collect { path ->
+dependsOn "${path}:renderJavadoc"
+linkHref += [ "../../" + pathToDocdir(path) ]
+  }
 }
   }
 }
 
+
 configure(project(':solr:contrib:extraction')) {
   plugins.withType(JavaPlugin) {
 ext {
-  javadocDestDir = "${javadocRoot}/solr-cell"
+  javadocDestDir = "${javadocRoot}/contrib/cell"
 
 Review comment:
   I'm not sure this tweak is needed for gradle build. Simply, 
"solr/contrib/extraction" (same as other contrib modules) may be okay?

