[GitHub] [lucene-solr] noblepaul commented on pull request #1744: SOLR-14731: Add SingleThreaded Annotation to Class

2020-08-13 Thread GitBox


noblepaul commented on pull request #1744:
URL: https://github.com/apache/lucene-solr/pull/1744#issuecomment-673330445


   I don't really know what purpose this annotation serves. If it is for 
documentation purposes, it may make sense. But there are a million other 
places where classes are not thread-safe. What's the point of adding it just 
here?
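   For context, an annotation like this is usually a documentation-only marker with no runtime enforcement. A minimal sketch of the idea (hypothetical names, not the actual PR's code):

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical documentation-only marker: instances of the annotated class
// must be confined to a single thread. Nothing enforces this at runtime.
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface SingleThreaded {}

// Example of a class one might mark: it reuses a mutable buffer, so sharing
// an instance across threads would corrupt results.
@SingleThreaded
class ReusableNormalizer {
    private final StringBuilder buf = new StringBuilder();

    String normalize(String s) {
        buf.setLength(0);
        buf.append(s.trim().toLowerCase());
        return buf.toString();
    }
}

public class SingleThreadedDemo {
    public static void main(String[] args) {
        System.out.println(ReusableNormalizer.class.isAnnotationPresent(SingleThreaded.class));
        System.out.println(new ReusableNormalizer().normalize("  Hello "));
    }
}
```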



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9448) Make an equivalent to Ant's "run" target for Luke module

2020-08-13 Thread Tomoko Uchida (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176864#comment-17176864
 ] 

Tomoko Uchida commented on LUCENE-9448:
---

Fix for this issue (merged): [https://github.com/apache/lucene-solr/pull/1742]

I think this can be resolved?

> Make an equivalent to Ant's "run" target for Luke module
> 
>
> Key: LUCENE-9448
> URL: https://issues.apache.org/jira/browse/LUCENE-9448
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Tomoko Uchida
>Priority: Minor
> Attachments: LUCENE-9448.patch, LUCENE-9448.patch
>
>
> With the Ant build, the Luke Swing app can be launched with "ant run" after 
> checking out the source code. "ant run" lets developers immediately see the 
> effects of UI changes without building the whole zip/tgz package (originally, 
> this was suggested when integrating Luke into Lucene).
> In Gradle, a {{:lucene:luke:run}} task could easily be implemented with 
> {{JavaExec}}, I think.
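A sketch of what such a task could look like (Gradle Groovy DSL; the main class and exact wiring here are assumptions, not the merged patch):

```groovy
// lucene/luke/build.gradle -- hedged sketch, not the actual implementation
task run(type: JavaExec) {
  description = "Launch the Luke Swing app from the build, like the old 'ant run'"
  mainClass = 'org.apache.lucene.luke.app.desktop.LukeMain'  // assumed entry point
  classpath = sourceSets.main.runtimeClasspath
}
```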



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (SOLR-14749) Provide a clean API for cluster-level event processing

2020-08-13 Thread Andrzej Bialecki (Jira)
Andrzej Bialecki created SOLR-14749:
---

 Summary: Provide a clean API for cluster-level event processing
 Key: SOLR-14749
 URL: https://issues.apache.org/jira/browse/SOLR-14749
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Reporter: Andrzej Bialecki
Assignee: Andrzej Bialecki


This is a companion issue to SOLR-14613 and it aims at providing a clean, 
strongly typed API for the functionality formerly known as "triggers" - that 
is, a component for generating cluster-level events corresponding to changes in 
the cluster state, and a pluggable API for processing these events.

The 8x triggers have been removed, so this functionality is currently missing in 
9.0. However, it is crucial for implementing automatic collection repair and 
re-balancing as the cluster state changes (nodes going down / up, becoming 
overloaded / unused / decommissioned, etc.).

For this reason we need this API, plus a default implementation of triggers that 
can at least perform automatic collection repair (maintaining the desired 
replication factor in the presence of live node changes).

As before, the actual changes to the collections will be executed using the 
existing CollectionAdmin API, which in turn may use the placement plugins from 
SOLR-14613.
h3. Division of responsibility
 * built-in Solr components (non-pluggable):
 ** cluster state monitoring and event generation,
 ** simple scheduler to periodically generate scheduled events
 * plugins:
 ** automatic collection repair on {{nodeLost}} events (provided by default)
 ** re-balancing of replicas (periodic or on {{nodeAdded}} events)
 ** reporting (e.g. requesting additional node provisioning)
 ** scheduled maintenance (e.g. removing inactive shards after a split)

h3. Other considerations

These plugins (unlike the placement plugins) need to execute on one designated 
node in the cluster. Currently the easiest way to implement this is to run them 
on the Overseer leader node.
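A strongly typed event API of the kind described could be as small as the sketch below (all names here are hypothetical; the issue does not specify the API):

```java
import java.util.List;

// Hypothetical sketch of a strongly typed cluster-event API: an enum of event
// types, an immutable event record, and a pluggable listener interface.
enum ClusterEventType { NODE_ADDED, NODE_LOST, SCHEDULED }

record ClusterEvent(ClusterEventType type, long timestampMillis, List<String> nodeNames) {}

interface ClusterEventListener {
    /** Called by the (non-pluggable) event generator for each event. */
    void onEvent(ClusterEvent event);
}

// e.g. a default "collection repair" plugin would react to NODE_LOST events
class CollectionRepairListener implements ClusterEventListener {
    int repairsRequested = 0;

    @Override
    public void onEvent(ClusterEvent event) {
        if (event.type() == ClusterEventType.NODE_LOST) {
            repairsRequested++;  // real code would call the CollectionAdmin API
        }
    }
}

public class ClusterEventDemo {
    public static void main(String[] args) {
        CollectionRepairListener repair = new CollectionRepairListener();
        repair.onEvent(new ClusterEvent(ClusterEventType.NODE_LOST,
                System.currentTimeMillis(), List.of("node3")));
        System.out.println("repairsRequested=" + repair.repairsRequested);
    }
}
```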






[jira] [Commented] (SOLR-14687) Make child/parent query parsers natively aware of _nest_path_

2020-08-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176922#comment-17176922
 ] 

Jan Høydahl commented on SOLR-14687:


Agree, this is super complex and trappy; this will be a great usability 
improvement! Thanks, Hoss!

> Make child/parent query parsers natively aware of _nest_path_
> -
>
> Key: SOLR-14687
> URL: https://issues.apache.org/jira/browse/SOLR-14687
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Chris M. Hostetter
>Priority: Major
>
> A long standing pain point of the parent/child QParsers is the "all parents" 
> bitmask/filter specified via the "which" and "of" params (respectively).
> This is particularly tricky/painful to "get right" when dealing with 
> multi-level nested documents...
>  * 
> https://issues.apache.org/jira/browse/SOLR-14383?focusedCommentId=17166339&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17166339
>  * 
> [https://lists.apache.org/thread.html/r7633a366dd76e7ce9d98e6b9f2a65da8af8240e846f789d938c8113f%40%3Csolr-user.lucene.apache.org%3E]
> ...and it's *really* hard to get right when the nested structure isn't 100% 
> consistent among all docs:
>  * collections that mix docs w/o children and docs that have children.
>  ** Ex: blog posts, some of which have child docs that are "comments", but 
> some don't
>  * when some "types" of documents can exist at multiple levels:
>  ** Ex: top level "product" documents, which may have 2 types of children: 
> "skus" and "manuals", but "skus" may also have their own sku-specific child 
> "manuals"
> BUT! ... now that we have some semi-native support for the {{_nest_path_}} 
> field, I think it may be possible to offer an "easier to use" variant syntax 
> of the parent/child QParsers that directly depends on these fields. This new 
> syntax should be optional – and purely syntactic sugar. "Expert" users should 
> be able to do all the same things using the existing syntax (possibly more 
> efficiently, depending on what invariants exist in their data model).






[jira] [Created] (SOLR-14750) Harden TestBulkSchemaConcurrent

2020-08-13 Thread Erick Erickson (Jira)
Erick Erickson created SOLR-14750:
-

 Summary: Harden TestBulkSchemaConcurrent
 Key: SOLR-14750
 URL: https://issues.apache.org/jira/browse/SOLR-14750
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
Reporter: Erick Erickson
Assignee: Erick Erickson


This test has been failing quite often lately. I poked around a bit and saw 
what I _think_ is evidence of a race condition in CoreContainer.reload, where a 
reload of the same core happens from two places in close succession. I'll 
attach a preliminary patch soon.

Without this patch I had 25 failures out of 1,000 runs; with it, 0.

I consider this patch a WIP, putting it up for comment. Well, it has nocommits, 
so... In particular, I have to review some changes I made about which name 
we're using for pendingCoreOps. I also want to back out my changes and beast it 
again with more logging, to see whether I can nail down that multiple reloads 
are happening, before declaring victory.

What this does is put the name of the core we're reloading into pendingCoreOps 
earlier in the reload process. The second call to reload will then wait until 
the first has completed. I also restructured the method a bit, because I don't 
like if clauses that go on forever with a small else clause way down the code; 
I inverted the test and bailed out of the method rather than falling off the 
end after the else clause.

One thing I don't like about this is that two reloads in such rapid succession 
seem wasteful. Even so, I can imagine that one reload gets far enough to load 
the schema, then a schema update changes the schema and _then_ calls reload. So 
I don't think it's valid to just return if there's already a reload happening 
on that core.

More to come.
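The locking scheme described, registering the core name in pendingCoreOps before reloading so a second reload of the same core waits for the first, can be sketched like this (a hypothetical stand-in, not the actual CoreContainer code):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for the pendingCoreOps idea: a reload registers the
// core name first; a concurrent reload of the same core blocks until the
// first one finishes instead of racing it.
class PendingCoreOps {
    private final Set<String> pending = new HashSet<>();

    void reload(String coreName, Runnable doReload) {
        synchronized (pending) {
            while (!pending.add(coreName)) {  // another reload of this core is in flight
                try {
                    pending.wait();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
        try {
            doReload.run();  // the actual reload work runs outside the lock
        } finally {
            synchronized (pending) {
                pending.remove(coreName);
                pending.notifyAll();  // wake any reloads waiting on this core
            }
        }
    }
}

public class PendingCoreOpsDemo {
    public static void main(String[] args) {
        PendingCoreOps ops = new PendingCoreOps();
        int[] count = {0};
        ops.reload("core1", () -> count[0]++);
        ops.reload("core1", () -> count[0]++);  // waits for (here: follows) the first
        System.out.println("reloads=" + count[0]);
    }
}
```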






[GitHub] [lucene-solr] markharwood commented on pull request #1708: LUCENE-9445 Add support for case insensitive regex searches in QueryParser

2020-08-13 Thread GitBox


markharwood commented on pull request #1708:
URL: https://github.com/apache/lucene-solr/pull/1708#issuecomment-673437428


   Progress update - I'm struggling a bit with how to make the parser stricter, 
i.e. ensuring there's a space between `/Foo/i` and the next search term. I'd 
also need to allow for `( A OR /Foo/i)`, with the closing bracket instead of a 
space.
   I'm not yet clear on how, in JavaCC, to write that rule without also 
consuming the closing bracket. There are some BWC issues to work through here 
too.
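   Independent of the JavaCC grammar work, the semantics the token has to end up with can be shown with plain java.util.regex (a hypothetical helper, not the PR's code):

```java
import java.util.regex.Pattern;

// Hypothetical helper showing what a /Foo/i token should mean: a trailing "i"
// turns on case-insensitive matching, while plain /Foo/ stays case-sensitive.
// The JavaCC work in the PR is about tokenizing this form strictly.
public class RegexFlagDemo {
    static Pattern fromToken(String token) {
        boolean ci = token.endsWith("/i");
        String body = ci ? token.substring(1, token.length() - 2)
                         : token.substring(1, token.length() - 1);
        return Pattern.compile(body, ci ? Pattern.CASE_INSENSITIVE : 0);
    }

    public static void main(String[] args) {
        System.out.println(fromToken("/Foo/i").matcher("foo").matches()); // true
        System.out.println(fromToken("/Foo/").matcher("foo").matches());  // false
    }
}
```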






[jira] [Updated] (SOLR-14750) Harden TestBulkSchemaConcurrent

2020-08-13 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-14750:
--
Attachment: SOLR-14750.patch
Status: Open  (was: Open)

> Harden TestBulkSchemaConcurrent
> ---
>
> Key: SOLR-14750
> URL: https://issues.apache.org/jira/browse/SOLR-14750
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-14750.patch
>
>
> This test has been failing quite often lately. I poked around a bit and see 
> what I _think_ is evidence of a race condition in CoreContainer.reload where 
> a reload on the same core is happening from two places in close succession. 
> I'll attach a preliminary patch soon.
> Without this patch I had 25 failures out of 1,000 runs, with it 0.
> I consider this patch a WIP, putting up for comment. Well, it has nocommits 
> so... But in particular, I have to review some changes I made about which 
> name we're using for PendingCoreOps. I also want to back out my changes and 
> beast it again with some more logging to see if I can nail down that multiple 
> reloads are happening before declaring victory.
> What this does is put the name of the core we're reloading in pendingCoreOps 
> earlier in the reload process. Then the second call to reload will wait until 
> the first is completed. I also restructured it a bit because I don't like if 
> clauses that go on forever and a small else clause way down the code. I 
> inverted the test and bailed out of the method rather than fall off the end 
> after the else clause.
> One thing I don't like about this is two reloads in such rapid succession 
> seems wasteful. Even so, I can imagine that one reload gets through far 
> enough to load the schema, then a schema update changes the schema _then_ 
> calls reload. So I don't think just returning if there's a reload happening 
> on that core already is valid.
> More to come.






[jira] [Commented] (LUCENE-9447) Make BEST_COMPRESSION compress more aggressively?

2020-08-13 Thread Adrien Grand (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176951#comment-17176951
 ] 

Adrien Grand commented on LUCENE-9447:
--

Indeed, larger blocks make retrieval slower. I left the idea of using preset 
dictionaries out for now and did more tests. In particular I played with the 
idea of compressing with DEFLATE on top of LZ4, or LZ4 on top of LZ4. Because 
LZ4 only replaces duplicate strings with references, compressing multiple times 
with it helps bring some strings closer to each other, which sometimes means 
they end up closer than the window size (32kB for DEFLATE, 64kB for LZ4). 
This blog post talks a bit more about this idea, and here are data points on the 
same dataset as previously. LZ4H means the high-compression mode of LZ4, which 
does more work to find longer duplicate strings.

 
||Method||Index size (MB)||Index time (s)||Avg fetch time (us)||
|LZ4(16kB) (BEST_SPEED)|304,2|9|5|
|LZ4(60kB)|141,7|7,5|10|
|LZ4H+LZ4(60kB)|120,1|16,5|9|
|LZ4H(60kB)|120,1|15|8|
|LZ4H+LZ4+HUFFMAN_ONLY(60kB)|105,8|19|25|
|LZ4H+HUFFMAN_ONLY(60kB)|105,7|16,5|23|
|LZ4(256kB)|105,1|7,5|33|
|LZ4H+DEFLATE(60kB)|102,7|17,5|26|
|DEFLATE(60kB) (BEST_COMPRESSION)|100,6|14|35|
|LZ4(1MB)|96,5|7,5|115|
|LZ4H(256kB)|68,4|14,5|22|
|LZ4H+LZ4(256kB)|64,6|15|29|
|DEFLATE(256kB)|63,8|16,5|110|
|LZ4H+HUFFMAN_ONLY(256kB)|59,1|15|54|
|LZ4H+LZ4+HUFFMAN_ONLY(256kB)|57,7|15,5|58|
|LZ4H(1MB)|56,1|16,5|76|
|DEFLATE(1MB)|54,7|16|411|
|LZ4H+DEFLATE(256kB)|54,5|15,5|57|
|LZ4H+LZ4(1MB)|49,4|17|117|
|LZ4H+HUFFMAN_ONLY(1MB)|47,9|17,5|170|
|LZ4H+LZ4+HUFFMAN_ONLY(1MB)|44,6|18|194|
|LZ4H+DEFLATE(1MB)|40,8|18,5|172|


Unfortunately, I get very different numbers for enwiki documents, which are 
less redundant and where Huffman compression is an important part of the 
compression ratio of DEFLATE, which makes DEFLATE alone unbeatable.


||Method||Index size (MB)||Index time (s)||Avg fetch time (us)||
|LZ4(16kB) (BEST_SPEED)|558,8|14,5|83|
|LZ4(60kB)|526,2|15|106|
|LZ4(256kB)|523,1|15|323|
|LZ4(1MB)|521,3|15,5|1151|
|LZ4H+LZ4(60kB)|425,4|37|115|
|LZ4H(60kB)|424,2|32|112|
|LZ4H+LZ4(256kB)|397,5|49|267|
|LZ4H(256kB)|396,4|43|267|
|LZ4H+LZ4(1MB)|390,9|64|875|
|LZ4H(1MB)|389,8|61|887|
|LZ4H+HUFFMAN_ONLY(60kB)|377,6|35|240|
|LZ4H+DEFLATE(60kB)|376,5|41|228|
|LZ4H+HUFFMAN_ONLY(256kB)|357,1|45|668|
|LZ4H+DEFLATE(256kB)|356,7|53|694|
|LZ4H+HUFFMAN_ONLY(1MB)|352,3|65|2350|
|LZ4H+DEFLATE(1MB)|352,1|73|2460|
|DEFLATE(60kB) (BEST_COMPRESSION)|338,1|34|237|
|DEFLATE(256kB)|328,5|37|732|
|DEFLATE(1MB)|326,3|39|2624|

Based on this data I think that Mike's suggestion to increase the block size to 
256kB is a safe trade-off. I wonder whether we should also increase the block 
size for CompressionMode.FAST to 64kB.
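The block-size effect behind these tables can be reproduced with plain java.util.zip: chunking the same redundant data into smaller independent DEFLATE blocks prevents back-references across block boundaries, so the total compressed size grows. This is a self-contained illustration, not the benchmark code used above:

```java
import java.util.Random;
import java.util.zip.Deflater;

// Compress data in independent blocks of blockSize bytes and sum the output.
// Smaller blocks cannot back-reference redundancy in other blocks, which is
// why the larger block sizes in the tables compress better.
public class BlockSizeDemo {
    static int compressedSize(byte[] data, int blockSize) {
        int total = 0;
        byte[] out = new byte[blockSize + 1024];
        for (int off = 0; off < data.length; off += blockSize) {
            Deflater d = new Deflater(6);  // DEFLATE default level, as in BEST_COMPRESSION
            d.setInput(data, off, Math.min(blockSize, data.length - off));
            d.finish();
            while (!d.finished()) total += d.deflate(out);
            d.end();
        }
        return total;
    }

    public static void main(String[] args) {
        // ~2kB of random (incompressible) bytes repeated 100 times:
        // highly redundant "documents", like the dataset described above
        byte[] doc = new byte[2048];
        new Random(42).nextBytes(doc);
        byte[] data = new byte[doc.length * 100];
        for (int i = 0; i < 100; i++) System.arraycopy(doc, 0, data, i * doc.length, doc.length);

        System.out.println("4kB blocks:  " + compressedSize(data, 4 * 1024));
        System.out.println("whole input: " + compressedSize(data, data.length));
    }
}
```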

> Make BEST_COMPRESSION compress more aggressively?
> -
>
> Key: LUCENE-9447
> URL: https://issues.apache.org/jira/browse/LUCENE-9447
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> The Lucene86 codec supports setting a "Mode" for stored fields compression, 
> that is either "BEST_SPEED", which translates to blocks of 16kB or 128 
> documents (whichever is hit first) compressed with LZ4, or 
> "BEST_COMPRESSION", which translates to blocks of 60kB or 512 documents 
> compressed with DEFLATE with default compression level (6).
> After looking at indices that spent most disk space on stored fields 
> recently, I noticed that there was quite some room for improvement by 
> increasing the block size even further:
> ||Block size||Stored fields size||
> |60kB|168412338|
> |128kB|130813639|
> |256kB|113587009|
> |512kB|104776378|
> |1MB|100367095|
> |2MB|98152464|
> |4MB|97034425|
> |8MB|96478746|
> For this specific dataset, I had 1M documents that each had about 2kB of 
> stored fields each and quite some redundancy.
> This makes me want to look into bumping this block size to maybe 256kB. It 
> would be interesting to re-do the experiments we did on LUCENE-6100 to see 
> how this affects the merging speed. That said I don't think it would be 
> terrible if the merging time increased a bit given that we already offer the 
> BEST_SPEED option for CPU-savvy users.




[jira] [Created] (SOLR-14751) Zookeeper Admin screen not working for old ZK versions

2020-08-13 Thread Jira
Jan Høydahl created SOLR-14751:
--

 Summary: Zookeeper Admin screen not working for old ZK versions
 Key: SOLR-14751
 URL: https://issues.apache.org/jira/browse/SOLR-14751
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI
Affects Versions: 8.6
Reporter: Jan Høydahl
Assignee: Jan Høydahl


I tried viewing the ZK admin screen against ZooKeeper 3.4.9, but it fails with 
this stack trace:
{noformat}
2020-08-13 12:05:43.317 INFO  (qtp489411441-25) [   ] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/zookeeper/status 
params={wt=json&_=1597320343286} status=500 QTime=2
2020-08-13 12:05:43.317 ERROR (qtp489411441-25) [   ] o.a.s.s.HttpSolrCall 
null:org.apache.solr.common.SolrException: Failed to get config from zookeeper
at 
org.apache.solr.common.cloud.SolrZkClient.getConfig(SolrZkClient.java:777)
at 
org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:83)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:854)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:818)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
at 
org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:500)
at 
org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)

[GitHub] [lucene-solr] janhoy opened a new pull request #1747: SOLR-13615

2020-08-13 Thread GitBox


janhoy opened a new pull request #1747:
URL: https://github.com/apache/lucene-solr/pull/1747


   See https://issues.apache.org/jira/browse/SOLR-14751
   
   The solution is to treat `NoNodeException` not as an unrecoverable 
exception, but as expected for older ZK versions, and return an empty string.
   
   A workaround for 8.6.x is to run
   
   bin/solr zk mkroot /zookeeper/config
   
   Then the Admin UI in 8.6 will start working with old ZK.
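   The shape of the fix can be sketched as follows (the types here are hypothetical stand-ins, not the actual SolrZkClient/ZooKeeper classes the PR touches):

```java
// Hedged sketch of the approach: a missing /zookeeper/config node (ZK < 3.5)
// is expected, not an error, so return an empty config instead of throwing.
public class GetConfigSketch {
    static class NoNodeException extends Exception {}

    interface ZkReader {
        String readNode(String path) throws NoNodeException;
    }

    static String getConfig(ZkReader zk) {
        try {
            return zk.readNode("/zookeeper/config");
        } catch (NoNodeException e) {
            return "";  // old ZK has no /zookeeper/config: treat as "no config"
        }
    }

    public static void main(String[] args) {
        ZkReader oldZk = path -> { throw new NoNodeException(); };
        System.out.println("config=[" + getConfig(oldZk) + "]");
    }
}
```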






[jira] [Comment Edited] (LUCENE-9447) Make BEST_COMPRESSION compress more aggressively?

2020-08-13 Thread Adrien Grand (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176951#comment-17176951
 ] 

Adrien Grand edited comment on LUCENE-9447 at 8/13/20, 12:36 PM:
-

Indeed, larger blocks make retrieval slower. I left the idea of using preset 
dictionaries out for now and did more tests. In particular I played with the 
idea of compressing with DEFLATE on top of LZ4, or LZ4 on top of LZ4. Because 
LZ4 only replaces duplicate strings with references, compressing multiple times 
with it helps bring some strings closer to each other, which sometimes means 
they end up closer than the window size (32kB for DEFLATE, 64kB for LZ4). 
This blog post talks a bit more about this idea, and here are data points on the 
same dataset as previously. LZ4H means the high-compression mode of LZ4, which 
does more work to find longer duplicate strings.

 
||Method||Index size(MB)||Index time(s)||Avg fetch time (us)||
|LZ4(16kB) (BEST_SPEED)|304,2|9|5|
|LZ4(60kB)|141,7|7,5|10|
|LZ4H+LZ4(60kB)|120,1|16,5|9|
|LZ4H(60kB)|120,1|15|8|
|LZ4H+LZ4+HUFFMAN_ONLY(60kB)|105,8|19|25|
|LZ4H+HUFFMAN_ONLY(60kB)|105,7|16,5|23|
|LZ4(256kB)|105,1|7,5|33|
|LZ4H+DEFLATE(60kB)|102,7|17,5|26|
|DEFLATE(60kB) (BEST_COMPRESSION)|100,6|14|35|
|LZ4(1MB)|96,5|7,5|115|
|LZ4H(256kB)|68,4|14,5|22|
|LZ4H+LZ4(256kB)|64,6|15|29|
|DEFLATE(256kB)|63,8|16,5|110|
|LZ4H+HUFFMAN_ONLY(256kB)|59,1|15|54|
|LZ4H+LZ4+HUFFMAN_ONLY(256kB)|57,7|15,5|58|
|LZ4H(1MB)|56,1|16,5|76|
|DEFLATE(1MB)|54,7|16|411|
|LZ4H+DEFLATE(256kB)|54,5|15,5|57|
|LZ4H+LZ4(1MB)|49,4|17|117|
|LZ4H+HUFFMAN_ONLY(1MB)|47,9|17.5|170|
|LZ4H+LZ4+HUFFMAN_ONLY(1MB)|44,6|18|194|
|LZ4H+DEFLATE(1MB)|40,8|18,5|172|

Unfortunately, I get very different numbers for enwiki documents, which are 
less redundant and where Huffman compression is an important part of the 
compression ratio of DEFLATE, which makes DEFLATE alone unbeatable.
||Method||Index size(MB)||Index time(s)||Avg fetch time (us)||
|LZ4(16kB) (BEST_SPEED)|558,8|14,5|83|
|LZ4(60kB)|526,2|15|120|
|LZ4(256kB)|523,1|15|323|
|LZ4(1MB)|521,3|15,5|1151|
|LZ4H+LZ4(60kB)|425,4|37|115|
|LZ4H(60kB)|424,2|32|112|
|LZ4H+LZ4(256kB)|397,5|49|267|
|LZ4H(256kB)|396,4|43|267|
|LZ4H+LZ4(1MB)|390,9|64|875|
|LZ4H(1MB)|389,8|61|887|
|LZ4H+HUFFMAN_ONLY(60kB)|377,6|35|240|
|LZ4H+DEFLATE(60kB)|376,5|41|228|
|LZ4H+HUFFMAN_ONLY(256kB)|357,1|45|668|
|LZ4H+DEFLATE(256kB)|356,7|53|694|
|LZ4H+HUFFMAN_ONLY(1MB)|352,3|65|2350|
|LZ4H+DEFLATE(1MB)|352,1|73|2460|
|DEFLATE(60kB) (BEST_COMPRESSION)|338,1|34|237|
|DEFLATE(256kB)|328,5|37|732|
|DEFLATE(1MB)|326,3|39|2624|

Based on this data I think that Mike's suggestion to increase the block size to 
256kB is a safe trade-off. I wonder whether we should also increase the block 
size for CompressionMode.BEST_SPEED to 64kB.



[jira] [Commented] (SOLR-14751) Zookeeper Admin screen not working for old ZK versions

2020-08-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176969#comment-17176969
 ] 

Jan Høydahl commented on SOLR-14751:


See PR [https://github.com/apache/lucene-solr/pull/1747]

Trivial fix; I intend to merge this to master and branch_8x tomorrow. [~houston]

> Zookeeper Admin screen not working for old ZK versions
> --
>
> Key: SOLR-14751
> URL: https://issues.apache.org/jira/browse/SOLR-14751
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 8.6
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>
> I tried viewing the ZK admin screen for Zookeeper 3.4.9, but it fails with 
> stack:
> {noformat}
> 2020-08-13 12:05:43.317 INFO  (qtp489411441-25) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/zookeeper/status 
> params={wt=json&_=1597320343286} status=500 QTime=2
> 2020-08-13 12:05:43.317 ERROR (qtp489411441-25) [   ] o.a.s.s.HttpSolrCall 
> null:org.apache.solr.common.SolrException: Failed to get config from zookeeper
>   at 
> org.apache.solr.common.cloud.SolrZkClient.getConfig(SolrZkClient.java:777)
>   at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:83)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
>   at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:854)
>   at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:818)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 

[jira] [Commented] (SOLR-14680) Provide simple interfaces to our concrete SolrCloud classes

2020-08-13 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176968#comment-17176968
 ] 

Andrzej Bialecki commented on SOLR-14680:
-

Was this change really intended for 8x? I expected this to be applicable only 
to 9.0 because it still requires a lot of refactoring to use the new APIs.

Also, there were several valid concerns that other reviewers expressed in both 
PRs that haven't been addressed in the final version. One minor issue is also 
that the code is formatted inconsistently (4-space indents in some files).

I think this needs at least a cleanup, and probably revert from 8x.

> Provide simple interfaces to our concrete SolrCloud classes
> ---
>
> Key: SOLR-14680
> URL: https://issues.apache.org/jira/browse/SOLR-14680
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> All our current implementations of SolrCloud such as 
> # ClusterState
> # DocCollection
> # Slice
> # Replica
> etc are concrete classes. Providing alternate implementations or wrappers is 
> extremely difficult. 
> SOLR-14613 is attempting to create such interfaces to make their SDK simpler.
> The objective is not to have a comprehensive set of methods in these 
> interfaces. We will start out with a subset of required interfaces. We 
> guarantee that signatures of methods in these interfaces will not be 
> deleted or changed, but we may add more methods as and when it suits us.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14669) Solr 8.6 / ZK 3.6.1 / Admin UI - ZK Status Null

2020-08-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176973#comment-17176973
 ] 

Jan Høydahl commented on SOLR-14669:


I'm fixing another ZK admin ui bug in SOLR-14751, which will go in 8.7. Let's 
try to plug this as well for 8.7.

> Solr 8.6 / ZK 3.6.1 / Admin UI - ZK Status Null
> ---
>
> Key: SOLR-14669
> URL: https://issues.apache.org/jira/browse/SOLR-14669
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, SolrCloud
>Affects Versions: 8.6
>Reporter: Jörn Franke
>Priority: Minor
>
> When opening the Admin UI / ZK Status page it shows just null. Solr 8.6.0 / 
> ZK 3.6.1. Zk is a 3 node ensemble.
> It seems to be cosmetic in the UI - otherwise Solr seems to work fine. I 
> already deleted the browser cache and restarted the browser; the issue persists.
> In the logfiles I find the following error:
>  2020-07-20 16:34:27.853 ERROR (qtp767511741-2227) [   ] 
> o.a.s.h.RequestHandlerBase java.lang.NumberFormatException: For input string: 
> "null"
>      at 
> java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>      at java.base/java.lang.Integer.parseInt(Integer.java:652)
>      at java.base/java.lang.Integer.parseInt(Integer.java:770)
>      at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.getZkStatus(ZookeeperStatusHandler.java:116)
>      at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:78)
>      at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
>      at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:839)
>      at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:805)
>      at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:558)
>      at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:419)
>      at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:351)
>      at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>      at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>      at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>      at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>      at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>      at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>      at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
>      at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>      at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>      at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
>      at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)
>      at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>      at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>      at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>      at org.eclipse.jetty.server.Server.handle(Server.java:505)
>      at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)
>      at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)
>      at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
>      at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>      at 
> org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:427)
>      at 
> org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:321)
>      at 
> org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159)
>      at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>      at org.eclipse.jetty.io.ChannelEndPoint$

[jira] [Commented] (SOLR-14669) Solr 8.6 / ZK 3.6.1 / Admin UI - ZK Status Null

2020-08-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176980#comment-17176980
 ] 

Jan Høydahl commented on SOLR-14669:


Can you dump your /zookeeper/config content? I suspect the {{parseLine(String 
line)}} method in ZkDynamicConfig is getting fed a {{server.N}} line from the 
config that contains "null" (either serverId, leaderPort, or 
leaderElectionPort). The {{clientPort}} part already has a null check since it 
is optional, so I don't suspect that one.

{{bin/solr zk cp zk:/zookeeper/config ./zk-config.txt}}
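The failure in the stack trace is literally `Integer.parseInt("null")`. A defensive version of that parse can be sketched in a few lines (a hypothetical helper, not the actual ZkDynamicConfig code; the assumed line format is ZooKeeper's `server.N=host:leaderPort:leaderElectionPort[:role][;clientPort]`):

```java
public class ServerLineSketch {
    // Hypothetical sketch, not Solr's ZkDynamicConfig: pull the leader port
    // out of a dynamic-config line like
    // "server.1=zoo1:2888:3888:participant;2181" and tolerate a literal
    // "null" field instead of letting Integer.parseInt throw
    // NumberFormatException.
    static int leaderPort(String line) {
        String spec = line.substring(line.indexOf('=') + 1);
        String[] parts = spec.split("[:;]");
        String port = parts.length > 1 ? parts[1] : null;
        if (port == null || "null".equals(port)) {
            return -1;  // signal "unknown" rather than blow up the status page
        }
        return Integer.parseInt(port);
    }

    public static void main(String[] args) {
        System.out.println(leaderPort("server.1=zoo1:2888:3888:participant;2181")); // 2888
        System.out.println(leaderPort("server.1=zoo1:null:3888"));                  // -1
    }
}
```

Returning a sentinel instead of throwing would let the status page render partial information instead of a bare `null`.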


[GitHub] [lucene-solr] madrob commented on pull request #1744: SOLR-14731: Add SingleThreaded Annotation to Class

2020-08-13 Thread GitBox


madrob commented on pull request #1744:
URL: https://github.com/apache/lucene-solr/pull/1744#issuecomment-673461640


   Rome was not built in a day; so too, our code documentation is not 
finished in one PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Commented] (SOLR-14680) Provide simple interfaces to our concrete SolrCloud classes

2020-08-13 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176999#comment-17176999
 ] 

Uwe Schindler commented on SOLR-14680:
--

Why was the "make java 8 compatible" committed to master? Makes no sense to me, 
master is on Java 11, so it should be compatible.

In addition, this caused a build failure on master and 8.x in Solrj:

{noformat}
common.compile-core:
[mkdir] Created dir: 
/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/solr/build/solr-solrj/classes/java
[javac] Compiling 780 source files to 
/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/solr/build/solr-solrj/classes/java
[javac] 
/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/solr/solrj/src/java/org/apache/solr/common/util/Utils.java:740:
 error: incompatible types: Charset cannot be converted to String
[javac] final String path = URLDecoder.decode(nodeName.substring(1 + 
_offset), UTF_8);
[javac] 
   ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: 
/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/Variable.java
 uses unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] Note: Some messages have been simplified; recompile with 
-Xdiags:verbose to get full output
[javac] 1 error
{noformat}

There's no URLDecoder#decode overload taking a java.nio Charset in Java 8.

In addition, I would be careful using URLDecoder, as it is not fully compliant 
with general URL decoding (it has special handling for "+", which is only 
applicable to form-encoded input; real URL decoding should be done with e.g. 
the Jetty classes or the implementation in RequestParsers).
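Both points can be shown with a couple of JDK-only lines: on Java 8 only the `decode(String, String)` overload exists (`decode(String, Charset)` arrived in Java 10), and `URLDecoder` turns `+` into a space, which is correct only for form-encoded input. The node-name value below is illustrative:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class DecodeCompat {
    // Java 8-compatible decoding: pass the charset *name* as a String;
    // URLDecoder.decode(String, Charset) only exists since Java 10.
    static String decode(String s) {
        try {
            return URLDecoder.decode(s, StandardCharsets.UTF_8.name());
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always supported", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(decode("127.0.0.1%3A8983_solr")); // 127.0.0.1:8983_solr
        // The form-encoding quirk the comment warns about: '+' becomes a
        // space, which is wrong for a path segment that literally contains '+'.
        System.out.println(decode("a+b")); // a b
    }
}
```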








[jira] [Commented] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2020-08-13 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177009#comment-17177009
 ] 

David Smiley commented on SOLR-5986:


Maybe you're right.  I was thinking of them being used back & forth at runtime 
but since there's no "ExitablePostingsEnum" (it only goes to the TermsEnum 
level), all doc navigation and eventual collection is going to be *after* EDR, 
at least on a per-segment basis.  Interestingly, there is an 
ExitableIntersectVisitor for the PointValues and there are exitable DocValues 
(both at the document level), whereas there is nothing at that level for the 
TermsEnum->PostingsEnum... hmmm.  What got me grumbling about all this is that 
there's an overall complexity with two mechanisms that look like they were 
coded by people with different visions of how to implement "timeAllowed", with 
different exceptions, terminology, and code packages, and neither links to the 
other :-/

> Don't allow runaway queries from harming Solr cluster health or search 
> performance
> --
>
> Key: SOLR-5986
> URL: https://issues.apache.org/jira/browse/SOLR-5986
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 5.0
>
> Attachments: SOLR-5986-fixtests.patch, SOLR-5986-fixtests.patch, 
> SOLR-5986-fixtests.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch
>
>
> The intent of this ticket is to have all distributed search requests stop 
> wasting CPU cycles on requests that have already timed out or are so 
> complicated that they won't be able to execute. We have come across a case 
> where a nasty wildcard query within a proximity clause was causing the 
> cluster to enumerate terms for hours even though the query timeout was set to 
> minutes. This caused a noticeable slowdown within the system which made us 
> restart the replicas that happened to service that one request, the worst 
> case scenario are users with a relatively low zk timeout value will have 
> nodes start dropping from the cluster due to long GC pauses.
> [~amccurry] Built a mechanism into Apache Blur to help with the issue in 
> BLUR-142 (see commit comment for code, though look at the latest code on the 
> trunk for newer bug fixes).
> Solr should be able to either prevent these problematic queries from running 
> by some heuristic (possibly estimated size of heap usage) or be able to 
> execute a thread interrupt on all query threads once the time threshold is 
> met. This issue mirrors what others have discussed on the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E







[jira] [Commented] (SOLR-14680) Provide simple interfaces to our concrete SolrCloud classes

2020-08-13 Thread Noble Paul (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177021#comment-17177021
 ] 

Noble Paul commented on SOLR-14680:
---

{quote}Why was the "make java 8 compatible" committed to master? Makes no sense 
to me, master is on Java 11, so it should be compatible.

{quote}

I want the same code to be on both master and 8x so that any future changes 
can be easily cherry-picked.

{quote}
Was this change really intended for 8x? I expected this to be applicable only 
to 9.0 because it still requires a lot of refactoring to use the new APIs.
{quote}

This was not necessarily going to meet 100% of the requirements of any new 
API. We can always add new things if the new API demands it. My objective was 
to keep parity between master and 8x; there is no point in having divergence 
between the branches.








[jira] [Commented] (SOLR-14680) Provide simple interfaces to our concrete SolrCloud classes

2020-08-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177024#comment-17177024
 ] 

ASF subversion and git services commented on SOLR-14680:


Commit 8296ccad9046bbda385050a6db2ab9a105d724ca in lucene-solr's branch 
refs/heads/branch_8x from noblepaul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8296cca ]

SOLR-14680: Compile error in 8.0









[jira] [Commented] (SOLR-14749) Provide a clean API for cluster-level event processing

2020-08-13 Thread Noble Paul (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177029#comment-17177029
 ] 

Noble Paul commented on SOLR-14749:
---

In general my suggestions are as follows:

* we should only have the ability to have generic plugins in Solr
* we can have a new type of plugin that runs only on the overseer
* Solr should not be aware of any events/triggers/actions
* if a plugin wishes to listen to a ZK node change, we should provide a simple 
API to listen to that event

This ensures that autoscaling is a self-contained package and that all 
possible bugfixes/improvements to autoscaling can be done independently of Solr
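As a purely hypothetical sketch of the last suggestion (none of these names exist in Solr), the container could own the watch machinery while the plugin only registers a callback, so the plugin never touches ZooKeeper directly:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NodeWatchSketch {
    // Hypothetical API shape -- not Solr's. Models "a simple API for a
    // plugin to listen to a ZK node change" without exposing ZooKeeper.
    interface NodeChangeListener { void onChange(String path, byte[] data); }

    static class PluginHost {
        private final Map<String, List<NodeChangeListener>> listeners = new HashMap<>();

        // The plugin-facing call: register interest in one node path.
        void listen(String path, NodeChangeListener l) {
            listeners.computeIfAbsent(path, p -> new ArrayList<>()).add(l);
        }

        // In real life this would be driven by a ZooKeeper watcher; here it
        // is fired by hand to show the plugin-facing contract.
        void fire(String path, byte[] data) {
            for (NodeChangeListener l : listeners.getOrDefault(path, List.of())) {
                l.onChange(path, data);
            }
        }
    }

    public static void main(String[] args) {
        PluginHost host = new PluginHost();
        List<String> seen = new ArrayList<>();
        host.listen("/some/state", (path, data) -> seen.add(path + "=" + new String(data)));
        host.fire("/some/state", "v2".getBytes());
        System.out.println(seen); // [/some/state=v2]
    }
}
```

With this shape, all event/trigger logic lives in the plugin package, and the core only provides registration and dispatch.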


> Provide a clean API for cluster-level event processing
> --
>
> Key: SOLR-14749
> URL: https://issues.apache.org/jira/browse/SOLR-14749
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
>
> This is a companion issue to SOLR-14613 and it aims at providing a clean, 
> strongly typed API for the functionality formerly known as "triggers" - that 
> is, a component for generating cluster-level events corresponding to changes 
> in the cluster state, and a pluggable API for processing these events.
> The 8x triggers have been removed so this functionality is currently missing 
> in 9.0. However, this functionality is crucial for implementing the automatic 
> collection repair and re-balancing as the cluster state changes (nodes going 
> down / up, becoming overloaded / unused / decommissioned, etc).
> For this reason we need this API and a default implementation of triggers 
> that at least can perform automatic collection repair (maintaining the 
> desired replication factor in presence of live node changes).
> As before, the actual changes to the collections will be executed using 
> existing CollectionAdmin API, which in turn may use the placement plugins 
> from SOLR-14613.
> h3. Division of responsibility
>  * built-in Solr components (non-pluggable):
>  ** cluster state monitoring and event generation,
>  ** simple scheduler to periodically generate scheduled events
>  * plugins:
>  ** automatic collection repair on {{nodeLost}} events (provided by default)
>  ** re-balancing of replicas (periodic or on {{nodeAdded}} events)
>  ** reporting (eg. requesting additional node provisioning)
>  ** scheduled maintenance (eg. removing inactive shards after split)
> h3. Other considerations
> These plugins (unlike the placement plugins) need to execute on one 
> designated node in the cluster. Currently the easiest way to implement this 
> is to run them on the Overseer leader node.







[GitHub] [lucene-solr] mikemccand commented on pull request #1733: LUCENE-9450 Use BinaryDocValues in the taxonomy writer

2020-08-13 Thread GitBox


mikemccand commented on pull request #1733:
URL: https://github.com/apache/lucene-solr/pull/1733#issuecomment-673498515


   Woohoo, tests all pass now?  What a tiny change it turned out to be :)
   
   Can you try to run `luceneutil` benchmarks?  Let's see if this is net/net 
faster.  Even if it is the same speed, we should move forward -- stored fields 
are likely to get more compressed / slower to access over time, e.g. 
https://issues.apache.org/jira/browse/LUCENE-9447.
   
   We can also (separate follow-on issue!) better optimize the `ord -> 
FacetLabel` lookup to do them in bulk, in order, so we can share a single 
`BinaryDocValues` instance per leaf per query.









[GitHub] [lucene-solr] mikemccand commented on a change in pull request #1733: LUCENE-9450 Use BinaryDocValues in the taxonomy writer

2020-08-13 Thread GitBox


mikemccand commented on a change in pull request #1733:
URL: https://github.com/apache/lucene-solr/pull/1733#discussion_r469977297



##
File path: 
lucene/facet/src/java/org/apache/lucene/facet/taxonomy/directory/DirectoryTaxonomyReader.java
##
@@ -322,9 +324,14 @@ public FacetLabel getPath(int ordinal) throws IOException {
 return res;
   }
 }

Review comment:
   Pre-existing: I don't like that we `return null` up above if the 
requested `ordinal` is out-of-bounds.  That's dangerous leniency and likely 
means the user is refreshing their main `IndexReader` and the `TaxonomyReader` 
in the wrong order.  It would be better to throw an exception here?  
@gautamworah96 could you open a follow-on issue to fix that?  Thanks.

##
File path: 
lucene/facet/src/java/org/apache/lucene/facet/taxonomy/directory/DirectoryTaxonomyWriter.java
##
@@ -494,6 +496,7 @@ private int addCategoryDocument(FacetLabel categoryPath, 
int parent) throws IOEx
 
 
fullPathField.setStringValue(FacetsConfig.pathToString(categoryPath.components, 
categoryPath.length));
 d.add(fullPathField);
+d.add(new BinaryDocValuesField(Consts.FULL, new 
BytesRef(FacetsConfig.pathToString(categoryPath.components, 
categoryPath.length))));

Review comment:
   Could you factor out the `FacetsConfig.pathToString(...)` part in a new 
local variable and re-use that?  We use it in (at least?) two places here.











[jira] [Commented] (LUCENE-8776) Start offset going backwards has a legitimate purpose

2020-08-13 Thread Michael McCandless (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177047#comment-17177047
 ] 

Michael McCandless commented on LUCENE-8776:


{quote}*But* then there's some interesting/advanced cases where the rule is 
simply impossible to follow.
{quote}
I am still struggling to understand what use cases are actually made impossible 
by {{IndexWriter}}'s enforcing of valid offsets.

I sent a reply to [~roman.ch...@gmail.com] on the dev list to [try to 
understand those examples 
better|https://lucene.markmail.org/thread/e424bcbnyevcufg6#query:+page:1+mid:5dulutmdqqubud6p+state:results].

While the cost of writing a negative {{vInt}} was one of the motivations for 
stricter offset checking, another (stronger?) motivation was catching buggy 
analysis components that were producing invalid offsets, causing problems only 
detected much later (at search time).  {{IndexWriter}}'s offset enforcement 
allows detecting such buggy {{TokenFilter}}s earlier.
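The invariant being enforced is easy to state in code. The sketch below mirrors the check in {{DefaultIndexingChain}} only in spirit (the shapes are illustrative, not Lucene's actual code); run against the token layout from the issue description, the re-emitted `light-emitting-diode` token is exactly what trips it:

```java
public class OffsetCheck {
    // Sketch of the invariant (not Lucene's code): each token's startOffset
    // must not be less than the previous token's startOffset, and its
    // endOffset must not precede its startOffset.
    static boolean offsetsValid(int[][] offsets) {  // offsets[i] = {start, end}
        int lastStart = -1;
        for (int[] o : offsets) {
            if (o[0] < lastStart || o[1] < o[0]) {
                return false;
            }
            lastStart = o[0];
        }
        return true;
    }

    public static void main(String[] args) {
        // "Organic light-emitting-diode glows", with light-emitting-diode
        // emitted twice: once at "light"'s position and again at "diode"'s
        // position -- both times with the same {8, 28} offsets.
        int[][] withReemit = {
            {0, 7},   // Organic
            {8, 13},  // light
            {8, 28},  // light-emitting-diode (pos 1)
            {14, 22}, // emitting
            {23, 28}, // diode
            {8, 28},  // light-emitting-diode again (pos 3) -- start goes backwards
            {29, 34}, // glows
        };
        System.out.println(offsetsValid(withReemit)); // false
    }
}
```

The second `{8, 28}` arrives after `diode`'s start of 23, so the start offset goes backwards and the check fails, which is the `IllegalArgumentException` the issue reports.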

> Start offset going backwards has a legitimate purpose
> -
>
> Key: LUCENE-8776
> URL: https://issues.apache.org/jira/browse/LUCENE-8776
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.6
>Reporter: Ram Venkat
>Priority: Major
>
> Here is the use case where startOffset can go backwards:
> Say there is a line "Organic light-emitting-diode glows", and I want to run 
> span queries and highlight them properly. 
> During index time, light-emitting-diode is split into three words, which 
> allows me to search for 'light', 'emitting' and 'diode' individually. The 
> three words occupy adjacent positions in the index, as 'light' adjacent to 
> 'emitting' and 'light' at a distance of two words from 'diode' need to match 
> this word. So, the order of words after splitting are: Organic, light, 
> emitting, diode, glows. 
> But, I also want to search for 'organic' being adjacent to 
> 'light-emitting-diode' or 'light-emitting-diode' being adjacent to 'glows'. 
> The way I solved this was to also generate 'light-emitting-diode' at two 
> positions: (a) In the same position as 'light' and (b) in the same position 
> as 'glows', like below:
> ||organic||light||emitting||diode||glows||
> | |light-emitting-diode| |light-emitting-diode| |
> |0|1|2|3|4|
> The positions of the two 'light-emitting-diode' are 1 and 3, but the offsets 
> are obviously the same. This works beautifully in Lucene 5.x in both 
> searching and highlighting with span queries. 
> But when I try this in Lucene 7.6, it hits the condition "Offsets must not go 
> backwards" at DefaultIndexingChain:818. This IllegalArgumentException is 
> being thrown without any comments on why this check is needed. As I explained 
> above, startOffset going backwards is perfectly valid, to deal with word 
> splitting and span operations on these specialized use cases. On the other 
> hand, it is not clear what value is added by this check and which highlighter 
> code is affected by offsets going backwards. This same check is done at 
> BaseTokenStreamTestCase:245. 
> I see others talk about how this check found bugs in WordDelimiter etc. but 
> it also prevents legitimate use cases. Can this check be removed?  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14669) Solr 8.6 / ZK 3.6.1 / Admin UI - ZK Status Null

2020-08-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177051#comment-17177051
 ] 

Jörn Franke commented on SOLR-14669:


Sorry, yes, I will get it as soon as possible. Thanks a lot.


> Solr 8.6 / ZK 3.6.1 / Admin UI - ZK Status Null
> ---
>
> Key: SOLR-14669
> URL: https://issues.apache.org/jira/browse/SOLR-14669
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, SolrCloud
>Affects Versions: 8.6
>Reporter: Jörn Franke
>Priority: Minor
>
> When opening the Admin UI / ZK Status page it shows just null. Solr 8.6.0 / 
> ZK 3.6.1. Zk is a 3 node ensemble.
> It seems to be cosmetic in the UI - otherwise Solr seems to work fine. 
> Deleted already browser cache and restarted browser. Issue persists.
> In the logfiles I find the following error:
>  2020-07-20 16:34:27.853 ERROR (qtp767511741-2227) [   ] 
> o.a.s.h.RequestHandlerBase java.lang.NumberFormatException: For input string: 
> "null"
>      at 
> java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>      at java.base/java.lang.Integer.parseInt(Integer.java:652)
>      at java.base/java.lang.Integer.parseInt(Integer.java:770)
>      at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.getZkStatus(ZookeeperStatusHandler.java:116)
>      at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:78)
>      at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
>      at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:839)
>      at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:805)
>      at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:558)
>      at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:419)
>      at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:351)
>      at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>      at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>      at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>      at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>      at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>      at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>      at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
>      at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>      at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>      at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
>      at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)
>      at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>      at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>      at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>      at org.eclipse.jetty.server.Server.handle(Server.java:505)
>      at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)
>      at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)
>      at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
>      at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>      at 
> org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:427)
>      at 
> org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:321)
>      at 
> org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159)
>      at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>      at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
>      at 
> org.eclips

[GitHub] [lucene-solr] jpountz opened a new pull request #1748: LUCENE-9447: Increase block sizes for stored fields.

2020-08-13 Thread GitBox


jpountz opened a new pull request #1748:
URL: https://github.com/apache/lucene-solr/pull/1748


   BEST_SPEED and BEST_COMPRESSION keep the same compression algorithms but
   increase their respective block sizes from 16kB and 60kB to 64kB and
   256kB.
   
   I disabled the per-thread caching of buffers, which could likely become
   memory hogs with these greater block sizes.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Commented] (LUCENE-9447) Make BEST_COMPRESSION compress more aggressively?

2020-08-13 Thread Adrien Grand (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177074#comment-17177074
 ] 

Adrien Grand commented on LUCENE-9447:
--

I opened a pull request with these changes: 
[https://github.com/apache/lucene-solr/pull/1748].

> Make BEST_COMPRESSION compress more aggressively?
> -
>
> Key: LUCENE-9447
> URL: https://issues.apache.org/jira/browse/LUCENE-9447
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Lucene86 codec supports setting a "Mode" for stored fields compression, 
> that is either "BEST_SPEED", which translates to blocks of 16kB or 128 
> documents (whichever is hit first) compressed with LZ4, or 
> "BEST_COMPRESSION", which translates to blocks of 60kB or 512 documents 
> compressed with DEFLATE with default compression level (6).
> After looking at indices that spent most disk space on stored fields 
> recently, I noticed that there was quite some room for improvement by 
> increasing the block size even further:
> ||Block size||Stored fields size||
> |60kB|168412338|
> |128kB|130813639|
> |256kB|113587009|
> |512kB|104776378|
> |1MB|100367095|
> |2MB|98152464|
> |4MB|97034425|
> |8MB|96478746|
> For this specific dataset, I had 1M documents that each had about 2kB of 
> stored fields each and quite some redundancy.
> This makes me want to look into bumping this block size to maybe 256kB. It 
> would be interesting to re-do the experiments we did on LUCENE-6100 to see 
> how this affects the merging speed. That said I don't think it would be 
> terrible if the merging time increased a bit given that we already offer the 
> BEST_SPEED option for CPU-savvy users.
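The effect in the table above — larger blocks seeing more redundancy and amortizing per-block overhead — can be reproduced outside Lucene with `java.util.zip`. This is a rough stand-alone illustration, not the stored-fields codec itself; block sizes and the sample data are chosen for the example:

```java
import java.util.zip.Deflater;

public class BlockSizeDemo {
    /**
     * DEFLATE the data independently in fixed-size blocks (as the stored
     * fields format does) and return the total compressed size. Each block
     * restarts the compressor, so matches cannot cross block boundaries.
     */
    static int compressedSize(byte[] data, int blockSize) {
        int total = 0;
        byte[] out = new byte[blockSize + 1024];
        for (int off = 0; off < data.length; off += blockSize) {
            int len = Math.min(blockSize, data.length - off);
            Deflater d = new Deflater(6); // default level, as in BEST_COMPRESSION
            d.setInput(data, off, len);
            d.finish();
            while (!d.finished()) {
                total += d.deflate(out);
            }
            d.end();
        }
        return total;
    }

    public static void main(String[] args) {
        // Redundant "documents": many near-identical small records.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append("{\"id\":").append(i)
              .append(",\"body\":\"the quick brown fox jumps over the lazy dog\"}");
        }
        byte[] data = sb.toString().getBytes();
        int small = compressedSize(data, 16 * 1024);   // BEST_SPEED-style block
        int large = compressedSize(data, 256 * 1024);  // proposed larger block
        System.out.println(small + " vs " + large);
    }
}
```

With fewer compressor restarts, the larger block size produces a smaller total on this kind of redundant input, mirroring the trend in the table.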






[jira] [Resolved] (SOLR-14706) Upgrading 8.6.0 to 8.6.1 causes collection creation to fail

2020-08-13 Thread Houston Putman (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman resolved SOLR-14706.
---
Fix Version/s: 8.7
   Resolution: Fixed

> Upgrading 8.6.0 to 8.6.1 causes collection creation to fail
> ---
>
> Key: SOLR-14706
> URL: https://issues.apache.org/jira/browse/SOLR-14706
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 8.7, 8.6.1
> Environment: 8.6.1 upgraded from 8.6.0 with more than one node
>Reporter: Gus Heck
>Assignee: Houston Putman
>Priority: Blocker
> Fix For: 8.7, 8.6.1
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The following steps will reproduce a situation in which collection creation 
> fails with this stack trace:
> {code:java}
> 2020-08-03 12:17:58.617 INFO  
> (OverseerThreadFactory-22-thread-1-processing-n:192.168.2.106:8981_solr) [   
> ] o.a.s.c.a.c.CreateCollectionCmd Create collection test861
> 2020-08-03 12:17:58.751 ERROR 
> (OverseerThreadFactory-22-thread-1-processing-n:192.168.2.106:8981_solr) [   
> ] o.a.s.c.a.c.OverseerCollectionMessageHandler Collection: test861 operation: 
> create failed:org.apache.solr.common.SolrException
>   at 
> org.apache.solr.cloud.api.collections.CreateCollectionCmd.call(CreateCollectionCmd.java:347)
>   at 
> org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:264)
>   at 
> org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:517)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:212)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Only one extra tag supported for the 
> tag cores in {
>   "cores":"#EQUAL",
>   "node":"#ANY",
>   "strict":"false"}
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Clause.(Clause.java:122)
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Clause.create(Clause.java:235)
>   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
>   at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
>   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>   at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>   at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.(Policy.java:144)
>   at 
> org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig.getPolicy(AutoScalingConfig.java:372)
>   at 
> org.apache.solr.cloud.api.collections.Assign.usePolicyFramework(Assign.java:300)
>   at 
> org.apache.solr.cloud.api.collections.Assign.usePolicyFramework(Assign.java:277)
>   at 
> org.apache.solr.cloud.api.collections.Assign$AssignStrategyFactory.create(Assign.java:661)
>   at 
> org.apache.solr.cloud.api.collections.CreateCollectionCmd.buildReplicaPositions(CreateCollectionCmd.java:415)
>   at 
> org.apache.solr.cloud.api.collections.CreateCollectionCmd.call(CreateCollectionCmd.java:192)
>   ... 6 more
> {code}
> Generalized steps:
> # Deploy 8.6.0 with separate data directories, create a collection to prove 
> it's working
> # download 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.6.1-RC1-reva32a3ac4e43f629df71e5ae30a3330be94b095f2/solr/solr-8.6.1.tgz
> # Stop the server on all nodes
> # replace the 8.6.0 with 8.6.1 
> # Start the server
> # via the admin UI create a collection
> # Observe failure warning box (with no text), check logs, find above trace
> Or more exactly here are my actual commands with a checkout of the 8.6.0 tag 
> in the working dir to which cloud.sh was configured:
> # ./cloud.sh new -r upgrademe 
> # Create collection named test860 via admin ui with _default
> # ./cloud.sh stop 
> # cd upgrademe/
> # cp ../8_6_1_RC1/solr-8.6.1.tgz .
> # mv solr-8.6.0-SNAPSHOT old
> # tar xzvf solr-8.6.1.tgz
> # cd ..
> # ./cloud.sh start
> # Try to create collection test861 with _default config
> For those not familiar with it the first command there with cloud.sh builds 
> the tarball in the working directory and then makes a directory named 
> "upgrademe" copies

[GitHub] [lucene-solr] sigram closed pull request #1637: SOLR-12847 Cut over implementation of maxShardsPerNode to a collection policy

2020-08-13 Thread GitBox


sigram closed pull request #1637:
URL: https://github.com/apache/lucene-solr/pull/1637


   






[GitHub] [lucene-solr] sigram closed pull request #1506: SOLR-14470: Add streaming expressions to /export handler

2020-08-13 Thread GitBox


sigram closed pull request #1506:
URL: https://github.com/apache/lucene-solr/pull/1506


   






[GitHub] [lucene-solr] sigram closed pull request #1486: SOLR-14423: Use SolrClientCache instance managed by CoreContainer

2020-08-13 Thread GitBox


sigram closed pull request #1486:
URL: https://github.com/apache/lucene-solr/pull/1486


   






[GitHub] [lucene-solr] sigram closed pull request #1417: SOLR-12847: Auto-create a policy rule that corresponds to maxShardsPerNode

2020-08-13 Thread GitBox


sigram closed pull request #1417:
URL: https://github.com/apache/lucene-solr/pull/1417


   






[GitHub] [lucene-solr] munendrasn merged pull request #1746: Fix syntax warning in smokeTestRelease.py

2020-08-13 Thread GitBox


munendrasn merged pull request #1746:
URL: https://github.com/apache/lucene-solr/pull/1746


   






[GitHub] [lucene-solr] munendrasn commented on pull request #1746: Fix syntax warning in smokeTestRelease.py

2020-08-13 Thread GitBox


munendrasn commented on pull request #1746:
URL: https://github.com/apache/lucene-solr/pull/1746#issuecomment-673555398


   @janhoy 
   Thanks for the review






[jira] [Commented] (SOLR-13807) Caching for term facet counts

2020-08-13 Thread Michael Gibney (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177108#comment-17177108
 ] 

Michael Gibney commented on SOLR-13807:
---

Quick follow-up: regarding skg on master vs. 
77daac4ae2a4d1c40652eafbbdb42b582fe2d02d with no cache configured: the apparent 
performance improvement is simultaneously a little misleading (depending on 
perspective) _and also_ likely to manifest in practice for many common use 
cases.

This is a byproduct of the fact that in order to support caching, we need to 
retain "query" references (for the purpose of building query-based cache keys), 
not just domain DocSets. Once this was done, these query keys (and in some 
cases extra deduping based on actual docsets) are used to ensure that "sweep" 
count accumulation only accumulates once for each unique domain. So for the 
common/default case where fgQ=q, and fgSet (as determined by 
fgFilters=fgQ∩domainFilters) is the same as the base domain, sweep collection 
only happens over the base domain (deduped with fgSet) and the bgSet -- 2 
domains instead of 3.

I ran the benchmarks forcing fgSet to be (slightly) different than base domain, 
and performance is comparable to master (as one would expect/hope).

> Caching for term facet counts
> -
>
> Key: SOLR-13807
> URL: https://issues.apache.org/jira/browse/SOLR-13807
> Project: Solr
>  Issue Type: New Feature
>  Components: Facet Module
>Affects Versions: master (9.0), 8.2
>Reporter: Michael Gibney
>Priority: Minor
> Attachments: SOLR-13807-benchmarks.tgz, 
> SOLR-13807__SOLR-13132_test_stub.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Solr does not have a facet count cache; so for _every_ request, term facets 
> are recalculated for _every_ (facet) field, by iterating over _every_ field 
> value for _every_ doc in the result domain, and incrementing the associated 
> count.
> As a result, subsequent requests end up redoing a lot of the same work, 
> including all associated object allocation, GC, etc. This situation could 
> benefit from integrated caching.
> Because of the domain-based, serial/iterative nature of term facet 
> calculation, latency is proportional to the size of the result domain. 
> Consequently, one common/clear manifestation of this issue is high latency 
> for faceting over an unrestricted domain (e.g., {{\*:\*}}), as might be 
> observed on a top-level landing page that exposes facets. This type of 
> "static" case is often mitigated by external (to Solr) caching, either with a 
> caching layer between Solr and a front-end application, or within a front-end 
> application, or even with a caching layer between the end user and a 
> front-end application.
> But in addition to the overhead of handling this caching elsewhere in the 
> stack (or, for a new user, even being aware of this as a potential issue to 
> mitigate), any external caching mitigation is really only appropriate for 
> relatively static cases like the "landing page" example described above. A 
> Solr-internal facet count cache (analogous to the {{filterCache}}) would 
> provide the following additional benefits:
>  # ease of use/out-of-the-box configuration to address a common performance 
> concern
>  # compact (specifically caching count arrays, without the extra baggage that 
> accompanies a naive external caching approach)
>  # NRT-friendly (could be implemented to be segment-aware)
>  # modular, capable of reusing the same cached values in conjunction with 
> variant requests over the same result domain (this would support common use 
> cases like paging, but also potentially more interesting direct uses of 
> facets). 
>  # could be used for distributed refinement (i.e., if facet counts over a 
> given domain are cached, a refinement request could simply look up the 
> ordinal value for each enumerated term and directly grab the count out of the 
> count array that was cached during the first phase of facet calculation)
>  # composable (e.g., in aggregate functions that calculate values based on 
> facet counts across different domains, like SKG/relatedness – see SOLR-13132)
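The caching idea described above — keeping a compact count array keyed by the result domain, so paging and refinement requests skip the per-doc iteration — can be sketched as follows. All names here are hypothetical, invented for illustration; this is not Solr's facet module:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FacetCountCache {
    // Cache of term-facet count arrays keyed by (domain query, field),
    // analogous in spirit to the filterCache.
    private final Map<String, int[]> cache = new HashMap<>();

    /**
     * Return counts per term ordinal for the given domain/field. On a cache
     * hit, the previously computed array is reused without touching docs.
     * Each entry of {@code docsInDomain} holds a doc's term ordinals.
     */
    int[] counts(String domainQuery, String field,
                 List<int[]> docsInDomain, int numTerms) {
        String key = domainQuery + "|" + field;
        int[] cached = cache.get(key);
        if (cached != null) {
            return cached; // paging / refinement reuse the same array
        }
        int[] counts = new int[numTerms];
        // The expensive part being cached: every field value of every doc
        // in the result domain increments one slot.
        for (int[] termOrdsForDoc : docsInDomain) {
            for (int ord : termOrdsForDoc) {
                counts[ord]++;
            }
        }
        cache.put(key, counts);
        return counts;
    }

    public static void main(String[] args) {
        FacetCountCache fc = new FacetCountCache();
        List<int[]> docs = List.of(new int[]{0, 2}, new int[]{2}, new int[]{1, 2});
        int[] first = fc.counts("*:*", "color", docs, 3);
        int[] second = fc.counts("*:*", "color", docs, 3);
        // Term ordinal 2 appears in all three docs; second call is a cache hit.
        System.out.println(first[2] + " " + (first == second));
    }
}
```

A real implementation would need invalidation on reopen (or segment-aware keys, as the description suggests), but the shape of the win is the same: the array, not the iteration, is what subsequent requests pay for.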






[jira] [Commented] (SOLR-14750) Harden TestBulkSchemaConcurrent

2020-08-13 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177114#comment-17177114
 ] 

David Smiley commented on SOLR-14750:
-

Use PRs please?

> Harden TestBulkSchemaConcurrent
> ---
>
> Key: SOLR-14750
> URL: https://issues.apache.org/jira/browse/SOLR-14750
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-14750.patch
>
>
> This test has been failing quite often lately. I poked around a bit and see 
> what I _think_ is evidence of a race condition in CoreContainer.reload where 
> a reload on the same core is happening from two places in close succession. 
> I'll attach a preliminary patch soon.
> Without this patch I had 25 failures out of 1,000 runs, with it 0.
> I consider this patch a WIP, putting up for comment. Well, it has nocommits 
> so... But In particular, I have to review some changes I made about which 
> name we're using for PendingCoreOps. I also want to back out my changes and 
> beast it again with some more logging to see if I can nail down that multiple 
> reloads are happening before declaring victory.
> What this does is put the name of the core we're reloading in pendingCoreOps 
> earlier in the reload process. Then the second call to reload will wait until 
> the first is completed. I also restructured it a bit because I don't like if 
> clauses that go on forever and a small else clause way down the code. I 
> inverted the test and bailed out of the method rather than fall off the end 
> after the else clause.
> One thing I don't like about this is two reloads in such rapid succession 
> seems wasteful. Even so, I can imagine that one reload gets through far 
> enough to load the schema, then a schema update changes the schema _then_ 
> calls reload. So I don't think just returning if there's a reload happening 
> on that core already is valid.
> More to come.
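The fix described — registering the core name in pendingCoreOps earlier so a second concurrent reload waits for the first — can be sketched with a simple wait/notify guard. This is a minimal hypothetical stand-in, not CoreContainer's actual reload code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class PendingCoreOps {
    // Core names with an in-flight operation; the second reload of the
    // same core blocks here until the first completes.
    private final ConcurrentHashMap<String, Boolean> pending = new ConcurrentHashMap<>();
    final AtomicInteger reloads = new AtomicInteger();

    void reload(String coreName) throws InterruptedException {
        synchronized (pending) {
            while (pending.containsKey(coreName)) {
                pending.wait(); // a reload of this core is already running
            }
            pending.put(coreName, Boolean.TRUE); // registered *before* the work starts
        }
        try {
            reloads.incrementAndGet(); // stand-in for the actual reload work
        } finally {
            synchronized (pending) {
                pending.remove(coreName);
                pending.notifyAll();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        PendingCoreOps ops = new PendingCoreOps();
        Runnable r = () -> {
            try { ops.reload("core1"); } catch (InterruptedException ignored) {}
        };
        Thread a = new Thread(r);
        Thread b = new Thread(r);
        a.start(); b.start(); a.join(); b.join();
        // Both reloads run (the second is not dropped), just never concurrently.
        System.out.println(ops.reloads.get());
    }
}
```

Note this matches the description's point that simply returning when a reload is in flight would be wrong: the second reload still runs, it just serializes behind the first.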






[jira] [Commented] (SOLR-14636) Provide a reference implementation for SolrCloud that is stable and fast.

2020-08-13 Thread Mark Robert Miller (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177121#comment-17177121
 ] 

Mark Robert Miller commented on SOLR-14636:
---

Okay, I’m getting close to comfortable with the tests. I’m finally back to the 
point where using more than a single JVM for tests on my 32-core Threadripper 
yields essentially no gain. 

> Provide a reference implementation for SolrCloud that is stable and fast.
> -
>
> Key: SOLR-14636
> URL: https://issues.apache.org/jira/browse/SOLR-14636
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Major
> Attachments: IMG_5575 (1).jpg, jenkins.png, solr-ref-branch.gif
>
>
> SolrCloud powers critical infrastructure and needs the ability to run quickly 
> with stability. This reference implementation will allow for this.
> *location*: [https://github.com/apache/lucene-solr/tree/reference_impl]
> *status*: alpha
> *speed*: ludicrous
> *tests***:
>  * *core*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *solrj*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *test-framework*: *extremely stable* with {color:#de350b}*ignores*{color}
>  * *contrib/analysis-extras*: *extremely stable* with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analytics*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/clustering*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/dataimporthandler*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/dataimporthandler-extras*: {color:#00875a}*extremely 
> stable*{color} with *{color:#de350b}ignores{color}*
>  * *contrib/extraction*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/jaegertracer-configurator*: {color:#00875a}*extremely 
> stable*{color} with {color:#de350b}*ignores*{color}
>  * *contrib/langid*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/prometheus-exporter*: {color:#00875a}*extremely stable*{color} 
> with {color:#de350b}*ignores*{color}
>  * *contrib/velocity*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
> _* Running tests quickly and efficiently with strict policing will more 
> frequently find bugs and requires a period of hardening._
>  _** Non Nightly currently, Nightly comes last._






[jira] [Commented] (SOLR-14749) Provide a clean API for cluster-level event processing

2020-08-13 Thread Ilan Ginzburg (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177221#comment-17177221
 ] 

Ilan Ginzburg commented on SOLR-14749:
--

If there’s an abstraction layer between Solr and Autoscaling, then fixes to 
Autoscaling can also be done independently of Solr. If Autoscaling depends on 
the internals of Solr (like ZK), then changes to Solr are more complex and/or 
break Autoscaling.
A plugin API should focus on the needs of the plugins, not on the current 
implementation of Solr. It will be easier for plugin developers to use and will 
not lock us (Solr) into the current way we do things. Of course it has to 
reflect Solr abstractions (node, collection, shard, replica), but these likely 
change more slowly than their implementations.

> Provide a clean API for cluster-level event processing
> --
>
> Key: SOLR-14749
> URL: https://issues.apache.org/jira/browse/SOLR-14749
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
>
> This is a companion issue to SOLR-14613 and it aims at providing a clean, 
> strongly typed API for the functionality formerly known as "triggers" - that 
> is, a component for generating cluster-level events corresponding to changes 
> in the cluster state, and a pluggable API for processing these events.
> The 8x triggers have been removed so this functionality is currently missing 
> in 9.0. However, this functionality is crucial for implementing the automatic 
> collection repair and re-balancing as the cluster state changes (nodes going 
> down / up, becoming overloaded / unused / decommissioned, etc).
> For this reason we need this API and a default implementation of triggers 
> that at least can perform automatic collection repair (maintaining the 
> desired replication factor in presence of live node changes).
> As before, the actual changes to the collections will be executed using 
> existing CollectionAdmin API, which in turn may use the placement plugins 
> from SOLR-14613.
> h3. Division of responsibility
>  * built-in Solr components (non-pluggable):
>  ** cluster state monitoring and event generation,
>  ** simple scheduler to periodically generate scheduled events
>  * plugins:
>  ** automatic collection repair on {{nodeLost}} events (provided by default)
>  ** re-balancing of replicas (periodic or on {{nodeAdded}} events)
>  ** reporting (eg. requesting additional node provisioning)
>  ** scheduled maintenance (eg. removing inactive shards after split)
> h3. Other considerations
> These plugins (unlike the placement plugins) need to execute on one 
> designated node in the cluster. Currently the easiest way to implement this 
> is to run them on the Overseer leader node.
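The "strongly typed API" division sketched above — built-in event generation, pluggable listeners — could take a shape like the following. Every name here is hypothetical, invented for illustration; none of this is from the actual Solr codebase:

```java
import java.time.Instant;

public class ClusterEventsSketch {
    enum EventType { NODE_LOST, NODE_ADDED, SCHEDULED }

    /** Produced by built-in, non-pluggable cluster-state monitoring. */
    interface ClusterEvent {
        EventType type();
        Instant timestamp();
    }

    /** Implemented by plugins (repair, re-balancing, reporting, ...). */
    interface ClusterEventListener {
        void onEvent(ClusterEvent event);
    }

    record NodeLostEvent(String nodeName, Instant timestamp) implements ClusterEvent {
        public EventType type() { return EventType.NODE_LOST; }
    }

    public static void main(String[] args) {
        // E.g. a default collection-repair plugin reacting to nodeLost,
        // which would then issue CollectionAdmin API calls.
        ClusterEventListener repair = e -> {
            if (e.type() == EventType.NODE_LOST) {
                System.out.println("repairing replicas from " + ((NodeLostEvent) e).nodeName());
            }
        };
        repair.onEvent(new NodeLostEvent("node1:8983_solr", Instant.now()));
    }
}
```

The strong typing is the point: plugins match on event types and payloads rather than parsing ZK state, which keeps them insulated from Solr internals as the comment above argues.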






[jira] [Commented] (SOLR-14636) Provide a reference implementation for SolrCloud that is stable and fast.

2020-08-13 Thread Mark Robert Miller (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177245#comment-17177245
 ] 

Mark Robert Miller commented on SOLR-14636:
---

It may be Monday rather than Friday. We will see how angry the wife remains 
today with my absence. It’s hard to pull back with a lot in the air. I’ve got 
to get the overseer back to proper batching though and there is still a small 
batch of other things. 

> Provide a reference implementation for SolrCloud that is stable and fast.
> -
>
> Key: SOLR-14636
> URL: https://issues.apache.org/jira/browse/SOLR-14636
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Major
> Attachments: IMG_5575 (1).jpg, jenkins.png, solr-ref-branch.gif
>
>
> SolrCloud powers critical infrastructure and needs the ability to run quickly 
> with stability. This reference implementation will allow for this.
> *location*: [https://github.com/apache/lucene-solr/tree/reference_impl]
> *status*: alpha
> *speed*: ludicrous
> *tests***:
>  * *core*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *solrj*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *test-framework*: *extremely stable* with {color:#de350b}*ignores*{color}
>  * *contrib/analysis-extras*: *extremely stable* with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analytics*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/clustering*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/dataimporthandler*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/dataimporthandler-extras*: {color:#00875a}*extremely 
> stable*{color} with *{color:#de350b}ignores{color}*
>  * *contrib/extraction*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/jaegertracer-configurator*: {color:#00875a}*extremely 
> stable*{color} with {color:#de350b}*ignores*{color}
>  * *contrib/langid*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/prometheus-exporter*: {color:#00875a}*extremely stable*{color} 
> with {color:#de350b}*ignores*{color}
>  * *contrib/velocity*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
> _* Running tests quickly and efficiently with strict policing will more 
> frequently find bugs and requires a period of hardening._
>  _** Non Nightly currently, Nightly comes last._






[GitHub] [lucene-solr] gautamworah96 commented on a change in pull request #1733: LUCENE-9450 Use BinaryDocValues in the taxonomy writer

2020-08-13 Thread GitBox


gautamworah96 commented on a change in pull request #1733:
URL: https://github.com/apache/lucene-solr/pull/1733#discussion_r470160799



##
File path: 
lucene/facet/src/java/org/apache/lucene/facet/taxonomy/directory/DirectoryTaxonomyWriter.java
##
@@ -494,6 +496,7 @@ private int addCategoryDocument(FacetLabel categoryPath, 
int parent) throws IOEx
 
 
fullPathField.setStringValue(FacetsConfig.pathToString(categoryPath.components, 
categoryPath.length));
 d.add(fullPathField);
+d.add(new BinaryDocValuesField(Consts.FULL, new 
BytesRef(FacetsConfig.pathToString(categoryPath.components, 
categoryPath.length;

Review comment:
   Yep, this could be cleaner. Extracted out in the next commit.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] dsmiley commented on pull request #1740: LUCENE-9458: WDGF and WDF should tie-break by endOffset

2020-08-13 Thread GitBox


dsmiley commented on pull request #1740:
URL: https://github.com/apache/lucene-solr/pull/1740#issuecomment-673640254


   > I wonder if this is something that we should enforce somewhere. It seems 
that you only want the original term to come first so maybe we can mark it with 
a special flag ?
   
   Even if the test is changed to not involve the original term (and thus not 
expect it in the output either), the test would reveal a problem with the 
"catenateAll" token (different from "preserveOriginal").
   
   > Why does it matter that it appears first ? Is it to be compatible with 
other filters like the synonym filter ?
   
   It "looks right" to me intuitively that it comes first: longest tokens 
first among those that start at the same position.  That's not a great reason, 
I realize.  We should tie-break on something so that it's not arbitrary -- a 
better reason.  Also, at my company I have a delegating TokenFilter to WDGF 
that presumes this is the case, and this is how I found this inconsistency.
   
   I want to try a "fuzz test" of WDGF to see if a pure start & end offset 
based ordering is sufficient, or is it truly necessary to also look at the 
position increment and position length.  My theory is that with what WDGF/WDF 
does, the offsets alone are fine to sort on because I don't think any later 
sub-token ("later" by offset) would have a token position happening earlier, 
and that likewise longer pos lengths come first.  The fuzz test would use a 
small dictionary of small sub-tokens and then it'd recombine them randomly to 
see if the graph it produces is identical to an alternate WDGF/WDF with tweaked 
token sort rules.  I don't think this'd be committable because the "alternate" 
would be temporary changes to the sorter compare implementation that looks at a 
system property.






[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1740: LUCENE-9458: WDGF and WDF should tie-break by endOffset

2020-08-13 Thread GitBox


dsmiley commented on a change in pull request #1740:
URL: https://github.com/apache/lucene-solr/pull/1740#discussion_r470163319



##
File path: 
lucene/analysis/common/src/java/org/apache/lucene/analysis/miscellaneous/WordDelimiterFilter.java
##
@@ -398,10 +399,20 @@ public void reset() throws IOException {
   private class OffsetSorter extends InPlaceMergeSorter {
 @Override
 protected int compare(int i, int j) {
+  // earlier start offset
   int cmp = Integer.compare(startOff[i], startOff[j]);
-  if (cmp == 0) {
-cmp = Integer.compare(posInc[j], posInc[i]);
+  if (cmp != 0) {
+return cmp;
   }
+  
+  // earlier position
+  cmp = Integer.compare(posInc[j], posInc[i]);

Review comment:
   @bruno-roustant pointed out to me that the existing comparison here 
looks faulty -- notice it's comparing the *later* position due to the swap of 
"j" and "i".  This is how it was before.  I think it's wrong, and it makes me 
suspect that this comparison likely yields a zero.  If true, this is more 
evidence suggesting the sort can merely look at start & end offsets.
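
   To make the offset-only ordering under discussion concrete, here is a 
self-contained sketch. Names and the `Span` record are illustrative (WDGF 
itself keeps parallel int arrays inside its `InPlaceMergeSorter`); the rule 
shown is: earlier start offset first, and on a start-offset tie the longer 
token (larger end offset) first.

```java
import java.util.Arrays;

public class OffsetOrderDemo {
    // Illustrative stand-in for one buffered sub-token.
    record Span(int start, int end, String text) {}

    // Offset-only ordering: earlier start offset first; on a tie,
    // the longer token (larger end offset) sorts first.
    static int compare(Span a, Span b) {
        int cmp = Integer.compare(a.start(), b.start());
        if (cmp != 0) {
            return cmp;
        }
        return Integer.compare(b.end(), a.end()); // note the swap: longer first
    }

    public static void main(String[] args) {
        Span[] tokens = {
            new Span(0, 2, "wi"),
            new Span(0, 5, "wi-fi"), // preserved/catenated token: same start, longer
            new Span(3, 5, "fi"),
        };
        Arrays.sort(tokens, OffsetOrderDemo::compare);
        System.out.println(tokens[0].text()); // the full-span token sorts first
    }
}
```

   With this rule the preserved/catenated token reliably sorts ahead of the 
sub-tokens that share its start offset, which is the invariant the PR relies on.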








[jira] [Commented] (SOLR-14749) Provide a clean API for cluster-level event processing

2020-08-13 Thread Ilan Ginzburg (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177290#comment-17177290
 ] 

Ilan Ginzburg commented on SOLR-14749:
--

Not all plugins need to execute on one designated node. Most of the time the 
constraint is to have a single concurrent invocation.
For collection repair, for example, we need a single concurrent invocation per 
collection, but it is likely desirable to be able to invoke repair for multiple 
collections concurrently. And these repair triggers do not have to run in the 
same JVM.

> Provide a clean API for cluster-level event processing
> --
>
> Key: SOLR-14749
> URL: https://issues.apache.org/jira/browse/SOLR-14749
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
>
> This is a companion issue to SOLR-14613 and it aims at providing a clean, 
> strongly typed API for the functionality formerly known as "triggers" - that 
> is, a component for generating cluster-level events corresponding to changes 
> in the cluster state, and a pluggable API for processing these events.
> The 8x triggers have been removed so this functionality is currently missing 
> in 9.0. However, this functionality is crucial for implementing the automatic 
> collection repair and re-balancing as the cluster state changes (nodes going 
> down / up, becoming overloaded / unused / decommissioned, etc).
> For this reason we need this API and a default implementation of triggers 
> that at least can perform automatic collection repair (maintaining the 
> desired replication factor in presence of live node changes).
> As before, the actual changes to the collections will be executed using 
> existing CollectionAdmin API, which in turn may use the placement plugins 
> from SOLR-14613.
> h3. Division of responsibility
>  * built-in Solr components (non-pluggable):
>  ** cluster state monitoring and event generation,
>  ** simple scheduler to periodically generate scheduled events
>  * plugins:
>  ** automatic collection repair on {{nodeLost}} events (provided by default)
>  ** re-balancing of replicas (periodic or on {{nodeAdded}} events)
>  ** reporting (eg. requesting additional node provisioning)
>  ** scheduled maintenance (eg. removing inactive shards after split)
> h3. Other considerations
> These plugins (unlike the placement plugins) need to execute on one 
> designated node in the cluster. Currently the easiest way to implement this 
> is to run them on the Overseer leader node.






[jira] [Created] (LUCENE-9460) getPath in DirectoryTaxonomyReader should throw an exception

2020-08-13 Thread Gautam Worah (Jira)
Gautam Worah created LUCENE-9460:


 Summary: getPath in DirectoryTaxonomyReader should throw an 
exception
 Key: LUCENE-9460
 URL: https://issues.apache.org/jira/browse/LUCENE-9460
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Affects Versions: 8.5.2
Reporter: Gautam Worah


This issue is a spillover from the [LUCENE-9450 
PR|https://github.com/apache/lucene-solr/pull/1733] and was suggested by 
[~mikemccand].

If the {{ordinal}} is out of bounds, it indicates that the user called their 
main {{IndexReader}} and the {{TaxonomyReader}} in the wrong order.

In this case, we should throw an {{IllegalArgumentException}} to warn the user 
instead of returning {{null}}.






[GitHub] [lucene-solr] gautamworah96 commented on a change in pull request #1733: LUCENE-9450 Use BinaryDocValues in the taxonomy writer

2020-08-13 Thread GitBox


gautamworah96 commented on a change in pull request #1733:
URL: https://github.com/apache/lucene-solr/pull/1733#discussion_r470204070



##
File path: 
lucene/facet/src/java/org/apache/lucene/facet/taxonomy/directory/DirectoryTaxonomyReader.java
##
@@ -322,9 +324,14 @@ public FacetLabel getPath(int ordinal) throws IOException {
 return res;
   }
 }

Review comment:
   Opened 
[LUCENE-9460](https://issues.apache.org/jira/browse/LUCENE-9460?filter=-2) for 
this
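
   A minimal, self-contained sketch of the behavior proposed in LUCENE-9460: 
reject an out-of-bounds ordinal with an `IllegalArgumentException` instead of 
returning `null`. The class and the plain list standing in for the taxonomy's 
ordinal-to-path mapping are illustrative, not the actual 
`DirectoryTaxonomyReader` code.

```java
import java.util.List;

public class OrdinalLookup {
    // Plain list standing in for the taxonomy's ordinal -> path mapping.
    private final List<String> paths = List.of("a", "a/b", "c");

    // Proposed behavior: reject out-of-bounds ordinals with an
    // IllegalArgumentException instead of silently returning null.
    public String getPath(int ordinal) {
        if (ordinal < 0 || ordinal >= paths.size()) {
            throw new IllegalArgumentException(
                "ordinal " + ordinal + " is out of bounds [0, " + paths.size() + ")");
        }
        return paths.get(ordinal);
    }

    public static void main(String[] args) {
        OrdinalLookup lookup = new OrdinalLookup();
        System.out.println(lookup.getPath(1));
        try {
            lookup.getPath(42); // e.g. readers consulted in the wrong order
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected");
        }
    }
}
```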








[jira] [Resolved] (LUCENE-9448) Make an equivalent to Ant's "run" target for Luke module

2020-08-13 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-9448.
-
Fix Version/s: master (9.0)
   Resolution: Fixed

> Make an equivalent to Ant's "run" target for Luke module
> 
>
> Key: LUCENE-9448
> URL: https://issues.apache.org/jira/browse/LUCENE-9448
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Tomoko Uchida
>Priority: Minor
> Fix For: master (9.0)
>
> Attachments: LUCENE-9448.patch, LUCENE-9448.patch
>
>
> With Ant build, Luke Swing app can be launched by "ant run" after checking 
> out the source code. "ant run" allows developers to immediately see the 
> effects of UI changes without creating the whole zip/tgz package (originally, 
> it was suggested when integrating Luke to Lucene).
> In Gradle, {{:lucene:luke:run}} task would be easily implemented with 
> {{JavaExec}}, I think.






[jira] [Resolved] (SOLR-14711) Incorrect insecure settings check in CoreContainer

2020-08-13 Thread Jason Gerlowski (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski resolved SOLR-14711.

Resolution: Duplicate

Hey Mark, closing this out as a duplicate of a separate issue I'd created for 
this.

> Incorrect insecure settings check in CoreContainer
> --
>
> Key: SOLR-14711
> URL: https://issues.apache.org/jira/browse/SOLR-14711
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Todd
>Priority: Major
>
> I've configured SolrCloud (8.5) with both SSL and Authentication which is 
> working correctly. However, I get the following warning in the logs
>  
> "Solr authentication is enabled, but SSL is off. Consider enabling SSL to 
> protect user credentials and data with encryption"
>  
> Looking at the source code for SolrCloud there appears to be a bug
> if (authenticationPlugin !=null && 
> StringUtils.isNotEmpty(System.getProperty("solr.jetty.https.port"))) {
> log.warn("Solr authentication is enabled, but SSL is off.  Consider enabling 
> SSL to protect user credentials and data with encryption.");
> }
>  
> Rather than checking for an empty system property (which would indicate SSL 
> is off), it's checking for a populated one, which is what you get when SSL is 
> on.
> This is a major issue because administrators are very concerned that Solr has 
> been deployed in an insecure fashion.
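
A minimal stand-alone sketch of the inverted condition and its fix, using a 
plain JDK null/empty test in place of Commons Lang's `StringUtils`. The method 
name is illustrative; the real check lives in `CoreContainer`. The warning 
should fire when authentication is enabled but the SSL port property is 
absent, not present:

```java
public class SslWarnCheck {
    // Plain-JDK stand-in for StringUtils.isEmpty: warn only when
    // authentication is enabled AND the SSL port property is ABSENT.
    static boolean shouldWarnInsecure(Object authenticationPlugin, String httpsPort) {
        return authenticationPlugin != null
            && (httpsPort == null || httpsPort.isEmpty());
    }

    public static void main(String[] args) {
        Object plugin = new Object(); // authentication enabled

        // SSL configured (property populated) -> no warning
        System.out.println(shouldWarnInsecure(plugin, "8983"));

        // SSL off (property absent) -> warn
        System.out.println(shouldWarnInsecure(plugin, null));
    }
}
```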






[GitHub] [lucene-solr] yuriy-b-koval commented on pull request #1713: SOLR-14703 Edismax parser replaces whitespace characters with spaces

2020-08-13 Thread GitBox


yuriy-b-koval commented on pull request #1713:
URL: https://github.com/apache/lucene-solr/pull/1713#issuecomment-673719711


   > Hey @yuriy-b-koval , it took me a while to understand it, but I spent some 
time brushing up on edismax and am confident enough to say this LGTM now.
   > 
   > That said - one thing that might still be nice here would be to have unit 
tests on ExtendedDismaxQParser.splitIntoClauses itself. That'd give us more 
targeted validation of your change here. It'd also give us a regression barrier 
in case someone breaks the logic in splitIntoClauses somewhere down the road.
   > 
   > Would you be willing to tackle those tests? (Or did you already try to 
write tests at that level and run into some roadblock or other?) They're not 
strictly necessary, but I'd like one of us to give them a shot if we can before 
merging.
   
   Adding some tests calling ExtendedDismaxQParser.splitIntoClauses directly.






[GitHub] [lucene-solr] mikemccand commented on pull request #1623: LUCENE-8962: Merge segments on getReader

2020-08-13 Thread GitBox


mikemccand commented on pull request #1623:
URL: https://github.com/apache/lucene-solr/pull/1623#issuecomment-673725282


   > I kicked off beasting of all Lucene (core + modules) tests with this 
change ... no failures yet after 31 iterations.
   
   OK, it has run 1061 iterations now of all Lucene (core + modules) tests, and 
some interesting (~15) failures:
   
   ```
   ant test  -Dtestcase=TestIndexWriter -Dtests.method=testRandomOperations 
-Dtests.seed=432EB011B0898067 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=pt-ST -Dtests.timezone=US/Mountain -Dtests.asserts=true -Dtest\
   s.file.encoding=UTF-8
   
  [junit4]   2> ago 13, 2020 3:37:55 DA TARDE 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
  [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[Thread-0,5,TGRP-TestIndexWriter]
  [junit4]   2> java.lang.AssertionError: java.lang.IllegalStateException: 
this writer hit an unrecoverable error; cannot commit
  [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([432EB011B0898067]:0)
  [junit4]   2>at 
org.apache.lucene.index.TestIndexWriter.lambda$testRandomOperations$48(TestIndexWriter.java:3886)
  [junit4]   2>at java.base/java.lang.Thread.run(Thread.java:834)
  [junit4]   2> Caused by: java.lang.IllegalStateException: this writer hit 
an unrecoverable error; cannot commit
  [junit4]   2>at 
org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4930)
  [junit4]   2>at 
org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3365)
  [junit4]   2>at 
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3664)
  [junit4]   2>at 
org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3622)
  [junit4]   2>at 
org.apache.lucene.index.TestIndexWriter.lambda$testRandomOperations$48(TestIndexWriter.java:3879)
  [junit4]   2>... 1 more
  [junit4]   2>Suppressed: 
org.apache.lucene.store.AlreadyClosedException: refusing to delete any files: 
this IndexWriter hit an unrecoverable exception
  [junit4]   2>at 
org.apache.lucene.index.IndexFileDeleter.ensureOpen(IndexFileDeleter.java:349)
  [junit4]   2>at 
org.apache.lucene.index.IndexFileDeleter.deleteFiles(IndexFileDeleter.java:669)
  [junit4]   2>at 
org.apache.lucene.index.IndexFileDeleter.decRef(IndexFileDeleter.java:589)
  [junit4]   2>at 
org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3375)
  [junit4]   2>... 4 more
  [junit4]   2>Caused by: 
org.apache.lucene.index.CorruptIndexException: Problem reading index from 
MockDirectoryWrapper(ByteBuffersDirectory@ebddcb6 
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@2bd239b2) 
(resource=MockDir\
   ectoryWrapper(ByteBuffersDirectory@ebddcb6 
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@2bd239b2))
  [junit4]   2>at 
org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:142)
  [junit4]   2>at 
org.apache.lucene.index.SegmentReader.(SegmentReader.java:83)
  [junit4]   2>at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:171)
  [junit4]   2>at 
org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:213)
  [junit4]   2>at 
org.apache.lucene.index.IndexWriter.lambda$getReader$0(IndexWriter.java:568)
  [junit4]   2>at 
org.apache.lucene.index.IndexWriter.lambda$getReader$1(IndexWriter.java:614)
  [junit4]   2>at 
org.apache.lucene.index.IndexWriter$2.onMergeComplete(IndexWriter.java:3461)
  [junit4]   2>at 
org.apache.lucene.index.IndexWriter.commitMerge(IndexWriter.java:4078)
  [junit4]   2>at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4697)
  [junit4]   2>at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4257)
  [junit4]   2>at 
org.apache.lucene.index.IndexWriter$IndexWriterMergeSource.merge(IndexWriter.java:5808)
  [junit4]   2>at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:624)
  [junit4]   2>at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:682)
  [junit4]   2>Caused by: java.io.FileNotFoundException: _1m.fnm in 
dir=ByteBuffersDirectory@ebddcb6 
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@2bd239b2
  [junit4]   2>at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:748)
  [junit4]   2>at 
org.apache.lucene.store.Directory.openChecksu

[GitHub] [lucene-solr] mikemccand commented on pull request #1743: Gradual naming convention enforcement.

2020-08-13 Thread GitBox


mikemccand commented on pull request #1743:
URL: https://github.com/apache/lucene-solr/pull/1743#issuecomment-673726633


   Another few, not sure if these also fail on mainline (though prolly we have 
seed shifting?):
   
   ```
   [junit4:pickseed] Seed property 'tests.seed' already defined: 
B87E3065EF9405AA
  [junit4]  says ᐊᐃ! Master seed: B87E3065EF9405AA
  [junit4] Executing 1 suite with 1 JVM.
  [junit4]
  [junit4] Started J0 PID(1341994@localhost).
  [junit4] Suite: org.apache.lucene.index.TestIndexWriterReader
  [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestIndexWriterReader -Dtests.method=testAddCloseOpen 
-Dtests.seed=B87E3065EF9405AA -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=sg-CF -Dtests.timezone=Africa/Asmera -Dtests.ass\
   erts=true -Dtests.file.encoding=UTF-8
  [junit4] ERROR   0.86s | TestIndexWriterReader.testAddCloseOpen <<<
  [junit4]> Throwable #1: 
org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
  [junit4]>at 
__randomizedtesting.SeedInfo.seed([B87E3065EF9405AA:1C60E9EE76043EA5]:0)
  [junit4]>at 
org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257)
  [junit4]>at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184)
  [junit4]>at 
org.apache.lucene.index.IndexWriter.lambda$getReader$2(IndexWriter.java:653)
  [junit4]>at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:105)
  [junit4]>at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:642)
  [junit4]>at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:471)
  [junit4]>at 
org.apache.lucene.index.TestIndexWriterReader.testAddCloseOpen(TestIndexWriterReader.java:81)
  [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  [junit4]>at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
  [junit4]>at java.base/java.lang.Thread.run(Thread.java:834)
  [junit4]   2> NOTE: test params are: codec=Asserting(Lucene86): 
{field1=FST50, indexname=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
field3=PostingsFormat(name=LuceneVarGapDocFreqInterva\
   l), field2=PostingsFormat(name=Asserting), field5=FST50, field4=Lucene84}, 
docValues:{}, maxPointsInLeafNode=1841, maxMBSortInHeap=6.859086864809992, 
sim=Asserting(RandomSimilarity(queryNorm=false): {field1=DFR GL1, field3=IB 
LL-LZ(0.3), field2=DFR\
I(F)L1, field5=LM Jelinek-Mercer(0.10), field4=DFR GBZ(0.3)}), 
locale=sg-CF, timezone=Africa/Asmera
  [junit4]   2> NOTE: Linux 5.5.6-arch1-1 amd64/Oracle Corporation 11.0.6 
(64-bit)/cpus=128,threads=1,free=243485936,total=536870912
  [junit4]   2> NOTE: All tests run in this JVM: [TestIndexWriterReader]
  [junit4] Completed [1/1 (1!)] in 1.06s, 1 test, 1 error <<< FAILURES!
  [junit4]
  [junit4]
  [junit4] Tests with failures [seed: B87E3065EF9405AA]:
  [junit4]   - 
org.apache.lucene.index.TestIndexWriterReader.testAddCloseOpen
  [junit4]
  [junit4]
  [junit4] JVM J0: 0.34 .. 1.95 = 1.61s
  [junit4] Execution time total: 1.95 sec.
  [junit4] Tests summary: 1 suite, 1 test, 1 error
   ```
   
   and this exciting one:
   
   ```
  [junit4] Executing 1 suite with 1 JVM.
  [junit4]
  [junit4] Started J0 PID(1359835@localhost).
  [junit4] Suite: org.apache.lucene.index.TestForTooMuchCloning
  [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestForTooMuchCloning -Dtests.method=test 
-Dtests.seed=B9D29D93C0F80019 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=fr-BJ -Dtests.timezone=Europe/London -Dtests.asserts=true -D\
   tests.file.encoding=UTF-8
  [junit4] FAILURE 0.33s | TestForTooMuchCloning.test <<<
  [junit4]> Throwable #1: java.lang.AssertionError: too many calls to 
IndexInput.clone during merging: 567
  [junit4]>at 
__randomizedtesting.SeedInfo.seed([B9D29D93C0F80019:3186A2496E046DE1]:0)
  [junit4]>at 
org.apache.lucene.index.TestForTooMuchCloning.test(TestForTooMuchCloning.java:59)
  [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  [junit4]>  

[GitHub] [lucene-solr] mikemccand commented on pull request #1743: Gradual naming convention enforcement.

2020-08-13 Thread GitBox


mikemccand commented on pull request #1743:
URL: https://github.com/apache/lucene-solr/pull/1743#issuecomment-673726880


   Ugh sorry wrong PR!






[GitHub] [lucene-solr] mikemccand commented on pull request #1623: LUCENE-8962: Merge segments on getReader

2020-08-13 Thread GitBox


mikemccand commented on pull request #1623:
URL: https://github.com/apache/lucene-solr/pull/1623#issuecomment-673727019


   Ugh, also these test failures that I put onto the wrong PR: 
https://github.com/apache/lucene-solr/pull/1743#issuecomment-673726633






[GitHub] [lucene-solr] msfroh commented on pull request #1623: LUCENE-8962: Merge segments on getReader

2020-08-13 Thread GitBox


msfroh commented on pull request #1623:
URL: https://github.com/apache/lucene-solr/pull/1623#issuecomment-673730625


   I went through the code yesterday to try to understand what's going on. 
   
   I'm not familiar with the pooled reader stuff, but it seems reasonable to 
me. I trust the tests (especially w/ added test coverage) far more than I trust 
my knowledge of that part of the code.






[GitHub] [lucene-solr] msfroh commented on a change in pull request #1623: LUCENE-8962: Merge segments on getReader

2020-08-13 Thread GitBox


msfroh commented on a change in pull request #1623:
URL: https://github.com/apache/lucene-solr/pull/1623#discussion_r470274067



##
File path: lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
##
@@ -3321,8 +3395,11 @@ private long prepareCommitInternal() throws IOException {
* below.  We also ensure that we pull the merge readers while holding 
{@code IndexWriter}'s lock.  Otherwise
* we could see concurrent deletions/updates applied that do not belong to 
the segment.

Review comment:
   This Javadoc will need to be updated to reflect the broader use of this 
method.
   
   Also, is `preparePointInTimeMerge` (without the `on`) a better name?








[jira] [Commented] (SOLR-14669) Solr 8.6 / ZK 3.6.1 / Admin UI - ZK Status Null

2020-08-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177355#comment-17177355
 ] 

Jörn Franke commented on SOLR-14669:


server.1=serverA:2888:3888:participant
server.2=serverB:2888:3888:participant
server.3=serverC:2888:3888:participant
version=0
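
The NumberFormatException in the quoted stack trace comes from handing the 
literal string "null" to Integer.parseInt. A minimal, self-contained sketch of 
a defensive parse (illustrative names, not the actual `ZookeeperStatusHandler` 
code):

```java
public class ZkStatusParse {
    // Defensive numeric parse: treat a missing value or the literal
    // string "null" as absent instead of letting parseInt throw.
    static int parseOrDefault(String value, int fallback) {
        if (value == null || "null".equals(value)) {
            return fallback;
        }
        try {
            return Integer.parseInt(value.trim());
        } catch (NumberFormatException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrDefault("2181", -1)); // normal numeric field
        System.out.println(parseOrDefault("null", -1)); // the failing input from the logs
    }
}
```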

> Solr 8.6 / ZK 3.6.1 / Admin UI - ZK Status Null
> ---
>
> Key: SOLR-14669
> URL: https://issues.apache.org/jira/browse/SOLR-14669
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, SolrCloud
>Affects Versions: 8.6
>Reporter: Jörn Franke
>Priority: Minor
>
> When opening the Admin UI / ZK Status page it shows just null. Solr 8.6.0 / 
> ZK 3.6.1. Zk is a 3 node ensemble.
> It seems to be cosmetic in the UI - otherwise Solr seems to work fine. 
> Deleted already browser cache and restarted browser. Issue persists.
> In the logfiles I find the following error:
>  2020-07-20 16:34:27.853 ERROR (qtp767511741-2227) [   ] 
[jira] [Commented] (SOLR-14669) Solr 8.6 / ZK 3.6.1 / Admin UI - ZK Status Null

2020-08-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177363#comment-17177363
 ] 

Jan Høydahl commented on SOLR-14669:


Hmm, you are also missing the client port (e.g. {{:2181}}) from the server lines, 
which makes this smell like SOLR-14671. The stacktrace is not exactly the same, 
but would you care to add the client port to all server lines and see if that 
fixes the crash for you? If so, it is a duplicate of SOLR-14671. If your stack 
trace persists, it must be a different issue.
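The quoted {{NumberFormatException: For input string: "null"}} is what {{Integer.parseInt}} throws when a server line lacks a numeric client port. A minimal sketch of a defensive port parse — this is hypothetical illustration, not Solr's actual {{ZookeeperStatusHandler}} code; the helper name and fallback behavior are assumptions:

```java
// Hypothetical sketch: parse the client port from a ZK "host:port" server
// line, falling back to a default when the port is missing or malformed.
public class ZkHostPortParse {
    static int clientPort(String serverLine, int defaultPort) {
        int idx = serverLine.lastIndexOf(':');
        if (idx < 0 || idx == serverLine.length() - 1) {
            return defaultPort; // no port given, e.g. "zk1.example.com"
        }
        try {
            return Integer.parseInt(serverLine.substring(idx + 1));
        } catch (NumberFormatException e) {
            return defaultPort; // e.g. the literal string "null"
        }
    }

    public static void main(String[] args) {
        System.out.println(clientPort("zk1.example.com:2181", 2181)); // 2181
        System.out.println(clientPort("zk1.example.com", 2181));      // falls back: 2181
        // Integer.parseInt("null") is what the stack trace shows blowing up:
        System.out.println(clientPort("zk1.example.com:null", 2181)); // falls back: 2181
    }
}
```

An unguarded {{Integer.parseInt}} on the same inputs reproduces the stack trace above.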

> Solr 8.6 / ZK 3.6.1 / Admin UI - ZK Status Null
> ---
>
> Key: SOLR-14669
> URL: https://issues.apache.org/jira/browse/SOLR-14669
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, SolrCloud
>Affects Versions: 8.6
>Reporter: Jörn Franke
>Priority: Minor
>
> When opening the Admin UI / ZK Status page, it shows just "null". Solr 8.6.0 / 
> ZK 3.6.1; ZK is a 3-node ensemble.
> It seems to be cosmetic in the UI - otherwise Solr seems to work fine. I have 
> already deleted the browser cache and restarted the browser; the issue persists.
> In the logfiles I find the following error:
>  2020-07-20 16:34:27.853 ERROR (qtp767511741-2227) [   ] 
> o.a.s.h.RequestHandlerBase java.lang.NumberFormatException: For input string: 
> "null"
>      at 
> java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>      at java.base/java.lang.Integer.parseInt(Integer.java:652)
>      at java.base/java.lang.Integer.parseInt(Integer.java:770)
>      at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.getZkStatus(ZookeeperStatusHandler.java:116)
>      at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:78)
>      at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
>      at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:839)
>      at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:805)
>      at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:558)
>      at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:419)
>      at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:351)
>      at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>      at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>      at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>      at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>      at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>      at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>      at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
>      at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>      at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)
>      at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>      at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
>      at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)
>      at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>      at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>      at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>      at org.eclipse.jetty.server.Server.handle(Server.java:505)
>      at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)
>      at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)
>      at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
>      at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>      at 
> org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:427)
>      at 
> org.eclipse.jetty.io.ssl.SslConnecti

[jira] [Commented] (SOLR-14636) Provide a reference implementation for SolrCloud that is stable and fast.

2020-08-13 Thread Mark Robert Miller (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177369#comment-17177369
 ] 

Mark Robert Miller commented on SOLR-14636:
---

Yeah, Monday will be the big day. My overseer improvements were very limited 
compared to what can be done (though it should still get some more fundamental 
changes - there is no reason for such a heavy focus on a single node to do this 
work), and I want to take the time to nail it. There is just no reason the 
overseer should get slammed and abused the way it does.

> Provide a reference implementation for SolrCloud that is stable and fast.
> -
>
> Key: SOLR-14636
> URL: https://issues.apache.org/jira/browse/SOLR-14636
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Major
> Attachments: IMG_5575 (1).jpg, jenkins.png, solr-ref-branch.gif
>
>
> SolrCloud powers critical infrastructure and needs the ability to run quickly 
> with stability. This reference implementation will allow for this.
> *location*: [https://github.com/apache/lucene-solr/tree/reference_impl]
> *status*: alpha
> *speed*: ludicrous
> *tests***:
>  * *core*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *solrj*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *test-framework*: *extremely stable* with {color:#de350b}*ignores*{color}
>  * *contrib/analysis-extras*: *extremely stable* with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analytics*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/clustering*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/dataimporthandler*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/dataimporthandler-extras*: {color:#00875a}*extremely 
> stable*{color} with *{color:#de350b}ignores{color}*
>  * *contrib/extraction*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/jaegertracer-configurator*: {color:#00875a}*extremely 
> stable*{color} with {color:#de350b}*ignores*{color}
>  * *contrib/langid*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/prometheus-exporter*: {color:#00875a}*extremely stable*{color} 
> with {color:#de350b}*ignores*{color}
>  * *contrib/velocity*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
> _* Running tests quickly and efficiently with strict policing will more 
> frequently find bugs and requires a period of hardening._
>  _** Non Nightly currently, Nightly comes last._



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy commented on pull request #1747: SOLR-14751: Zookeeper Admin screen not working for old ZK versions

2020-08-13 Thread GitBox


janhoy commented on pull request #1747:
URL: https://github.com/apache/lucene-solr/pull/1747#issuecomment-673759242


   SOLR-14751



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy merged pull request #1747: SOLR-14751: Zookeeper Admin screen not working for old ZK versions

2020-08-13 Thread GitBox


janhoy merged pull request #1747:
URL: https://github.com/apache/lucene-solr/pull/1747


   






[jira] [Commented] (SOLR-14751) Zookeeper Admin screen not working for old ZK versions

2020-08-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177379#comment-17177379
 ] 

ASF subversion and git services commented on SOLR-14751:


Commit bed3b8fbfbc5a2e35e987040ec8fbce1df0988bd in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=bed3b8f ]

SOLR-14751: Zookeeper Admin screen not working for old ZK versions



> Zookeeper Admin screen not working for old ZK versions
> --
>
> Key: SOLR-14751
> URL: https://issues.apache.org/jira/browse/SOLR-14751
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 8.6
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I tried viewing the ZK admin screen for Zookeeper 3.4.9, but it fails with 
> stack:
> {noformat}
> 2020-08-13 12:05:43.317 INFO  (qtp489411441-25) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/zookeeper/status 
> params={wt=json&_=1597320343286} status=500 QTime=2
> 2020-08-13 12:05:43.317 ERROR (qtp489411441-25) [   ] o.a.s.s.HttpSolrCall 
> null:org.apache.solr.common.SolrException: Failed to get config from zookeeper
>   at 
> org.apache.solr.common.cloud.SolrZkClient.getConfig(SolrZkClient.java:777)
>   at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:83)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
>   at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:854)
>   at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:818)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
>   at 
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>   at org.eclipse.jetty.server.Server.handle(Server.java:500)
>   at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
>   at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>   at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
>   at 
> org.eclipse.

[jira] [Commented] (SOLR-14751) Zookeeper Admin screen not working for old ZK versions

2020-08-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177381#comment-17177381
 ] 

ASF subversion and git services commented on SOLR-14751:


Commit 024f0ee9cba6f6f4a2335ea38340f0df90104955 in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=024f0ee ]

SOLR-14751: Zookeeper Admin screen not working for old ZK versions

(cherry picked from commit bed3b8fbfbc5a2e35e987040ec8fbce1df0988bd)


> Zookeeper Admin screen not working for old ZK versions
> --
>
> Key: SOLR-14751
> URL: https://issues.apache.org/jira/browse/SOLR-14751
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 8.6
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>

[jira] [Updated] (SOLR-14751) Zookeeper Admin screen not working for old ZK versions

2020-08-13 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-14751:
---
Fix Version/s: 8.7

> Zookeeper Admin screen not working for old ZK versions
> --
>
> Key: SOLR-14751
> URL: https://issues.apache.org/jira/browse/SOLR-14751
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 8.6
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.7
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>

[jira] [Resolved] (SOLR-14751) Zookeeper Admin screen not working for old ZK versions

2020-08-13 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-14751.

Resolution: Fixed

> Zookeeper Admin screen not working for old ZK versions
> --
>
> Key: SOLR-14751
> URL: https://issues.apache.org/jira/browse/SOLR-14751
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 8.6
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.7
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>

[GitHub] [lucene-solr] gerlowskija commented on pull request #1713: SOLR-14703 Edismax parser replaces whitespace characters with spaces

2020-08-13 Thread GitBox


gerlowskija commented on pull request #1713:
URL: https://github.com/apache/lucene-solr/pull/1713#issuecomment-673764351


   Thanks Yuriy, this LGTM.  Just need to run the test suite on it locally and 
I'll merge.
   






[GitHub] [lucene-solr] gerlowskija merged pull request #1741: SOLR-14677: Always close DIH EntityProcessor/DataSource

2020-08-13 Thread GitBox


gerlowskija merged pull request #1741:
URL: https://github.com/apache/lucene-solr/pull/1741


   






[jira] [Commented] (SOLR-14677) DIH doesnt close DataSource when import encounters errors

2020-08-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177412#comment-17177412
 ] 

ASF subversion and git services commented on SOLR-14677:


Commit 216aec03a6f658f64801921e7d1c8f8c7e95626b in lucene-solr's branch 
refs/heads/master from Jason Gerlowski
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=216aec0 ]

SOLR-14677: Always close DIH EntityProcessor/DataSource (#1741)

Prior to this commit, the wrapup logic at the end of
DocBuilder.execute() closed out a series of DIH objects, but did so
in a way that an exception closing any of them resulted in the remainder
staying open.  This is especially problematic since Writer.close()
throws exceptions that DIH uses to determine the success/failure of the
run.

In practice this caused network errors sending DIH data to other Solr
nodes to result in leaked JDBC connections.

This commit changes DocBuilder's termination logic to handle exceptions
more gracefully, ensuring that errors closing a DIHWriter (for example)
don't prevent the closure of entity-processor and DataSource objects.
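The pattern the commit describes can be sketched in plain Java (a hypothetical simplification, not the actual DocBuilder code): attempt every close(), keep the first failure as the primary exception, and attach later failures as suppressed exceptions so no resource is skipped.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the close-everything pattern: every resource gets a
// close() attempt, the first failure becomes the primary exception, and later
// failures are attached as suppressed exceptions.
public class CloseAll {
    public static void closeAll(List<? extends AutoCloseable> resources) throws Exception {
        Exception first = null;
        for (AutoCloseable r : resources) {
            try {
                r.close();
            } catch (Exception e) {
                if (first == null) {
                    first = e;              // remember the first failure
                } else {
                    first.addSuppressed(e); // don't lose later failures either
                }
            }
        }
        if (first != null) {
            throw first; // report failure, but only after everything was closed
        }
    }

    public static void main(String[] args) {
        List<String> closed = new ArrayList<>();
        List<AutoCloseable> resources = List.of(
            () -> { closed.add("writer"); throw new RuntimeException("network error"); },
            () -> closed.add("entityProcessor"),
            () -> closed.add("dataSource"));
        try {
            closeAll(resources);
        } catch (Exception e) {
            // All three close() calls ran even though the first one threw.
            System.out.println(closed + " / primary: " + e.getMessage());
        }
    }
}
```

With this shape, an exception from the writer no longer short-circuits the close of the entity-processor or DataSource objects; it is simply re-thrown once everything has been closed.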

> DIH doesnt close DataSource when import encounters errors
> -
>
> Key: SOLR-14677
> URL: https://issues.apache.org/jira/browse/SOLR-14677
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 7.5, master (9.0)
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Minor
> Attachments: error-solr.log, no-error-solr.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> DIH imports don't close DataSources (which can hold DB connections, etc.) in 
> all cases.  Specifically, if an import runs into an unexpected error 
> forwarding processed docs to other nodes, it will neglect to close the 
> DataSources when it finishes.
> This problem goes back to at least 7.5.  This is partially mitigated in older 
> versions of some DataSource implementations (e.g. JdbcDataSource) by means of 
> a "finalize" hook which invokes "close()" when the DataSource object is 
> garbage-collected.  In practice, this means that resources might be held open 
> longer than necessary but will be closed within a few seconds or minutes by 
> GC.  This only helps JdbcDataSource though - all other DataSource impls risk 
> leaking resources. 
> In master/9.0, which requires a minimum of Java 11 and doesn't have the 
> finalize-hook, the connections are never cleaned up when an error is 
> encountered during DIH.  DIH will likely be removed for the 9.0 release, but 
> if it isn't this bug should be fixed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (SOLR-14636) Provide a reference implementation for SolrCloud that is stable and fast.

2020-08-13 Thread Mark Robert Miller (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177445#comment-17177445
 ] 

Mark Robert Miller commented on SOLR-14636:
---

I’ll make internal ZK work again, FYI, but it’s kind of an anti-feature made for 
a getting-started demo. Using Docker is really better and easier for that, apart 
from needing Docker itself. 

> Provide a reference implementation for SolrCloud that is stable and fast.
> -
>
> Key: SOLR-14636
> URL: https://issues.apache.org/jira/browse/SOLR-14636
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Major
> Attachments: IMG_5575 (1).jpg, jenkins.png, solr-ref-branch.gif
>
>
> SolrCloud powers critical infrastructure and needs the ability to run quickly 
> with stability. This reference implementation will allow for this.
> *location*: [https://github.com/apache/lucene-solr/tree/reference_impl]
> *status*: alpha
> *speed*: ludicrous
> *tests***:
>  * *core*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *solrj*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *test-framework*: *extremely stable* with {color:#de350b}*ignores*{color}
>  * *contrib/analysis-extras*: *extremely stable* with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analytics*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/clustering*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/dataimporthandler*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/dataimporthandler-extras*: {color:#00875a}*extremely 
> stable*{color} with *{color:#de350b}ignores{color}*
>  * *contrib/extraction*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/jaegertracer-configurator*: {color:#00875a}*extremely 
> stable*{color} with {color:#de350b}*ignores*{color}
>  * *contrib/langid*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/prometheus-exporter*: {color:#00875a}*extremely stable*{color} 
> with {color:#de350b}*ignores*{color}
>  * *contrib/velocity*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
> _* Running tests quickly and efficiently with strict policing will more 
> frequently find bugs and requires a period of hardening._
>  _** Non Nightly currently, Nightly comes last._






[jira] [Commented] (SOLR-13998) Add thread safety annotation to classes

2020-08-13 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177466#comment-17177466
 ] 

David Smiley commented on SOLR-13998:
-

When filing an issue that creates two interfaces, please say their names in the 
description.  I noticed this issue originally but didn't look further at the 
time to see what the actual annotation names were; I wish I had.  I'm cool with 
"SolrThreadSafe" as a name, but not "SolrSingleThreaded".  Can we just call 
that "SolrNotThreadSafe" please?  "SingleThreaded" suggests it has some 
algorithm inside that does not use threads, or maybe creates and uses exactly 
one... when really we are trying to say that the class is not safe for 
concurrent use _calling into it_ (thus not "thread-safe"), as its javadocs say. 
Some googling shows that a "NotThreadSafe" annotation shows up in other 
projects; we can use the same well-understood name.
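Purely for illustration, markers like the ones being discussed are usually declared as documentation-only annotations. The names below follow this thread and are assumptions; the actual annotations added under SOLR-13998 may be named or retained differently.

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Documentation-only markers: SOURCE retention means they are discarded by
// the compiler and have zero runtime footprint.
@Documented @Retention(RetentionPolicy.SOURCE) @Target(ElementType.TYPE)
@interface SolrThreadSafe {}

@Documented @Retention(RetentionPolicy.SOURCE) @Target(ElementType.TYPE)
@interface SolrNotThreadSafe {}

// Marking a class states its concurrency contract without changing behavior.
@SolrNotThreadSafe
class SomeHandler {
    private int counter; // unsynchronized state: unsafe for concurrent callers
    int next() { return ++counter; }
}

public class AnnotationDemo {
    public static void main(String[] args) {
        // SOURCE retention: the marker is invisible to reflection at runtime.
        System.out.println(SomeHandler.class.getAnnotations().length); // prints 0
        System.out.println(new SomeHandler().next()); // prints 1
    }
}
```

Because the annotations carry no runtime behavior, the only value they add is exactly the documentation purpose questioned in the related PR discussion.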

BTW, what Mark Miller created many months ago was not public; only his 
colleagues, including Anshum, were privy to it.

> Add thread safety annotation to classes
> ---
>
> Key: SOLR-13998
> URL: https://issues.apache.org/jira/browse/SOLR-13998
> Project: Solr
>  Issue Type: Improvement
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: master (9.0), 8.4
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Add annotations that can be used to mark classes as thread safe / single 
> threaded in Solr.






[jira] [Commented] (SOLR-14636) Provide a reference implementation for SolrCloud that is stable and fast.

2020-08-13 Thread Mark Robert Miller (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177472#comment-17177472
 ] 

Mark Robert Miller commented on SOLR-14636:
---

Well, I can’t believe it: I’m fixing the damn distributed queue stuff. This is 
what convinced me Curator is 1000x better than rolling our own. I’ve heard 
funny comments about that, but clearly from people who don’t understand what’s 
going on with our stuff. But whatever, this time I’ll fix it. Curator beats 
the pants off our attempts at speaking ZooKeeper. 

> Provide a reference implementation for SolrCloud that is stable and fast.
> -
>
> Key: SOLR-14636
> URL: https://issues.apache.org/jira/browse/SOLR-14636
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Major
> Attachments: IMG_5575 (1).jpg, jenkins.png, solr-ref-branch.gif
>
>
> SolrCloud powers critical infrastructure and needs the ability to run quickly 
> with stability. This reference implementation will allow for this.
> *location*: [https://github.com/apache/lucene-solr/tree/reference_impl]
> *status*: alpha
> *speed*: ludicrous
> *tests***:
>  * *core*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *solrj*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *test-framework*: *extremely stable* with {color:#de350b}*ignores*{color}
>  * *contrib/analysis-extras*: *extremely stable* with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analytics*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/clustering*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/dataimporthandler*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/dataimporthandler-extras*: {color:#00875a}*extremely 
> stable*{color} with *{color:#de350b}ignores{color}*
>  * *contrib/extraction*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/jaegertracer-configurator*: {color:#00875a}*extremely 
> stable*{color} with {color:#de350b}*ignores*{color}
>  * *contrib/langid*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/prometheus-exporter*: {color:#00875a}*extremely stable*{color} 
> with {color:#de350b}*ignores*{color}
>  * *contrib/velocity*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
> _* Running tests quickly and efficiently with strict policing will more 
> frequently find bugs and requires a period of hardening._
>  _** Non Nightly currently, Nightly comes last._






[jira] [Commented] (SOLR-14596) Add equals()/hashCode() impls to SolrJ Request objects

2020-08-13 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177474#comment-17177474
 ] 

David Smiley commented on SOLR-14596:
-

At least for CollectionAdminRequest, but maybe as a base for all SolrRequest, 
can't you just call getParams and do equality on that?  I'd rather not have us 
maintain a custom equals/hashCode for _every_ subtype when, much of the time, I 
think there's a generic path.
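The "equality on getParams" idea could look something like this. The classes below are a heavily simplified, hypothetical stand-in for SolrRequest and its subtypes, not the real SolrJ API:

```java
import java.util.Map;
import java.util.Objects;

// Hypothetical simplified model of a SolrRequest: equality is delegated to
// the request path plus the rendered parameter map, so subtypes don't each
// need a hand-written equals()/hashCode().
public class ParamsEquality {
    abstract static class Request {
        abstract String getPath();
        abstract Map<String, String> getParams();

        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Request)) return false;
            Request other = (Request) o;
            // Same endpoint + same rendered params => same request.
            return getPath().equals(other.getPath())
                && getParams().equals(other.getParams());
        }

        @Override public int hashCode() {
            return Objects.hash(getPath(), getParams());
        }
    }

    static class CreateCollection extends Request {
        final String name;
        final int numShards;
        CreateCollection(String name, int numShards) {
            this.name = name;
            this.numShards = numShards;
        }
        @Override String getPath() { return "/admin/collections"; }
        @Override Map<String, String> getParams() {
            return Map.of("action", "CREATE",
                          "name", name,
                          "numShards", Integer.toString(numShards));
        }
    }

    public static void main(String[] args) {
        System.out.println(new CreateCollection("c1", 2)
            .equals(new CreateCollection("c1", 2))); // prints true
        System.out.println(new CreateCollection("c1", 2)
            .equals(new CreateCollection("c1", 3))); // prints false
    }
}
```

One equals()/hashCode() in the base class then covers any subtype whose state is fully reflected in its params, which is the generic path suggested above.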

> Add equals()/hashCode() impls to SolrJ Request objects
> --
>
> Key: SOLR-14596
> URL: https://issues.apache.org/jira/browse/SOLR-14596
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: master (9.0), 8.5.2
>Reporter: Jason Gerlowski
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, many SolrRequest classes (e.g. UpdateRequest) don't implement 
> {{equals()}} or {{hashCode()}}
> This isn't a problem for Solr's normal operation, but it can be a barrier in 
> unit testing SolrJ client code.  {{equals()}} implementations would make it 
> much easier to assert that client code is building the request that 
> developers _think_ it's building.  Of course, this testing benefit would 
> apply to Solr's own tests which use SolrJ.
> This ticket covers making sure that the more popular SolrRequest objects have 
> equals/hashcode implementations useful for testers.






[jira] [Updated] (LUCENE-9427) Unified highlighter can fail to highlight fuzzy query

2020-08-13 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-9427:
-
Component/s: modules/highlighter

> Unified highlighter can fail to highlight fuzzy query
> -
>
> Key: LUCENE-9427
> URL: https://issues.apache.org/jira/browse/LUCENE-9427
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Reporter: Julie Tibshirani
>Priority: Major
> Fix For: 8.7
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If a fuzzy query corresponds to an exact match (for example when it has 
> maxEdits: 0), then the unified highlighter doesn't produce highlights for the 
> matching terms.
> I think this is due to the fact that when visiting a fuzzy query, the exact 
> terms are now consumed separately from automata. The unified highlighter 
> doesn't account for the terms and misses highlighting them.






[jira] [Commented] (SOLR-14713) Single thread on streaming updates

2020-08-13 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177483#comment-17177483
 ] 

David Smiley commented on SOLR-14713:
-

Would a better async API be another path that still sends docs in parallel but 
more under the covers (via Jetty/HTTP2) and thus with far fewer threads?  Maybe 
a rewrite will lead to less complexity, but probably not as little complexity 
as you will get with single-threaded code.  Maybe I'm totally off; I'm sure 
you have a better understanding of all this than I do.
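The "parallel under the covers with fewer threads" idea can be sketched with CompletableFuture. This is a toy stand-in: sendAsync() here just simulates a non-blocking send, whereas real Solr code would use an async HTTP/2 client rather than supplyAsync on the common pool.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

// Toy sketch: one caller thread starts a non-blocking send per replica and
// blocks only once, at the join point, instead of owning a runner thread and
// queue per replica.
public class AsyncFanOut {
    // Stand-in for a non-blocking HTTP/2 send to one replica.
    static CompletableFuture<String> sendAsync(String replica, String doc) {
        return CompletableFuture.supplyAsync(() -> replica + " acked " + doc);
    }

    public static void main(String[] args) {
        List<String> replicas = List.of("replica1", "replica2", "replica3");
        // Fan out without dedicating a runner thread per replica.
        List<CompletableFuture<String>> inFlight = replicas.stream()
            .map(r -> sendAsync(r, "doc42"))
            .collect(Collectors.toList());
        // Single join point for the whole update request.
        CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
        inFlight.forEach(f -> System.out.println(f.join()));
    }
}
```

The sends still overlap in time, but the per-replica queue/runner machinery disappears; whether that is simpler than fully single-threaded code is exactly the trade-off being debated here.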

> Single thread on streaming updates
> --
>
> Key: SOLR-14713
> URL: https://issues.apache.org/jira/browse/SOLR-14713
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Or great simplify SolrCmdDistributor
> h2. Current way for fan out updates of Solr
> Currently, on receiving an updateRequest, Solr will create new 
> UpdateProcessors for handling that request; it then parses documents from the 
> request one by one and lets the processors handle them.
> {code:java}
> onReceiving(UpdateRequest update):
>   processors = createNewProcessors();
>   for (Document doc : update) {
>     processors.handle(doc)
>   }
> {code}
> Let’s say the number of replicas in the current shard is N; the 
> updateProcessor will create N-1 queues and runners, one for each other replica.
>  Runner is basically a thread that dequeues updates from its corresponding 
> queue and sends it to a corresponding replica endpoint.
> Note 1: all Runners share the same client hence connection pool and same 
> thread pool. 
>  Note 2: A runner will send all documents of its UpdateRequest in a single 
> HTTP POST request (to reduce the number of threads for handling requests on 
> the other side). Therefore its lifetime equals the total time of handling its 
> UpdateRequest. Below is a typical activity that happens in a runner's life 
> cycle.
> h2. Problems of current approach
> The current approach have two problems:
>  - Problem 1: It uses lots of threads for fan out requests.
>  - Problem 2, which is more important: it is very complex. Solr is also using 
> ConcurrentUpdateSolrClient (CUSC for short) for that. The CUSC implementation 
> allows using a single queue but multiple runners for the same queue (although 
> we only use one runner at most); this raises the complexity of the whole flow 
> up to the top. A single fix for one problem can raise multiple problems later, 
> e.g.: in SOLR-13975, on trying to handle the problem of the other endpoint 
> hanging for so long, we introduced a bug that lets the runner keep running 
> even when the updateRequest is fully handled in the leader.
> h2. Doing everything in single thread
> Since we are already supporting sending requests in an async manner, why 
> don’t we let the main thread which is handling the update request to send 
> updates to all others without the need of runners or queues. The code will be 
> something like this
> {code:java}
> Class UpdateProcessor:
>   Map pendingOutStreams
>
>   func handleAddDoc(doc):
>     for (replica : replicas):
>       pendingOutStreams.get(replica).send(doc)
>
>   func onEndUpdateRequest():
>     pendingOutStreams.values().forEach(out -> closeAndHandleResponse(out))
> {code}
>  
> By doing this we will use fewer threads and the code will be much simpler and 
> cleaner. Of course, there will be some slowdown in the time for handling an 
> updateRequest, since we are doing it serially instead of concurrently. 
> Formally:
> {code:java}
>  oldTime = timeForIndexing(update) + timeForSendingUpdates(update)
>  newTime = timeForIndexing(update) + (N-1) * 
> timeForSendingUpdates(update){code}
> But I believe that timeForIndexing is much larger than timeForSendingUpdates, 
> so we do not really need to be concerned about this. Even if that really is a 
> problem, users can simply create more threads for indexing.






[jira] [Commented] (SOLR-14639) Improve concurrency of SlowCompositeReaderWrapper.terms

2020-08-13 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17177511#comment-17177511
 ] 

David Smiley commented on SOLR-14639:
-

This is a depressing discovery.  I really appreciate the thoroughness of your 
analysis -- I learned some things!  Let's use Caffeine.
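The contention being analyzed comes from the fact that, at least in older JDKs, ConcurrentHashMap.computeIfAbsent could block concurrent callers on a hot bin even when the value was already present. A common mitigation, sketched below, is an optimistic lock-free get() before falling back to computeIfAbsent. This is a hypothetical sketch of the pattern, not the actual SlowCompositeReaderWrapper code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the get-then-compute mitigation: fully-cached hot keys are served
// by a lock-free get() and never touch the (potentially locking) slow path.
public class CachedTerms {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    String terms(String field) {
        String t = cache.get(field); // lock-free fast path for cache hits
        if (t != null) {
            return t;
        }
        // Miss: compute at most once per key, even under concurrency.
        return cache.computeIfAbsent(field, this::expensiveMerge);
    }

    private String expensiveMerge(String field) {
        return "merged-terms(" + field + ")"; // stand-in for merging leaf Terms
    }

    public static void main(String[] args) {
        CachedTerms c = new CachedTerms();
        System.out.println(c.terms("title")); // computed on first access
        System.out.println(c.terms("title")); // served by the get() fast path
    }
}
```

A Caffeine cache gives a lock-free read path in the same spirit, plus bounded size and eviction policies that a bare ConcurrentHashMap lacks, which is presumably why it is the preferred fix here.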

> Improve concurrency of SlowCompositeReaderWrapper.terms
> ---
>
> Key: SOLR-14639
> URL: https://issues.apache.org/jira/browse/SOLR-14639
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 8.4.1
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Attachments: Screen Shot 2020-07-09 at 4.38.03 PM.png
>
>
> Under heavy query load, the ConcurrentHashMap.computeIfAbsent method inside 
> the SlowCompositeReaderWrapper.terms(String) method blocks searcher threads 
> (see attached screenshot of a java flight recording).


