[jira] [Created] (SOLR-15019) Replica placement API needs a way to fetch existing replica metrics

2020-11-26 Thread Andrzej Bialecki (Jira)
Andrzej Bialecki created SOLR-15019:
---

 Summary: Replica placement API needs a way to fetch existing 
replica metrics
 Key: SOLR-15019
 URL: https://issues.apache.org/jira/browse/SOLR-15019
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Andrzej Bialecki


The replica placement API was introduced in SOLR-14613. It offers a few sample 
(and simple) implementations of placement plugins.

However, the API doesn't support retrieving per-replica metrics, which are 
required for calculating more realistic placements. For example, when 
calculating placements for ADDREPLICA on an existing collection, the plugin 
should know the size of each replica in order to avoid placing large replicas 
on nodes with insufficient free disk space.

After discussing this with [~ilan] we propose the following additions to the 
API:

* use the existing {{AttributeFetcher}} interface as a facade for retrieving 
per-replica values (currently it only retrieves per-node values)
* add a {{ShardValues}} interface that provides a strongly-typed API for key 
metrics, such as replica size, number of documents, and number of update and 
search requests.

Plugins could then use this API like this:
{code}
AttributeFetcher attributeFetcher = ...
SolrCollection solrCollection = ...
Set metricNames = ...
attributeFetcher.requestCollectionMetrics(solrCollection, solrCollection.getShardNames(), metricNames);

AttributeValues attributeValues = attributeFetcher.fetchAttributes();
ShardValues shardValues = attributeValues.getShardMetrics(solrCollection.getName(), shardName);
int sizeInGB = shardValues.getSizeInGB(); // retrieves shard leader metrics
int replicaSizeInGB = shardValues.getSizeInGB(replica);
{code}
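
For illustration only, {{ShardValues}} could look roughly like this (apart from 
{{getSizeInGB}}, which appears in the example above, the accessor names are 
placeholders rather than a settled API):
{code}
public interface ShardValues {
  // shard leader metrics
  int getSizeInGB();            // index size of the shard leader, in GB
  int getNumDocs();             // document count of the shard leader
  int getUpdateRequestCount();  // recent update request count (illustrative)
  int getSearchRequestCount();  // recent search request count (illustrative)

  // per-replica variants
  int getSizeInGB(Replica replica);
  int getNumDocs(Replica replica);
}
{code}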






[GitHub] [lucene-solr] iverase commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian

2020-11-26 Thread GitBox


iverase commented on pull request #2094:
URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-734208944


   I am working on the assumption that you don't know the endianness of a file 
when you open it. Therefore I don't see how your approach can work unless the 
byte order property in the DataInput / DataOutput is mutable?
   






[GitHub] [lucene-solr] jpountz commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian

2020-11-26 Thread GitBox


jpountz commented on pull request #2094:
URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-734218972


   SegmentInfos certainly cannot know the endianness of the file up-front. But 
for other file formats, we could know this on a per-file-format basis? E.g. 
`Lucene86PointsFormat` always uses big endian but `Lucene90PointsFormat` will 
always use little endian?






[GitHub] [lucene-solr] dweiss commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian

2020-11-26 Thread GitBox


dweiss commented on pull request #2094:
URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-734221687


   As is frequently the case, I don't have all the answers until I take a look 
at this... My feeling is that this "endianness" should apply at the lowest 
common denominator where complex types are read (readInt, readLong, etc.). If 
this is DataInput/DataOutput then indeed they need this information when 
they're created (and it propagates up to where they're created). This is in a 
way similar to how ByteBuffers are designed... only I'd be tempted to make this 
information constant for a DataInput/DataOutput (so that source.order(newOrder) 
would create a delegate, not change the source's order).
   
   To determine the byte order of a file you need either implicit knowledge (a 
known codec version) or explicit knowledge (probing for some magic sequence). 
Whether this is always possible in Lucene - I can't tell without actually 
trying.
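
   A rough sketch of that "constant order, delegate on change" idea (the names 
here are illustrative, not an actual Lucene API):
   ```java
   import java.io.IOException;
   import java.nio.ByteOrder;

   // Sketch: the byte order is fixed when the input is created. Asking for a
   // different order returns a delegating view instead of mutating this instance
   // (unlike java.nio.ByteBuffer, whose order(...) changes the buffer in place).
   interface OrderedDataInput {
     ByteOrder order();

     long readLong() throws IOException;

     // Returns this when the order already matches, otherwise a byte-swapping delegate.
     default OrderedDataInput order(ByteOrder newOrder) {
       if (newOrder == order()) {
         return this;
       }
       OrderedDataInput source = this;
       return new OrderedDataInput() {
         @Override public ByteOrder order() { return newOrder; }
         @Override public long readLong() throws IOException {
           return Long.reverseBytes(source.readLong());
         }
       };
     }
   }
   ```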






[GitHub] [lucene-solr] dweiss commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian

2020-11-26 Thread GitBox


dweiss commented on pull request #2094:
URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-734223008


   Yes, exactly. And in those cases when you can't determine endianness from 
implicit information, some kind of probing will be required to determine it. 
Maybe you're right, @iverase, that this forces the byte order property to be 
mutable - I don't know yet. I'd rather make it constant, as it allows for 
simpler code.
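
   For the explicit-knowledge case, the probing could look something like this 
generic sketch (plain java.io.DataInput and an arbitrary magic value - not 
Lucene's CodecUtil machinery):
   ```java
   import java.io.DataInput;
   import java.io.IOException;
   import java.nio.ByteOrder;

   final class OrderProbe {
     // Reads a known 32-bit magic value and checks whether it matches as-is
     // (big endian, which DataInput always uses) or byte-swapped (little endian).
     static ByteOrder probe(DataInput in, int expectedMagic) throws IOException {
       int raw = in.readInt();
       if (raw == expectedMagic) {
         return ByteOrder.BIG_ENDIAN;
       }
       if (Integer.reverseBytes(raw) == expectedMagic) {
         return ByteOrder.LITTLE_ENDIAN;
       }
       throw new IOException("Unrecognized magic: 0x" + Integer.toHexString(raw));
     }
   }
   ```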






[jira] [Issue Comment Deleted] (SOLR-7229) Allow DIH to handle attachments as separate documents

2020-11-26 Thread Nazerke Seidan (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nazerke Seidan updated SOLR-7229:
-
Comment: was deleted

(was: Hi Tim,

I was wondering whether this project is still open or not? I would like to 
participate in GSoC'19 by contributing to solr community. )

> Allow DIH to handle attachments as separate documents
> -
>
> Key: SOLR-7229
> URL: https://issues.apache.org/jira/browse/SOLR-7229
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>  Labels: gsoc2017
>
> With Tika 1.7's RecursiveParserWrapper, it is possible to maintain metadata 
> of individual attachments/embedded documents.  Tika's default handling was to 
> maintain the metadata of the container document and concatenate the contents 
> of all embedded files.  With SOLR-7189, we added the legacy behavior.
> It might be handy, for example, to be able to send an MSG file through DIH 
> and treat the container email as well as each attachment as separate (child?) 
> documents, or send a zip of jpeg files and correctly index the geo locations 
> for each image file.






[GitHub] [lucene-solr] dweiss commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian

2020-11-26 Thread GitBox


dweiss commented on pull request #2094:
URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-734255849


   I took a look out of curiosity. Here are my conclusions:
   
   - making DataInput/DataOutput know about the byte order indeed introduces a 
lot of byteOrder-juggling. I still think this is better than explicit 
byte-reversal wrappers because even if you need to store and pass byteOrder, 
you don't necessarily care about its value - it is just accepted and passed to 
where it's consumed. It does add a lot of boilerplate, agreed.
   
   - while looking at all this, I think the big design flaw is that streams 
opened by Directory instances are readily suitable for reading/writing complex 
types... Many places that consume IndexInput/IndexOutput (or 
DataInput/DataOutput) are not using byte-order-sensitive methods at all. These 
places only read bytes, they wouldn't need to know about the byte order (making 
things much simpler). Whenever there is a need for readInt/readLong, etc., 
DataInput/DataOutput should be able to provide an efficient implementation for 
the requested byte order; something like:
   ```
   ComplexTypeDataOutput getWriter(ByteOrder bo);
   ComplexTypeDataInput getReader(ByteOrder bo);
   ```
   It doesn't have to be a wrapper object (though initially it can be - a 
common one for all DataInput/DataOutput implementations!) - performance can be 
kept high in a specific implementation by the source object itself implementing 
the byte-order-sensitive interface in a specific way...
   
   What I see as a big upside is that any such refactoring should work for all 
the codecs - previous and current - by just passing the byte order to where 
it's relevant. You could even switch the byte order arbitrarily and (with the 
exception of backward-compatibility checks) all tests should still pass... 
Eventually, the byte order flag could be removed entirely as assertions are 
introduced in the places where the byte order is fixed. The rest of the code 
would happily just use the complex-type read/write methods, oblivious of the 
byte order in the underlying stream.
   
   The disadvantage, of course, is that the very core Lucene API would have to 
change (DataInput, DataOutput, IndexInput, IndexOutput)...
   
   Do you think this makes sense? Or is Ignacio's patch a better way forward?
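
   To make the shape concrete, a consumer of this hypothetical API might look 
roughly like the following (ComplexTypeDataInput/getReader are the names from 
the snippet above; everything else is illustrative):
   ```java
   import java.io.IOException;
   import java.nio.ByteOrder;

   // Only the byte-order-sensitive reads live behind this interface.
   interface ComplexTypeDataInput {
     int readInt() throws IOException;
     long readLong() throws IOException;
   }

   // The plain stream stays order-agnostic: it hands out raw bytes, plus a typed
   // reader in whatever byte order the file format dictates.
   interface HypotheticalDataInput {
     void readBytes(byte[] dst, int offset, int length) throws IOException;
     ComplexTypeDataInput getReader(ByteOrder order);
   }

   final class ExampleHeaderReader {
     // A codec that needs ints/longs asks for a reader in the format's byte
     // order; code that only copies bytes never touches byte order at all.
     static long readHeader(HypotheticalDataInput in) throws IOException {
       ComplexTypeDataInput typed = in.getReader(ByteOrder.LITTLE_ENDIAN);
       int version = typed.readInt();
       long headerChecksum = typed.readLong();
       return ((long) version << 32) ^ headerChecksum;
     }
   }
   ```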






[GitHub] [lucene-solr] rmuir commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian

2020-11-26 Thread GitBox


rmuir commented on pull request #2094:
URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-734258556


   > These places only read bytes, they wouldn't need to know about the byte 
order (making things much simpler).
   
   And slower in the case of mmap, which is our default implementation! How 
would readLong() work through and through with the mmap impl? Today it is one 
bounds check.
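
   For reference, this is roughly the gap being described, in plain java.nio 
terms (not Lucene code): a ByteBuffer-backed read is a single bounds-checked 
call, while a generic byte-at-a-time wrapper does eight reads plus shifts:
   ```java
   import java.nio.ByteBuffer;

   final class ReadLongComparison {
     // What an mmap/ByteBuffer-backed input can do today: one call, one bounds
     // check, with the byte order taken from the buffer itself.
     static long direct(ByteBuffer bb, int pos) {
       return bb.getLong(pos);
     }

     // What a generic byte-wise wrapper ends up doing instead (big-endian here).
     static long byteWise(ByteBuffer bb, int pos) {
       long value = 0;
       for (int i = 0; i < 8; i++) {
         value = (value << 8) | (bb.get(pos + i) & 0xFFL);
       }
       return value;
     }
   }
   ```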






[GitHub] [lucene-solr] rmuir commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian

2020-11-26 Thread GitBox


rmuir commented on pull request #2094:
URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-734260852


   I think there is plenty of opportunity to introduce performance regressions 
with this change; we should have benchmarks showing the improvement before we 
commit any such drastic changes to the low-level interfaces?
   
   AFAIK, we already hacked away at the postings-list decode to do what it 
needs in LE order for some vectorization reasons (LUCENE-9027).
   
   Otherwise, while it's a nice goal, I am highly skeptical that the bswap is 
really slowing things down. In practice the patch looks very complex, with tons 
of swapping changes outside of the backwards codecs (?), and even some 
proposals here that would definitely slow things down.
   
   So I'd like to see benchmark results before anything is committed.
   






[GitHub] [lucene-solr] dweiss commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian

2020-11-26 Thread GitBox


dweiss commented on pull request #2094:
URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-734261088


   Perhaps I wasn't too clear, but I think it'd be the same - for example, 
ByteBufferIndexInput would just implement ComplexTypeDataInput itself, assuming 
a certain byte order (LE or BE) by default; a call to getReader with a matching 
byte order would return itself and it'd work identically to before. If you 
requested the reverse byte order it'd have to either wrap (slow) or return an 
optimized version (which in this case is simple too - clone the underlying 
buffer, switch byte order, return).
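
   In java.nio terms, the "clone the underlying buffer, switch byte order" path 
is cheap because it only duplicates the buffer view, not the bytes - roughly 
(illustrative, not the actual ByteBufferIndexInput code):
   ```java
   import java.nio.ByteBuffer;
   import java.nio.ByteOrder;

   final class ByteOrderViews {
     // Returns the same buffer when the order already matches; otherwise a
     // shallow duplicate that shares the bytes but reads in the requested order.
     static ByteBuffer withOrder(ByteBuffer source, ByteOrder requested) {
       if (source.order() == requested) {
         return source;
       }
       return source.duplicate().order(requested);
     }
   }
   ```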






[GitHub] [lucene-solr] dweiss commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian

2020-11-26 Thread GitBox


dweiss commented on pull request #2094:
URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-734261626


   It's very much what the RandomAccessInput interface is at the moment, btw.






[jira] [Resolved] (SOLR-14575) Solr restore is failing when basic authentication is enabled

2020-11-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-14575.

Resolution: Cannot Reproduce

Closing, as the reporter states that the issue was solved by upgrading to 
another Java version.

> Solr restore is failing when basic authentication is enabled
> 
>
> Key: SOLR-14575
> URL: https://issues.apache.org/jira/browse/SOLR-14575
> Project: Solr
>  Issue Type: Bug
>  Components: Backup/Restore
>Affects Versions: 8.2
>Reporter: Yaswanth
>Priority: Major
>
> Hi Team,
> I was testing backup/restore for SolrCloud and it's failing exactly when I am 
> trying to restore a successfully backed-up collection.
> I am using Solr 8.2 with basic authentication enabled and then creating a 
> 2-replica collection. When I run the backup like
> curl -u xxx:xxx -k 'https://x.x.x.x:8080/solr/admin/collections?action=BACKUP&name=test&collection=test&location=/solrdatabkup'
> it worked fine and I do see a folder was created with the collection name 
> under /solrdatabackup.
> But now when I delete the existing collection and then try running the 
> restore API like
> curl -u xxx:xxx -k 'https://x.x.x.x:8080/solr/admin/collections?action=RESTORE&name=test&collection=test&location=/solrdatabkup'
> it's throwing an error like
> {
>  "responseHeader":{
>  "status":500,
>  "QTime":457},
>  "Operation restore caused exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: ADDREPLICA failed to create replica",
>  "exception":{
>  "msg":"ADDREPLICA failed to create replica",
>  "rspCode":500},
>  "error":{
>  "metadata":[
>  "error-class","org.apache.solr.common.SolrException",
>  "root-error-class","org.apache.solr.common.SolrException"],
>  "msg":"ADDREPLICA failed to create replica",
>  "trace":"org.apache.solr.common.SolrException: ADDREPLICA failed to create 
> replica\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:280)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:252)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:820)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:786)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:546)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:423)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:350)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(Ha

[GitHub] [lucene-solr] janhoy opened a new pull request #2098: SOLR-14851 Http2SolrClient doesn't handle keystore type

2020-11-26 Thread GitBox


janhoy opened a new pull request #2098:
URL: https://github.com/apache/lucene-solr/pull/2098


   Based on a patch by Andras Salamon in 
https://issues.apache.org/jira/browse/SOLR-14851
   
   Signed-off-by: Jan Høydahl 
   
   
   # Description
   
   Make the truststore and keystore types configurable in Http2SolrClient.
   
   # Tests
   
   Extended the existing test case
   






[jira] [Commented] (SOLR-14851) Http2SolrClient doesn't handle keystore type correctly

2020-11-26 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17239272#comment-17239272
 ] 

Jan Høydahl commented on SOLR-14851:


I added a unit test, ref-guide updates, a CHANGES.txt entry, and a PR. Please 
review; targeting 8.8.

> Http2SolrClient doesn't handle keystore type correctly
> --
>
> Key: SOLR-14851
> URL: https://issues.apache.org/jira/browse/SOLR-14851
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Reporter: Andras Salamon
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-14851-01.patch, SOLR-14851-02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I wanted to use Solr SSL using bcfks keystore type. Even after specifying the 
> following JVM properties, Solr was not able to start: 
> {{-Djavax.net.ssl.keyStoreType=bcfks -Djavax.net.ssl.trustStoreType=bcfks 
> -Dsolr.jetty.truststore.type=bcfks -Dsolr.jetty.keystore.type=bcfks}}
> The error message in the log:
> {noformat}2020-09-07 14:42:29.429 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: Error instantiating 
> shardHandlerFactory class [HttpShardHandlerFactory]: java.io.IOException: 
> Invalid keystore format
> at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:660)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:262)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:182)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:134)
> at 
> org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:751)
> at 
> java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
> at 
> java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
> at 
> java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:360)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1445)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1409)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:822)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:275)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
> at 
> org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:46)
> at 
> org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
> at 
> org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:513)
> at 
> org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:154)
> at 
> org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:173)
> at 
> org.eclipse.jetty.deploy.providers.WebAppProvider.fileAdded(WebAppProvider.java:447)
> at 
> org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:66)
> at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:784)
> at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:753)
> at org.eclipse.jetty.util.Scanner.scan(Scanner.java:641)
> at org.eclipse.jetty.util.Scanner.doStart(Scanner.java:540)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
> at 
> org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:146)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
> at 
> org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:599)
> at 
> org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:249)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> at org.eclipse.jetty.server.Server.start(Server.java:407)
> at 
> org.eclipse.jetty.util

[GitHub] [lucene-solr] rmuir commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian

2020-11-26 Thread GitBox


rmuir commented on pull request #2094:
URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-734302940


   > If you requested the reverse byte order it'd have to either wrap (slow) or 
return an optimized version (which in this case is simple too - clone the 
underlying buffer, switch byte order, return).
   
   I'm just suspicious that we are only making things slower here by 
explicitly swapping anything with Java code. CPUs since Haswell have 
instructions like `movbe`; why should we support "slow" wrapping at all? I 
don't think slow should be possible here.






[GitHub] [lucene-solr] dweiss commented on pull request #2094: LUCENE-9047: Move the Directory APIs to be little endian

2020-11-26 Thread GitBox


dweiss commented on pull request #2094:
URL: https://github.com/apache/lucene-solr/pull/2094#issuecomment-734320016


   The "slow" wrapper would be needed for the many IndexInput, IndexOutput 
implementations that don't have optimized versions and are part of the code at 
the moment. By "slow" I just mean plain Java code, not hardware-accelerated. 
For ByteBufferIndexInput (and perhaps a few other implementations) you could 
set the byte order on the underlying byte buffers and then delegate directly 
which, with some hope, would eventually get you to optimized native code. I'll 
tinker with this a bit, perhaps I can show a PoC that would be simpler to 
understand.






[GitHub] [lucene-solr] noblepaul opened a new pull request #2099: SOLR-14977: improved plugin configuration

2020-11-26 Thread GitBox


noblepaul opened a new pull request #2099:
URL: https://github.com/apache/lucene-solr/pull/2099


   






[jira] [Commented] (SOLR-14851) Http2SolrClient doesn't handle keystore type correctly

2020-11-26 Thread Andras Salamon (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17239522#comment-17239522
 ] 

Andras Salamon commented on SOLR-14851:
---

Thanks for the improvements [~janhoy], the change looks good to me.

> Http2SolrClient doesn't handle keystore type correctly
> --
>
> Key: SOLR-14851
> URL: https://issues.apache.org/jira/browse/SOLR-14851
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Reporter: Andras Salamon
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-14851-01.patch, SOLR-14851-02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>