org.apache.solr.common.SolrException: this IndexWriter is closed

2021-03-05 Thread 李世明
Hello:

Has anyone encountered the following exception? It prevents the index from
being written to, although queries still work.
Version: 8.7.0

org.apache.solr.common.SolrException: this IndexWriter is closed
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:234)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:2627)
        at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:795)
        at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:568)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
        at org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
        at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
        at org.eclipse.jetty.server.Server.handle(Server.java:500)
        at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
        at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
        at org.eclipse.jetty.server.HttpChannel.run(HttpChannel.java:335)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:135)
        at org.eclipse.jetty.http2.HTTP2Connection.produce(HTTP2Connection.java:170)
        at org.eclipse.jetty.http2.HTTP2Connection.onFillable(HTTP2Connection.java:125)
        at org.eclipse.jetty.http2.HTTP2Connection$FillableCallback.succeeded(HTTP2Connection.java:348)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
        at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
        at java.base/java.lang.Thread.run(Unknown Source)
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
        at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:877)
        at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:891)
        at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:2080)

Re: org.apache.solr.common.SolrException: this IndexWriter is closed

2021-03-05 Thread Dominique Bejean
Hi,
Are you using RAMDirectoryFactory without enough RAM?
Regards,
Dominique

On Fri, Mar 5, 2021 at 16:18, 李世明 wrote:

> Hello:
>
> Has anyone encountered the following exception? It prevents the index
> from being written to, although queries still work.
> Version: 8.7.0
>
> org.apache.solr.common.SolrException: this IndexWriter is closed
> [remainder of the quoted stack trace elided; identical to the trace in
> the original message above]
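
For context: the directory implementation Dominique is asking about is
selected in solrconfig.xml. A sketch of the relevant stanza as it appears
in a stock 8.x config (check your own config to confirm):

    <!-- Default: the index lives on disk, with a small NRT cache on top. -->
    <directoryFactory name="DirectoryFactory"
                      class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>

If this is set to solr.RAMDirectoryFactory instead, the entire index is held
on the JVM heap; running short of heap can hit the IndexWriter with a fatal
error that closes it, after which every update fails with the
AlreadyClosedException above until the core is reloaded.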

Re: Caffeine Cache Metrics Broken?

2021-03-05 Thread Stephen Lewis Bianamara
Thanks, Shawn. Something does seem different between the two, because
CaffeineCache is showing much higher volume per hour than our previous
implementation did. So it is more likely that this is actually expected,
due to a change in what is getting kept/warmed. I'll look into this more
and get back to you if that doesn't end up making sense based on what I
observe.

Thanks again,
Stephen

On Tue, Mar 2, 2021 at 6:35 PM Shawn Heisey  wrote:

> On 3/2/2021 3:47 PM, Stephen Lewis Bianamara wrote:
> > I'm investigating a weird behavior I've observed in the admin page for
> > caffeine cache metrics. It looks to me like on the older caches, warm-up
> > queries were not counted toward hit/miss ratios, which of course makes
> > sense, but on Caffeine cache it looks like they are. I'm using solr 8.3.
> >
> > Obviously this makes measuring its true impact a little tough. Is this by
> > any chance a known issue and already fixed in later versions?
>
> The earlier cache implementations are entirely native to Solr -- all the
> source code is included in the Solr codebase.
>
> Caffeine is a third-party cache implementation that has been integrated
> into Solr.  Some of the metrics might come directly from Caffeine, not
> Solr code.
>
> I would expect warming queries to be counted on any of the cache
> implementations.  One of the reasons that the warming capability exists
> is to pre-populate the caches before actual queries begin.  If warming
> queries are somehow excluded, then the cache metrics would not be correct.
>
> I looked into the code and did not find anything that would keep warming
> queries from affecting stats.  But it is always possible that I just
> didn't know what to look for.
>
> In the master branch (Solr 9.0), CaffeineCache is currently the only
> implementation available.
>
> Thanks,
> Shawn
>
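
For reference, warming behavior is controlled per cache in solrconfig.xml
via autowarmCount; a sketch of a CaffeineCache stanza (sizes illustrative,
not taken from Stephen's actual config):

    <!-- On commit, up to 128 of the most recently used keys from the old
         searcher's cache are replayed against the new searcher. -->
    <filterCache class="solr.CaffeineCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="128"/>

Whether those replayed lookups are counted in the hit/miss stats is exactly
the question at hand; comparing cumulative counts right after a commit with
autowarmCount="0" versus a non-zero value should make the difference visible.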


Re: What controls field cache size and eviction rates?

2021-03-05 Thread Stephen Lewis Bianamara
Hi SOLR Community,

Just following up here with an update. I found the article below, which goes
into depth on the field cache but stops short of discussing how it handles
eviction. Can anyone confirm whether this info is right?

https://lucidworks.com/post/scaling-lucene-and-solr/


Also, can anyone speak to how the field cache handles evictions?

Best,
Stephen

On Wed, Feb 24, 2021 at 4:43 PM Stephen Lewis Bianamara <
stephen.bianam...@gmail.com> wrote:

> Hi SOLR Community,
>
> I've been trying to understand how the field cache in SOLR manages its
> evictions; neither the code nor the documentation gives an easy answer to
> the simple question of when and how something gets evicted from the field
> cache. This cache also doesn't show hit ratio, total hits, eviction ratio,
> total evictions, etc. in the web UI.
>
> For example, I've observed that if I write one document and trigger a
> query with a sort on a field, it generates two entries in the field
> cache. If I then repush the document, those entries are removed, but
> otherwise they seem to stay there forever. If my query matches 2 docs, the
> same thing happens but with 4 entries (2 each). Then, if I rewrite one of
> the docs, its two entries go away but not the two from the first one. This
> implies that this cache has consequences for write throughput, so the fact
> that it is not configurable by the user and doesn't have very clear
> documentation is a bit worrisome.
>
> Can someone here help out and explain how the field cache handles
> evictions, or perhaps send me the documentation if I missed it?
>
>
> Thanks!
> Stephen
>
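
One note that may help while waiting for an answer: Lucene's field cache is
populated lazily, per index segment, the first time something sorts or
facets on a non-docValues field, and entries are dropped when their
segment's reader is closed (for example, after the segment is rewritten or
merged away). That is at least consistent with the behavior Stephen
describes. Sorting on a docValues field avoids the uninverted field cache
entirely; a sketch of a schema.xml field (field and type names
illustrative):

    <!-- docValues are written column-wise at index time, so sorting and
         faceting read them from disk instead of un-inverting the field
         into heap-resident field-cache entries. -->
    <field name="price_sort" type="plong" indexed="false" stored="false"
           docValues="true"/>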


Re: What controls field cache size and eviction rates?

2021-03-05 Thread Stephen Lewis Bianamara
That should say: can anyone confirm whether it's *still* right, since the
article is 10 years old? :)

On Fri, Mar 5, 2021 at 10:36 AM Stephen Lewis Bianamara <
stephen.bianam...@gmail.com> wrote:

> [quoted text elided; identical to the two preceding messages in this
> thread]


Investigating Seeming Deadlock

2021-03-05 Thread Stephen Lewis Bianamara
Hi SOLR Community,

I'm investigating a node on Solr 8.3.1 running in cloud mode which appears
to have deadlocked. I'm trying to figure out whether this is a known issue,
and I'm looking for guidance on (a) whether it is already resolved in a
later release or needs a bug filed, and (b) how to lower the risk of
recurrence until it is fixed.

Here is what I've observed:

   - strace shows the main process waiting. A spot check on child processes
   shows the same, though I have not yet deep-dived all of the threads
   (there are over 100).
   - the server was not busy or doing anything, apart from the JVM sitting
   at constant memory usage. No resource (memory, swap, CPU, etc.) was
   limited or showing active usage.
   - jcmd Thread.print shows some interesting info which suggests a
   deadlock or another type of locking issue:
      - For example, this entry suggests something unusual, because it
      looks like the thread is trying to lock a null object:

        "Finalizer" #3 daemon prio=8 os_prio=0 cpu=11.11ms elapsed=11.11s
        tid=0x0100 nid=0x in Object.wait() [0x1000]
          java.lang.Thread.State: WAITING (on object monitor)
            at java.lang.Object.wait(java.base@11.0.7/Native Method)
            - waiting on 
            at java.lang.ref.ReferenceQueue.remove(java.base@11.0.7/ReferenceQueue.java:155)
            - waiting to re-lock in wait() <0x00020020> (a java.lang.ref.ReferenceQueue$Lock)
            at java.lang.ref.ReferenceQueue.remove(java.base@11.0.7/ReferenceQueue.java:176)
            at java.lang.ref.Finalizer$FinalizerThread.run(java.base@11.0.7/Finalizer.java:170)

      - I also see a lot of this. Some addresses occur multiple times, but
      one in particular occurs 31 times. Maybe related?

        "h2sc-1-thread-11" #110 prio=5 os_prio=0 cpu=54.29ms elapsed=11.11s
        tid=0x10010100 nid=0x waiting on condition [0x10011000]
          java.lang.Thread.State: WAITING (parking)
            at jdk.internal.misc.Unsafe.park(java.base@11.0.7/Native Method)
            - parking to wait for  <0x00030033>
Can anyone help answer whether this is known or what I could look at next?

Thanks!
Stephen


Re: Investigating Seeming Deadlock

2021-03-05 Thread Mike Drob
Were you having any OOM errors beforehand? If so, that could have caused
some GC of objects that other threads still expect to be reachable, leading
to these null monitors.

On Fri, Mar 5, 2021 at 12:55 PM Stephen Lewis Bianamara <
stephen.bianam...@gmail.com> wrote:

> [quoted text elided; identical to the original message above]
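
A small sketch that can complement jcmd Thread.print here: the JVM can
report true lock cycles programmatically via ThreadMXBean (the standard
java.lang.management API). Note that it only finds genuine cycles; threads
that are merely parked forever, like the h2sc ones above, will not show up.
The helper class below is hypothetical, not part of Solr:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Run inside (or attached to) the suspect JVM to list
    // monitor/ownable-synchronizer deadlock cycles.
    public class DeadlockProbe {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            long[] ids = mx.findDeadlockedThreads(); // null when no cycle exists
            if (ids == null) {
                System.out.println("No deadlock cycle; threads may still be stuck waiting.");
                return;
            }
            for (ThreadInfo info : mx.getThreadInfo(ids, Integer.MAX_VALUE)) {
                System.out.printf("%s is blocked on %s held by %s%n",
                        info.getThreadName(), info.getLockName(),
                        info.getLockOwnerName());
            }
        }
    }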


RE: Programmatic Basic Auth on CloudSolrClient

2021-03-05 Thread Subhajit Das

Hi Tomás,

I tried your suggestions. The last one (directly passing the HttpClient)
results in a NonRepeatableRequestException, and using the full chain of
steps also didn't recognize the auth.

Anything I should look for?

Thanks,
Subhajit
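
For what it's worth, a plausible cause of the NonRepeatableRequestException:
with challenge-based basic auth, HttpClient must resend the request body
after the server's 401, which fails for streamed (non-repeatable) update
bodies. Sending the Authorization header preemptively on every request
avoids the retry altogether. A sketch using plain Apache HttpClient 4.x,
untested against Subhajit's setup:

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    import org.apache.http.HttpRequestInterceptor;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClientBuilder;

    public class PreemptiveAuth {
        // Builds an HttpClient that attaches Basic credentials up front,
        // so no request body ever needs to be replayed after a 401.
        public static CloseableHttpClient create(String user, String pass) {
            String token = Base64.getEncoder().encodeToString(
                    (user + ":" + pass).getBytes(StandardCharsets.UTF_8));
            HttpRequestInterceptor auth =
                    (request, context) -> request.setHeader(
                            "Authorization", "Basic " + token);
            return HttpClientBuilder.create()
                    .addInterceptorFirst(auth)
                    .build();
        }
    }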

From: Tomás Fernández Löbbe
Sent: 05 March 2021 04:23 AM
To: solr-user@lucene.apache.org
Subject: Re: Programmatic Basic Auth on CloudSolrClient

Ah, right, now I remember that something like this was possible with the
"http1" version of the clients, which is why I created the Jira issues for
the http2 ones. Maybe you can even skip the "LBHttpSolrClient" step; I
believe you can pass the HttpClient directly to the CloudSolrClient. You
will have to make sure to close any clients that are created externally
once you are done, since the Solr client won't close them in that case.

On Thu, Mar 4, 2021 at 1:22 PM Mark H. Wood  wrote:

> On Wed, Mar 03, 2021 at 10:34:50AM -0800, Tomás Fernández Löbbe wrote:
> > As far as I know, the current OOTB options are system properties or
> > per-request credentials (which would let you use different credentials
> > per collection, but is probably not ideal if you make different types of
> > requests from different parts of your code). A workaround (which I've
> > used in the past) is to have a custom client that overrides the
> > "request" method and sets the credentials there (you can put whatever
> > logic there to identify which credentials to use). I recently created
> > https://issues.apache.org/jira/browse/SOLR-15154 and
> > https://issues.apache.org/jira/browse/SOLR-15155 to try to address this
> > issue in future releases.
>
> I have not tried it, but could you not:
>
> 1. set up an HttpClient with an appropriate CredentialsProvider;
> 2. pass it to HttpSolrClient.Builder.withHttpClient();
> 3. pass that Builder to
> LBHttpSolrClient.Builder.withHttpSolrClientBuilder();
> 4. pass *that* Builder to
> CloudSolrClient.Builder.withLBHttpSolrClientBuilder();
>
> Now you have control of the CredentialsProvider and can have it return
> whatever credentials you wish, so long as you still have a reference
> to it.
>
> > On Wed, Mar 3, 2021 at 5:42 AM Subhajit Das  wrote:
> >
> > >
> > > Hi There,
> > >
> > > Is there any way to programmatically set basic authentication
> > > credentials on CloudSolrClient?
> > >
> > > The only documented option is to use system properties. This is not
> > > useful if two collections require two separate sets of credentials and
> > > they are accessed in parallel.
> > > Thanks in advance.
> > >
>
> --
> Mark H. Wood
> Lead Technology Analyst
>
> University Library
> Indiana University - Purdue University Indianapolis
> 755 W. Michigan Street
> Indianapolis, IN 46202
> 317-274-0749
> www.ulib.iupui.edu
>
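
Put together, a rough sketch of the full chain Mark describes above, using
the SolrJ 8.x "http1" clients (untested; some of these builder hops are
deprecated in newer releases in favor of the http2 clients):

    import java.util.Collections;
    import java.util.Optional;

    import org.apache.http.auth.AuthScope;
    import org.apache.http.auth.UsernamePasswordCredentials;
    import org.apache.http.client.CredentialsProvider;
    import org.apache.http.impl.client.BasicCredentialsProvider;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClientBuilder;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.impl.LBHttpSolrClient;

    public class BasicAuthCloudClientFactory {
        // Step 1: an HttpClient that knows the credentials.
        public static CloseableHttpClient authHttpClient(String user, String pass) {
            CredentialsProvider provider = new BasicCredentialsProvider();
            provider.setCredentials(AuthScope.ANY,
                    new UsernamePasswordCredentials(user, pass));
            return HttpClientBuilder.create()
                    .setDefaultCredentialsProvider(provider)
                    .build();
        }

        // Steps 2-4: chain the builders. The caller owns (and must close)
        // both the returned client and the HttpClient, since SolrJ does
        // not close an externally supplied HttpClient.
        public static CloudSolrClient cloudClient(CloseableHttpClient http,
                                                  String zkHost) {
            HttpSolrClient.Builder httpBuilder =
                    new HttpSolrClient.Builder().withHttpClient(http);
            LBHttpSolrClient.Builder lbBuilder =
                    new LBHttpSolrClient.Builder()
                            .withHttpSolrClientBuilder(httpBuilder);
            return new CloudSolrClient.Builder(
                            Collections.singletonList(zkHost), Optional.empty())
                    .withLBHttpSolrClientBuilder(lbBuilder)
                    .build();
        }
    }

For one-off calls there is also the per-request route Tomás mentions:
SolrRequest#setBasicAuthCredentials(user, pass) on the individual request
object, which needs no custom HttpClient at all.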