Re: Problem with Solr 7.7.2 after OOM

2020-03-06 Thread Bunde Torsten
I set the heap to 8g but this doesn't have any effect and the problem is still 
the same.

 ~# ps -eaf | grep solr
 solr   3176      1  0 08:50 ?        00:00:00 /lib/systemd/systemd --user
 solr   3177   3176  0 08:50 ?        00:00:00 (sd-pam)
 solr   3238      1  0 08:50 ?        00:00:06 java -server -Xms8g -Xmx8g 
-XX:NewRatio=3 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 
-XX:MaxTenuringThreshold=8 -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
-XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
-XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
-XX:+ParallelRefProcEnabled -verbose:gc 
-Xlog:gc*:file=/var/solr/logs/solr_gc.log:time,uptime:filecount=9,filesize=20M 
-Dsolr.log.dir=/var/solr/logs -Djetty.port=8983 -DSTOP.PORT=7983 
-DSTOP.KEY=solrrocks -Duser.timezone=UTC -Djetty.home=/opt/solr/server 
-Dsolr.solr.home=/var/solr/data -Dsolr.data.home=/var/solr/data 
-Dsolr.install.dir=/opt/solr 
-Dsolr.default.confdir=/opt/solr/server/solr/configsets/_default/conf 
-Dlog4j.configurationFile=file:/var/solr/log4j.properties -Xss256k 
-Dsolr.disable.configEdit=true -Xss256k -Dsolr.jetty.https.port=8983 
-Dsolr.log.muteconsole -XX:OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh 8983 
/var/solr/logs -jar start.jar --module=http

 ~# free
               total        used        free      shared  buff/cache   available
 Mem:       16426388      627936    15034128         796      764324    15488956
 Swap:        969960           0      969960
 ~#

Torsten

-Original Message-
From: Jörn Franke 
Sent: Thursday, March 5, 2020 17:31
To: solr-user@lucene.apache.org
Subject: Re: Problem with Solr 7.7.2 after OOM

Just keep in mind that the total memory should be much more than the heap so 
Solr can leverage the OS file cache. With an 8 GB heap, it makes sense to have 
at least 16 GB of total memory available on the machine.

> Am 05.03.2020 um 16:58 schrieb Walter Underwood 
> mailto:wun...@wunderwood.org>>:
>
> 
>>
>> On Mar 5, 2020, at 4:29 AM, Bunde Torsten 
>> mailto:t.bu...@htp.net>> wrote:
>>
>> -Xms512m -Xmx512m
>
> Your heap is too small. Set this to -Xms8g -Xmx8g
>
> In solr.in.sh, that looks like this:
>
> SOLR_HEAP=8g
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>



Re: Solr 8.2.0 - Schema issue

2020-03-06 Thread Joe Obernberger

Hi All - any ideas on this?  Anything I can try?

Thank you!

-Joe

On 2/26/2020 9:01 AM, Joe Obernberger wrote:
Hi All - I have several solr collections all with the same schema.  If 
I add a field to the schema and index it into the collection on which 
I added the field, it works fine.  However, if I try to add a document 
to a different solr collection that contains the new field (and is 
using the same schema), I get an error that the field doesn't exist.


If I restart the cluster, this problem goes away and I can add a 
document with the new field to any solr collection that has the 
schema.  Any work-arounds that don't involve a restart?


Thank you!

-Joe Obernberger



OutOfMemory error solr 8.4.1

2020-03-06 Thread Srinivas Kashyap
Hi All,

I have recently upgraded Solr to 8.4.1 and installed it as a service on a Linux 
machine. Once I start the service, it stays up for 15-18 hours and then stops 
without us shutting it down. In solr.log I found the error below. Can somebody 
tell me which values I should increase on the Linux machine?

Earlier, open file limit was not set and now I have increased. Below are my 
system configuration for solr:

JVM memory: 8GB
RAM: 32GB
Open file descriptor count: 50

Ulimit -v - unlimited
Ulimit -m - unlimited
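For "unable to create new native thread", the limit that usually matters is the per-user process cap (ulimit -u) rather than -v or -m. A quick way to check it, along with the live thread count of the Solr JVM (a sketch; the pgrep pattern is an assumption, adjust it to your start command):

```shell
# Limits that govern native thread creation for the user running Solr.
# Threads count against "max user processes", so a low nproc limit can
# trigger "unable to create new native thread" even with plenty of RAM.
ulimit -u                                      # max user processes
cat /proc/sys/kernel/threads-max 2>/dev/null   # system-wide thread cap

# Live thread count of the Solr JVM, if one is running locally:
SOLR_PID=$(pgrep -f 'start.jar' | head -n1)
if [ -n "$SOLR_PID" ]; then
  ps -o nlwp= -p "$SOLR_PID"                   # nlwp = number of threads
fi
```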


ERROR STACK TRACE:

2020-03-06 12:08:03.071 ERROR (qtp1691185247-21) [   x:product] 
o.a.s.s.HttpSolrCall null:java.lang.RuntimeException: 
java.lang.OutOfMemoryError: unable to create new native thread
at org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:752)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:603)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:419)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:351)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:505)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:781)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at 
org.apache.solr.handler.dataimport.DataImporter.runAsync(DataImporter.java:466)
at 
org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:205)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2596)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:799)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:578)
... 36 more




java.lang.OutOfMemoryError: unable to create new native thread
at java.lan

Re: Problem with Solr 7.7.2 after OOM

2020-03-06 Thread Bunde Torsten
As an addendum: to me it looks as if the cores are simply not loaded, although 
the configuration is correct and has not been changed (apart from the heap 
increase).

Torsten

-Original Message-
From: Bunde Torsten 
Sent: Friday, March 6, 2020 09:33
To: solr-user@lucene.apache.org
Subject: Re: Problem with Solr 7.7.2 after OOM

I set the heap to 8g but this doesn't have any effect and the problem is still 
the same.

 ~# ps -eaf | grep solr
 solr   3176      1  0 08:50 ?        00:00:00 /lib/systemd/systemd --user
 solr   3177   3176  0 08:50 ?        00:00:00 (sd-pam)
 solr   3238      1  0 08:50 ?        00:00:06 java -server -Xms8g -Xmx8g 
-XX:NewRatio=3 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 
-XX:MaxTenuringThreshold=8 -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
-XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
-XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
-XX:+ParallelRefProcEnabled -verbose:gc 
-Xlog:gc*:file=/var/solr/logs/solr_gc.log:time,uptime:filecount=9,filesize=20M 
-Dsolr.log.dir=/var/solr/logs -Djetty.port=8983 -DSTOP.PORT=7983 
-DSTOP.KEY=solrrocks -Duser.timezone=UTC -Djetty.home=/opt/solr/server 
-Dsolr.solr.home=/var/solr/data -Dsolr.data.home=/var/solr/data 
-Dsolr.install.dir=/opt/solr 
-Dsolr.default.confdir=/opt/solr/server/solr/configsets/_default/conf 
-Dlog4j.configurationFile=file:/var/solr/log4j.properties -Xss256k 
-Dsolr.disable.configEdit=true -Xss256k -Dsolr.jetty.https.port=8983 
-Dsolr.log.muteconsole -XX:OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh 8983 
/var/solr/logs -jar start.jar --module=http

 ~# free
               total        used        free      shared  buff/cache   available
 Mem:       16426388      627936    15034128         796      764324    15488956
 Swap:        969960           0      969960
 ~#

Torsten

-Original Message-
From: Jörn Franke 
Sent: Thursday, March 5, 2020 17:31
To: solr-user@lucene.apache.org
Subject: Re: Problem with Solr 7.7.2 after OOM

Just keep in mind that the total memory should be much more than the heap so 
Solr can leverage the OS file cache. With an 8 GB heap, it makes sense to have 
at least 16 GB of total memory available on the machine.

> Am 05.03.2020 um 16:58 schrieb Walter Underwood 
> mailto:wun...@wunderwood.org>>:
>
> 
>>
>> On Mar 5, 2020, at 4:29 AM, Bunde Torsten 
>> mailto:t.bu...@htp.net>> wrote:
>>
>> -Xms512m -Xmx512m
>
> Your heap is too small. Set this to -Xms8g -Xmx8g
>
> In solr.in.sh, that looks like this:
>
> SOLR_HEAP=8g
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>



Re: Problem with Solr 7.7.2 after OOM

2020-03-06 Thread Erick Erickson
Is it still giving you OOMs? That was the original problem statement. If not, 
then you need to look at your Solr logs to see what error is reported. NOTE: If 
you’re still getting OOMs, then there won’t be anything obvious in the logs. 

Best,
Erick

> On Mar 6, 2020, at 06:44, Bunde Torsten  wrote:
> 
> As an addendum: For me it looks as if the cores are simply not loaded, 
> although the configuration is correct and has not been changed (apart from 
> the enlargement of the heap).
> 
> Torsten
> 
> -Original Message-
> From: Bunde Torsten 
> Sent: Friday, March 6, 2020 09:33
> To: solr-user@lucene.apache.org
> Subject: Re: Problem with Solr 7.7.2 after OOM
> 
> I set the heap to 8g but this doesn't have any effect and the problem is 
> still the same.
> 
> ~# ps -eaf | grep solr
> solr  3176 1  0 08:50 ?00:00:00 /lib/systemd/systemd 
> --user
> solr  3177  3176  0 08:50 ?00:00:00 (sd-pam)
> solr  3238 1  0 08:50 ?00:00:06 java -server -Xms8g 
> -Xmx8g -XX:NewRatio=3 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 
> -XX:MaxTenuringThreshold=8 -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc 
> -Xlog:gc*:file=/var/solr/logs/solr_gc.log:time,uptime:filecount=9,filesize=20M
>  -Dsolr.log.dir=/var/solr/logs -Djetty.port=8983 -DSTOP.PORT=7983 
> -DSTOP.KEY=solrrocks -Duser.timezone=UTC -Djetty.home=/opt/solr/server 
> -Dsolr.solr.home=/var/solr/data -Dsolr.data.home=/var/solr/data 
> -Dsolr.install.dir=/opt/solr 
> -Dsolr.default.confdir=/opt/solr/server/solr/configsets/_default/conf 
> -Dlog4j.configurationFile=file:/var/solr/log4j.properties -Xss256k 
> -Dsolr.disable.configEdit=true -Xss256k -Dsolr.jetty.https.port=8983 
> -Dsolr.log.muteconsole -XX:OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh 8983 
> /var/solr/logs -jar start.jar --module=http
> 
> ~# free
>                total        used        free      shared  buff/cache   available
> Mem:       16426388      627936    15034128         796      764324    15488956
> Swap:        969960           0      969960
> ~#
> 
> Torsten
> 
> -Original Message-
> From: Jörn Franke 
> Sent: Thursday, March 5, 2020 17:31
> To: solr-user@lucene.apache.org
> Subject: Re: Problem with Solr 7.7.2 after OOM
> 
> Just keep in mind that the total memory should be much more than the heap to 
> leverage Solr file caches. If you have 8 GB heap probably at least 16 gb 
> total memory make sense to be available on the machine .
> 
>> Am 05.03.2020 um 16:58 schrieb Walter Underwood 
>> mailto:wun...@wunderwood.org>>:
>> 
>> 
>>> 
 On Mar 5, 2020, at 4:29 AM, Bunde Torsten 
 mailto:t.bu...@htp.net>> wrote:
>>> 
>>> -Xms512m -Xmx512m
>> 
>> Your heap is too small. Set this to -Xms8g -Xmx8g
>> 
>> In solr.in.sh, that looks like this:
>> 
>> SOLR_HEAP=8g
>> 
>> wunder
>> Walter Underwood
>> wun...@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>> 
> 


Re: OutOfMemory error solr 8.4.1

2020-03-06 Thread Erick Erickson
This one can be a bit tricky. You’re not running out of overall memory, but you 
are running out of memory to allocate stacks. Which implies that, for some 
reason, you are creating a zillion threads. Do you have any custom code?

You can take a thread dump and see what your threads are doing, and you don’t 
need to wait until you see the error. If you take a thread dump my guess is 
you’ll see the number of threads increase over time. If that’s the case, and if 
you have no custom code running, we need to see the thread dump.

Best,
Erick
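For reference, one way to take and skim a thread dump (a sketch; the pgrep pattern and file paths are assumptions, adjust them to your install):

```shell
# Grab a thread dump of the Solr JVM and summarize it. Re-run a few
# minutes apart: a steadily climbing thread count suggests a leak.
SOLR_PID=$(pgrep -f 'start.jar' | head -n1)
DUMP=/tmp/solr-threads-$(date +%s).txt

if [ -n "$SOLR_PID" ]; then
  if command -v jstack >/dev/null; then
    jstack "$SOLR_PID" > "$DUMP"        # full per-thread stacks
  else
    kill -3 "$SOLR_PID"                 # dump lands in the JVM console log
  fi
  ps -o nlwp= -p "$SOLR_PID"            # quick total thread count
fi

# Group dumped threads by name prefix to see which pool is growing;
# thread-dump header lines look like: "qtp1691185247-21" #35 ...
if [ -f "$DUMP" ]; then
  grep '^"' "$DUMP" | sed 's/-[0-9]*".*//' | sort | uniq -c | sort -rn | head
fi
```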

> On Mar 6, 2020, at 05:54, Srinivas Kashyap  
> wrote:
> 
> Hi All,
> 
> I have recently upgraded solr to 8.4.1 and have installed solr as service in 
> linux machine. Once I start my service, it will be up for 15-18hours and 
> suddenly stops without us shutting down. In solr.log I found below error. Can 
> somebody guide me what values should I be increasing in Linux machine?
> 
> Earlier, open file limit was not set and now I have increased. Below are my 
> system configuration for solr:
> 
> JVM memory: 8GB
> RAM: 32GB
> Open file descriptor count: 50
> 
> Ulimit -v - unlimited
> Ulimit -m - unlimited
> 
> 
> ERROR STACK TRACE:
> 
> 2020-03-06 12:08:03.071 ERROR (qtp1691185247-21) [   x:product] 
> o.a.s.s.HttpSolrCall null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: unable to create new native thread
>at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:752)
>at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:603)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:419)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:351)
>at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
>at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>at org.eclipse.jetty.server.Server.handle(Server.java:505)
>at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)
>at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)
>at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
>at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
>at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
>at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
>at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
>at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
>at 
> org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
>at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:781)
>at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917)
>at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.OutOfMemoryError: unable to

Re: Solr 8.2.0 - Schema issue

2020-03-06 Thread Erick Erickson
Didn’t we talk about reloading the collections that share the schema after the 
schema change via the collections API RELOAD command?

Best,
Erick
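For anyone following along, the RELOAD call is just an HTTP request to the Collections API; the host and collection names below are placeholders for your cluster:

```shell
# Reload every collection that shares the changed configset so the new
# field becomes visible without restarting the cluster.
SOLR_HOST=http://localhost:8983
for COLL in collection1 collection2; do
  URL="$SOLR_HOST/solr/admin/collections?action=RELOAD&name=$COLL"
  echo "RELOAD -> $COLL"
  curl -s -m 5 "$URL" || true    # || true: tolerate Solr not running locally
done
```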

> On Mar 6, 2020, at 05:34, Joe Obernberger  
> wrote:
> 
> Hi All - any ideas on this?  Anything I can try?
> 
> Thank you!
> 
> -Joe
> 
>> On 2/26/2020 9:01 AM, Joe Obernberger wrote:
>> Hi All - I have several solr collections all with the same schema.  If I add 
>> a field to the schema and index it into the collection on which I added the 
>> field, it works fine.  However, if I try to add a document to a different 
>> solr collection that contains the new field (and is using the same schema), 
>> I get an error that the field doesn't exist.
>> 
>> If I restart the cluster, this problem goes away and I can add a document 
>> with the new field to any solr collection that has the schema.  Any 
>> work-arounds that don't involve a restart?
>> 
>> Thank you!
>> 
>> -Joe Obernberger
>> 


Re: OutOfMemory error solr 8.4.1

2020-03-06 Thread Srinivas Kashyap
Hi Erick,

We have custom code: schedulers that run delta imports on our cores. I added 
that code as a jar and placed it in server/solr-webapp/WEB-INF/lib. Basically 
we fetch the JNDI datasource configured in jetty.xml (Oracle) and create a 
connection object, and afterwards we close it in the finally block.

We never faced this issue on Solr 5.2.1, though. The same jar was placed there 
too.

Thanks,
Srinivas

On 06-Mar-2020 8:55 pm, Erick Erickson  wrote:
This one can be a bit tricky. You’re not running out of overall memory, but you 
are running out of memory to allocate stacks. Which implies that, for some 
reason, you are creating a zillion threads. Do you have any custom code?

You can take a thread dump and see what your threads are doing, and you don’t 
need to wait until you see the error. If you take a thread dump my guess is 
you’ll see the number of threads increase over time. If that’s the case, and if 
you have no custom code running, we need to see the thread dump.

Best,
Erick

> On Mar 6, 2020, at 05:54, Srinivas Kashyap  
> wrote:
>
> Hi All,
>
> I have recently upgraded solr to 8.4.1 and have installed solr as service in 
> linux machine. Once I start my service, it will be up for 15-18hours and 
> suddenly stops without us shutting down. In solr.log I found below error. Can 
> somebody guide me what values should I be increasing in Linux machine?
>
> Earlier, open file limit was not set and now I have increased. Below are my 
> system configuration for solr:
>
> JVM memory: 8GB
> RAM: 32GB
> Open file descriptor count: 50
>
> Ulimit -v - unlimited
> Ulimit -m - unlimited
>
>
> ERROR STACK TRACE:
>
> 2020-03-06 12:08:03.071 ERROR (qtp1691185247-21) [   x:product] 
> o.a.s.s.HttpSolrCall null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: unable to create new native thread
>at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:752)
>at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:603)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:419)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:351)
>at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
>at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>at org.eclipse.jetty.server.Server.handle(Server.java:505)
>at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)
>at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)
>at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
>at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
>at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
>at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
>at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.j

Re: OutOfMemory error solr 8.4.1

2020-03-06 Thread Erick Erickson
I assume you recompiled the jar file? Re-using the same one compiled against 
5.x is unsupported; nobody will be able to help until you recompile.

Once you’ve done that, if you still have the problem you need to take a thread 
dump to see if your custom code is leaking threads, that’s my number one 
suspect.

Best,
Erick

> On Mar 6, 2020, at 07:36, Srinivas Kashyap  
> wrote:
> 
> Hi Erick,
> 
> We have custom code which are schedulers to run delta imports on our cores 
> and I have added that custom code as a jar and I have placed it on 
> server/solr-webapp/WEB-INF/lib. Basically we are fetching the JNDI datasource 
> configured in the jetty.xml(Oracle) and creating connection object. And after 
> that in the finally block we are closing it too.
> 
> Never faced this issue while we were in solr5.2.1 version though. The same 
> jar was placed there too.
> 
> Thanks,
> Srinivas
> 
> On 06-Mar-2020 8:55 pm, Erick Erickson  wrote:
> This one can be a bit tricky. You’re not running out of overall memory, but 
> you are running out of memory to allocate stacks. Which implies that, for 
> some reason, you are creating a zillion threads. Do you have any custom code?
> 
> You can take a thread dump and see what your threads are doing, and you don’t 
> need to wait until you see the error. If you take a thread dump my guess is 
> you’ll see the number of threads increase over time. If that’s the case, and 
> if you have no custom code running, we need to see the thread dump.
> 
> Best,
> Erick
> 
>> On Mar 6, 2020, at 05:54, Srinivas Kashyap  
>> wrote:
>> 
>> Hi All,
>> 
>> I have recently upgraded solr to 8.4.1 and have installed solr as service in 
>> linux machine. Once I start my service, it will be up for 15-18hours and 
>> suddenly stops without us shutting down. In solr.log I found below error. 
>> Can somebody guide me what values should I be increasing in Linux machine?
>> 
>> Earlier, open file limit was not set and now I have increased. Below are my 
>> system configuration for solr:
>> 
>> JVM memory: 8GB
>> RAM: 32GB
>> Open file descriptor count: 50
>> 
>> Ulimit -v - unlimited
>> Ulimit -m - unlimited
>> 
>> 
>> ERROR STACK TRACE:
>> 
>> 2020-03-06 12:08:03.071 ERROR (qtp1691185247-21) [   x:product] 
>> o.a.s.s.HttpSolrCall null:java.lang.RuntimeException: 
>> java.lang.OutOfMemoryError: unable to create new native thread
>>   at 
>> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:752)
>>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:603)
>>   at 
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:419)
>>   at 
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:351)
>>   at 
>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>>   at 
>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>>   at 
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>>   at 
>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>>   at 
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>>   at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>>   at 
>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)
>>   at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>>   at 
>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)
>>   at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>>   at 
>> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
>>   at 
>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)
>>   at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>>   at 
>> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)
>>   at 
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>>   at 
>> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
>>   at 
>> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)
>>   at 
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>>   at 
>> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>>   at 
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>>   at org.eclipse.jetty.server.Server.handle(Server.java:505)
>>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)
>>   at 
>> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)
>>   at 
>> org.eclipse.jetty.io.AbstractConnection$ReadCallback.suc

Re: Solr 8.2.0 - Schema issue

2020-03-06 Thread Joe Obernberger
Thank you Erick - I have no record of that, but will absolutely give the 
API RELOAD a shot!  Thank you!


-Joe

On 3/6/2020 10:26 AM, Erick Erickson wrote:

Didn’t we talk about reloading the collections that share the schema after the 
schema change via the collections API RELOAD command?

Best,
Erick


On Mar 6, 2020, at 05:34, Joe Obernberger  wrote:

Hi All - any ideas on this?  Anything I can try?

Thank you!

-Joe


On 2/26/2020 9:01 AM, Joe Obernberger wrote:
Hi All - I have several solr collections all with the same schema.  If I add a 
field to the schema and index it into the collection on which I added the 
field, it works fine.  However, if I try to add a document to a different solr 
collection that contains the new field (and is using the same schema), I get an 
error that the field doesn't exist.

If I restart the cluster, this problem goes away and I can add a document with 
the new field to any solr collection that has the schema.  Any work-arounds 
that don't involve a restart?

Thank you!

-Joe Obernberger



Upgrading SolrCloud indexes from 7.2 to 8.4.1

2020-03-06 Thread Webster Homer
We are looking at upgrading our SolrCloud instances from 7.2 to the most recent 
version of Solr, 8.4.1 at this time. The last time we upgraded a major Solr 
release we were able to upgrade the index files to the newer version, which 
prevented us from having an outage. Subsequently we've reindexed all our 
collections. However, the Solr documentation for 8.4.1 states that we need to 
be at Solr 7.3 or later to run the index upgrade:
https://lucene.apache.org/solr/guide/8_4/solr-upgrade-notes.html

So could we upgrade to 7.7, then move to 8.4.1, and run the index upgrade 
script just once?
I guess I'm confused about the 7.2 -> 8.* issue: is it data related?

Regards,
Webster
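If the two-hop route is viable, the per-core invocation of Lucene's IndexUpgrader would look roughly like this. This is a sketch: jar names and the index path are placeholders, the tool rewrites the index in place (back it up first), and a full reindex from source remains the generally recommended option.

```shell
# Sketch of upgrading one core's index across two major versions:
# first with the 7.x lucene-core jar, then with the 8.x jar.
# -delete-prior-commits removes older commit points (the upgrade
# cannot keep them); take a backup of the index directory first.
INDEX_DIR=/var/solr/data/mycoll_shard1_replica_n1/data/index

for JAR in lucene-core-7.7.2.jar lucene-core-8.4.1.jar; do
  if [ -f "$JAR" ] && [ -d "$INDEX_DIR" ]; then
    java -cp "$JAR" org.apache.lucene.index.IndexUpgrader \
         -delete-prior-commits "$INDEX_DIR"
  fi
done
```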



This message and any attachment are confidential and may be privileged or 
otherwise protected from disclosure. If you are not the intended recipient, you 
must not copy this message or attachment or disclose the contents to any other 
person. If you have received this transmission in error, please notify the 
sender immediately and delete the message and any attachment from your system. 
Merck KGaA, Darmstadt, Germany and any of its subsidiaries do not accept 
liability for any omissions or errors in this message which may arise as a 
result of E-Mail-transmission or for damages resulting from any unauthorized 
changes of the content of this message and any attachment thereto. Merck KGaA, 
Darmstadt, Germany and any of its subsidiaries do not guarantee that this 
message is free of viruses and does not accept liability for any damages caused 
by any virus transmitted therewith.



Click http://www.merckgroup.com/disclaimer to access the German, French, 
Spanish and Portuguese versions of this disclaimer.


Re: Upgrading Solrcloud indexes from 7.2 to 8.4.1

2020-03-06 Thread lstusr 5u93n4
Hi Webster,

When we upgraded from 7.5 to 8.1 we ran into a very strange issue:
https://lucene.472066.n3.nabble.com/Stored-field-values-don-t-update-after-7-gt-8-upgrade-td4442934.html


We ended up having to do a full re-index to solve this issue, but if you're
going to do this upgrade I would love to know if this issue shows up for
you too. At the very least, I'd suggest doing some variant of the test
outlined in that post, so you can be confident in your data integrity.

Kyle



Re: Upgrading Solrcloud indexes from 7.2 to 8.4.1

2020-03-06 Thread Dave
You'd be best off doing a full reindex to a single SolrCloud 8.x node, and then, 
when done, start taking down 7.x nodes, upgrading them to 8.x, and adding them to 
the new cluster. Upgrading indexes in place has too many potential issues.



Re: Upgrading Solrcloud indexes from 7.2 to 8.4.1

2020-03-06 Thread Erick Erickson
When you say “reindexed”, how exactly was that done? Because if you didn’t 
start from an empty index, you will have to re-index from scratch to use 8x.

Starting with 6.x, a marker is written whenever a segment is created indicating 
what version was used. Whenever two or more segments are merged, the earliest 
marker is preserved. Then, starting with Solr 8x, if any segment has a marker 
more than one major revision old, Lucene will refuse to open the index.
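
As a toy illustration of that rule (this is not Lucene code; the structures and names below are invented for the sketch), the earliest marker “wins” on merge, and Lucene N will only open an index whose oldest marker is at most one major version behind:

```python
# Toy model of Lucene's per-segment creation-version markers.
# Illustrative only: real Lucene tracks this inside SegmentInfos,
# not in dicts like these.

def merge(segments):
    """Merging preserves the EARLIEST (lowest) creation version of the inputs."""
    return {"created_version": min(s["created_version"] for s in segments)}

def can_open(index_segments, runtime_major):
    """Lucene N refuses to open an index with any segment created before N-1."""
    oldest = min(s["created_version"] for s in index_segments)
    return oldest >= runtime_major - 1

# Re-indexing every doc into an EXISTING 6.x-era index: the old marker survives.
seg_6x = {"created_version": 6}
seg_7x = {"created_version": 7}
merged = merge([seg_6x, seg_7x])

print(merged["created_version"])  # 6  -- earliest marker preserved
print(can_open([merged], 8))      # False -- Lucene/Solr 8 refuses to open
print(can_open([seg_7x], 8))      # True  -- a fresh 7.x index is fine
```

This is why indexing into an existing 7.2 index that still carries 6.x-era segments doesn’t help: only starting from an empty index resets the marker.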

So if, when you say “reindexed”, all you did was re-index every doc into an 
existing index, the 6.x marker is preserved and you’ll be out of luck.

IndexUpgraderTool, BTW, merely does an optimize down to one segment, which has its 
own downsides; see: 
https://lucidworks.com/post/segment-merging-deleted-documents-optimize-may-bad/ 
and the associated post for 7.5+.

It doesn’t (and can’t) make the index look just like a current-version index, so 
Lucene won’t try. Paraphrasing Robert Muir: Lucene stores an efficient, derived 
index rather than the raw data. So what’s in the index is y, where y=f(x). 
Since x may not be present, Lucene can’t recompute y when f changes across major 
versions. 
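
A minimal sketch of that point (an invented example, not Lucene internals): the index stores the analyzed output f(x), so once f changes, there is no way to re-derive the new index without the original input x, which the index does not contain:

```python
# The index stores f(x) (analyzed terms -> doc ids), not x (the raw text).

def analyze_v1(text):
    # Hypothetical "old" analyzer: lowercase, split on whitespace.
    return text.lower().split()

def index_docs(docs, analyze):
    """Build a tiny inverted index: term -> set of doc ids."""
    inverted = {}
    for doc_id, text in docs.items():
        for term in analyze(text):
            inverted.setdefault(term, set()).add(doc_id)
    return inverted

docs = {1: "Solr UPGRADE notes", 2: "upgrade the index"}
old_index = index_docs(docs, analyze_v1)
print(sorted(old_index["upgrade"]))  # [1, 2]

# A new major version may define a different f (say, one that also strips
# stopwords). old_index alone cannot be transformed into
# index_docs(docs, analyze_v2): the original docs are gone, and the old
# terms don't determine what the new analyzer would have produced.
```

Hence the recurring advice on this list: across major versions, re-index from the source data rather than converting the index.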

Best,
Erick
