Hendrik:
There is a limited number of threads that load cores in parallel at
startup, depending on the configuration. The defaults are 3 threads
in stand-alone mode and 8 in Cloud mode (see NodeConfig.java):
public static final int DEFAULT_CORE_LOAD_THREADS = 3;
public static final int DEFAULT_CORE_LOAD_THREADS_IN_CLOUD = 8;
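If startup time is the concern, the default can be overridden in solr.xml (a minimal sketch; the value 16 is only an example):

  <solr>
    <!-- number of threads used to load cores in parallel at startup -->
    <int name="coreLoadThreads">16</int>
  </solr>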
I increased the metaspace size to 2GB. With that I could already do multiple
rounds of reloading all collections. The GC logs now show an almost stable
metaspace size, so maybe I had simply set the limits too low. It is still a
bit odd that reloading the collections results in higher memory usage.
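For reference, a metaspace cap of that size plus GC logging can be set with standard HotSpot (Java 8) options, e.g.:

  -XX:MaxMetaspaceSize=2g
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log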
Hi,
I did a simple test on a three node cluster using Solr 7.2.1. The JVMs
(Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 1.8.0_162
25.162-b12) have about 6.5GB heap and 1.5GB metaspace. In my test I have
1000 collections with only 1000 simple documents each. I'm then
triggering collection reloads.
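A reload can be triggered per collection through the Collections API, e.g. (host and collection name are placeholders):

  curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=collection0001'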
No faceting. Highlighting. We have very long queries, because students are
pasting homework problems. I’ve seen 1000 word queries, but we truncate at 40
words.
We do as-you-type results, so we also have ngram fields on the 20 million
solved homework questions. This bloats the index severely. A
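For context, an as-you-type field like that is typically built with edge n-grams at index time, which is exactly what makes it large: every token expands into many grams. A minimal schema sketch, not necessarily Walter's actual configuration:

  <fieldType name="text_prefix" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="20"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>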
Walter Underwood wrote:
> I knew about SOLR-7433, but I’m really surprised that 200 incoming requests
> can need 4000 threads.
>
> We have four shards.
For that I would have expected at most 800 threads. Are you perhaps doing
faceting on multiple fields with facet.threads=5? (Kinda grasping at straws here.)
I knew about SOLR-7433, but I’m really surprised that 200 incoming requests can
need 4000 threads.
We have four shards.
Why is there a thread per shard? HTTP can be done async: send1, send2, send3,
send4, recv1 recv2, recv3, recv4. I’ve been doing that for over a decade with
HTTPClient.
wunder
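A sketch of that scatter/gather pattern, using the JDK 11 HttpClient rather than the Apache HttpClient Walter mentions (shard URLs are placeholders):

  import java.net.URI;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;
  import java.util.List;
  import java.util.concurrent.CompletableFuture;
  import java.util.stream.Collectors;

  public class ScatterGather {
      public static void main(String[] args) {
          HttpClient client = HttpClient.newHttpClient();
          List<URI> shards = List.of(                     // hypothetical shard URLs
                  URI.create("http://shard1:8983/solr/core/select?q=*:*"),
                  URI.create("http://shard2:8983/solr/core/select?q=*:*"));

          // send1, send2, ...: all requests go out without a blocked thread per shard
          List<CompletableFuture<HttpResponse<String>>> inFlight = shards.stream()
                  .map(u -> client.sendAsync(HttpRequest.newBuilder(u).build(),
                          HttpResponse.BodyHandlers.ofString()))
                  .collect(Collectors.toList());

          // recv1, recv2, ...: gather the responses afterwards
          inFlight.forEach(f -> System.out.println(f.join().statusCode()));
      }
  }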
Walter Underwood wrote:
> I set this in jetty.xml, but it still created 4000 threads.
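A typical Jetty thread pool cap looks roughly like this in jetty.xml (a sketch, not necessarily the exact snippet quoted above):

  <Configure id="Server" class="org.eclipse.jetty.server.Server">
    <Arg name="threadpool">
      <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
        <Set name="minThreads">10</Set>
        <Set name="maxThreads">10000</Set>
      </New>
    </Arg>
  </Configure>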
That sets a limit on the number of threads started by Jetty to handle incoming
connections, but does not affect how many threads Solr can create. I guess you
have ~20 shards in your
I’m pretty sure these OOMs are caused by uncontrolled thread creation, up to
4000 threads. That requires an additional 4 GB (1 MB per thread). It is like
Solr doesn’t use thread pools at all.
I set this in jetty.xml, but it still created 4000 threads.
wunder
Walter Underwood
wun.
I found the suggesters very memory hungry. I had one particularly large
index where the suggester should have been filtering a small number of
docs, but was mmap'ing the entire index. I only ever saw this behavior with
the suggesters.
On 22 November 2017 at 03:17, Walter Underwood
wrote:
> All o
On 11/21/2017 9:17 AM, Walter Underwood wrote:
> All our customizations are in solr.in.sh. We’re using the one we configured
> for 6.3.0. I’ll check for any differences between that and the 6.5.1 script.
The order looks correct to me -- the arguments for the OOM killer are
listed *before* the "-j
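In a 6.x install the relevant part of the ps output normally looks something like this (paths and port are placeholders):

  ... -XX:OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs ... -jar start.jar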
Walter:
Yeah, I've seen this on occasion. IIRC, the OOM exception will be
specific to running out of stack space, or at least slightly different
than the "standard" OOM error. That would be the "smoking gun" for too
many threads
Erick
On Tue, Nov 21, 2017 at 9:00 AM, Walter Underwood wrote:
I do have one theory about the OOM. The server is running out of memory because
there are too many threads. Instead of queueing up the overload in the load
balancer, it is queued in new threads waiting to run. Setting
solr.jetty.threads.max to 10,000 guarantees this will happen under overload.
New R
bq: but those use analyzing infix, so they are search indexes, not in-memory
Sure, but they still can consume heap. Most of the index is MMapped of
course, but there are some control structures, indexes and the like
still kept on the heap.
I suppose not using the suggester would nail it though.
All our customizations are in solr.in.sh. We’re using the one we configured for
6.3.0. I’ll check for any differences between that and the 6.5.1 script.
I don’t see any arguments at all in the dashboard. I do see them in a ps
listing, right at the end.
java -server -Xms8g -Xmx8g -XX:+UseG1GC -X
On 11/20/2017 6:17 PM, Walter Underwood wrote:
When I ran load benchmarks with 6.3.0, an overloaded cluster would get super
slow but keep functioning. With 6.5.1, we hit 100% CPU, then start getting
OOMs. That is really bad, because it means we need to reboot every node in the
cluster.
Also,
Hi Walter,
you can check whether the JVM OOM hook is acknowledged and set up in the
JVM. The options are "-XX:+PrintFlagsFinal -version".
You can modify your bin/solr script and tweak the function "launch_solr"
at the end of the script: replace "-jar start.jar" with "-XX:+PrintFlagsFinal
-version".
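A quicker sanity check, without editing the script, is to look for the option on the running process, e.g.:

  ps -ef | grep start.jar | grep -o 'OnOutOfMemoryError=[^ ]*'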
When I ran load benchmarks with 6.3.0, an overloaded cluster would get super
slow but keep functioning. With 6.5.1, we hit 100% CPU, then start getting
OOMs. That is really bad, because it means we need to reboot every node in the
cluster.
Also, the JVM OOM hook isn’t running the process killer
Solr/Lucene really like having a bunch of files available, so bumping
the ulimit is often the right thing to do.
This assumes you don't have any custom code that is failing to close
searchers and the like.
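For example (values are only illustrative; make the change permanent in /etc/security/limits.conf):

  ulimit -n           # show the current open-file limit
  ulimit -n 65535     # raise it for processes started from this shell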
Best,
Erick
On Mon, May 8, 2017 at 10:40 AM, Satya Marivada
wrote:
> Hi,
>
> Started gett
On 5/8/2017 11:40 AM, Satya Marivada wrote:
> Started getting below errors/exceptions. I have listed the resolution
> inline. Could you please see if I am headed right?
>
> java.lang.OutOfMemoryError: unable to create new native thread
> java.io.IOException: Too many open files
I have never had a
Hi,
Started getting below errors/exceptions. I have listed the resolution
inline. Could you please see if I am headed right?
The below error basically says that no more threads can be
created because the limit has been reached. We have a big index and I assume the
threads are being created outside
Subject: SolrCloud: Failure to recover on restart following OutOfMemoryError
Hi All,
We have a SolrCloud cluster with 3 Virtual Machines, assigning 4GB to the Java
Heap.
Recently we added a number of collections to the machine going from around 80
collections (each with 3 shards x 3 replicas) to
utting off Solr, and nuking the ZooKeeper
configs, recreating the configs and restarting Solr and deleting all
collections.
Shouldn't Solr be able to recover on restart or does OutOfMemoryError cause
some kind of Zk/Solr cluster state corruption that is unrecoverable?
-Frank
Perfect, thank you very much.
2016-05-27 12:44 GMT-03:00 Shawn Heisey :
> On 5/27/2016 7:05 AM, Pablo Anzorena wrote:
> > I am using solr 5.2.1 in cloud mode. My jvm arguments for the
> > OutOfMemoryError is
> > -XX:OnOutOfMemoryError='/etc/init.d/solrcloud;restart
On 5/27/2016 7:05 AM, Pablo Anzorena wrote:
> I am using solr 5.2.1 in cloud mode. My jvm arguments for the
> OutOfMemoryError is
> -XX:OnOutOfMemoryError='/etc/init.d/solrcloud;restart'
>
> In the Solr UI, the event is being fired, but nothing happens.
In all versio
Hello,
I am using solr 5.2.1 in cloud mode. My jvm arguments for the
OutOfMemoryError is
-XX:OnOutOfMemoryError='/etc/init.d/solrcloud;restart'
In the Solr UI, the event is being fired, but nothing happens.
What am I missing?
Regards.
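One thing worth double-checking with -XX:OnOutOfMemoryError (an assumption about the cause, not a confirmed diagnosis): the semicolon separates multiple commands, so '/etc/init.d/solrcloud;restart' runs '/etc/init.d/solrcloud' and 'restart' as two separate commands rather than passing 'restart' as an argument. Passing it as an argument looks like:

  -XX:OnOutOfMemoryError='/etc/init.d/solrcloud restart'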
> On Thu, Jan 15, 2015 at 1:54 PM, wrote:
>>>
>>> Siegfried and Michael Thank you for your replies and help.
>>>>
>>>> -Original Message-
>>>> From: Siegfried Goeschl [mailto:sgoes...@gmx.at]
>>>> Sent: Thursday, January 15
ael Thank you for your replies and help.
>>>
>>> -Original Message-
>>> From: Siegfried Goeschl [mailto:sgoes...@gmx.at]
>>> Sent: Thursday, January 15, 2015 3:45 AM
>>> To: solr-user@lucene.apache.org
>>> Subject: Re: OutOfMemoryError for P
of
> > getjmp/longjmp. But fast...
> >
> > On Thu, Jan 15, 2015 at 1:54 PM, wrote:
> >> Siegfried and Michael Thank you for your replies and help.
> >>
> >> -Original Message-
> >> From: Siegfried Goeschl [mailto:sgoes...@gmx.at]
> &
and Michael Thank you for your replies and help.
-Original Message-
From: Siegfried Goeschl [mailto:sgoes...@gmx.at]
Sent: Thursday, January 15, 2015 3:45 AM
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemoryError for PDF document upload into Solr
Hi Ganesh,
you can increase the
ael Thank you for your replies and help.
>
> -Original Message-
> From: Siegfried Goeschl [mailto:sgoes...@gmx.at]
> Sent: Thursday, January 15, 2015 3:45 AM
> To: solr-user@lucene.apache.org
> Subject: Re: OutOfMemoryError for PDF document upload into Solr
>
> Hi Ganes
Siegfried and Michael Thank you for your replies and help.
-Original Message-
From: Siegfried Goeschl [mailto:sgoes...@gmx.at]
Sent: Thursday, January 15, 2015 3:45 AM
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemoryError for PDF document upload into Solr
Hi Ganesh,
you can
Hi Ganesh,
you can increase the heap size but parsing a 4 GB PDF document will very
likely consume A LOT OF memory - I think you need to check if that large
PDF can be parsed at all :-)
Cheers,
Siegfried Goeschl
On 14.01.15 18:04, Michael Della Bitta wrote:
Yep, you'll have to increase the
Yep, you'll have to increase the heap size for your Tomcat container.
http://stackoverflow.com/questions/6897476/tomcat-7-how-to-set-initial-heap-size-correctly
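Concretely, for Tomcat 7 that usually means a bin/setenv.sh along these lines (sizes are only an example):

  # $CATALINA_HOME/bin/setenv.sh -- create it if it does not exist
  export CATALINA_OPTS="-Xms1g -Xmx8g"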
Michael Della Bitta
Senior Software Engineer
o: +1 646 532 3062
appinions inc.
“The Science of Influence Marketing”
18 East 41st St
Hello,
Can someone pass on some hints to get around the following error? Is there any heap
size parameter I can set in Tomcat or in the Solr webapp that gets deployed in Tomcat?
I am running the Solr webapp inside Tomcat on my local machine, which has 12 GB of
RAM. I have a PDF document which is 4 GB max in size
Shawn, looks like the JVM bump did the trick. Thanks!
On Tue, Dec 16, 2014 at 10:39 AM, Trilok Prithvi
wrote:
>
> Thanks Shawn. We will increase the JVM to 4GB and see how it performs.
>
> Alexandre,
> Our queries are simple (with strdist() function in almost all the
> queries). No facets, or sor
Thanks Shawn. We will increase the JVM to 4GB and see how it performs.
Alexandre,
Our queries are simple (with strdist() function in almost all the queries).
No facets, or sorts.
But we do a lot of data loads. We index data a lot (several documents,
ranging from 10 - 10 documents) and we uploa
What do your queries look like? Especially FQs, facets, sort, etc. All
of those things require caches of various sorts.
Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https
On 12/16/2014 9:55 AM, Trilok Prithvi wrote:
We are getting OOME pretty often (every hour or so). We are restarting
nodes to keep up with it.
Here is our setup:
SolrCloud 4.10.2 (2 shards, 2 replicas) with 3 zookeepers.
Each node has:
16GB RAM
2GB JVM (Xmx 2048, Xms 1024)
~100 Million documents
We are getting OOME pretty often (every hour or so). We are restarting
nodes to keep up with it.
Here is our setup:
SolrCloud 4.10.2 (2 shards, 2 replicas) with 3 zookeepers.
Each node has:
16GB RAM
2GB JVM (Xmx 2048, Xms 1024)
~100 Million documents (split among 2 shards - ~50M on each shard)
So
OutOfMemoryError: Java heap space in Solr
To: solr-user@lucene.apache.org
Date: Wednesday, 9 July, 2014, 9:24 PM
On 7/9/2014 6:02 AM, yuvaraj ponnuswamy wrote:
> Hi,
>
> I am getting the
OutofMemory Error: "java.lang.OutOfMemoryError: Java
heap space" often in production du
On 7/9/2014 6:02 AM, yuvaraj ponnuswamy wrote:
> Hi,
>
> I am getting the OutofMemory Error: "java.lang.OutOfMemoryError: Java heap
> space" often in production due to the particular Treemap is taking more
> memory in the JVM.
>
> When i looked into the config files I am having the entity called
Hi,
I am getting the OutOfMemory error "java.lang.OutOfMemoryError: Java heap
space" often in production because a particular TreeMap is taking up more memory
in the JVM.
When I looked into the config files, I found that I have an entity called
UserQryDocument where I am fetching the data from certain
Hi;
According to Sun, the error happens "if too much time is being spent in
garbage collection: if more than 98% of the total time is spent in garbage
collection and less than 2% of the heap is recovered, an OutOfMemoryError
will be thrown.". Specifying more memory should be helpful. On
From: François Schiettecatte
To: solr-user@lucene.apache.org; Haiying Wang
Sent: Tuesday, April 8, 2014 8:25 PM
Subject: Re: OutOfMemoryError while merging large indexes
Have you tried using:
-XX:-UseGCOverheadLimit
François
On Apr 8, 2014, at 6:06 PM, Haiying Wang wrote:
> Hi,
>
&
Have you tried using:
-XX:-UseGCOverheadLimit
François
On Apr 8, 2014, at 6:06 PM, Haiying Wang wrote:
> Hi,
>
> We were trying to merge a large index (9GB, 21 million docs) into the current
> index (only 13MB), using the mergeindexes command of CoreAdminHandler, but always
> run into OOM e
Hi,
We were trying to merge a large index (9GB, 21 million docs) into the current index
(only 13MB), using the mergeindexes command of CoreAdminHandler, but we always run into
OOM error. We currently set the max heap size to 4GB for the Solr server. We
are using 4.6.0, and did not change the original solrc
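For reference, the call in question looks like this (core name and path are placeholders):

  curl 'http://localhost:8983/solr/admin/cores?action=mergeindexes&core=core0&indexDir=/data/big-index/data/index'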
hi,
the heap problem is due to memory being full.
You should remove unnecessary data and restart the server once.
On Thursday, 6 March 2014 10:39 AM, Angel Tchorbadjiiski
wrote:
Hi Shawn,
a big thanks for the long and detailed answer. I am aware of how linux
uses free RAM for caching and the problems
Hi Shawn,
a big thanks for the long and detailed answer. I am aware of how Linux
uses free RAM for caching and of the problems related to the JVM and GC. It
is nice to hear how this correlates to Solr. I'll take some time and
think it over. The facet.method=enum and probably a combination of
Doc
On 3/5/2014 4:40 AM, Angel Tchorbadjiiski wrote:
> Hi Shawn,
>
> On 05.03.2014 10:05, Angel Tchorbadjiiski wrote:
>> Hi Shawn,
>>
>>> It may be your facets that are killing you here. As Toke mentioned, you
>>> have not indicated what your max heap is. 20 separate facet fields with
>>> millions of
Hi Shawn,
On 05.03.2014 10:05, Angel Tchorbadjiiski wrote:
Hi Shawn,
It may be your facets that are killing you here. As Toke mentioned, you
have not indicated what your max heap is. 20 separate facet fields with
millions of documents will use a lot of fieldcache memory if you use the
standard
On 05.03.2014 11:51, Toke Eskildsen wrote:
On Wed, 2014-03-05 at 09:59 +0100, Angel Tchorbadjiiski wrote:
On 04.03.2014 11:20, Toke Eskildsen wrote:
Angel Tchorbadjiiski [angel.tchorbadjii...@antibodies-online.com] wrote:
[Single shard / 2 cores Solr 4.6.1, 65M docs / 50GB, 20 facet fields]
On Wed, 2014-03-05 at 09:59 +0100, Angel Tchorbadjiiski wrote:
> On 04.03.2014 11:20, Toke Eskildsen wrote:
> > Angel Tchorbadjiiski [angel.tchorbadjii...@antibodies-online.com] wrote:
> >
> > [Single shard / 2 cores Solr 4.6.1, 65M docs / 50GB, 20 facet fields]
> >
> >> The OS in use is a 64bit li
Hi Shawn,
It may be your facets that are killing you here. As Toke mentioned, you
have not indicated what your max heap is. 20 separate facet fields with
millions of documents will use a lot of fieldcache memory if you use the
standard facet.method, fc.
Try adding facet.method=enum to all your
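That is, something along these lines on the request, or as a default in the request handler (the field name is just an example):

  q=*:*&facet=true&facet.field=category&facet.method=enum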
I did not see your memory allocation anywhere. What is your Xmx?
At the moment I don't use it. The instance allocates 12G without the
parameter set.
P.S.: Here the complete error OutOfMemoryError message:
org.apache.solr.common.SolrException; null:java.lang.RuntimeException
On 3/4/2014 2:23 AM, Angel Tchorbadjiiski wrote:
in the last couple of weeks one of my machines is experiencing
OutOfMemoryError: Java heap space errors. In a couple of hours after
starting the SOLR instance, queries with execution times of under 100ms
need more than 10s to execute and many
: Here the complete error OutOfMemoryError message:
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException:
> java.lang.OutOfMemoryError: Java heap space
> at
> org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:735)
Hello list,
in the last couple of weeks one of my machines is experiencing
OutOfMemoryError: Java heap space errors. In a couple of hours after
starting the SOLR instance, queries with execution times of under 100ms
need more than 10s to execute, and many Java heap space errors appear in
the
This is a known issue. Solr 4.7 will bring some relief.
See https://issues.apache.org/jira/browse/SOLR-5214
On Thu, Jan 23, 2014 at 10:10 PM, Will Butler wrote:
> We have a 125GB shard that we are attempting to split, but each time we try
> to do so, we eventually run out of memory (java.lang.
We have a 125GB shard that we are attempting to split, but each time we try to
do so, we eventually run out of memory (java.lang.OutOfMemoryError: GC overhead
limit exceeded). We have attempted it with the following heap sizes on the
shard leader: 4GB, 6GB, 12GB, and 24GB. Even if it does eventu
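For reference, the operation being attempted (collection and shard names are placeholders):

  curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1'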
found Lucene45DocValuesProducer and wondered why it exists
and isn't used here. Lucene45DocValuesProducer.ramBytesUsed() also looks
different to Lucene42DocValuesProducer.ramBytesUsed()
Best Regards
Torben
2013/12/16 Shawn Heisey
> On 12/16/2013 2:34 AM, Torben Greulich wrote:
> > we get a
On 12/16/2013 2:34 AM, Torben Greulich wrote:
> we get an OutOfMemoryError in RamUsageEstimator and are a little bit
> confused about the error.
> We are using solr 4.6 and are confused about the Lucene42DocValuesProducer.
> We checked current solr code and found that Lucene42NormsFo
Hi,
we get an OutOfMemoryError in RamUsageEstimator and are a little bit
confused about the error.
We are using solr 4.6 and are confused about the Lucene42DocValuesProducer.
We checked current solr code and found that Lucene42NormsFormat will be
returned as NormFormat in Lucene46Codec and so the
...:java.lang.IllegalStateException: this writer hit an
OutOfMemoryError; cannot commit
at
org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2726)
at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2897)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java
(IndexingRunner.java:303)
>
> and then a little while later:
>
> auto commit error...:java.lang.IllegalStateException: this writer hit an
> OutOfMemoryError; cannot commit
> at
>
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2726)
> at
> or
On 7/24/2013 9:38 AM, jimtronic wrote:
> I've encountered an OOM that seems to come after the server has been up for a
> few weeks.
>
> While I would love for someone to just tell me "you did X wrong", I'm more
> interested in trying to debug this. So, given the error below, where would I
> look
andler.java:135)
> at
>
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at
>
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at
>
> org.eclipse.jetty.server.handler.H
java:485)
at
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
I upgraded java to version 7 and everything seems to be stable now!
BR,
Arkadi
On 03/25/2013 09:54 PM, Shawn Heisey wrote:
On 3/25/2013 1:34 AM, Arkadi Colson wrote:
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m as
parameters. I also added -XX:+UseG1GC to the java proce
On 3/25/2013 1:34 AM, Arkadi Colson wrote:
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m as
parameters. I also added -XX:+UseG1GC to the java process. But now the
whole machine crashes! Any idea why?
Mar 22 20:30:01 solr01-gs kernel: [716098.077809] java invoked
oom-kille
Arkadi,
jstat -gcutil -h20 <solr-pid> 2000 100 also gives useful info about GC and I use
it a lot for quick insight into what is going on with GC. SPM (see
http://sematext.com/spm/index.html ) may also be worth using.
Otis
--
Solr & ElasticSearch Support
http://sematext.com/
On Mon, Mar 25, 2013 at
You can also use "-verbose:gc -XX:+PrintGCDateStamps -XX:+PrintGCDetails
-Xloggc:gc.log"
as additional options to get a "gc.log" file and see what GC is doing.
Regards
Bernd
Am 25.03.2013 16:01, schrieb Arkadi Colson:
> How can I see if GC is actually working? Is it written in the tomcat logs as
How can I see if GC is actually working? Is it written in the tomcat
logs as well or will I only see it in the memory graphs?
BR,
Arkadi
On 03/25/2013 03:50 PM, Bernd Fehling wrote:
We use munin with jmx plugin for monitoring all server and Solr installations.
(http://munin-monitoring.org/)
On
We use munin with jmx plugin for monitoring all server and Solr installations.
(http://munin-monitoring.org/)
Only for short time monitoring we also use jvisualvm delivered with Java SE JDK.
Regards
Bernd
Am 25.03.2013 14:45, schrieb Arkadi Colson:
> Thanks for the info!
> I just upgraded java f
Thanks for the info!
I just upgraded java from 6 to 7...
How exactly do you monitor the memory usage and the effect of the
garbage collector?
On 03/25/2013 01:18 PM, Bernd Fehling wrote:
The use of UseG1GC, yes,
but with Solr 4.x, Jetty 8.1.8 and Java HotSpot(TM) 64-Bit Server VM (1.7.0_07).
os.a
The use of UseG1GC, yes,
but with Solr 4.x, Jetty 8.1.8 and Java HotSpot(TM) 64-Bit Server VM (1.7.0_07).
os.arch: amd64
os.name: Linux
os.version: 2.6.32.13-0.5-xen
Only args are "-XX:+UseG1GC -Xms16g -Xmx16g".
Monitoring shows that 16g is a bit high, I might reduce it to 10g or 12g for
the slaves
Is somebody using the UseG1GC garbage collector with Solr and Tomcat 7?
Any extra options needed?
Thanks...
On 03/25/2013 08:34 AM, Arkadi Colson wrote:
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m
as parameters. I also added -XX:+UseG1GC to the java process. But now
t
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m as
parameters. I also added -XX:+UseG1GC to the java process. But now the
whole machine crashes! Any idea why?
Mar 22 20:30:01 solr01-gs kernel: [716098.077809] java invoked
oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
M
On 3/14/2013 3:35 AM, Arkadi Colson wrote:
Hi
I'm getting this error after a few hours of filling solr with documents.
Tomcat is running with -Xms1024m -Xmx4096m.
Total memory of host is 12GB. Softcommits are done every second and hard
commits every minute.
Any idea why this is happening and how
On 03/14/2013 03:11 PM, Toke Eskildsen wrote:
On Thu, 2013-03-14 at 13:10 +0100, Arkadi Colson wrote:
When I shutdown tomcat free -m and top keeps telling me the same values.
Almost no free memory...
Any idea?
Are you reading top & free right? It is standard behaviour for most
modern operatin
On Thu, 2013-03-14 at 13:10 +0100, Arkadi Colson wrote:
> When I shutdown tomcat free -m and top keeps telling me the same values.
> Almost no free memory...
>
> Any idea?
Are you reading top & free right? It is standard behaviour for most
modern operating systems to have very little free memory
When I shutdown tomcat free -m and top keeps telling me the same values.
Almost no free memory...
Any idea?
On 03/14/2013 10:35 AM, Arkadi Colson wrote:
Hi
I'm getting this error after a few hours of filling solr with
documents. Tomcat is running with -Xms1024m -Xmx4096m.
Total memory of hos
Hi
I'm getting this error after a few hours of filling solr with documents.
Tomcat is running with -Xms1024m -Xmx4096m.
Total memory of host is 12GB. Softcommits are done every second and hard
commits every minute.
Any idea why this is happening and how to avoid this?
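For reference, that commit policy corresponds to roughly this in solrconfig.xml (a sketch; openSearcher=false is the usual recommendation, not necessarily what is configured here):

  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>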
*top*
PID USER P
TILS_LONG_PARSER=>[J#28896878
entry#20 :
'NIOFSIndexInput(path="/home/connect/ConnectPORTAL/preview/solr-home/data/index/_3kj.frq")'=>'WiringDiagramSheetImpl.versionAsDate',long,org.apache.lucene.search.FieldCache.NUMERIC_UTILS_LONG_PARSER=>[J#2986832
entry#21 :
for searches which have a very high
Qtime.
Repeat the searches with high Qtime and see if you get "insanity_counts" or
heap memory jumps in JVM.
Regards
Bernd
Am 06.12.2012 23:27, schrieb uwe72:
> Hi there,
>
> since I use a lot of sorting and faceting I am getting very ofte
:35:38 org.apache.solr.common.SolrException log
SCHWERWIEGEND: auto commit error...:java.lang.IllegalStateException:
this writer hit an OutOfMemoryError; cannot commit
at
org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:2650)
followed by 10 Java heap space exceptions, and
my knowledge, in all
>> cases that IndexWriter throws an OutOfMemoryError, the original
>> OutOfMemoryError is also rethrown (not just this IllegalStateException
>> noting that at some point, it hit OOM.
>
> Hmm, I checked older logs and found something new that I have not
> seen
ays with Tomcat and open a Jira issue
if it doesn't work with it.
> do you have another exception in your logs? To my knowledge, in all
> cases that IndexWriter throws an OutOfMemoryError, the original
> OutOfMemoryError is also rethrown (not just this IllegalStateException
> noting t
ld be a number of things, including something already fixed.
>
>
> auto commit error...:java.lang.IllegalStateException: this writer hit
> an OutOfMemoryError; cannot commit
> at
> org.apache.lucene.index.IndexWriter.prepareCommit(Ind
Hi folks,
my test server with Solr 4.0 from trunk (version 1292064 from late
February) throws this exception...
auto commit error...:java.lang.IllegalStateException: this writer hit
an OutOfMemoryError; cannot commit
at
org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java
>> http://lucene.472066.n3.nabble.com/MMapDirectory-failed-to-map-a-23G-compound-index-segment-td3317208.html
>> >>>
>> >>> I have index document of core1 = 5 million, core2=8million and
>> >>> core3=3million and all index are hosted in single Solr instance
>> >>>
>
I am going to use Solr for our site StubHub.com, see attached "ls -l"
> >> list
> >>> of index files for all core
> >>>
> >>> SolrConfig.xml:
> >>>
> >>>
> >>>
> >>> false
te StubHub.com, see attached "ls -l"
>> list
>>> of index files for all core
>>>
>>> SolrConfig.xml:
>>>
>>>
>>>
>>> false
>>> 10
>>> 2147483647
>>> 1
false
> > 10
> > 2147483647
> > 1
> > 4096
> > 10
> > 1000
> > 1
> > single
> >
> >
> > 0.0
> > 10.0
>> 4096
>> 10
>> 1000
>> 1
>> single
>>
>>
>>0.0
>>10.0
>>
>>
>>
>>false
>
> false
> 0
>
>
>
>
>
>
> 1000
>
> 900000
> false
>
>
> ${inventory.solr.softcommi
Hi,
65K is already a very large number and should have been sufficient...
However: have you increased the merge factor? Doing so increases the
open files (maps) required.
Have you disabled compound file format? (Hmmm: I think Solr does so
by default... which is dangerous). Maybe try enabling
Michael, thanks for the response.
It was 65K, as you mentioned, the default value for "cat
/proc/sys/vm/max_map_count". How do we determine what this value should be?
Is it the number of documents per hard commit (in my case every 15 minutes)?
Or is it the number of index files, or the number of documents we have in all
It's the virtual memory limit that matters; yours says unlimited below
(good!), but, are you certain that's really the limit your Solr
process runs with?
On Linux, there is also a per-process map count:
cat /proc/sys/vm/max_map_count
I think it typically defaults to 65,536 but you should che
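Checking and raising it looks like this (the value is only an example; persist it in /etc/sysctl.conf):

  cat /proc/sys/vm/max_map_count
  sudo sysctl -w vm.max_map_count=262144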