No faceting. Highlighting. We have very long queries because students are
pasting homework problems. I’ve seen 1000-word queries, but we truncate at 40
words.
We do as-you-type results, so we also have ngram fields on the 20 million
solved homework questions. This bloats the index severely. A
Walter Underwood wrote:
> I knew about SOLR-7433, but I’m really surprised that 200 incoming requests
> can need 4000 threads.
>
> We have four shards.
For that I would have expected at most 800 threads. Are you perhaps doing
faceting on multiple fields with facet.threads=5? (kinda grasping at
I knew about SOLR-7433, but I’m really surprised that 200 incoming requests can
need 4000 threads.
We have four shards.
Why is there a thread per shard? HTTP can be done async: send1, send2, send3,
send4, recv1, recv2, recv3, recv4. I’ve been doing that for over a decade with
HTTPClient.
wunder
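A minimal sketch of that send-all-then-receive-all pattern, assuming Java 11's built-in HttpClient and made-up shard URLs (this only illustrates the idea of one caller thread fanning out requests; it is not how Solr's distributed search is actually implemented):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class AsyncShardFanout {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        List<String> shardUrls = List.of(
                "http://shard1:8983/solr/questions/select?q=example",
                "http://shard2:8983/solr/questions/select?q=example",
                "http://shard3:8983/solr/questions/select?q=example",
                "http://shard4:8983/solr/questions/select?q=example");

        // send1, send2, send3, send4: fire every shard request without blocking
        List<CompletableFuture<HttpResponse<String>>> inFlight = shardUrls.stream()
                .map(url -> client.sendAsync(
                        HttpRequest.newBuilder(URI.create(url)).build(),
                        HttpResponse.BodyHandlers.ofString()))
                .collect(Collectors.toList());

        // recv1, recv2, recv3, recv4: collect the responses afterwards,
        // all on the single calling thread
        for (CompletableFuture<HttpResponse<String>> response : inFlight) {
            System.out.println(response.join().statusCode());
        }
    }
}
```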
Walter Underwood wrote:
> I set this in jetty.xml, but it still created 4000 threads.
That sets a limit on the number of threads started by Jetty to handle incoming
connections, but does not affect how many threads Solr can create. I guess you
have ~20 shards in your
I’m pretty sure these OOMs are caused by uncontrolled thread creation, up to
4000 threads. That requires an additional 4 GB (1 MB per thread). It is like
Solr doesn’t use thread pools at all.
I set this in jetty.xml, but it still created 4000 threads.
wunder
Walter Underwood
wun.
I found the suggesters very memory hungry. I had one particularly large
index where the suggester should have been filtering a small number of
docs, but was mmap'ing the entire index. I only ever saw this behavior with
the suggesters.
On 22 November 2017 at 03:17, Walter Underwood wrote:
> All o
On 11/21/2017 9:17 AM, Walter Underwood wrote:
> All our customizations are in solr.in.sh. We’re using the one we configured
> for 6.3.0. I’ll check for any differences between that and the 6.5.1 script.
The order looks correct to me -- the arguments for the OOM killer are
listed *before* the "-j
Walter:
Yeah, I've seen this on occasion. IIRC, the OOM exception will be
specific to running out of stack space, or at least slightly different
from the "standard" OOM error. That would be the "smoking gun" for too
many threads.
Erick
On Tue, Nov 21, 2017 at 9:00 AM, Walter Underwood wrote:
I do have one theory about the OOM. The server is running out of memory because
there are too many threads. Instead of queueing up overload in the load
balancer, it is queued in new threads waiting to run. Setting
solr.jetty.threads.max to 10,000 guarantees this will happen under overload.
New R
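For contrast, a rough sketch (my own illustration, not Solr code) of the bounded behaviour being argued for here: a hard thread cap plus a bounded queue rejects excess requests, so overload backs up at the caller or the load balancer instead of materialising thousands of roughly 1 MB thread stacks.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedRequestPool {
    public static void main(String[] args) {
        // At most 200 worker threads and 500 queued requests; anything beyond
        // that is rejected instead of spawning ever more threads.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                50, 200,                        // core / max threads
                60, TimeUnit.SECONDS,           // reap idle threads above core
                new ArrayBlockingQueue<>(500),
                new ThreadPoolExecutor.AbortPolicy());

        for (int i = 0; i < 10_000; i++) {
            final int requestId = i;
            try {
                pool.execute(() -> handle(requestId));
            } catch (RejectedExecutionException overloaded) {
                // Overload surfaces here as a fast failure, not as an OOM later.
                System.err.println("rejected request " + requestId);
            }
        }
        pool.shutdown();
    }

    private static void handle(int requestId) {
        // stand-in for the real per-request work
    }
}
```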
bq: but those use analyzing infix, so they are search indexes, not in-memory
Sure, but they still can consume heap. Most of the index is MMapped of
course, but there are some control structures, indexes and the like
still kept on the heap.
I suppose not using the suggester would nail it though.
All our customizations are in solr.in.sh. We’re using the one we configured for
6.3.0. I’ll check for any differences between that and the 6.5.1 script.
I don’t see any arguments at all in the dashboard. I do see them in a ps
listing, right at the end.
java -server -Xms8g -Xmx8g -XX:+UseG1GC -X
On 11/20/2017 6:17 PM, Walter Underwood wrote:
When I ran load benchmarks with 6.3.0, an overloaded cluster would get super
slow but keep functioning. With 6.5.1, we hit 100% CPU, then start getting
OOMs. That is really bad, because it means we need to reboot every node in the
cluster.
Also,
Hi Walter,
you can check whether the JVM OOM hook is actually acknowledged and set up in
the JVM. The options are "-XX:+PrintFlagsFinal -version".
You can modify your bin/solr script and tweak the function "launch_solr"
at the end of the script. Replace "-jar start.jar" with "-XX:+PrintFlagsFinal
-versio
Solr/Lucene really like having a bunch of files available, so bumping
the ulimit is often the right thing to do.
This assumes you don't have any custom code that is failing to close
searchers and the like.
Best,
Erick
On Mon, May 8, 2017 at 10:40 AM, Satya Marivada wrote:
> Hi,
>
> Started gett
On 5/8/2017 11:40 AM, Satya Marivada wrote:
> Started getting below errors/exceptions. I have listed the resolution
> inline. Could you please see if I am headed right?
>
> java.lang.OutOfMemoryError: unable to create new native thread
> java.io.IOException: Too many open files
I have never had a
Perfect, thank you very much.
2016-05-27 12:44 GMT-03:00 Shawn Heisey :
> On 5/27/2016 7:05 AM, Pablo Anzorena wrote:
> > I am using solr 5.2.1 in cloud mode. My jvm arguments for the
> > OutOfMemoryError is
> > -XX:OnOutOfMemoryError='/etc/init.d/solrcloud;restart'
> >
> > In the Solr UI, the ev
On 5/27/2016 7:05 AM, Pablo Anzorena wrote:
> I am using solr 5.2.1 in cloud mode. My jvm arguments for the
> OutOfMemoryError is
> -XX:OnOutOfMemoryError='/etc/init.d/solrcloud;restart'
>
> In the Solr UI, the event is being fired, but nothing happens.
In all versions before 5.5.1, that -XX param
of
> > getjmp/longjmp. But fast...
Siegfried and Michael, thank you for your replies and help.
-Original Message-
From: Siegfried Goeschl [mailto:sgoes...@gmx.at]
Sent: Thursday, January 15, 2015 3:45 AM
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemoryError for PDF document upload into Solr
Hi Ganesh,
you can increase the heap size but parsing a 4 GB PDF document will very
likely consume A LOT OF memory - I think you need to check if that large
PDF can be parsed at all :-)
Cheers,
Siegfried Goeschl
On 14.01.15 18:04, Michael Della Bitta wrote:
Yep, you'll have to increase the
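To make Siegfried's memory concern concrete, here is a hedged sketch of the extraction step, assuming the upload goes through Apache Tika (which is what Solr's extracting handler uses under the hood); the size guard and the exact usage here are my own illustration, not Solr's actual code:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.sax.BodyContentHandler;

public class PdfExtractSketch {
    public static void main(String[] args) throws Exception {
        Path pdf = Path.of(args[0]);

        // Arbitrary guard: refuse files that clearly cannot be held on the heap.
        long maxBytes = 100L * 1024 * 1024;   // 100 MB, a made-up threshold
        if (Files.size(pdf) > maxBytes) {
            throw new IllegalArgumentException("PDF too large to extract in memory: " + pdf);
        }

        // BodyContentHandler(-1) removes the write limit and buffers *all* extracted
        // text on the heap, which is where a multi-gigabyte PDF turns into an OOM.
        BodyContentHandler handler = new BodyContentHandler(-1);
        try (InputStream in = Files.newInputStream(pdf)) {
            new AutoDetectParser().parse(in, handler, new Metadata());
        }
        System.out.println("extracted " + handler.toString().length() + " characters");
    }
}
```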
Yep, you'll have to increase the heap size for your Tomcat container.
http://stackoverflow.com/questions/6897476/tomcat-7-how-to-set-initial-heap-size-correctly
Michael Della Bitta
Senior Software Engineer
o: +1 646 532 3062
appinions inc.
“The Science of Influence Marketing”
18 East 41st St
Shawn, looks like the JVM bump did the trick. Thanks!
On Tue, Dec 16, 2014 at 10:39 AM, Trilok Prithvi wrote:
>
> Thanks Shawn. We will increase the JVM to 4GB and see how it performs.
>
> Alexandre,
> Our queries are simple (with strdist() function in almost all the
> queries). No facets, or sor
Thanks Shawn. We will increase the JVM to 4GB and see how it performs.
Alexandre,
Our queries are simple (with strdist() function in almost all the queries).
No facets or sorts.
But we do a lot of data loads. We index data a lot (several documents,
ranging from 10 - 10 documents) and we uploa
What do your queries look like? Especially FQs, facets, sort, etc. All
of those things require caches of various sorts.
Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https
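As a rough illustration of the query features being asked about (a SolrJ sketch with invented collection and field names, and a newer SolrJ API than the 4.10 install in this thread): fq clauses populate the filterCache, faceting and sorting build per-field structures on the heap, and plain result pages go through the queryResultCache and documentCache.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CacheTouchingQuery {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {

            SolrQuery q = new SolrQuery("laptop");
            q.addFilterQuery("inStock:true");           // cached in filterCache
            q.addFacetField("manufacturer");            // faceting builds per-field structures
            q.addSort("price", SolrQuery.ORDER.asc);    // sorting needs per-field memory
            q.setRows(10);                              // page of results -> queryResultCache

            QueryResponse rsp = solr.query(q);
            System.out.println(rsp.getResults().getNumFound() + " hits");
        }
    }
}
```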
On 12/16/2014 9:55 AM, Trilok Prithvi wrote:
We are getting OOME pretty often (every hour or so). We are restarting
nodes to keep up with it.
Here is our setup:
SolrCloud 4.10.2 (2 shards, 2 replicas) with 3 zookeepers.
Each node has:
16GB RAM
2GB JVM (Xmx 2048, Xms 1024)
~100 Million documents
Hi;
According to Sun, the error happens "if too much time is being spent in
garbage collection: if more than 98% of the total time is spent in garbage
collection and less than 2% of the heap is recovered, an OutOfMemoryError
will be thrown." Specifying more memory should be helpful. On the other
From: François Schiettecatte
To: solr-user@lucene.apache.org; Haiying Wang
Sent: Tuesday, April 8, 2014 8:25 PM
Subject: Re: OutOfMemoryError while merging large indexes
Have you tried using:
-XX:-UseGCOverheadLimit
François
On Apr 8, 2014, at 6:06 PM, Haiying Wang wrote:
> Hi,
>
&
Have you tried using:
-XX:-UseGCOverheadLimit
François
On Apr 8, 2014, at 6:06 PM, Haiying Wang wrote:
> Hi,
>
> We were trying to merge a large index (9GB, 21 million docs) into the current
> index (only 13MB), using the mergeindexes command of CoreAdminHandler, but always
> run into OOM e
Hi Shawn,
thanks for your reply. But we don't think that this is really an OOM error,
because we already increased the heap to 64 GB and the OOM occurs at a usage
of 30-40 GB. So Solr would have to allocate more than 20 GB at once; this sounds a
little bit too much.
Furthermore we found Lucene45DocValuesProdu
On 12/16/2013 2:34 AM, Torben Greulich wrote:
> we get an OutOfMemoryError in RamUsageEstimator and are a little bit
> confused about the error.
> We are using solr 4.6 and are confused about the Lucene42DocValuesProducer.
> We checked current solr code and found that Lucene42NormsFormat will be
> r
I upgraded java to version 7 and everything seems to be stable now!
BR,
Arkadi
On 03/25/2013 09:54 PM, Shawn Heisey wrote:
On 3/25/2013 1:34 AM, Arkadi Colson wrote:
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m as
parameters. I also added -XX:+UseG1GC to the java proce
On 3/25/2013 1:34 AM, Arkadi Colson wrote:
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m as
parameters. I also added -XX:+UseG1GC to the java process. But now the
whole machine crashes! Any idea why?
Mar 22 20:30:01 solr01-gs kernel: [716098.077809] java invoked
oom-kille
Arkadi,
jstat -gcutil -h20 2000 100 also gives useful info about GC and I use
it a lot for quick insight into what is going on with GC. SPM (see
http://sematext.com/spm/index.html ) may also be worth using.
Otis
--
Solr & ElasticSearch Support
http://sematext.com/
On Mon, Mar 25, 2013 at
You can also use "-verbose:gc -XX:+PrintGCDateStamps -XX:+PrintGCDetails
-Xloggc:gc.log"
as additional options to get a "gc.log" file and see what GC is doing.
Regards
Bernd
Am 25.03.2013 16:01, schrieb Arkadi Colson:
> How can I see if GC is actually working? Is it written in the tomcat logs as
How can I see if GC is actually working? Is it written in the tomcat
logs as well or will I only see it in the memory graphs?
BR,
Arkadi
On 03/25/2013 03:50 PM, Bernd Fehling wrote:
We use munin with jmx plugin for monitoring all server and Solr installations.
(http://munin-monitoring.org/)
On
We use munin with jmx plugin for monitoring all server and Solr installations.
(http://munin-monitoring.org/)
Only for short time monitoring we also use jvisualvm delivered with Java SE JDK.
Regards
Bernd
Am 25.03.2013 14:45, schrieb Arkadi Colson:
> Thanks for the info!
> I just upgraded java f
Thanks for the info!
I just upgraded java from 6 to 7...
How exactly do you monitor the memory usage and the effect of the
garbage collector?
On 03/25/2013 01:18 PM, Bernd Fehling wrote:
The use of UseG1GC, yes,
but with Solr 4.x, Jetty 8.1.8 and Java HotSpot(TM) 64-Bit Server VM (1.7.0_07).
os.a
The use of UseG1GC, yes,
but with Solr 4.x, Jetty 8.1.8 and Java HotSpot(TM) 64-Bit Server VM (1.7.0_07).
os.arch: amd64
os.name: Linux
os.version: 2.6.32.13-0.5-xen
Only args are "-XX:+UseG1GC -Xms16g -Xmx16g".
Monitoring shows that 16g is a bit high, I might reduce it to 10g or 12g for
the slaves
Is somebody using the UseG1GC garbage collector with Solr and Tomcat 7?
Any extra options needed?
Thanks...
On 03/25/2013 08:34 AM, Arkadi Colson wrote:
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m
as parameters. I also added -XX:+UseG1GC to the java process. But now
t
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m as
parameters. I also added -XX:+UseG1GC to the java process. But now the
whole machine crashes! Any idea why?
Mar 22 20:30:01 solr01-gs kernel: [716098.077809] java invoked
oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
M
On 3/14/2013 3:35 AM, Arkadi Colson wrote:
Hi
I'm getting this error after a few hours of filling solr with documents.
Tomcat is running with -Xms1024m -Xmx4096m.
Total memory of host is 12GB. Softcommits are done every second and hard
commits every minute.
Any idea why this is happening and how
On 03/14/2013 03:11 PM, Toke Eskildsen wrote:
On Thu, 2013-03-14 at 13:10 +0100, Arkadi Colson wrote:
When I shutdown tomcat free -m and top keeps telling me the same values.
Almost no free memory...
Any idea?
Are you reading top & free right? It is standard behaviour for most
modern operatin
On Thu, 2013-03-14 at 13:10 +0100, Arkadi Colson wrote:
> When I shutdown tomcat free -m and top keeps telling me the same values.
> Almost no free memory...
>
> Any idea?
Are you reading top & free right? It is standard behaviour for most
modern operating systems to have very little free memory
When I shutdown tomcat free -m and top keeps telling me the same values.
Almost no free memory...
Any idea?
On 03/14/2013 10:35 AM, Arkadi Colson wrote:
Hi
I'm getting this error after a few hours of filling solr with
documents. Tomcat is running with -Xms1024m -Xmx4096m.
Total memory of hos
You mean this:
stats: entries_count : 24
entry#0 :
'NIOFSIndexInput(path="/home/connect/ConnectPORTAL/preview/solr-home/data/index/_2f3.frq")'=>'WiringDiagramSheetImpl.pageNumber',class
org.apache.lucene.search.FieldCache$StringIndex,null=>org.apache.lucene.search.FieldCache$StringIndex#32159051
Hi Uwe,
sorting needs to be well prepared.
A first rough check is the fieldCache; you can see it in the Solr Admin Stats.
The "insanity_count" there should be 0 (zero).
Only sort on fields which are prepared for sorting and where sorting makes sense.
Only facet on fields where faceting makes sense. I've seen syst
Hi Hamid,
I also encountered the same OOM issue on a Windows 2003 (32-bit) server... but
with only 3 million articles stored in Solr. I would like to know your
configuration for handling so many records.
Many thanks.
Best Regards
Benson
Nigam
RBS Global Banking & Markets
Office: +91 124 492 5506
-Original Message-
From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com]
Sent: 23 September 2011 09:35
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemoryError coming from TermVectorsReader
Anand,
But do you re
/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message -
> From: "anand.ni...@rbs.com"
> To: solr-user@lucene.apache.org
> Cc:
> Sent: Thursday, September 22, 2011 11:56 PM
> Subject: RE: OutOfMemoryError coming from
bit
Solr version : 3.4.0
Thanks & Regards
Anand
Anand Nigam
RBS Global Banking & Markets
Office: +91 124 492 5506
-Original Message-
From: Glen Newton [mailto:glen.new...@gmail.com]
Sent: 19 September 2011 16:52
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemoryErro
Please include information about your heap size (and other Java
command line arguments) as well as platform OS (version, swap size,
etc.), Java version, and underlying hardware (RAM, etc.) so we can better
help you.
From the information you have given, increasing your heap size should help.
Thanks,
Gl
Zoltan - Solr is not preventing you from giving your JVM 2GB heap, something
else is. If you paste the error we may be able to help.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message
> From: Zol
What do you have your commit parameters set to in
solrconfig.xml? I suspect you can make this all work by
reducing the RAM threshold in the config file.
Best
Erick
On Mon, May 2, 2011 at 4:55 AM, Zoltán Altfatter wrote:
> Hi,
>
> I receive OutOfMemoryError with Solr 3.1 when loading around 14
c per day
> From: Koji Sekiguchi
> To: solr-user@lucene.apache.org
> Sent: Sun, May 2, 2010 9:08:42 PM
> Subject: Re: OutOfMemoryError when using query with sort
>
> Hamid Vahedi wrote:
> > Hi, i using solr that running on
, 2010 9:08:42 PM
Subject: Re: OutOfMemoryError when using query with sort
Hamid Vahedi wrote:
> Hi, i using solr that running on windows server 2008 32-bit.
> I add about 100 million article into solr without set store attribute. (only
> store document id) (index file size about 164 GB)
&
Hamid Vahedi wrote:
Hi, I am using Solr running on Windows Server 2008 32-bit.
I added about 100 million articles into Solr without setting the store attribute
(only the document id is stored); the index file size is about 164 GB.
When I run a query without sort, it returns doc ids in a few ms, but when I
add sor
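A back-of-the-envelope sketch (my own numbers, assuming the sort field goes through Lucene's FieldCache, which keeps at least one int per document on the heap) of why adding a sort is so much heavier than just returning doc ids on a 32-bit JVM:

```java
public class SortMemoryEstimate {
    public static void main(String[] args) {
        long maxDoc = 100_000_000L;            // roughly the 100 million articles above
        long ordArrayBytes = maxDoc * 4;       // one int per document for the sort field
        System.out.printf("ord array alone: ~%d MB%n", ordArrayBytes / (1024 * 1024));
        // ~381 MB before counting the term values themselves, already a large slice
        // of the heap a 32-bit JVM can address.
    }
}
```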
I reduced the size of queryResultCache in solrconfig (from 500 down to 200), and that
seems to fix the issue as well.
Francis
-Original Message-
From: didier deshommes [mailto:dfdes...@gmail.com]
Sent: Thursday, September 24, 2009 3:32 PM
To: solr-user@lucene.apache.org
Cc: Andrew Mont
On Thu, Sep 24, 2009 at 5:40 PM, Francis Yakin wrote:
> You also can increase the JVM HeapSize if you have enough physical memory,
> like for example if you have 4GB physical, gives the JVM heapsize 2GB or
> 2.5GB.
Thanks,
we can definitely do that (we have 4GB available). I also forgot to
add
You can also increase the JVM heap size if you have enough physical memory; for
example, if you have 4 GB physical, give the JVM a heap size of 2 GB or 2.5 GB.
Francis
-Original Message-
From: didier deshommes [mailto:dfdes...@gmail.com]
Sent: Thursday, September 24, 2009 3:32 PM
To: solr-u
Thanks Mike, I have 25 million docs indexed, faceted on simple fields
(cardinality: 5 for the country field and 1 for the host field),
8192 MB, JRockit R27 (Java 6).
Unpredictable OOMs...
I set HashDocSet/max to 30,000, don't see any performance degradation
yet (the same response times for faceted
On 17-Jul-08, at 10:28 AM, Fuad Efendi wrote:
Change it to a higher value, for instance, 3. OpenBitSet is
created for larger values and requires a lot of memory...
Careful--hash sets of that size can be quite slow. It does make sense
to bump up the value to 6000 or so for large
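A rough comparison of the two representations being discussed (my own back-of-the-envelope sketch): an OpenBitSet costs one bit per document in the index no matter how few documents match, while a HashDocSet costs roughly one int per matching document plus hash-table overhead, which is why raising the cutoff only pays off while result sets stay small.

```java
public class DocSetSizeEstimate {
    public static void main(String[] args) {
        long maxDoc = 25_000_000L;               // index size mentioned in the thread
        long bitSetBytes = maxDoc / 8;           // OpenBitSet: one bit per doc in the index
        long hashedDocs = 30_000L;               // the HashDocSet cutoff being tried
        long hashSetBytes = hashedDocs * 4;      // roughly one int per matching doc
        System.out.printf("bitset per cached set: ~%d KB%n", bitSetBytes / 1024);
        System.out.printf("hash set at the cutoff: ~%d KB%n", hashSetBytes / 1024);
    }
}
```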