I tried sorting using a function query instead of the Lucene sort and found
no change in performance. I wonder if Lance's results are related to
something specific to his deployment?
I've been able to reduce these GC outages by:
1) Optimizing my schema. This reduced my index size by more than 50%.
2) Using smaller cache sizes. I started with filterCache, documentCache &
queryCache sizes of ~10,000; they're now at ~500.
3) Reducing heap allocation. I started at 27 GB; now I'm 'only' allocating ...
On Tue, Feb 3, 2009 at 11:58 AM, wojtekpia wrote:
> I noticed your wiki post about sorting with a function query instead of the
> Lucene sort mechanism. Did you see a significantly reduced memory footprint
> by doing this?
FunctionQuery derives field values from the FieldCache... so it would use
roughly the same amount of memory as sorting on that field.
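To make the FieldCache point concrete, here is a small sketch against the
Lucene 2.x API that Solr 1.3 uses (the index path and field name are made up).
A normal sort and a function query over the same field both end up populating
this one array, which is why switching between them doesn't change the
footprint:

  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.search.FieldCache;
  import org.apache.lucene.store.FSDirectory;

  public class FieldCacheFootprint {
      public static void main(String[] args) throws Exception {
          // Hypothetical index location and field name.
          IndexReader reader = IndexReader.open(FSDirectory.getDirectory("/path/to/index"));
          // One entry per document, built on first use and held until the
          // reader is closed; sorting and FunctionQuery share it.
          int[] prices = FieldCache.DEFAULT.getInts(reader, "price");
          System.out.println("FieldCache entry: " + prices.length + " ints (~"
              + (prices.length * 4L / (1024 * 1024)) + " MB)");
          reader.close();
      }
  }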
I noticed your wiki post about sorting with a function query instead of the
Lucene sort mechanism. Did you see a significantly reduced memory footprint
by doing this? Did you reduce the number of fields you allowed users to sort
by?
Lance Norskog-2 wrote:
> Sorting creates a large array with "roughly" an entry for every document in
> the index. ...
Sorting creates a large array with "roughly" an entry for every document in
the index. If it is not on an 'integer' field it takes even more memory. If
you do a sorted request and then don't sort for a while, that will drop the
sort structures and trigger a giant GC.
We went through some serious ...
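As a very rough back-of-the-envelope sketch of what Lance describes (the
document and term counts below are invented; real overhead depends on the
Lucene version and field type):

  public class SortMemoryEstimate {
      public static void main(String[] args) {
          long numDocs = 50000000L;      // hypothetical: 50M documents
          long uniqueTerms = 10000000L;  // hypothetical: 10M distinct string values

          // int sort field: FieldCache holds one int per document.
          long intFieldBytes = numDocs * 4;

          // String sort field: one ord per document plus the term values
          // themselves (assume ~20 chars each at 2 bytes/char plus roughly
          // 40 bytes of object overhead per term).
          long stringFieldBytes = numDocs * 4 + uniqueTerms * (20 * 2 + 40);

          System.out.println("int field:    ~" + intFieldBytes / (1024 * 1024) + " MB per sort field");
          System.out.println("String field: ~" + stringFieldBytes / (1024 * 1024) + " MB per sort field");
      }
  }

Multiply that by the number of distinct fields users are allowed to sort on,
and it's easy to see why trimming sort fields (or sorting on ints instead of
strings) helps.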
I profiled our application, and GC is definitely the problem. The IBM JVM
didn't change much. I'm currently looking into ways of reducing my memory
footprint.
... tune GC accordingly. Good luck!
--Renaud
-Original Message-
From: Feak, Todd [mailto:todd.f...@smss.sony.com]
Sent: Friday, January 23, 2009 8:13 AM
To: solr-user@lucene.apache.org
Subject: RE: Performance "dead-zone" due to garbage collection
Can you share your experience with the IBM JVM ...
To: solr-user@lucene.apache.org
Subject: Re: Performance "dead-zone" due to garbage collection
I'm not sure if you suggested it, but I'd like to try the IBM JVM. Aside from
setting my JRE paths, is there anything else I need to do to run inside the
IBM JVM? (e.g. re-compiling?)
Walter Underwood wrote:
No need to recompile. Install it and change your JAVA_HOME
and things should work. The options are different than for
the Sun JVM. --wunder
On 1/22/09 3:46 PM, "wojtekpia" wrote:
> I'm not sure if you suggested it, but I'd like to try the IBM JVM. Aside from
> setting my JRE paths, is there anything else I need to do to run inside the
> IBM JVM? (e.g. re-compiling?)
I'm not sure if you suggested it, but I'd like to try the IBM JVM. Aside from
setting my JRE paths, is there anything else I need to do to run inside the
IBM JVM? (e.g. re-compiling?)
Walter Underwood wrote:
> What JVM and garbage collector setting? We are using the IBM JVM with
> their concurrent generational collector. ...
... cache changes), but it will be quite a rough estimate.
-Todd
-Original Message-
From: wojtekpia [mailto:wojte...@hotmail.com]
Sent: Wednesday, January 21, 2009 3:08 PM
To: solr-user@lucene.apache.org
Subject: Re: Performance "dead-zone" due to garbage collection
(Thanks for the responses)
My filterCache hit rate is ~60% (so I'll try making it bigger), and I am CPU
bound.
How do I measure the size of my per-request garbage? Is it (total heap size
before collection - total heap size after collection) / # of requests to
cause a collection?
I'll try your ...
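For what it's worth, here is a rough sketch of that calculation using the
standard java.lang.management beans (the request counter is hypothetical; wire
it to whatever your load test already counts). It watches for collections and
divides the heap that was freed by the requests seen since the previous one:

  import java.lang.management.GarbageCollectorMXBean;
  import java.lang.management.ManagementFactory;

  public class GarbagePerRequest {

      // Hypothetical hook: bump this once per search request.
      static volatile long requestsServed = 0;

      public static void main(String[] args) throws Exception {
          long lastGcCount = gcCount();
          long lastRequests = requestsServed;
          long lastUsed = usedHeap();

          while (true) {
              Thread.sleep(1000);
              long used = usedHeap();
              long gcs = gcCount();
              if (gcs > lastGcCount) {
                  // Heap freed by the collection(s), spread over the requests
                  // that arrived since the last collection.
                  long freedBytes = lastUsed - used;
                  long requests = requestsServed - lastRequests;
                  if (requests > 0 && freedBytes > 0) {
                      System.out.println("~" + (freedBytes / requests) + " bytes of garbage per request");
                  }
                  lastGcCount = gcs;
                  lastRequests = requestsServed;
              }
              lastUsed = used;
          }
      }

      static long usedHeap() {
          return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed();
      }

      static long gcCount() {
          long total = 0;
          for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
              total += gc.getCollectionCount();
          }
          return total;
      }
  }

Running the JVM with -verbose:gc (or -Xloggc) and eyeballing the before/after
heap sizes in the log gives roughly the same answer with less work.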
> Sent: Wednesday, January 21, 2009 11:14 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Performance "dead-zone" due to garbage collection
>
> I'm using a recent version of Sun's JVM (6 update 7) and am using the
> concurrent generational collector. I've tried several other collectors, none
> seemed to help the situation. ...
...
-Todd Feak
-Original Message-
From: wojtekpia [mailto:wojte...@hotmail.com]
Sent: Wednesday, January 21, 2009 11:14 AM
To: solr-user@lucene.apache.org
Subject: Re: Performance "dead-zone" due to garbage collection
I'm using a recent version of Sun's JVM (6 update 7) and am using the
concurrent generational collector. ...
To: solr-user@lucene.apache.org
Subject: Performance "dead-zone" due to garbage collection
I'm intermittently experiencing severe performance drops due to Java garbage
collection. I'm allocating a lot of RAM to my Java process (27GB of the 32GB
physically available). Under heavy load, the performance drops approximately
every 10 minutes, and the drop lasts for 30-40 seconds. ...
I would say that putting up more Solr instances, each one with its own data
directory, could help if you can qualify your docs in such a way that you can
put "A"-type docs in index "A", "B"-type docs in index "B", and so on.
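If you do split things up that way (or just shard one big index), Solr 1.3's
distributed search can merge the results at query time. A minimal SolrJ
sketch, with the hostnames, layout and query string invented:

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
  import org.apache.solr.client.solrj.response.QueryResponse;

  public class ShardedQuery {
      public static void main(String[] args) throws Exception {
          // Any one of the instances can act as the aggregator.
          SolrServer solr = new CommonsHttpSolrServer("http://solr-a:8983/solr");

          SolrQuery q = new SolrQuery("ipod");
          // Fan the query out over the per-type indexes and merge the results.
          q.set("shards", "solr-a:8983/solr,solr-b:8983/solr");

          QueryResponse rsp = solr.query(q);
          System.out.println("hits: " + rsp.getResults().getNumFound());
      }
  }

Each JVM then carries a smaller index and a smaller heap, so individual
collections should be shorter, at the cost of an extra network hop per query.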
2009/1/21 wojtekpia
> I'm using a recent version of Sun's JVM (6 update 7) and am using the
> concurrent generational collector. ...
I'm using a recent version of Sun's JVM (6 update 7) and am using the
concurrent generational collector. I've tried several other collectors, none
seemed to help the situation.
I've tried reducing my heap allocation. The search performance got worse as I
reduced the heap. I didn't monitor the garbage collection ...
How many boxes are running your index? If it is just one, maybe distributing
your index will get you better performance during garbage collection.
2009/1/21 wojtekpia
> I'm intermittently experiencing severe performance drops due to Java garbage
> collection. I'm allocating a lot of RAM to my Java process ...
What JVM and garbage collector setting? We are using the IBM JVM with
their concurrent generational collector. I would strongly recommend
trying a similar collector on your JVM. Hint: how much memory is in
use after a full GC? That is a good approximation to the working set.
27GB is a very, very large heap ...
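A crude way to check that hint: the same MemoryMXBean calls work from anything
running inside the Solr JVM, or just use jmap/jconsole against the running
process; the standalone sketch below only shows the calls involved. Note that
System.gc() is merely a request to the collector, and a -verbose:gc log shows
the same live-after-collection number without it:

  import java.lang.management.ManagementFactory;
  import java.lang.management.MemoryUsage;

  public class WorkingSetAfterGc {
      public static void main(String[] args) {
          // Ask for a full collection, then look at what survives. The live
          // size after a full GC approximates the working set; the heap only
          // needs comfortable headroom above that.
          System.gc();
          MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
          System.out.println("live after full GC: ~" + heap.getUsed() / (1024 * 1024)
              + " MB of " + heap.getMax() / (1024 * 1024) + " MB max");
      }
  }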
I'm intermittently experiencing severe performance drops due to Java garbage
collection. I'm allocating a lot of RAM to my Java process (27GB of the 32GB
physically available). Under heavy load, the performance drops approximately
every 10 minutes, and the drop lasts for 30-40 seconds. This coincides with ...