Yeah, I sent a note to the web folks there about the images.
I'll leave the rest to people who really _understand_ all that stuff
On Thu, Sep 20, 2012 at 8:31 AM, Bernd Fehling wrote:
> Hi Erik,
>
> thanks for the link.
> Now if we could see the images in that article that would be great
Hi Erik,
thanks for the link.
Now if we could see the images in that article that would be great :-)
By the way, one cause of the memory jumps was traced to a "killer search" from
a user.
The interesting part is that the verbose gc.log showed a "hiccup" in the GC.
Which means that during a GC r
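As a point of reference, the kind of verbose gc.log mentioned above is usually
produced with HotSpot flags along these lines (the log path and start.jar are
just placeholders for however Solr is launched):

    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:logs/gc.log -jar start.jar

Each collection then gets a timestamped entry, which is what makes a GC
"hiccup" visible in the log.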
Here's a wonderful writeup about GC and memory in Solr/Lucene:
http://searchhub.org/dev/2011/03/27/garbage-collection-bootcamp-1-0/
Best
Erick
On Thu, Sep 20, 2012 at 5:49 AM, Robert Muir wrote:
> On Thu, Sep 20, 2012 at 3:09 AM, Bernd Fehling wrote:
>
>> By the way while looking for upgradi
On Thu, Sep 20, 2012 at 3:09 AM, Bernd Fehling wrote:
> By the way, while looking into upgrading to JDK7, the release notes say under
> the section "known issues" about the "PorterStemmer" bug:
> "...The recommended workaround is to specify -XX:-UseLoopPredicate on the
> command line."
> Is this st
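Assuming the standard Jetty start, the workaround from the release notes would
simply be added to the startup flags, roughly like this (start.jar is a
placeholder for however Solr is launched):

    java -XX:-UseLoopPredicate -jar start.jar

-XX:-UseLoopPredicate turns off the loop predication optimization involved in
that HotSpot bug, trading a bit of JIT performance for correctness.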
That is the problem with a JVM, it is a virtual machine.
Ask 10 experts about good JVM settings and you get 15 answers. Maybe that's a
tradeoff of the JVM's flexibility. There is always a right setting for any
application running on a JVM, but you just have to find it.
How about a Solr Wiki page abo
I have used this setting to reduce GC pauses with CMS - Java 6u23:
-XX:+ParallelRefProcEnabled
With this setting, the JVM does GC of weak references with multiple threads and
pauses are low.
Please use this option only when you have multiple cores.
For me, CMS gives better results.
Sent from my iPhone
On
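For completeness, the flag needs its leading dash and is used here together
with CMS; a minimal sketch (heap sizes are placeholders) would be:

    java -Xms4g -Xmx4g -XX:+UseConcMarkSweepGC -XX:+ParallelRefProcEnabled -jar start.jar

-XX:+ParallelRefProcEnabled lets the collector process Reference objects
(weak/soft/phantom) with several threads, which is why it only pays off on
multi-core machines.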
Ooh, that is a nasty one. Is this JDK 7 only or also in 6?
It looks like the "-XX:ConcGCThreads=1" option is a workaround, is that right?
We've had some 1.6 JVMs behave in the same way that bug describes, but I
haven't verified it is because of finalizer problems.
wunder
On Sep 19, 2012, at 5:
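If that reading of the bug report is right, the workaround would again just be
a startup flag, e.g. (illustrative only):

    java -XX:+UseConcMarkSweepGC -XX:ConcGCThreads=1 ...

-XX:ConcGCThreads=1 restricts the collector's concurrent phases to a single
thread, at the cost of longer concurrent cycles on large heaps.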
Two in one morning
The JVM bug I'm familiar with is here:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7112034
FWIW,
Erick
On Wed, Sep 19, 2012 at 8:20 AM, Shawn Heisey wrote:
> On 9/18/2012 9:29 PM, Lance Norskog wrote:
>>
>> There is a known JVM garbage collection bug that causes th
On 9/18/2012 9:29 PM, Lance Norskog wrote:
There is a known JVM garbage collection bug that causes this. It has to do with
reclaiming Weak references, I think in WeakHashMap. Concurrent garbage
collection collides with this bug and the result is that old field cache data
is retained after clos
the problem.)
- Original Message -
| From: "Bernd Fehling"
| To: solr-user@lucene.apache.org
| Sent: Tuesday, September 18, 2012 11:29:56 PM
| Subject: Re: SOLR memory usage jump in JVM
|
| Hi Lance,
|
| thanks for this hint. Something I also see, a sawtooth. This is
| coming from Eden
Hi Otis,
because I see this on my slave without replication, there is no index file
change.
I also have tons of logged data to dig into :-)
I took dumps at different stages: freshly installed, after the 5GB jump, after the
system was hanging right after replication, ...
The last one was interesting when
- Original Message -
> | From: "Yonik Seeley"
> | To: solr-user@lucene.apache.org
> | Sent: Tuesday, September 18, 2012 7:38:41 AM
> | Subject: Re: SOLR memory usage jump in JVM
> |
> | On Tue, Sep 18, 2012 at 7:45 AM, Bernd Fehling wrote:
> | > I use
Hi Bernd,
On Tue, Sep 18, 2012 at 3:09 AM, Bernd Fehling wrote:
> Hi Otis,
>
> not really a problem because I have plenty of memory ;-)
> -Xmx25g -Xms25g -Xmn6g
Good.
> I'm just interested in this.
> Can you report similar jumps within JVM with your monitoring at sematext?
Yes. More importan
| Sent: Tuesday, September 18, 2012 7:38:41 AM
| Subject: Re: SOLR memory usage jump in JVM
|
| On Tue, Sep 18, 2012 at 7:45 AM, Bernd Fehling wrote:
| > I used GC in different situations and tried back and forth.
| > Yes, it reduces the used heap memory, but not by 5GB.
| > Even so t
On Tue, Sep 18, 2012 at 7:45 AM, Bernd Fehling wrote:
> I used GC in different situations and tried back and forth.
> Yes, it reduces the used heap memory, but not by 5GB.
> Even though the GC from jconsole (or jvisualvm) is a "Full GC".
Whatever "Full GC" means ;-)
In the past at least, I've found th
I used GC in different situations and tried back and forth.
Yes, it reduces the used heap memory, but not by 5GB.
Even though the GC from jconsole (or jvisualvm) is a "Full GC".
But since you bring GC into this, there is another interesting thing.
- I have one slave running for a week which ends up aro
What happens if you attach jconsole (it should ship with your JDK) and force a GC?
Does the extra 5G go away?
I'm wondering if you get a couple of warming searchers going simultaneously
and you happened to measure right after that.
Uwe has an interesting blog post about memory; he recommends using as
little as pos
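If attaching jconsole to the server is inconvenient, a full GC can usually be
forced from a shell instead (availability depends on the JDK version, so treat
these as examples; <pid> is the Solr JVM's process id):

    jmap -histo:live <pid>    # live-object histogram, forces a full GC first
    jcmd <pid> GC.run         # JDK 7+ only

Comparing heap usage right before and after shows whether the extra 5G is
simply uncollected garbage.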
Hi Otis,
not really a problem because I have plenty of memory ;-)
-Xmx25g -Xms25g -Xmn6g
I'm just interested in this.
Can you report similar jumps within JVM with your monitoring at sematext?
Actually I would expect to see jumps of 0.5GB or even 1GB, but 5GB?
And what is the cause, a cache?
A
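For anyone reproducing the setup, those sizes just go on the startup command,
something like (only the -X values are taken from above, the rest is a
placeholder):

    java -Xms25g -Xmx25g -Xmn6g -jar start.jar

-Xms/-Xmx pin the heap at 25GB so it never resizes, and -Xmn fixes the young
generation at 6GB, which also bounds the Eden "sawtooth" mentioned earlier in
the thread.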
Hi Bernd,
But is this really (causing) a problem? What -Xmx are you using?
Otis
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring - http://sematext.com/spm/index.html
On Tue, Sep 18, 2012 at 2:50 AM, Bernd Fehling wrote:
> Hi list,
>
> while monitoring
Hi list,
while monitoring my systems I see a jump of about 5GB in JVM memory consumption
after 2 to 5 days of running.
After starting the system (search node only, no replication during search)
Solr uses between 6.5GB and 10.3GB of JVM heap when idle.
If the search node is online and serves requests
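A lightweight way to watch for such a jump without a full monitoring stack is
jstat against the running JVM, e.g. sampling every 5 seconds (<pid> is a
placeholder):

    jstat -gcutil <pid> 5000

The output shows per-generation occupancy in percent, so a sudden 5GB step
shows up as a jump in the old-generation (O) column.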