When we talk about Collectors, we are not just talking about
"collecting" - whatever that means. There isn't really a separate
"collecting" phase - the whole algorithm is garbage collection - hence
the different implementations are called "collectors".

Usually, fragmentation is dealt with using a mark-compact collector
(IBM has also used a mark-sweep-compact collector).
Copying collectors are not only super efficient at collecting young
spaces, they are also great for fragmentation - when you copy every
live object to the new space, fragmentation disappears. The cost,
though, is double the space requirements.
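
To make that concrete, here is a minimal Java sketch of the semi-space
copying idea (the toy heap model and all the names are mine, not
anything from an actual JVM): live objects get copied, in order, into
the empty half, which leaves them contiguous - but the other half
always sits idle, hence the doubled space cost.

    // Toy semi-space model: the heap is split into two halves and
    // allocation just bumps a pointer in the active half. A collection
    // copies the live objects to the bottom of the other half and swaps.
    class SemiSpace {
        private int[] from = new int[64]; // active half (ints stand in for objects)
        private int[] to = new int[64];   // idle half - the doubled space cost
        private int top = 0;              // bump-allocation pointer

        int allocate(int value) {         // returns the object's "address"
            from[top] = value;
            return top++;
        }

        // liveAddresses plays the role of the marked/reachable set.
        void collect(java.util.List<Integer> liveAddresses) {
            int newTop = 0;
            for (int addr : liveAddresses) {
                to[newTop++] = from[addr]; // copied contiguously: no holes survive
            }
            int[] tmp = from; from = to; to = tmp; // flip the two spaces
            top = newTop;                 // everything above top is free again
        }
    }

After collect(), every live object has a new address; a real collector
also has to rewrite every reference to the moved objects, which is the
"moving live objects" cost discussed further down the thread.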

So mark-compact is a compromise. First you mark what's reachable, then
everything that's marked is copied/compacted to the bottom of the heap.
It's all part of a single "collection", though.
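
Roughly, the compact step works like the sketch below (again just an
illustration with made-up names; a real collector must also fix up
every pointer into the objects it slides): marked objects slide down
over the garbage, so free space ends up as one contiguous block at the
top of the heap - no second space required, unlike copying.

    // Toy mark-compact: one heap with a mark bit per slot. Compaction
    // slides each marked (live) slot down toward the bottom, squeezing
    // out the holes that dead objects left behind.
    class MarkCompactHeap {
        private int[] heap = new int[64];
        private boolean[] marked = new boolean[64];
        private int top = 0;              // bump-allocation pointer

        int allocate(int value) {
            heap[top] = value;
            return top++;
        }

        void mark(int addr) {             // stand-in for the tracing phase
            marked[addr] = true;
        }

        void compact() {
            int dest = 0;
            for (int src = 0; src < top; src++) {
                if (marked[src]) {
                    heap[dest++] = heap[src]; // slide live object downward
                    marked[src] = false;      // clear mark for the next cycle
                }
            }
            top = dest; // free space is now one contiguous run above top
        }
    }

The trade-off versus copying: no doubled space requirement, but
typically extra passes over the heap, so compaction tends to take
longer than a straight copy.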

Jonathan Ariel wrote:
> Maybe what's missing here is how I got the 11%. I just ran Solr with the
> following JVM params: -XX:+PrintGCApplicationConcurrentTime
> -XX:+PrintGCApplicationStoppedTime. With those I can measure the amount of
> time the application runs between collection pauses and the length of the
> collection pauses, respectively.
> I think that in this case the 11% is just for memory collection and not
> defragmentation... but I'm not 100% sure.
>
> On Fri, Sep 25, 2009 at 5:05 PM, Fuad Efendi <f...@efendi.ca> wrote:
>
>> But again, GC is not just "Garbage Collection" as many in this thread
>> think... it is also "memory defragmentation", which is much more costly
>> than "collection" because it needs to move _live_objects_ somewhere (and
>> wait/lock until such objects get unlocked to be moved...) - obviously
>> more memory helps...
>>
>> 11% is extremely high.
>>
>>
>> -Fuad
>> http://www.linkedin.com/in/liferay
>>
>>
>>> -----Original Message-----
>>> From: Jonathan Ariel [mailto:ionat...@gmail.com]
>>> Sent: September-25-09 3:36 PM
>>> To: solr-user@lucene.apache.org
>>> Subject: Re: FW: Solr and Garbage Collection
>>>
>>> I'm not planning on lowering the heap. I just want to lower the time
>>> "wasted" on GC, which is 11% right now.So what I'll try is changing the
>>>       
>> GC
>>     
>>> to -XX:+UseConcMarkSweepGC
>>>
>>> On Fri, Sep 25, 2009 at 4:17 PM, Fuad Efendi <f...@efendi.ca> wrote:
>>>
>>>> Mark,
>>>>
>>>> what if a piece of code needs 10 contiguous Kb to load a document field?
>>>> How are locked memory pieces optimized/moved (putting almost the whole
>>>> application on hold)?
>>>> Lowering the heap is a _bad_ idea; we will have extremely frequent GC
>>>> (optimization of live objects!!!) even if RAM is (theoretically) enough.
>>>>
>>>> -Fuad
>>>>
>>>>
>>>>> Fuad, you didn't read the thread right.
>>>>>
>>>>> He is not having a problem with OOM. He got the OOM because he
>>>>> lowered the heap to try and help GC.
>>>>>
>>>>> He normally runs with a heap that can handle his FC.
>>>>>
>>>>> Please re-read the thread. You are confusing the thread.
>>>>>
>>>>> - Mark
>>>>>
>>>>>> GC will frequently happen even if RAM is more than enough: in case it
>>>>>> is heavily sparse... so have even more RAM!
>>>>>> -Fuad


-- 
- Mark

http://www.lucidimagination.com


