Well, if my theory is right, you should be able to generate OOMs at will by
sorting and faceting on all your fields in one query.
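
For example (just a sketch, adjust the URL, core name and field list to your
setup), a single request along these lines should force every sort and facet
entry to be loaded at once:

  .../select?q=*:*
      &sort=f_dccreator_sort+asc,f_dctitle+asc,f_dcyear+asc
      &facet=true&facet.field=f_dcperson&facet.field=f_dcsubject
      &facet.field=f_dcyear&facet.field=f_dccollection&facet.field=f_dclang
      &facet.field=f_dctypenorm&facet.field=f_dccontenttype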

But Lucene's cache should be garbage collected. Can you take some memory
snapshots during the week? It should hit a certain point and stay steady there.
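
If you have the JDK tools on that box, something along these lines should give
you comparable snapshots over the week (the exact options depend on your JVM
and version):

  jmap -histo:live <solr-pid>                                # quick class histogram
  jmap -dump:live,format=b,file=solr-heap.hprof <solr-pid>   # full dump for MemoryAnalyzer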

How much memory are you giving your JVM? It looks like a lot given your
memory snapshot.
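
Very rough back-of-the-envelope, assuming I remember the 3.x StringIndex layout
correctly (one int per document for the ord array plus one String per unique
term, per cached field):

  ord arrays:   ~9 fields x 28,940,964 docs x 4 bytes   ~ 1 GB
  term strings: f_dctitle alone has 21,514,939 unique values, and
                f_dcperson, f_dcsubject and f_dccreator_sort add roughly
                another 32 million; at very roughly 40-100 bytes per
                java.lang.String that's easily several GB on its own

So a fieldCache retaining 14 GB+ is not out of line with those cardinalities.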

Best
Erick

On Thu, Jun 16, 2011 at 3:01 AM, Bernd Fehling
<bernd.fehl...@uni-bielefeld.de> wrote:
> Hi Erick,
>
> yes I'm sorting and faceting.
>
> 1) Fields for sorting:
>   sort=f_dccreator_sort, sort=f_dctitle, sort=f_dcyear
>   The parameter "facet.sort=" is empty, only using parameter "sort=".
>
> 2) Fields for faceting:
>   f_dcperson, f_dcsubject, f_dcyear, f_dccollection, f_dclang, f_dctypenorm,
> f_dccontenttype
>   Other faceting parameters:
>
> ...&facet=true&facet.mincount=1&facet.limit=100&facet.sort=&facet.prefix=&...
>
> 3) The LukeRequestHandler takes too long for my huge index, so this is from
>   the standalone Luke (compiled for Solr 3.2):
>   f_dccreator_sort = 10.029.196
>   f_dctitle        = 21.514.939
>   f_dcyear         =      1.471
>   f_dcperson       = 14.138.165
>   f_dcsubject      =  8.012.319
>   f_dccollection   =      1.863
>   f_dclang         =        299
>   f_dctypenorm     =         14
>   f_dccontenttype  =        497
>
> numDocs:    28.940.964
> numTerms:  686.813.235
> optimized:        true
> hasDeletions:    false
>
> What can you read/calculate from these values?
>
> Is my index too big for Lucene/Solr?
>
> What I don't understand is why the fieldCache is not garbage collected
> and therefore reduced in size from time to time.
>
> Regards
> Bernd
>
> On 15.06.2011 17:50, Erick Erickson wrote:
>>
>> The first question I have is whether you're sorting and/or
>> faceting on fields with many unique string values. I'm guessing
>> that somewhere you are. So, some questions to help
>> pin it down:
>> 1>  what fields are you sorting on?
>> 2>  what fields are you faceting on?
>> 3>  how many unique terms are in each (see the Solr admin page)?
>>
>> Best
>> Erick
>>
>> On Wed, Jun 15, 2011 at 8:22 AM, Bernd Fehling
>> <bernd.fehl...@uni-bielefeld.de>  wrote:
>>>
>>> Dear list,
>>>
>>> after getting an OOM exception after one week of operation with
>>> Solr 3.2, I used MemoryAnalyzer on the heap dump file.
>>> It looks like the fieldCache eats up all the memory.
>>>
>>>                                                              Objects    Shallow Heap      Retained Heap
>>> org.apache.lucene.search.FieldCache                                0               0  >= 14,636,950,632
>>> org.apache.lucene.search.FieldCacheImpl                            1              32  >= 14,636,950,384
>>> org.apache.lucene.search.FieldCacheImpl$StringIndexCache           1              32  >= 14,636,947,080
>>> org.apache.lucene.search.FieldCache$StringIndex                   10             320  >= 14,636,944,352
>>> java.lang.String[]                                               519     567,811,040  >= 13,503,733,312
>>> char[]                                                    81,766,595  11,604,293,712  >= 11,604,293,712
>>>
>>> The fieldCache retains over 14 GB of heap.
>>>
>>> Looking at the stats page, the description under fieldCache says:
>>> "Provides introspection of the Lucene FieldCache, this is **NOT** a cache
>>> that is managed by Solr."
>>>
>>> So is this a Jetty problem and not Solr?
>>>
>>> Why is fieldCache growing and growing until OOM?
>>>
>>> Regards
>>> Bernd
>>>
>
