I agree with Yonik of course;
But…

You should see OOM errors in this case. With "virtualization", however, it is
unpredictable… and the JVM may not even have a few bytes left to write the
OOM into the log file (because we are catching Throwable and trying to
generate an HTTP 500 instead. AAAAAAAAAAAA!!!!!!! Freaky…)
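
To make that concrete, here is a minimal, self-contained sketch of the
anti-pattern (made-up class and method names, not Solr's actual handler code):
a blanket catch (Throwable) also catches OutOfMemoryError, and the logging /
HTTP 500 path then needs heap that may no longer exist.

    public class SwallowedOomSketch {

        static String buildResponse() {
            // Force an OutOfMemoryError with a deliberately oversized allocation.
            int[] huge = new int[Integer.MAX_VALUE];
            return "ok " + huge.length;
        }

        public static void main(String[] args) {
            try {
                System.out.println(buildResponse());
            } catch (Throwable t) {
                // The OutOfMemoryError lands here too. In a real server, logging it
                // and building an HTTP 500 both need heap we may no longer have, so
                // the error can vanish without ever reaching the log file.
                System.err.println("HTTP 500: " + t);
            }
        }
    }

The JVM never dies loudly and nothing is logged as a fatal OOM; the only trace
is whatever the 500 path manages to print, if anything.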

Ok…

Sorry for not contributing a patch…


-Fuad (ZooKeeper)
http://www.OutsideIQ.com







On 11-08-17 6:01 PM, "Yonik Seeley" <yo...@lucidimagination.com> wrote:

>On Wed, Aug 17, 2011 at 5:56 PM, Jason Toy <jason...@gmail.com> wrote:
>> I've only set the minimum memory and have not set the maximum memory.  I'm
>>doing
>> more investigation and I see that I have 100+ dynamic fields for my
>> documents, not the 10 fields I quoted earlier.  I also sort against
>>those
>> dynamic fields often,  I'm reading that this potentially uses a lot of
>> memory.  Could this be the cause of my problems and if so what options
>>do I
>> have to deal with this?
>
>Yes, that's most likely the problem.
>Sorting on an integer field causes a FieldCache entry with an
>int[maxDoc] (i.e. 4 bytes per document in the index, regardless of
>whether it has a value for that field or not).
>Sorting on a string field is 4 bytes per doc in the index (the ords)
>plus the memory to store the actual unique string values.
>
>-Yonik
>http://www.lucidimagination.com
>
>
>
>> On Wed, Aug 17, 2011 at 2:46 PM, Markus Jelsma
>> <markus.jel...@openindex.io>wrote:
>>
>>> Keep in mind that a commit warms up another searcher, potentially
>>> doubling RAM consumption in the background due to cache-warming queries
>>> being executed (newSearcher event). Also, where is your Xmx switch? I
>>> don't know how your JVM will behave if you set Xms > Xmx.
>>>
>>> 65M docs is quite a lot, but it should run fine with a 3GB heap
>>> allocation.
>>>
>>> It's good practice to use a master for indexing without any caches or
>>> warm-up queries; once you exceed a certain number of documents, it will
>>> bite.
>>>
>>> > I have a large EC2 instance (7.5 GB RAM); it dies every few hours with
>>> > out-of-heap-memory errors.  I started upping the minimum memory required;
>>> > currently I use -Xms3072M.
>>> > I insert about 50k docs an hour and I currently have about 65 million docs
>>> > with about 10 fields each. Is this already too much data for one box? How
>>> > do I know when I've reached the limit of this server? I have no idea how
>>> > to keep control of this issue.  Am I just supposed to keep upping the
>>> > minimum RAM used for Solr? How do I know what the right amount of RAM to
>>> > use is? Must I keep adding more memory as the index size grows? I'd rather
>>> > the query be a little slower if I can use constant memory and have the
>>> > search read from disk.
>>>
>>
>>
>>
>> --
>> - sent from my mobile
>> 6176064373
>>
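
For what it's worth, a rough back-of-the-envelope in code, using Yonik's
4-bytes-per-doc figure and the numbers quoted above (~65M docs, 100+ dynamic
fields, assuming all of them get sorted on at some point -- assumed values,
order of magnitude only):

    public class FieldCacheEstimate {
        public static void main(String[] args) {
            long maxDoc = 65_000_000L;       // ~65M docs, from the thread
            long bytesPerDocPerField = 4L;   // int[maxDoc] / string ords: 4 bytes per doc
            long sortedFields = 100L;        // assuming ~100 dynamic fields get sorted on

            long perField = maxDoc * bytesPerDocPerField;
            long total = perField * sortedFields;

            System.out.println("per field : ~" + perField / 1_000_000 + " MB");
            System.out.println("all fields: ~" + total / 1_000_000_000 + " GB");
            // per field : ~260 MB
            // all fields: ~26 GB -- far beyond a 3GB heap or 7.5GB of RAM,
            // before even counting the unique string values for string sorts.
        }
    }

Which lines up with Yonik's "that's most likely the problem": sorting on that
many dynamic fields is enough on its own to explain the OOMs.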

