OK, why can't you give the JVM more memory, even on a one-time
basis, just to get past this problem? You still haven't told us how
much memory you're giving the JVM in the first place.
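
For a one-off optimize, something like this would do it (a sketch,
assuming a Solr 4.x-style install; the 16g figure is purely
illustrative, size it to your actual hardware):

    # Jetty-based Solr 4.x: raise the heap just for this run
    java -Xmx16g -jar start.jar

    # or, if you're on 4.10+ with the bin/solr script:
    bin/solr start -m 16g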

Best,
Erick

On Sun, Jan 11, 2015 at 7:54 AM, Jack Krupansky
<jack.krupan...@gmail.com> wrote:
> Normally, Lucene merges segments in the background as you index, so
> only a fraction of your total deletions should be present in the index
> at any given time, and you should never have an absolute need to do an
> old-fashioned full optimize.
>
> What merge policy are you using?
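>
> For reference, this is where it's set (a sketch of a Solr 4.x-style
> solrconfig.xml; TieredMergePolicy is the default, and the values shown
> are the stock defaults, not recommendations):
>
>   <indexConfig>
>     <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
>       <int name="maxMergeAtOnce">10</int>
>       <int name="segmentsPerTier">10</int>
>     </mergePolicy>
>   </indexConfig>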
>
> Is Solr otherwise running fine other than this optimize operation?
>
>
> -- Jack Krupansky
>
> On Sun, Jan 11, 2015 at 1:46 AM, ig01 <inna.gel...@elbitsystems.com> wrote:
>
>> Thank you all for your responses.
>> The thing is that we have a 180G index, and about half of it is
>> deleted documents.
>> We tried to run an optimize in order to shrink the index size, but it
>> crashes with an out-of-memory error when the process reaches 120G.
>> Is it possible to optimize parts of the index?
>> Please advise what we can do in this situation.
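>>
>> For example, would either of these work (a sketch, assuming the stock
>> update handler; host and core name are placeholders)?
>>
>>   # merge down to at most N segments instead of a full optimize to 1
>>   curl 'http://localhost:8983/solr/collection1/update?optimize=true&maxSegments=10'
>>
>>   # or reclaim deleted docs without forcing a single-segment merge
>>   curl 'http://localhost:8983/solr/collection1/update?commit=true&expungeDeletes=true'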
