Hi Chris,

Thanks for replying. We are using Solr 6.1. I also saw that the map is
bounded at a count of 1K, which is why I was surprised that the heap dump
shows it holding far more than that. To answer your question: yes, the heap
dump shows around 7M entries, with roughly 17G of memory occupied by the
BytesRef instances there.
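
For reference while reading the heap dump, this is the size-bounded
LinkedHashMap pattern I expected oldDeletes to follow (a minimal sketch of
the general technique, not the actual Solr code; the class name and generic
types are mine):

import java.util.LinkedHashMap;
import java.util.Map;

// A size-bounded map: once size() exceeds maxEntries, LinkedHashMap
// evicts the eldest entry on each subsequent insert.
public class BoundedDeleteMap<K, V> extends LinkedHashMap<K, V> {

  private final int maxEntries;

  public BoundedDeleteMap(int maxEntries) {
    super(maxEntries);
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    // Returning true tells LinkedHashMap to drop the oldest entry.
    return size() > maxEntries;
  }
}

If oldDeletes is bounded like this at 1000 entries, it should never grow
past the cap, which is why the 7M entries in the dump look so odd to me.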

It would also help to understand why the oldDeletes map is needed there in
the first place. I am still digging, and if I find anything I will share it.
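
For context, this is roughly how our nightly pruning job issues the deletes
from SolrJ (a simplified sketch; the URL, core name, ids, and class name are
illustrative, and newer SolrJ versions may prefer a builder to construct the
client):

import java.util.Arrays;
import java.util.List;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class PruneOldMessages {

  public static void main(String[] args) throws Exception {
    // Placeholder URL pointing at the core that holds the messages.
    String url = "http://localhost:8983/solr/messages";
    try (SolrClient client = new HttpSolrClient(url)) {
      // Placeholder ids; the real job collects the expired document ids.
      List<String> expiredIds = Arrays.asList("msg-1", "msg-2", "msg-3");

      // deleteById accepts a list, so many ids can go in one request
      // instead of one call per document, followed by a single commit.
      client.deleteById(expiredIds);
      client.commit();
    }
  }
}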

Thanks
Rohit


On Tue, Mar 21, 2017 at 4:00 PM, Chris Hostetter <hossman_luc...@fucit.org>
wrote:

>
> : facing. We are storing messages in solr as documents. We are running a
> : pruning job every night to delete old message documents. We are deleting
> : old documents by calling multiple delete by id query to solr. Document
> : count can be in millions which we are deleting using SolrJ client. We are
> : using delete by id because it is faster than delete by query. It works
> : great for few days but after a week these delete by id get accumulated in
> : Linked hash map of UpdateLog (variable name as olddeletes). Once this map
> : is full then we are seeing out of memory.
>
> first off: what version of Solr are you running?
>
> UpdateLog.oldDeletes is bounded at numDeletesToKeep=1000 entries -- any
> more than that and the oldest entry is automatically deleted when more
> items are added.  So it doesn't really make sense to me that you would be
> seeing OOMs from this map filling up endlessly.
>
> Are you seeing more than 1000 entries in this map when you look at your
> heap dumps?
>
> : I am not sure why it is keeping the reference of all old deletes.
>
> It's complicated -- the short answer is that it's protection against out
> of order updates arriving from other nodes in SolrCloud under highly
> concurrent updates.
>
>
>
> -Hoss
> http://www.lucidworks.com/
>
