Thanks everyone for replying to this issue. Just a final comment on this
issue, which I was working on closely: we have fixed it. It was a bug in our
custom component, which we wrote to convert delete-by-query into
delete-by-id. We were using BytesRef incorrectly; we were not making a deep
copy of it.
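A minimal sketch of the kind of fix described above, assuming the custom
component collects the matched ids as BytesRef values before issuing the
per-id deletes; the class and method names here are hypothetical, while
BytesRef.deepCopyOf is the actual Lucene helper for taking an independent
copy:

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.lucene.util.BytesRef;

    // Sketch: converting the ids matched by a delete-by-query into material for
    // per-id deletes. The bug was handing on the same, reused BytesRef instance
    // for every id; the fix is to deep-copy each one.
    public class DeleteIdCollector {

      public List<BytesRef> collectIds(Iterable<BytesRef> matchedIds) {
        List<BytesRef> ids = new ArrayList<>();
        for (BytesRef id : matchedIds) {
          // Wrong: ids.add(id);            // every entry points at the same reused bytes
          ids.add(BytesRef.deepCopyOf(id)); // right: each entry owns an independent copy
        }
        return ids;
      }
    }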
Thanks, Erick, for replying. I have deployed the changes to production; we
will find out soon whether it is still causing OOMs. As for commits, we are
doing auto commits after 10K docs or 30 secs.
If I get time I will try to run a local test to check whether we can hit an
OOM because of the 1K map entries.
Rohit:
Well, whenever I see something like "I have this custom component..."
I immediately want the problem to be demonstrated without that custom
component before trying to debug Solr.
As Chris explained, we can't clear the 1K entries. It's hard to
imagine why keeping the last 1,000 entries around
I think we have figured out the issue. When we were converting delete-by-query
in a Solr handler we were not making a deep copy of the BytesRef. We were
keeping a reference to the same object, which was causing the old deletes map
(a LinkedHashMap) to grow to more than 1K entries.
But I think it is still not clearing those 1K entries.
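A small standalone demo of the failure mode described above: a size-bounded
LinkedHashMap (the oldDeletes map in Solr's UpdateLog is bounded this way, to
roughly 1,000 entries) whose key is a shared BytesRef that keeps being mutated.
The cap, the class name, and the id-encoding helper are illustrative, not
Solr's actual code; only BytesRef and deepCopyOf are the real Lucene API.

    import java.util.LinkedHashMap;
    import java.util.Map;

    import org.apache.lucene.util.BytesRef;

    // Demo: a size-bounded LinkedHashMap whose key is mutated after insertion can
    // grow far past its cap, because the eldest entry can no longer be found (its
    // hash code has changed) when the map tries to evict it.
    public class MutatedKeyDemo {

      public static void main(String[] args) {
        Map<BytesRef, Long> bounded = new LinkedHashMap<BytesRef, Long>() {
          @Override
          protected boolean removeEldestEntry(Map.Entry<BytesRef, Long> eldest) {
            return size() > 1000;  // illustrative cap, similar in spirit to oldDeletes
          }
        };

        byte[] buffer = new byte[8];
        BytesRef reused = new BytesRef(buffer);    // one shared key, reused for every delete
        for (long id = 0; id < 100_000; id++) {
          writeId(id, buffer);                     // mutates the key's backing bytes in place
          bounded.put(reused, id);                 // buggy: no deep copy
          // Fixed version: bounded.put(BytesRef.deepCopyOf(reused), id);
        }

        // With deep-copied keys this prints 1000; with the shared, mutated key the
        // map keeps growing well past the cap.
        System.out.println("entries kept: " + bounded.size());
      }

      private static void writeId(long id, byte[] dst) {
        for (int i = 0; i < 8; i++) {
          dst[i] = (byte) (id >>> (8 * i));
        }
      }
    }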
For commits we are relying on auto commits. We have defined the following in
our configs:
  <autoCommit>
    <maxDocs>10000</maxDocs>
    <maxTime>30000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <maxTime>15000</maxTime>
  </autoSoftCommit>
One thing which I would like to mention is that we are not calling deleteById
directly from the client. We
: OK, The whole DBQ thing baffles the heck out of me so this may be
: totally off base. But would committing help here? Or at least be worth
: a test?
this isn't DBQ -- the OP specifically said deleteById, and that the
oldDeletes map (only used for DBI) was the problem according to the heap
dumps
Chris:
OK, The whole DBQ thing baffles the heck out of me so this may be
totally off base. But would committing help here? Or at least be worth
a test?
On Tue, Mar 21, 2017 at 4:28 PM, Chris Hostetter
wrote:
>
> : Thanks for replying. We are using Solr version 6.1. I too saw that it is
> : bounded by a 1K count
: Thanks for replying. We are using Solr version 6.1. I too saw that it is
: bounded by a 1K count, but after looking at the heap dump I was amazed that it
: can keep more than 1K entries. But yes, I see around 7M entries according to
: the heap dump, and around 17G of memory occupied by BytesRef there.
what
Hi Chris,
Thanks for replying. We are using Solr version 6.1. I too saw that it is
bounded by a 1K count, but after looking at the heap dump I was amazed that it
can keep more than 1K entries. But yes, I see around 7M entries according to
the heap dump, and around 17G of memory occupied by BytesRef there.
It
: facing. We are storing messages in Solr as documents. We are running a
: pruning job every night to delete old message documents. We are deleting
: old documents by issuing multiple delete-by-id requests to Solr. The document
: count can be in the millions, and we delete them using the SolrJ client. We are
:
Hi All,
I am looking for some help to solve an out-of-memory issue which we are
facing. We are storing messages in Solr as documents. We are running a
pruning job every night to delete old message documents. We are deleting
old documents by issuing multiple delete-by-id requests to Solr. The document
count can be in the millions, and we delete them using the SolrJ client. We are
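A minimal sketch of this kind of nightly pruning client, assuming SolrJ 6.x;
the collection name, the batch size, and the source of expired ids are
placeholders, not anything from the original post:

    import java.util.Collections;
    import java.util.List;

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    // Sketch of a pruning job that deletes expired documents by id in batches
    // rather than with one large delete-by-query.
    public class PruningJob {

      public static void main(String[] args) throws Exception {
        String collection = "messages";  // placeholder collection name
        try (SolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
          List<String> expiredIds = findExpiredIds();  // placeholder: ids chosen by the nightly job
          int batchSize = 1000;
          for (int i = 0; i < expiredIds.size(); i += batchSize) {
            List<String> batch =
                expiredIds.subList(i, Math.min(i + batchSize, expiredIds.size()));
            client.deleteById(collection, batch);      // delete-by-id, not delete-by-query
          }
          client.commit(collection);  // production relies on autoCommit; explicit commit for the demo
        }
      }

      private static List<String> findExpiredIds() {
        // placeholder for however the job determines which message documents are old
        return Collections.emptyList();
      }
    }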