msfroh commented on pull request #2088:
URL: https://github.com/apache/lucene-solr/pull/2088#issuecomment-731354457


   > Maybe we could add a simple unit test here that keeps re-indexing the same small set of docs with many unique fields, then calls IW.deleteAll, many times? Such that the test would OOME on trunk today, and then pass with the fix? Would that test take too long to OOME (how slow is the leak?)? If so we could mark it as @Slow.
   
   I'll give that a shot. On trunk, my code ran for about 12 hours and was still limping along (at ~5 documents/second, versus ~250 documents/second in the first minute), but it was also doing some heavy analysis and flushing to disk. A test that just adds a doc with a bunch of doc values fields, calls `deleteAll()`, and repeats should reproduce the problem much more quickly. I think I still need to trigger `FieldInfos.finish()` to force it to allocate the large array. Alternatively, we would eventually get an overflow of `lowestUnassignedFieldNumber` (though it looks like `FieldNumbers` will happily just hand out negative field numbers in that case).
   
   Anyway, I'll try to come up with a test that breaks quickly.
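   For anyone following along, here is a minimal toy model of the leak mechanism (this is *not* Lucene's actual `FieldNumbers`/`FieldInfos` code; the class and field names are illustrative): the global name-to-number registry survives `deleteAll()`, so the dense by-number array allocated in `finish()` is sized by an ever-growing `lowestUnassignedFieldNumber` when each re-index round introduces new unique field names.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Toy stand-in for Lucene's FieldNumbers: a global name->number
   // registry that is NOT cleared by deleteAll() on trunk.
   class ToyFieldNumbers {
       private final Map<String, Integer> nameToNumber = new HashMap<>();
       private int lowestUnassignedFieldNumber = 0;

       int addOrGet(String name) {
           return nameToNumber.computeIfAbsent(name, n -> lowestUnassignedFieldNumber++);
       }

       // Mimics the FieldInfos.finish() allocation: a dense array
       // indexed by field number, sized by the highest number handed out.
       Object[] finish() {
           return new Object[lowestUnassignedFieldNumber];
       }
   }

   public class LeakSketch {
       public static void main(String[] args) {
           ToyFieldNumbers numbers = new ToyFieldNumbers();
           int arrayLen = 0;
           // Each round indexes docs whose field names are unique to that
           // round, then "deletes all" -- but the registry keeps growing.
           for (int round = 0; round < 1000; round++) {
               for (int f = 0; f < 100; f++) {
                   numbers.addOrGet("field_" + round + "_" + f);
               }
               arrayLen = numbers.finish().length;
           }
           System.out.println(arrayLen); // prints 100000: grows without bound
       }
   }
   ```

   A real unit test along these lines would do the same loop against an actual `IndexWriter`, asserting that memory (or the field-number high-water mark) stays bounded after the fix.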




---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org

Reply via email to