[ https://issues.apache.org/jira/browse/LUCENE-10602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17552394#comment-17552394 ]
Robert Muir commented on LUCENE-10602:
--------------------------------------

Thanks for giving the context. I was confused about whether there was some bug happening, since I see a 32MB limit across all indexes by default :) But yes: if you are allocating GBs of heap for the cache because you want to cache billions of documents, then that allocation is the root cause of the cache using GBs of heap. I don't think it makes sense to configure LRUQueryCache to hold GBs of bitsets on the heap, and adding special eviction won't make that better; the worst case would still be horrible.

> Dynamic Index Cache Sizing
> --------------------------
>
>                 Key: LUCENE-10602
>                 URL: https://issues.apache.org/jira/browse/LUCENE-10602
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Chris Earle
>            Priority: Major
>
> Working with Lucene's filter cache has made it apparent that the cache can be an enormous drain on the heap, and therefore on the JVM. After extensive use of an index, it is not uncommon to tune performance by shrinking or altogether removing the filter cache.
>
> Lucene tracks hit/miss statistics for the filter cache, but it does nothing with that data other than inform an interested user about the effectiveness of their index's caching.
>
> It would be interesting if Lucene could tune the filter cache heuristically based on actual usage (age, frequency, and value).
>
> This could ultimately give GBs of heap back to an individual Lucene instance instead of burning it on cache storage that is not effectively used (or useful).

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
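For readers following the discussion: Lucene's LRUQueryCache evicts least-recently-used cached filter results once a configured RAM budget is exceeded, which is why a multi-GB budget translates directly into multi-GB heap usage regardless of eviction policy. The sketch below is a simplified, hypothetical stand-in for that behavior, not Lucene's actual implementation (the class and method names are invented for illustration): a byte-bounded LRU map over `long[]` "bitsets" that evicts from the least-recently-used end whenever an insertion pushes it over budget.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a RAM-bounded LRU cache; illustrative only,
// not Lucene's LRUQueryCache.
class BoundedLruCache<K> {
    private final long maxRamBytes;          // total byte budget, like LRUQueryCache's maxRamBytesUsed
    private long ramBytesUsed = 0;
    private final LinkedHashMap<K, long[]> map;

    BoundedLruCache(long maxRamBytes) {
        this.maxRamBytes = maxRamBytes;
        // accessOrder=true makes iteration order least-recently-used first
        this.map = new LinkedHashMap<>(16, 0.75f, true);
    }

    void put(K key, long[] bits) {
        long[] old = map.put(key, bits);
        if (old != null) {
            ramBytesUsed -= old.length * 8L;
        }
        ramBytesUsed += bits.length * 8L;
        // Evict LRU entries until we are back under the byte budget.
        Iterator<Map.Entry<K, long[]>> it = map.entrySet().iterator();
        while (ramBytesUsed > maxRamBytes && it.hasNext()) {
            Map.Entry<K, long[]> eldest = it.next();
            ramBytesUsed -= eldest.getValue().length * 8L;
            it.remove();
        }
    }

    boolean contains(K key) {
        return map.get(key) != null;          // get() also refreshes recency
    }

    int size() {
        return map.size();
    }
}
```

The point of the sketch is the one Robert makes above: the eviction policy only decides *which* entries occupy the budget; the budget itself (here `maxRamBytes`, in Lucene the cache's configured RAM limit) determines the heap cost, so a smarter policy cannot rescue a multi-GB configuration.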