We're encountering a significant amount of "humongous object"
allocation with larger cores. For a configured heap just under 32G,
G1HeapRegionSize seems to default to 8M or 16M depending on JDK
version. For a shard with, e.g., 100M docs, the long[] backing a
FixedBitSet (BitDocSet) will be ~12M, comfortably above the "humongous
object" threshold (half the region size, so 4M or 8M) for either
default region size.

We suspect this is contributing to some GC issues (higher pause times,
full collections, etc.). We could experiment with raising
G1HeapRegionSize (setting it explicitly to 32M), but with a ~32G heap
this would end up with ~1024 regions, which could have its own
drawbacks, if I understand correctly.
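
Concretely, that experiment would be something like the following JVM
args (heap sizes illustrative):

    -Xms31g -Xmx31g -XX:+UseG1GC -XX:G1HeapRegionSize=32m

which raises the humongous threshold (half the region size) to 16M,
comfortably above our ~12M bitsets.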

I'm wondering if others have encountered this type of situation, and
if so what they've done about it.

For a more robust solution than fussing with G1HeapRegionSize, I'm
wondering whether it might be appropriate to change the implementation
of BitDocSet so that larger instances would be backed by an array of
multiple smaller FixedBitSet instances. This would introduce some
extra complexity for DocSets over large indexes, but it shouldn't be
terrible; it could be cleanly implemented and would ensure that we
never allocate humongous objects in the service of BitDocSets ...
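
Roughly the kind of thing I have in mind; a minimal sketch only (the
class name and page size are hypothetical, and a real version would
need to cover the rest of the FixedBitSet surface that BitDocSet
uses):

    import org.apache.lucene.util.FixedBitSet;

    /**
     * Sketch: bits split across fixed-size pages so that no single
     * backing long[] ever crosses the humongous threshold. With 2^22
     * bits per page, each page's long[] is 512K -- small even for 8M
     * regions (4M threshold).
     */
    class PagedBitSet {
      private static final int PAGE_SHIFT = 22;             // 2^22 bits per page
      private static final int PAGE_SIZE = 1 << PAGE_SHIFT;
      private static final int PAGE_MASK = PAGE_SIZE - 1;

      private final FixedBitSet[] pages;

      PagedBitSet(int numBits) {
        int numPages = (int) (((long) numBits + PAGE_MASK) >>> PAGE_SHIFT);
        pages = new FixedBitSet[numPages];
        for (int i = 0; i < numPages; i++) {
          // last page may be a partial page
          int bits = Math.min(PAGE_SIZE, numBits - (i << PAGE_SHIFT));
          pages[i] = new FixedBitSet(bits);
        }
      }

      boolean get(int doc) {
        return pages[doc >>> PAGE_SHIFT].get(doc & PAGE_MASK);
      }

      void set(int doc) {
        pages[doc >>> PAGE_SHIFT].set(doc & PAGE_MASK);
      }

      long cardinality() {
        long sum = 0;
        for (FixedBitSet page : pages) {
          sum += page.cardinality();
        }
        return sum;
      }
    }

Intersections/unions could still be done page-by-page against another
PagedBitSet, so the extra indirection would stay out of the per-word
inner loops.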

Michael
