[ 
https://issues.apache.org/jira/browse/GEODE-9801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider updated GEODE-9801:
------------------------------------
    Description: 
This issue has been around since Geode 1.0.

CachePerfStats.replicatedTombstonesSize and nonReplicatedTombstonesSize are 
supposed to help Geode users figure out how much memory is being used by 
tombstones. But because of some oversights in the sizing code, the actual 
amount of memory used by tombstones is significantly higher than what these 
stats report. Some of the mistakes made:
1. A Tombstone is added to a ConcurrentLinkedQueue. This is accounted for as a 
single objRef, but each add to a ConcurrentLinkedQueue also allocates a Node 
instance, so the real cost is an object header plus two objRefs.
2. The size of the RegionEntry is not accounted for. The size of the key in 
that entry is, but most of the memory used is in the RegionEntry's own fields. 
Also, in some cases the key is stored inline in primitive fields of the 
RegionEntry, so Tombstone.getSize should not ask the RegionEntry for its key 
and then size it; it should instead ask the RegionEntry for its memory 
footprint, and the RegionEntry can then decide whether it needs to size any of 
the objects it references (like the key).
3. The Tombstone class itself accounts for all of its fields but forgets its 
own object header.
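
For a sense of scale, assuming for illustration a 64-bit JVM with 16-byte 
object headers and 8-byte references: mistake 1 undercounts each queued 
tombstone by a Node's 32 bytes (16 + 2*8); mistake 2 omits the entire fixed 
footprint of the RegionEntry, typically the largest share; and mistake 3 
misses the Tombstone's own 16-byte header.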

To fix this, have the Tombstone class add the following:
{code:java}
private static final long TOMBSTONE_OVERHEAD =
    JvmSizeUtils.memoryOverhead(Tombstone.class);
{code}
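Here JvmSizeUtils.memoryOverhead(Class) is assumed to return the fixed 
per-instance footprint of a class (its object header plus declared fields, 
rounded up to the JVM's object alignment), so the cost is computed once at 
class-load time rather than on every getSize call.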
For the Node overhead on ConcurrentLinkedQueue add this:
{code:java}
private static final long NODE_OVERHEAD =
    JvmSizeUtils.roundUpSize(JvmSizeUtils.getObjectHeaderSize()
        + 2 * JvmSizeUtils.getReferenceSize());
{code}
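The roundUpSize call accounts for the JVM padding each object out to its 
alignment boundary (8 bytes on typical HotSpot configurations); with the 
illustrative numbers above this makes NODE_OVERHEAD 32 bytes per queued 
tombstone, all of which the current single-objRef accounting misses.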
For RegionEntry, add a new "memoryOverhead" method on HashRegionEntry and then 
implement it in all the many leaf classes. You can do this by modifying 
LeafRegionEntry.cpp and running generateRegionEntryClasses.sh. You will want 
each class to have a private static final field that calls 
JvmSizeUtils.memoryOverhead(<CLASSNAME>.class) and then to decide at runtime 
whether the instance references other objects that should also be sized (like 
the key, value, DiskId, etc.), as sketched below.
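
A minimal sketch of the proposed pattern, with all names assumed for 
illustration (ExampleLeafRegionEntry stands in for one of the generated leaf 
classes, and the return type and key sizing are simplified):
{code:java}
import org.apache.geode.internal.JvmSizeUtils;

// Hypothetical stand-in for one generated leaf class; the real classes
// are produced from LeafRegionEntry.cpp by generateRegionEntryClasses.sh.
class ExampleLeafRegionEntry {
  // Fixed per-instance footprint (object header plus declared fields),
  // computed once per class instead of per entry.
  private static final long MEMORY_OVERHEAD =
      JvmSizeUtils.memoryOverhead(ExampleLeafRegionEntry.class);

  private final Object key; // this variant does not inline its key

  ExampleLeafRegionEntry(Object key) {
    this.key = key;
  }

  // Proposed memoryOverhead method: the fixed footprint plus any objects
  // this entry references (key here; value, DiskId, etc. in other
  // variants). Variants that inline the key in primitive fields would
  // return MEMORY_OVERHEAD alone. Sizing the key by its class is an
  // approximation: keys that reference further objects (e.g. a String's
  // backing array) would need deeper sizing.
  public long memoryOverhead() {
    long size = MEMORY_OVERHEAD;
    if (key != null) {
      size += JvmSizeUtils.memoryOverhead(key.getClass());
    }
    return size;
  }
}
{code}
Tombstone.getSize could then sum TOMBSTONE_OVERHEAD, NODE_OVERHEAD, and the 
entry's memoryOverhead() instead of asking the RegionEntry for its key.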


> CachePerfStats.replicatedTombstonesSize and nonReplicatedTombstonesSize are 
> not accurate
> ----------------------------------------------------------------------------------------
>
>                 Key: GEODE-9801
>                 URL: https://issues.apache.org/jira/browse/GEODE-9801
>             Project: Geode
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 1.12.0
>            Reporter: Darrel Schneider
>            Priority: Major
>              Labels: needsTriage



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
