[ 
https://issues.apache.org/jira/browse/HBASE-28893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-28893:
-----------------------------------------
    Fix Version/s: 3.0.0

> RefCnt Leak error when closing a HalfStoreFileReader
> ----------------------------------------------------
>
>                 Key: HBASE-28893
>                 URL: https://issues.apache.org/jira/browse/HBASE-28893
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 3.0.0-beta-1, 2.7.0
>            Reporter: Wellington Chevreuil
>            Assignee: Wellington Chevreuil
>            Priority: Major
>             Fix For: 3.0.0
>
>
> In HBASE-28596 we added the ability for references to be resolved to the 
> original file's blocks in the bucket cache. As part of this, we had to modify 
> the HalfStoreFileReader.close method to create a scanner and seek to the 
> boundary cell, in order to get the related offset and calculate the limiting 
> offset for the blocks we want to evict. We missed closing that scanner 
> instance, which then causes the refcount leaks reported below (a sketch of 
> the missing cleanup follows the stack trace):
> {noformat}
> 2024-09-25 14:24:51,292 ERROR org.apache.hbase.thirdparty.io.netty.util.ResourceLeakDetector: LEAK: RefCnt.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
> Recent access records:
> Created at:
>         org.apache.hadoop.hbase.nio.RefCnt.<init>(RefCnt.java:59)
>         org.apache.hadoop.hbase.nio.RefCnt.create(RefCnt.java:54)
>         org.apache.hadoop.hbase.nio.ByteBuff.wrap(ByteBuff.java:550)
>         org.apache.hadoop.hbase.io.ByteBuffAllocator.allocate(ByteBuffAllocator.java:357)
>         org.apache.hadoop.hbase.io.hfile.bucket.FileIOEngine.read(FileIOEngine.java:134)
>         org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:666)
>         org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:98)
>         org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.getCachedBlock(HFileReaderImpl.java:1102)
>         org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1287)
>         org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1248)
>         org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:318)
>         org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:670)
>         org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:623)
>         org.apache.hadoop.hbase.io.HalfStoreFileReader.close(HalfStoreFileReader.java:368)
>         org.apache.hadoop.hbase.regionserver.HStore.removeCompactedfiles(HStore.java:2352)
>         org.apache.hadoop.hbase.regionserver.HStore.closeAndArchiveCompactedFiles(HStore.java:2314)
>         org.apache.hadoop.hbase.regionserver.CompactedHFilesDischargeHandler.process(CompactedHFilesDischargeHandler.java:41)
>         org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
> {noformat}
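> For reference, a minimal sketch of the kind of cleanup the fix needs. This is 
> illustrative only, not the actual patch: the getScanner arguments, the 
> splitCell field and the super.close call are assumptions about the 
> HalfStoreFileReader internals.
> {code:java}
> // Hypothetical sketch only: the scanner opened inside close() to locate the
> // split boundary must itself be closed, so the ref-counted blocks it pinned
> // while seeking are released.
> @Override
> public void close(boolean evictOnClose) throws IOException {
>   if (evictOnClose) {
>     // try-with-resources works if HFileScanner is Closeable in this branch;
>     // otherwise an equivalent try/finally calling scanner.close() is needed.
>     try (HFileScanner scanner = getScanner(false, true, false)) {
>       scanner.seekTo(splitCell);
>       // ... use the scanner's current block offset to compute the limiting
>       // offset for the original file's cached blocks we want to evict ...
>     }
>   }
>   super.close(evictOnClose);
> }
> {code}
> Closing the scanner releases the block references it acquired during seekTo, 
> which is exactly what the leak detector flags in the stack trace above 
> (HalfStoreFileReader.close at HalfStoreFileReader.java:368).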



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
