[ https://issues.apache.org/jira/browse/HBASE-29727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18044657#comment-18044657 ]

Hudson commented on HBASE-29727:
--------------------------------

Results for branch branch-2
        [build #1360 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1360/]: (x) *{color:red}-1 overall{color}*
----
details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1360/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1360/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1360/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1360/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk17 hadoop3 checks{color}
-- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1360/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk17 hadoop 3.3.5 backward compatibility checks{color}
-- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1360/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk17 hadoop 3.3.6 backward compatibility checks{color}
-- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1360/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk17 hadoop 3.4.0 backward compatibility checks{color}
-- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1360/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk17 hadoop 3.4.1 backward compatibility checks{color}
-- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1360/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test for HBase 2 {color}
(/) {color:green}+1 client integration test for 3.3.5 {color}
(/) {color:green}+1 client integration test for 3.3.6 {color}
(/) {color:green}+1 client integration test for 3.4.0 {color}
(/) {color:green}+1 client integration test for 3.4.1 {color}
(/) {color:green}+1 client integration test for 3.4.2 {color}


> Introduce a String pool for repeating filename, region and cf string fields 
> in BlockCacheKey
> --------------------------------------------------------------------------------------------
>
>                 Key: HBASE-29727
>                 URL: https://issues.apache.org/jira/browse/HBASE-29727
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Wellington Chevreuil
>            Assignee: Wellington Chevreuil
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.0.0
>
>
> For every block added to the BucketCache, we create and keep a BlockCacheKey 
> object with a String attribute for the file name the block belongs to, plus a 
> Path holding the full path of that file. HFiles normally contain many blocks, 
> and for all blocks from the same file these attributes hold the very same 
> value, yet we create separate instances for each block. When using a 
> file-based bucket cache, where the cache size is in the TB range, the total 
> block count in the cache can grow very large, and so does the heap used by 
> the BucketCache object, due to the high count of BlockCacheKey instances it 
> has to keep.
> For a few years now, the reference architecture with my employer for HBase 
> clusters on the cloud has been to deploy the hbase root dir on cloud storage, 
> then use the ephemeral SSD disks shipped with the RegionServer node VMs for a 
> file-based BucketCache. At the moment, the standard VM profile used allows 
> for as much as 1.6TB of BucketCache capacity. For a cache of that size, with 
> the default block size of 64KB, we see on average 30M blocks, with a minimal 
> heap usage of around 12GB.
> With cloud providers now offering VM profiles with even more ephemeral SSD 
> capacity, we are looking for alternatives to optimise the heap usage of the 
> BucketCache. The approach proposed here is to define a "string pool" that 
> maps the String attributes in the BlockCacheKey class to integer ids, so that 
> we can save some bytes for blocks from the same file.
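To put the quoted proposal in rough numbers: a 1.6TB cache at ~64KB per block works out to roughly 25-30M BlockCacheKey instances, so the reported ~12GB of key overhead is on the order of a few hundred heap bytes per cached block, much of it repeated file-name strings. Below is a minimal, hypothetical sketch of the kind of pool the description suggests; this is not the code on the pull request, the names (StringPool, toId, fromId, PooledBlockCacheKey) are invented for illustration, and the simplified key omits fields the real BlockCacheKey carries (block type, primary-replica flag, etc.).

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Illustrative sketch only: a two-way pool that interns repeating strings
 * (file name, region, column family) as int ids, so each cache key can hold
 * a 4-byte id instead of its own reference to a duplicated String.
 */
final class StringPool {
  private final AtomicInteger nextId = new AtomicInteger();
  private final Map<String, Integer> idByValue = new ConcurrentHashMap<>();
  private final Map<Integer, String> valueById = new ConcurrentHashMap<>();

  /** Returns the id for the given value, registering it on first use. */
  int toId(String value) {
    return idByValue.computeIfAbsent(value, v -> {
      int id = nextId.getAndIncrement();
      valueById.put(id, v);
      return id;
    });
  }

  /** Resolves an id back to the pooled string. */
  String fromId(int id) {
    return valueById.get(id);
  }
}

/** Simplified, hypothetical key: stores the pooled id plus the block offset. */
final class PooledBlockCacheKey {
  private static final StringPool FILE_NAMES = new StringPool();

  private final int hfileNameId; // 4 bytes, shared across all blocks of a file
  private final long offset;

  PooledBlockCacheKey(String hfileName, long offset) {
    this.hfileNameId = FILE_NAMES.toId(hfileName);
    this.offset = offset;
  }

  String getHfileName() {
    return FILE_NAMES.fromId(hfileNameId);
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof PooledBlockCacheKey)) {
      return false;
    }
    PooledBlockCacheKey other = (PooledBlockCacheKey) o;
    return hfileNameId == other.hfileNameId && offset == other.offset;
  }

  @Override
  public int hashCode() {
    return 31 * hfileNameId + Long.hashCode(offset);
  }
}
{code}

One trade-off such a pool implies: ids for files whose blocks have all been evicted need some form of cleanup (or acceptance of a slowly growing id map), which the sketch above does not address.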



