junegunn commented on code in PR #7076:
URL: https://github.com/apache/hbase/pull/7076#discussion_r2134900857


##########
hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java:
##########
@@ -1040,10 +1040,21 @@ public enum OperationStatusCode {
   public static final String HFILE_PREAD_ALL_BYTES_ENABLED_KEY = "hfile.pread.all.bytes.enabled";
   public static final boolean HFILE_PREAD_ALL_BYTES_ENABLED_DEFAULT = false;
 
-  /*
-   * Minimum percentage of free heap necessary for a successful cluster startup.
+  /**
+   * Configuration key for the minimum fraction of heap memory that must remain free for a
+   * RegionServer to start
+   */
+  public static final String HBASE_REGION_SERVER_FREE_HEAP_MIN_SIZE_KEY =
+    "hbase.regionserver.free.heap.min.size";
+
+  public static final float HBASE_REGION_SERVER_FREE_HEAP_MIN_SIZE_DEFAULT = 0.1f;
+
+  /**
+   * Configuration key for the absolute amount of heap memory that must remain free for a
+   * RegionServer to start
    */
-  public static final float HBASE_CLUSTER_MINIMUM_MEMORY_THRESHOLD = 0.2f;
+  public static final String HBASE_REGION_SERVER_FREE_HEAP_MEMORY_MIN_SIZE_KEY =
+    "hbase.regionserver.free.heap.memory.min.size";

Review Comment:
   > Using an absolute size is not only more explicit for users, but also 
provides more stability.
   
   Thanks for giving my suggestion some thought. Considering the use cases and requirements, I think it's enough to offer a single parameter for setting the minimum amount of heap that should be left unreserved. Anyone who wants to bypass the check can set it to 0, and those who want some safety guarantee can set it to a realistic value. I believe that covers all scenarios, and having another option might distract or confuse users.
   
   But that's just one perspective, and I hope to hear what others think as 
well.
   
   > If we go with absolute size only, we still need to think about the default 
value.
   
   Yeah, it's important to provide sensible defaults, but we should be extra careful when changing existing ones. Following the principle of least surprise, I would just leave the value undefined, which means 20% of the heap is used as the threshold, as before.
   
   > I'm also considering allowing this value to be disabled (by setting it to 
0)
   
   We should still allow `0` as a valid value for the option, for users who 
want to bypass the check entirely. An empty value or a negative value can be 
used instead to indicate an undefined state.
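   To make that three-state interpretation concrete, here is a hypothetical sketch (the class, method, and parsing details are my own illustration, not code from this PR; real HBase config parsing would also accept size suffixes like `128m`, which I skip here for simplicity):
   
   ```java
   public class FreeHeapCheck {
     // Fallback fraction when the option is undefined, matching the old
     // HBASE_CLUSTER_MINIMUM_MEMORY_THRESHOLD behavior of 20% of max heap.
     static final float DEFAULT_FREE_HEAP_FRACTION = 0.2f;
   
     /**
      * Returns the minimum free heap in bytes.
      * - null/empty or negative: undefined, fall back to 20% of max heap
      * - 0: check disabled
      * - positive: absolute minimum size in bytes
      */
     static long minFreeHeapBytes(String configured, long maxHeapBytes) {
       if (configured == null || configured.trim().isEmpty()) {
         return (long) (maxHeapBytes * DEFAULT_FREE_HEAP_FRACTION);
       }
       long value = Long.parseLong(configured.trim());
       if (value < 0) {
         // Negative also means "undefined", so 0 stays available as "disabled".
         return (long) (maxHeapBytes * DEFAULT_FREE_HEAP_FRACTION);
       }
       return value;
     }
   }
   ```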
   
   > overridden to something like 128m in 
[conf/hbase-site.xml](https://github.com/apache/hbase/blob/master/conf/hbase-site.xml),
 especially for test environments.
   
   While I think it's reasonable, I'm not confident that it won't impact 
existing user setups. In pseudo-distributed environments, users often run many 
processes locally, each with a relatively small heap (e.g., 512MB). In such 
cases, even 128m could cause their setup to stop working.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
