chia7712 commented on code in PR #21273:
URL: https://github.com/apache/kafka/pull/21273#discussion_r2828490559


##########
server-common/src/main/java/org/apache/kafka/server/config/ServerLogConfigs.java:
##########
@@ -40,6 +42,11 @@ public class ServerLogConfigs {
     public static final String LOG_DIR_DOC = "A comma-separated list of the 
directories where the log data is stored. (supplemental to " + LOG_DIRS_CONFIG 
+ " property)";
     public static final String LOG_DIRS_DOC = "A comma-separated list of the 
directories where the log data is stored. If not set, the value in " + 
LOG_DIR_CONFIG + " is used.";
 
+    public static final String CORDONED_LOG_DIRS_CONFIG = "cordoned.log.dirs";

Review Comment:
   The generated configuration documentation renders as below.
   ```
   <h4><a id="cordoned.log.dirs"></a><a id="brokerconfigs_cordoned.log.dirs" 
href="#brokerconfigs_cordoned.log.dirs">cordoned.log.dirs</a></h4>
   <p>A comma-separated list of the directories that are cordoned. Entries in 
this list must be entries in log.dirs or log.dir configuration. This can also 
be set to * to cordon all log directories.</p>
   <table><tbody>
   <tr><th>Type:</th><td>list</td></tr>
   <tr><th>Default:</th><td>""</td></tr>
   <tr><th>Valid Values:</th><td></td></tr>
   <tr><th>Importance:</th><td>medium</td></tr>
   <tr><th>Update Mode:</th><td>cluster-wide</td></tr>
   </tbody></table>
   ```
   
   The KIP states the update mode is `per-broker`, so would you mind adding it 
to `PER_BROKER_CONFIGS`?
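   A minimal sketch of the idea: membership in a per-broker set determines the reported update mode. The class and the example config names below are illustrative stand-ins, not Kafka's actual `DynamicBrokerConfig` code.
   ```java
   import java.util.Set;
   
   // Illustrative sketch: configs listed in PER_BROKER_CONFIGS are reported
   // with update mode "per-broker"; everything else is "cluster-wide".
   public class PerBrokerConfigs {
       public static final String CORDONED_LOG_DIRS_CONFIG = "cordoned.log.dirs";
   
       public static final Set<String> PER_BROKER_CONFIGS = Set.of(
           "listeners",               // examples of existing per-broker configs
           "advertised.listeners",
           CORDONED_LOG_DIRS_CONFIG   // the addition suggested above
       );
   
       public static String updateMode(String config) {
           return PER_BROKER_CONFIGS.contains(config) ? "per-broker" : "cluster-wide";
       }
   }
   ```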



##########
server/src/main/java/org/apache/kafka/server/BrokerLifecycleManager.java:
##########
@@ -144,7 +145,13 @@ public class BrokerLifecycleManager {
      * to the Controller.
      * This variable can only be read or written from the event queue thread.
      */
-    private Map<Uuid, Boolean> offlineDirs = new HashMap<>();
+    private final Map<Uuid, Boolean> offlineDirs = new HashMap<>();
+
+    /**
+     * Map of cordoned log directories. The value is true if the directory is 
cordoned.
+     * This variable can only be read or written from the event queue thread.
+     */
+    private final Map<Uuid, Boolean> cordonedLogDirs = new HashMap<>();

Review Comment:
   It seems a `HashSet` would cover these usages: every UUID in the set would be considered cordoned, so there is no need to track Boolean values.
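   A rough sketch of what that could look like (using `java.util.UUID` as a stand-in for Kafka's `org.apache.kafka.common.Uuid`; the class below is illustrative only):
   ```java
   import java.util.HashSet;
   import java.util.Set;
   import java.util.UUID;
   
   // Illustrative sketch: set membership alone encodes "cordoned", replacing
   // the Map<Uuid, Boolean> with a Set<Uuid>.
   public class CordonedDirs {
       private final Set<UUID> cordonedLogDirs = new HashSet<>();
   
       public void cordon(UUID dir)         { cordonedLogDirs.add(dir); }
       public void uncordon(UUID dir)       { cordonedLogDirs.remove(dir); }
       public boolean isCordoned(UUID dir)  { return cordonedLogDirs.contains(dir); }
   }
   ```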



##########
clients/src/main/java/org/apache/kafka/clients/admin/LogDirDescription.java:
##########
@@ -33,16 +33,18 @@ public class LogDirDescription {
     private final ApiException error;
     private final OptionalLong totalBytes;
     private final OptionalLong usableBytes;
+    private final boolean isCordoned;
 
     public LogDirDescription(ApiException error, Map<TopicPartition, 
ReplicaInfo> replicaInfos) {
-        this(error, replicaInfos, UNKNOWN_VOLUME_BYTES, UNKNOWN_VOLUME_BYTES);
+        this(error, replicaInfos, UNKNOWN_VOLUME_BYTES, UNKNOWN_VOLUME_BYTES, 
false);
     }
 
-    public LogDirDescription(ApiException error, Map<TopicPartition, 
ReplicaInfo> replicaInfos, long totalBytes, long usableBytes) {
+    public LogDirDescription(ApiException error, Map<TopicPartition, 
ReplicaInfo> replicaInfos, long totalBytes, long usableBytes, boolean 
isCordoned) {

Review Comment:
   Should we keep the original constructor for compatibility?
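   For instance, the old constructor could delegate to the new one with `isCordoned` defaulting to `false`. The class below is a simplified stand-in (the `error` and `replicaInfos` parameters are omitted), not the real `LogDirDescription`:
   ```java
   // Simplified sketch: keep the original constructor for source and binary
   // compatibility by delegating to the new one with isCordoned = false.
   public class LogDirDescription {
       private final long totalBytes;
       private final long usableBytes;
       private final boolean isCordoned;
   
       // Existing constructor, kept for compatibility.
       public LogDirDescription(long totalBytes, long usableBytes) {
           this(totalBytes, usableBytes, false);
       }
   
       // New constructor carrying the cordoned flag.
       public LogDirDescription(long totalBytes, long usableBytes, boolean isCordoned) {
           this.totalBytes = totalBytes;
           this.usableBytes = usableBytes;
           this.isCordoned = isCordoned;
       }
   
       public boolean isCordoned() { return isCordoned; }
   }
   ```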



##########
core/src/main/scala/kafka/server/DynamicBrokerConfig.scala:
##########
@@ -577,8 +577,20 @@ class DynamicLogConfig(logManager: LogManager) extends 
BrokerReconfigurable with
       }
     }
 
+    def validateCordonedLogDirs(): Unit = {

Review Comment:
   What happens if a broker's `log.dirs` paths are changed during an offline 
restart? We can't safely uncordon a directory before physically migrating its 
data to the new path, yet changing the path while the old one remains in 
`cordoned.log.dirs` will cause this validation check to fail. This seems to 
create a conflict.
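   To illustrate the scenario with hypothetical paths:
   ```properties
   # Before the offline restart: /data1 is cordoned.
   log.dirs=/data1,/data2
   cordoned.log.dirs=/data1
   
   # After moving the log dir to a new path: /data1 is gone from log.dirs,
   # but cordoned.log.dirs still names it, so this validation would reject
   # the broker's configuration at startup.
   log.dirs=/data3,/data2
   cordoned.log.dirs=/data1
   ```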



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
