rajagopr commented on code in PR #15008:
URL: https://github.com/apache/pinot/pull/15008#discussion_r1951232697


##########
pinot-controller/src/main/java/org/apache/pinot/controller/validation/DiskUtilizationChecker.java:
##########
@@ -0,0 +1,138 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.controller.validation;
+
+import com.google.common.collect.BiMap;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.pinot.common.restlet.resources.DiskUsageInfo;
+import org.apache.pinot.controller.ControllerConf;
+import org.apache.pinot.controller.helix.core.PinotHelixResourceManager;
+import org.apache.pinot.controller.util.CompletionServiceHelper;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.utils.JsonUtils;
+import org.apache.pinot.spi.utils.builder.TableNameBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+public class DiskUtilizationChecker {
+  private static final Logger LOGGER = LoggerFactory.getLogger(DiskUtilizationChecker.class);
+  private final int _timeoutMs;
+  private final double _diskUtilizationThreshold;
+  private final String _diskUtilizationPath;
+  private static final String DISK_UTILIZATION_API_PATH = "/instance/diskUtilization";
+
+  private final PinotHelixResourceManager _helixResourceManager;
+
+  public DiskUtilizationChecker(PinotHelixResourceManager helixResourceManager, ControllerConf controllerConf) {
+    _helixResourceManager = helixResourceManager;
+    _diskUtilizationPath = controllerConf.getDiskUtilizationPath();
+    _diskUtilizationThreshold = controllerConf.getDiskUtilizationThreshold();
+    _timeoutMs = controllerConf.getDiskUtilizationCheckTimeoutMs();
+  }
+
+  public static String getDiskUtilizationApiPath() {
+    return DISK_UTILIZATION_API_PATH;
+  }
+
+  /**
+   * Check if disk utilization for the requested table is within the configured limits.
+   */
+  public boolean isDiskUtilizationWithinLimits(String tableNameWithType) {
+    if (StringUtils.isEmpty(tableNameWithType)) {
+      throw new IllegalArgumentException("Table name found to be null or empty while computing disk utilization.");
+    }
+
+    if (TableNameBuilder.isOfflineTableResource(tableNameWithType)) {
+      TableConfig offlineTableConfig = _helixResourceManager.getOfflineTableConfig(tableNameWithType);
+      if (offlineTableConfig == null) {
+        // offline table does not exist
+        return true;
+      }
+      List<String> instances =
+          _helixResourceManager.getServerInstancesForTable(tableNameWithType, TableType.OFFLINE);
+      return isDiskUtilizationWithinLimits(instances);
+    }
+
+    if (TableNameBuilder.isRealtimeTableResource(tableNameWithType)) {
+      TableConfig realtimeTableConfig = _helixResourceManager.getRealtimeTableConfig(tableNameWithType);
+      if (realtimeTableConfig == null) {
+        // realtime table does not exist
+        return true;
+      }
+      List<String> instances =
+          _helixResourceManager.getServerInstancesForTable(tableNameWithType, TableType.REALTIME);
+      return isDiskUtilizationWithinLimits(instances);
+    }

Review Comment:
   Thanks, will avoid the duplication.
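   For reference, a minimal sketch of how the two branches might be collapsed (assuming `PinotHelixResourceManager#getTableConfig` and `TableNameBuilder#getTableTypeFromTableName` are available; this is not necessarily the final revision):
   
   ```java
   public boolean isDiskUtilizationWithinLimits(String tableNameWithType) {
     if (StringUtils.isEmpty(tableNameWithType)) {
       throw new IllegalArgumentException("Table name found to be null or empty while computing disk utilization.");
     }
     // Resolve OFFLINE vs. REALTIME once instead of branching on the table name twice.
     TableType tableType = TableNameBuilder.getTableTypeFromTableName(tableNameWithType);
     if (tableType == null || _helixResourceManager.getTableConfig(tableNameWithType) == null) {
       // Not a typed table resource, or the table does not exist.
       return true;
     }
     List<String> instances = _helixResourceManager.getServerInstancesForTable(tableNameWithType, tableType);
     return isDiskUtilizationWithinLimits(instances);
   }
   ```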



##########
pinot-server/src/main/java/org/apache/pinot/server/api/resources/InstanceResource.java:
##########
@@ -97,4 +103,22 @@ public Map<String, String> getInstancePools() {
     Map<String, String> pools = instanceConfig.getRecord().getMapField(InstanceUtils.POOL_KEY);
     return pools == null ? Collections.emptyMap() : pools;
   }
+
+  @GET
+  @Produces(MediaType.APPLICATION_JSON)
+  @Path("/diskUtilization")

Review Comment:
   Having a single endpoint was definitely considered. However, we would want to introduce that endpoint in the controller, whereas the `diskUtilization` endpoint is a server-level API that is not exposed on the controller itself.
   
   Currently, the `ResourceUtilizationManager` checks disk utilization, and additional checks can be added to this class in the future. When a `resourceUtilization` endpoint is introduced in the controller, the call would go via the `ResourceUtilizationManager` class, which could provide a single view of the various stats that are collected.
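   A rough sketch of that aggregation role (the constructor shape and any future checks are assumptions; only `isResourceUtilizationWithinLimits` is taken from this PR):
   
   ```java
   public class ResourceUtilizationManager {
     private final DiskUtilizationChecker _diskUtilizationChecker;
   
     public ResourceUtilizationManager(DiskUtilizationChecker diskUtilizationChecker) {
       _diskUtilizationChecker = diskUtilizationChecker;
     }
   
     public boolean isResourceUtilizationWithinLimits(String tableNameWithType) {
       // Today this only consults the disk check; additional checks (and a controller-side
       // resourceUtilization endpoint) could be layered on top of this single entry point.
       return _diskUtilizationChecker.isDiskUtilizationWithinLimits(tableNameWithType);
     }
   }
   ```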



##########
pinot-controller/src/main/java/org/apache/pinot/controller/validation/RealtimeSegmentValidationManager.java:
##########
@@ -135,6 +138,18 @@ private boolean shouldEnsureConsuming(String tableNameWithType) {
     if (isTablePaused && pauseStatus.getReasonCode().equals(PauseState.ReasonCode.ADMINISTRATIVE)) {
       return false;
     }
+    try {
+      boolean isResourceUtilizationWithinLimits =
+          _resourceUtilizationManager.isResourceUtilizationWithinLimits(tableNameWithType);
+      if (!isResourceUtilizationWithinLimits) {
+        LOGGER.warn("Resource utilization limit exceeded for table: {}", tableNameWithType);
+        _llcRealtimeSegmentManager.pauseConsumption(tableNameWithType,

Review Comment:
   Sounds good, I will post a new revision that checks whether the table is already paused and introduces the check hierarchy.
   
   `Should we also add the check at PinotSegmentUploadDownloadRestletResource`
   We considered two approaches during internal discussions: 1) Aggressive approach: block in-flight segment uploads when the resource quota is breached. 2) Non-aggressive approach: pause REALTIME ingestion and block new ingestion tasks for OFFLINE tables.
   
   We decided to go with the non-aggressive approach and not block in-flight segment uploads, since the goal here is to keep the cluster from becoming unusable due to low disk, rather than to guard the storage quota allocated to a particular table.
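   To illustrate the intended check hierarchy, a sketch of the relevant portion of the revised `shouldEnsureConsuming` body (the exact `pauseConsumption` signature and the resource-related `ReasonCode` constant are assumptions here):
   
   ```java
   if (isTablePaused && pauseStatus.getReasonCode().equals(PauseState.ReasonCode.ADMINISTRATIVE)) {
     // An operator paused the table; leave it alone.
     return false;
   }
   try {
     if (!_resourceUtilizationManager.isResourceUtilizationWithinLimits(tableNameWithType)) {
       LOGGER.warn("Resource utilization limit exceeded for table: {}", tableNameWithType);
       if (!isTablePaused) {
         // Only issue the pause once; skip it if the table is already paused.
         _llcRealtimeSegmentManager.pauseConsumption(tableNameWithType,
             PauseState.ReasonCode.RESOURCE_UTILIZATION_LIMIT_EXCEEDED, "Resource utilization limit exceeded");
       }
       return false;
     }
   } catch (Exception e) {
     LOGGER.error("Caught exception while checking resource utilization for table: {}", tableNameWithType, e);
   }
   // Fall through to the existing consuming-segment checks.
   ```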



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

