sajjad-moradi commented on a change in pull request #7267:
URL: https://github.com/apache/pinot/pull/7267#discussion_r705794188



##########
File path: pinot-common/src/main/java/org/apache/pinot/common/utils/ServiceStatus.java
##########
@@ -232,13 +236,21 @@ public synchronized Status getServiceStatus() {
         return _serviceStatus;
       }
       long now = System.currentTimeMillis();
-      if (now < _endWaitTime) {
-        _statusDescription =
-            String.format("Waiting for consuming segments to catchup, timeRemaining=%dms", _endWaitTime - now);
-        return Status.STARTING;
+      if (now >= _endWaitTime) {
+        _statusDescription = String.format("Consuming segments status GOOD since %dms", _endWaitTime);
+        return Status.GOOD;
       }
-      _statusDescription = String.format("Consuming segments status GOOD since %dms", _endWaitTime);
-      return Status.GOOD;
+      if (_allConsumingSegmentsHaveReachedLatestOffset.get()) {
+        // TODO: Once the performance of offset based consumption checker is validated:
+        //      - remove the log line
+        //      - uncomment the status & statusDescription lines
+        LOGGER.info("All consuming segments have reached their latest offsets!");

Review comment:
       Agree. Updated the log line. Please note that when we get to this point, `now` is always less than `_endWaitTime`.
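   For context, a rough sketch of how the restructured check reads after this change (sketch only: it assumes the STARTING path keeps the original message, and the two trailing commented-out lines are placeholders for the status/description lines the TODO refers to, whose exact wording isn't shown in this hunk):

      long now = System.currentTimeMillis();
      if (now >= _endWaitTime) {
        _statusDescription = String.format("Consuming segments status GOOD since %dms", _endWaitTime);
        return Status.GOOD;
      }
      // From here on, now < _endWaitTime.
      if (_allConsumingSegmentsHaveReachedLatestOffset.get()) {
        // TODO: Once the performance of offset based consumption checker is validated:
        //      - remove the log line
        //      - uncomment the status & statusDescription lines
        LOGGER.info("All consuming segments have reached their latest offsets!");
        // _statusDescription = "...";   // placeholder for the commented-out description
        // return Status.GOOD;           // placeholder for the commented-out status
      }
      _statusDescription =
          String.format("Waiting for consuming segments to catchup, timeRemaining=%dms", _endWaitTime - now);
      return Status.STARTING;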

##########
File path: pinot-core/src/main/java/org/apache/pinot/core/data/manager/realtime/LLRealtimeSegmentDataManager.java
##########
@@ -748,11 +749,22 @@ public long getLastConsumedTimestamp() {
     return _lastLogTime;
   }
 
-  @VisibleForTesting
-  protected StreamPartitionMsgOffset getCurrentOffset() {
+  public StreamPartitionMsgOffset getCurrentOffset() {
     return _currentOffset;
   }
 
+  public StreamPartitionMsgOffset fetchLatestStreamOffset() {

Review comment:
       Done.

##########
File path: pinot-core/src/main/java/org/apache/pinot/core/data/manager/realtime/LLRealtimeSegmentDataManager.java
##########
@@ -748,11 +749,22 @@ public long getLastConsumedTimestamp() {
     return _lastLogTime;
   }
 
-  @VisibleForTesting
-  protected StreamPartitionMsgOffset getCurrentOffset() {
+  public StreamPartitionMsgOffset getCurrentOffset() {
     return _currentOffset;
   }
 
+  public StreamPartitionMsgOffset fetchLatestStreamOffset() {
+    try (StreamMetadataProvider metadataProvider = _streamConsumerFactory
+        .createPartitionMetadataProvider(_clientId, _partitionGroupId)) {
+      return metadataProvider
+          .fetchStreamPartitionOffset(OffsetCriteria.LARGEST_OFFSET_CRITERIA, /*maxWaitTimeMs=*/5000);

Review comment:
       For now, this method is only called by OffsetBasedConsumptionStatusChecker at startup, probably a few times for each stream partition; once the startup health check passes, it isn't called anymore. IMO the complexity of 1) having a separate thread periodically fetch the latest offset and 2) stopping the updates once the catchup period is finished isn't worth it unless this method gets more usages. I added a note about this to the method's javadoc.
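   For reference, one plausible shape for that javadoc note (the wording is mine, not the exact text in the PR; the method body is copied from the hunk above):

      /**
       * Fetches the latest available offset of this segment's stream partition.
       * NOTE: a new StreamMetadataProvider is created on every call, so this is only meant for
       * infrequent callers (currently the startup-time OffsetBasedConsumptionStatusChecker).
       * Callers that need the latest offset continuously should cache or refresh it themselves.
       */
      public StreamPartitionMsgOffset fetchLatestStreamOffset() {
        try (StreamMetadataProvider metadataProvider = _streamConsumerFactory
            .createPartitionMetadataProvider(_clientId, _partitionGroupId)) {
          return metadataProvider
              .fetchStreamPartitionOffset(OffsetCriteria.LARGEST_OFFSET_CRITERIA, /*maxWaitTimeMs=*/5000);
        }
      }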

##########
File path: pinot-server/src/main/java/org/apache/pinot/server/starter/helix/OffsetBasedConsumptionStatusChecker.java
##########
@@ -0,0 +1,154 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.pinot.server.starter.helix;
+
+import com.google.common.annotations.VisibleForTesting;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Supplier;
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.model.IdealState;
+import org.apache.pinot.common.utils.LLCSegmentName;
+import org.apache.pinot.core.data.manager.InstanceDataManager;
+import org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager;
+import org.apache.pinot.segment.local.data.manager.SegmentDataManager;
+import org.apache.pinot.segment.local.data.manager.TableDataManager;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.stream.StreamPartitionMsgOffset;
+import org.apache.pinot.spi.utils.CommonConstants;
+import org.apache.pinot.spi.utils.builder.TableNameBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * This class is used at startup time to have a more accurate estimate of the catchup period in which no query execution
+ * happens and consumers try to catch up to the latest messages available in streams.
+ * To achieve this, every time status check is called, {@link #haveAllConsumingSegmentsReachedStreamLatestOffset},
+ * list of consuming segments is gathered and then for each segment, we check if segment's latest ingested offset has
+ * reached the latest stream offset. To prevent chasing a moving target, once the latest stream offset is fetched, it
+ * will not be fetched again and subsequent status check calls compare latest ingested offset with the already fetched
+ * stream offset.
+ */
+public class OffsetBasedConsumptionStatusChecker {
+  private static final Logger LOGGER = LoggerFactory.getLogger(OffsetBasedConsumptionStatusChecker.class);
+
+  private final InstanceDataManager _instanceDataManager;
+  private Supplier<Set<String>> _consumingSegmentFinder;
+
+  private Set<String> _alreadyProcessedSegments = new HashSet<>();
+  private Map<String, StreamPartitionMsgOffset> _segmentNameToLatestStreamOffset = new HashMap<>();
+
+  public OffsetBasedConsumptionStatusChecker(InstanceDataManager instanceDataManager, HelixAdmin helixAdmin,
+      String helixClusterName, String instanceId) {
+    this(instanceDataManager, () -> findConsumingSegments(helixAdmin, helixClusterName, instanceId));
+  }
+
+  @VisibleForTesting
+  OffsetBasedConsumptionStatusChecker(InstanceDataManager instanceDataManager,
+      Supplier<Set<String>> consumingSegmentFinder) {
+    _instanceDataManager = instanceDataManager;
+    _consumingSegmentFinder = consumingSegmentFinder;
+  }
+
+  public boolean haveAllConsumingSegmentsReachedStreamLatestOffset() {
+    boolean allSegsReachedLatest = true;
+    Set<String> consumingSegmentNames = _consumingSegmentFinder.get();
+    for (String segName : consumingSegmentNames) {
+      if (_alreadyProcessedSegments.contains(segName)) {
+        continue;
+      }
+      TableDataManager tableDataManager = getTableDataManager(segName);
+      if (tableDataManager == null) {
+        LOGGER.info("TableDataManager is not yet setup for segment {}. Will 
check consumption status later", segName);
+        return false;
+      }
+      SegmentDataManager segmentDataManager = 
tableDataManager.acquireSegment(segName);
+      if (segmentDataManager == null) {
+        LOGGER.info("SegmentDataManager is not yet setup for segment {}. Will 
check consumption status later", segName);
+        return false;
+      }
+      if (!(segmentDataManager instanceof LLRealtimeSegmentDataManager)) {
+        // There's a small chance that after getting the list of consuming segment names at the beginning of this method
+        // up to this point, a consuming segment gets converted to a committed segment. In that case status check is
+        // returned as false and in the next round the new consuming segment will be used for fetching offsets.
+        LOGGER.info("Segment {} is already committed. Will check consumption status later", segName);
+        tableDataManager.releaseSegment(segmentDataManager);
+        return false;
+      }
+      LLRealtimeSegmentDataManager rtSegmentDataManager = (LLRealtimeSegmentDataManager) segmentDataManager;
+      StreamPartitionMsgOffset latestIngestedOffset = rtSegmentDataManager.getCurrentOffset();
+      StreamPartitionMsgOffset latestStreamOffset = _segmentNameToLatestStreamOffset.containsKey(segName)
+          ? _segmentNameToLatestStreamOffset.get(segName)
+          : rtSegmentDataManager.fetchLatestStreamOffset();
+      tableDataManager.releaseSegment(segmentDataManager);
+      if (latestStreamOffset == null || latestIngestedOffset == null) {
+        LOGGER.info("Null offset found for segment {} - latest stream offset: 
{}, latest ingested offset: {}. "
+            + "Will check consumption status later", segName, 
latestStreamOffset, latestIngestedOffset);
+        return false;
+      }
+      if (latestIngestedOffset.compareTo(latestStreamOffset) < 0) {
+        LOGGER.info("Latest ingested offset {} in segment {} is smaller than 
stream latest available offset {} ",
+            latestIngestedOffset, segName, latestStreamOffset);
+        _segmentNameToLatestStreamOffset.put(segName, latestStreamOffset);
+        allSegsReachedLatest = false;
+        continue;
+      }
+      LOGGER.info("Segment {} with latest ingested offset {} has caught up to 
the latest stream offset {}", segName,
+            latestIngestedOffset, latestStreamOffset);
+      _alreadyProcessedSegments.add(segName);
+    }
+    return allSegsReachedLatest;
+  }
+
+  private TableDataManager getTableDataManager(String segmentName) {
+    LLCSegmentName llcSegmentName = new LLCSegmentName(segmentName);
+    String tableName = llcSegmentName.getTableName();
+    String tableNameWithType = TableNameBuilder.forType(TableType.REALTIME).tableNameWithType(tableName);
+    return _instanceDataManager.getTableDataManager(tableNameWithType);
+  }
+
+  private static Set<String> findConsumingSegments(HelixAdmin helixAdmin, String helixClusterName, String instanceId) {

Review comment:
       I actually wanted to do this originally, but then realized that `TableDataManager`s and `SegmentDataManager`s only get added when there's a Helix transition message moving a segment from the OFFLINE state to the CONSUMING state. If, for some reason, Helix transition messages for some segments are temporarily lost or delayed at startup, their `SegmentDataManager`s (or even `TableDataManager`s) won't be available. The ideal state in ZK, on the other hand, is the source of truth.
   That being said, I can add one more check before getting the ideal state for every table: verifying that the resource is indeed a realtime table (see the sketch below).
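   Roughly, that extra check could sit at the top of the per-resource loop in findConsumingSegments. A minimal sketch (my illustration, not the code in the PR, assuming TableNameBuilder.isRealtimeTableResource is the right helper here):

      for (String resourceName : helixAdmin.getResourcesInCluster(helixClusterName)) {
        // Skip anything that is not a realtime table (offline tables, broker resource, etc.)
        // before paying the cost of fetching its ideal state from ZK.
        if (!TableNameBuilder.isRealtimeTableResource(resourceName)) {
          continue;
        }
        IdealState idealState = helixAdmin.getResourceIdealState(helixClusterName, resourceName);
        // ... then collect the segments assigned to this instance that are in CONSUMING state, as before.
      }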

##########
File path: pinot-server/src/main/java/org/apache/pinot/server/starter/helix/OffsetBasedConsumptionStatusChecker.java
##########
@@ -0,0 +1,154 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.pinot.server.starter.helix;
+
+import com.google.common.annotations.VisibleForTesting;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Supplier;
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.model.IdealState;
+import org.apache.pinot.common.utils.LLCSegmentName;
+import org.apache.pinot.core.data.manager.InstanceDataManager;
+import org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager;
+import org.apache.pinot.segment.local.data.manager.SegmentDataManager;
+import org.apache.pinot.segment.local.data.manager.TableDataManager;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.stream.StreamPartitionMsgOffset;
+import org.apache.pinot.spi.utils.CommonConstants;
+import org.apache.pinot.spi.utils.builder.TableNameBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * This class is used at startup time to have a more accurate estimate of the catchup period in which no query execution
+ * happens and consumers try to catch up to the latest messages available in streams.
+ * To achieve this, every time status check is called, {@link #haveAllConsumingSegmentsReachedStreamLatestOffset},
+ * list of consuming segments is gathered and then for each segment, we check if segment's latest ingested offset has
+ * reached the latest stream offset. To prevent chasing a moving target, once the latest stream offset is fetched, it
+ * will not be fetched again and subsequent status check calls compare latest ingested offset with the already fetched
+ * stream offset.
+ */
+public class OffsetBasedConsumptionStatusChecker {
+  private static final Logger LOGGER = LoggerFactory.getLogger(OffsetBasedConsumptionStatusChecker.class);
+
+  private final InstanceDataManager _instanceDataManager;
+  private Supplier<Set<String>> _consumingSegmentFinder;
+
+  private Set<String> _alreadyProcessedSegments = new HashSet<>();

Review comment:
       I liked caughtUpSegment better than the others. Updated.

##########
File path: pinot-server/src/main/java/org/apache/pinot/server/starter/helix/OffsetBasedConsumptionStatusChecker.java
##########
@@ -0,0 +1,154 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.pinot.server.starter.helix;
+
+import com.google.common.annotations.VisibleForTesting;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Supplier;
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.model.IdealState;
+import org.apache.pinot.common.utils.LLCSegmentName;
+import org.apache.pinot.core.data.manager.InstanceDataManager;
+import org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager;
+import org.apache.pinot.segment.local.data.manager.SegmentDataManager;
+import org.apache.pinot.segment.local.data.manager.TableDataManager;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.stream.StreamPartitionMsgOffset;
+import org.apache.pinot.spi.utils.CommonConstants;
+import org.apache.pinot.spi.utils.builder.TableNameBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * This class is used at startup time to have a more accurate estimate of the catchup period in which no query execution
+ * happens and consumers try to catch up to the latest messages available in streams.
+ * To achieve this, every time status check is called, {@link #haveAllConsumingSegmentsReachedStreamLatestOffset},
+ * list of consuming segments is gathered and then for each segment, we check if segment's latest ingested offset has
+ * reached the latest stream offset. To prevent chasing a moving target, once the latest stream offset is fetched, it
+ * will not be fetched again and subsequent status check calls compare latest ingested offset with the already fetched
+ * stream offset.
+ */
+public class OffsetBasedConsumptionStatusChecker {
+  private static final Logger LOGGER = LoggerFactory.getLogger(OffsetBasedConsumptionStatusChecker.class);
+
+  private final InstanceDataManager _instanceDataManager;
+  private Supplier<Set<String>> _consumingSegmentFinder;
+
+  private Set<String> _alreadyProcessedSegments = new HashSet<>();
+  private Map<String, StreamPartitionMsgOffset> _segmentNameToLatestStreamOffset = new HashMap<>();
+
+  public OffsetBasedConsumptionStatusChecker(InstanceDataManager instanceDataManager, HelixAdmin helixAdmin,
+      String helixClusterName, String instanceId) {
+    this(instanceDataManager, () -> findConsumingSegments(helixAdmin, helixClusterName, instanceId));
+  }
+
+  @VisibleForTesting
+  OffsetBasedConsumptionStatusChecker(InstanceDataManager instanceDataManager,
+      Supplier<Set<String>> consumingSegmentFinder) {
+    _instanceDataManager = instanceDataManager;
+    _consumingSegmentFinder = consumingSegmentFinder;
+  }
+
+  public boolean haveAllConsumingSegmentsReachedStreamLatestOffset() {
+    boolean allSegsReachedLatest = true;
+    Set<String> consumingSegmentNames = _consumingSegmentFinder.get();
+    for (String segName : consumingSegmentNames) {
+      if (_alreadyProcessedSegments.contains(segName)) {
+        continue;
+      }
+      TableDataManager tableDataManager = getTableDataManager(segName);
+      if (tableDataManager == null) {
+        LOGGER.info("TableDataManager is not yet setup for segment {}. Will 
check consumption status later", segName);
+        return false;
+      }
+      SegmentDataManager segmentDataManager = 
tableDataManager.acquireSegment(segName);
+      if (segmentDataManager == null) {
+        LOGGER.info("SegmentDataManager is not yet setup for segment {}. Will 
check consumption status later", segName);
+        return false;
+      }
+      if (!(segmentDataManager instanceof LLRealtimeSegmentDataManager)) {
+        // There's a small chance that after getting the list of consuming segment names at the beginning of this method
+        // up to this point, a consuming segment gets converted to a committed segment. In that case status check is
+        // returned as false and in the next round the new consuming segment will be used for fetching offsets.
+        LOGGER.info("Segment {} is already committed. Will check consumption status later", segName);

Review comment:
       Done.

##########
File path: pinot-server/src/main/java/org/apache/pinot/server/starter/helix/OffsetBasedConsumptionStatusChecker.java
##########
@@ -0,0 +1,154 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.pinot.server.starter.helix;
+
+import com.google.common.annotations.VisibleForTesting;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Supplier;
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.model.IdealState;
+import org.apache.pinot.common.utils.LLCSegmentName;
+import org.apache.pinot.core.data.manager.InstanceDataManager;
+import org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager;
+import org.apache.pinot.segment.local.data.manager.SegmentDataManager;
+import org.apache.pinot.segment.local.data.manager.TableDataManager;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.stream.StreamPartitionMsgOffset;
+import org.apache.pinot.spi.utils.CommonConstants;
+import org.apache.pinot.spi.utils.builder.TableNameBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * This class is used at startup time to have a more accurate estimate of the catchup period in which no query execution
+ * happens and consumers try to catch up to the latest messages available in streams.
+ * To achieve this, every time status check is called, {@link #haveAllConsumingSegmentsReachedStreamLatestOffset},
+ * list of consuming segments is gathered and then for each segment, we check if segment's latest ingested offset has
+ * reached the latest stream offset. To prevent chasing a moving target, once the latest stream offset is fetched, it
+ * will not be fetched again and subsequent status check calls compare latest ingested offset with the already fetched
+ * stream offset.
+ */
+public class OffsetBasedConsumptionStatusChecker {
+  private static final Logger LOGGER = LoggerFactory.getLogger(OffsetBasedConsumptionStatusChecker.class);
+
+  private final InstanceDataManager _instanceDataManager;
+  private Supplier<Set<String>> _consumingSegmentFinder;
+
+  private Set<String> _alreadyProcessedSegments = new HashSet<>();
+  private Map<String, StreamPartitionMsgOffset> _segmentNameToLatestStreamOffset = new HashMap<>();
+
+  public OffsetBasedConsumptionStatusChecker(InstanceDataManager instanceDataManager, HelixAdmin helixAdmin,
+      String helixClusterName, String instanceId) {
+    this(instanceDataManager, () -> findConsumingSegments(helixAdmin, helixClusterName, instanceId));
+  }
+
+  @VisibleForTesting
+  OffsetBasedConsumptionStatusChecker(InstanceDataManager instanceDataManager,
+      Supplier<Set<String>> consumingSegmentFinder) {
+    _instanceDataManager = instanceDataManager;
+    _consumingSegmentFinder = consumingSegmentFinder;
+  }
+
+  public boolean haveAllConsumingSegmentsReachedStreamLatestOffset() {
+    boolean allSegsReachedLatest = true;
+    Set<String> consumingSegmentNames = _consumingSegmentFinder.get();
+    for (String segName : consumingSegmentNames) {
+      if (_alreadyProcessedSegments.contains(segName)) {
+        continue;
+      }
+      TableDataManager tableDataManager = getTableDataManager(segName);
+      if (tableDataManager == null) {
+        LOGGER.info("TableDataManager is not yet setup for segment {}. Will 
check consumption status later", segName);
+        return false;
+      }
+      SegmentDataManager segmentDataManager = 
tableDataManager.acquireSegment(segName);
+      if (segmentDataManager == null) {
+        LOGGER.info("SegmentDataManager is not yet setup for segment {}. Will 
check consumption status later", segName);
+        return false;
+      }
+      if (!(segmentDataManager instanceof LLRealtimeSegmentDataManager)) {
+        // There's a small chance that after getting the list of consuming segment names at the beginning of this method
+        // up to this point, a consuming segment gets converted to a committed segment. In that case status check is
+        // returned as false and in the next round the new consuming segment will be used for fetching offsets.
+        LOGGER.info("Segment {} is already committed. Will check consumption status later", segName);
+        tableDataManager.releaseSegment(segmentDataManager);

Review comment:
       Added a try/catch.
   For the 2nd part of your comment: at this line of code, we know the segment has already been committed (it's no longer a consuming segment), so we return false. We don't continue with the cast and the rest of the method.
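   Since this hunk ends before the added try/catch, here is one possible shape just to illustrate the intent (an assumption on my part about the placement, not necessarily how it landed in the PR): any unexpected exception is treated as "not caught up yet", and the acquired segment is always released.

      SegmentDataManager segmentDataManager = tableDataManager.acquireSegment(segName);
      if (segmentDataManager == null) {
        LOGGER.info("SegmentDataManager is not yet setup for segment {}. Will check consumption status later", segName);
        return false;
      }
      try {
        if (!(segmentDataManager instanceof LLRealtimeSegmentDataManager)) {
          // The segment got committed between listing consuming segments and acquiring it.
          LOGGER.info("Segment {} is already committed. Will check consumption status later", segName);
          return false;
        }
        // ... offset comparison as in the hunk above ...
      } catch (Exception e) {
        LOGGER.warn("Caught exception while checking consumption status for segment {}", segName, e);
        return false;
      } finally {
        tableDataManager.releaseSegment(segmentDataManager);
      }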




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


