somandal commented on code in PR #16857:
URL: https://github.com/apache/pinot/pull/16857#discussion_r2412044624
########## pinot-controller/src/main/java/org/apache/pinot/controller/helix/core/minion/DistributedTaskLockManager.java:
##########
@@ -0,0 +1,423 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.controller.helix.core.minion;
+
+import com.google.common.annotations.VisibleForTesting;
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.List;
+import java.util.UUID;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import javax.annotation.Nullable;
+import org.apache.helix.AccessOption;
+import org.apache.helix.store.zk.ZkHelixPropertyStore;
+import org.apache.helix.zookeeper.datamodel.ZNRecord;
+import org.apache.pinot.common.metadata.ZKMetadataProvider;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * Manages distributed locks for minion task generation using ZooKeeper ephemeral sequential nodes.
+ * Uses ephemeral nodes that automatically disappear when the controller session ends.
+ * This approach provides automatic cleanup and is suitable for long-running task generation.
+ * Locks are held until explicitly released or the controller session terminates.
+ * Locks are at the table level, to ensure that only one type of task can be generated per table at any given time.
+ * <p>
+ * ZK EPHEMERAL_SEQUENTIAL Locks (see <a href="https://zookeeper.apache.org/doc/current/recipes.html#sc_recipes_Locks">
+ * ZooKeeper Lock Recipe</a> for more details):
+ * <ul>
+ *   <li>Every lock is created with a lock prefix. Lock prefix used: [controllerName]-lock-[UUID]. The UUID helps
+ *   differentiate between requests originating from the same controller at the same time
+ *   <li>When ZK creates the ZNode, it appends a sequence number at the end. E.g.
+ *   [controllerName]-lock-[UUID]-00000001
+ *   <li>The sequence number is used to identify the lock winner in case more than one lock node is created at the
+ *   same time. The smallest sequence number always wins
+ *   <li>The locks are EPHEMERAL in nature, meaning that once the session with ZK is lost, the lock is automatically
+ *   cleaned up. Scenarios when the ZK session can be lost: a) controller shutdown, b) controller crash, c) ZK session
+ *   expiry (e.g. long GC pauses can cause this)
+ *   <li>This implementation does not set up watches as described in the recipe as the task lock is released whenever
+ *   we identify that the lock is already acquired. Do not expect lock ownership to automatically change for the
+ *   time being. If such support is needed in the future, this can be enhanced to add a watch on the neighboring
+ *   lock node
+ * </ul>
+ * <p>
+ * Example of how the locks will work:
+ * <p>
+ * Say we have two controllers, and one controller happens to run 2 threads at the same time, all of which need to
+ * take the distributed lock. Each thread will create a distributed lock node, and the "-Lock" ZNode getChildren
+ * will return:
+ * <ul>
+ *   <li>controller2-lock-xyzwx-00000002
+ *   <li>controller1-lock-abcde-00000001
+ *   <li>controller1-lock-ab345-00000003
+ * </ul>
+ * <p>
+ * In the above, the controller1 with UUID abcde will win the lock as it has the smallest sequence number. The other
+ * two threads will clean up their locks and return an error that the distributed lock could not be acquired. Controller1

Review Comment:
   we won't have extra nodes anymore, but it is still possible to accidentally miss cleanup of the lock node. We have an API to force clean up the lock node, and I'm also emitting a metric which can be used to track how old the lock node is (this is called when we check whether any task generation is ongoing for the given table).

   Hopefully the above is sufficient for now, but let me know if you have any other ideas or suggestions around this.
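To make the recipe concrete, below is a minimal sketch of the acquire path described in the class javadoc, written against the Helix property-store APIs the file already imports (`ZkHelixPropertyStore`, `AccessOption`, `ZNRecord`). The class name `LockRecipeSketch`, the `tryAcquire` method, and the `lockParentPath` argument (e.g. a per-table "-Lock" node) are hypothetical illustrations only, not the actual `DistributedTaskLockManager` implementation in this PR.

```java
import java.util.List;
import java.util.UUID;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.helix.AccessOption;
import org.apache.helix.store.zk.ZkHelixPropertyStore;
import org.apache.helix.zookeeper.datamodel.ZNRecord;

/**
 * Hypothetical sketch of the EPHEMERAL_SEQUENTIAL lock recipe described above.
 * Not the actual DistributedTaskLockManager from this PR.
 */
public class LockRecipeSketch {
  // ZooKeeper appends a zero-padded sequence suffix (e.g. "-0000000001") to sequential nodes.
  private static final Pattern SEQUENCE_SUFFIX = Pattern.compile("-(\\d+)$");

  private final ZkHelixPropertyStore<ZNRecord> _propertyStore;
  private final String _controllerName;

  public LockRecipeSketch(ZkHelixPropertyStore<ZNRecord> propertyStore, String controllerName) {
    _propertyStore = propertyStore;
    _controllerName = controllerName;
  }

  /**
   * Tries to acquire the lock under the given parent path (e.g. a per-table "-Lock" node).
   * Returns the name of the owned lock node on success, or null if another request holds the lock.
   */
  public String tryAcquire(String lockParentPath) {
    // 1. Create an EPHEMERAL_SEQUENTIAL node named [controllerName]-lock-[UUID]; ZK appends the sequence.
    //    The node disappears automatically if the controller's ZK session ends.
    String uuid = UUID.randomUUID().toString();
    String lockPrefix = _controllerName + "-lock-" + uuid;
    _propertyStore.create(lockParentPath + "/" + lockPrefix, new ZNRecord(lockPrefix),
        AccessOption.EPHEMERAL_SEQUENTIAL);

    // 2. List all lock nodes and find our own via the UUID (create() does not return the final
    //    node name, so the UUID is what ties this request back to the node it created).
    List<String> children = _propertyStore.getChildNames(lockParentPath, AccessOption.PERSISTENT);
    String myNode = null;
    long mySequence = Long.MAX_VALUE;
    long smallestSequence = Long.MAX_VALUE;
    for (String child : children) {
      long sequence = parseSequence(child);
      if (sequence < 0) {
        continue; // not a lock node created by this recipe
      }
      smallestSequence = Math.min(smallestSequence, sequence);
      if (child.contains(uuid)) {
        myNode = child;
        mySequence = sequence;
      }
    }

    // 3. The smallest sequence number wins. A loser cleans up its own node and reports failure
    //    instead of watching the neighboring node.
    if (myNode != null && mySequence == smallestSequence) {
      return myNode;
    }
    if (myNode != null) {
      _propertyStore.remove(lockParentPath + "/" + myNode, AccessOption.PERSISTENT);
    }
    return null;
  }

  private static long parseSequence(String nodeName) {
    Matcher matcher = SEQUENCE_SUFFIX.matcher(nodeName);
    return matcher.find() ? Long.parseLong(matcher.group(1)) : -1;
  }
}
```

As the javadoc notes, a losing thread simply removes its own node and reports that the lock could not be acquired; no watch is set on the neighboring node, so ownership never transfers automatically.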
