pvary commented on code in PR #10457:
URL: https://github.com/apache/iceberg/pull/10457#discussion_r1684360744


##########
flink/v1.19/flink/src/main/java/org/apache/iceberg/flink/sink/shuffle/RangePartitioner.java:
##########
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.flink.sink.shuffle;
+
+import java.util.Random;
+import java.util.concurrent.atomic.AtomicLong;
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.functions.Partitioner;
+import org.apache.flink.table.data.RowData;
+import org.apache.iceberg.Schema;
+import org.apache.iceberg.SortOrder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Partitioner that delegates to a range partitioner built from the received statistics, and
+ * falls back to round-robin distribution while statistics are not yet available.
+ */
+@Internal
+public class RangePartitioner implements Partitioner<StatisticsOrRecord> {
+  private static final Logger LOG = LoggerFactory.getLogger(RangePartitioner.class);
+
+  private final Schema schema;
+  private final SortOrder sortOrder;
+
+  private transient AtomicLong roundRobinCounter;
+  private transient Partitioner<RowData> delegatePartitioner;
+
+  public RangePartitioner(Schema schema, SortOrder sortOrder) {
+    this.schema = schema;
+    this.sortOrder = sortOrder;
+  }
+
+  @Override
+  public int partition(StatisticsOrRecord wrapper, int numPartitions) {
+    if (wrapper.hasStatistics()) {
+      this.delegatePartitioner = delegatePartitioner(wrapper.statistics());
+      return (int) (roundRobinCounter(numPartitions).getAndIncrement() % numPartitions);
+    } else {
+      if (delegatePartitioner != null) {
+        return delegatePartitioner.partition(wrapper.record(), numPartitions);
+      } else {
+        int partition = (int) (roundRobinCounter(numPartitions).getAndIncrement() % numPartitions);
+        LOG.trace("Statistics not available. Round robin to partition {}", partition);
+        return partition;
+      }
+    }
+  }
+
+  private AtomicLong roundRobinCounter(int numPartitions) {
+    if (roundRobinCounter == null) {
+      // randomize the starting point to avoid synchronization across subtasks
+      this.roundRobinCounter = new AtomicLong(new Random().nextInt(numPartitions));
+    }
+
+    return roundRobinCounter;
+  }
+
+  private Partitioner<RowData> delegatePartitioner(GlobalStatistics statistics) {
+    if (statistics.type() == StatisticsType.Map) {
+      return new MapRangePartitioner(schema, sortOrder, statistics.mapAssignment());
+    } else if (statistics.type() == StatisticsType.Sketch) {
+      return new SketchRangePartitioner(schema, sortOrder, statistics.rangeBounds());
+    } else {
+      throw new IllegalArgumentException(
+          String.format("Invalid statistics type: %s. Should be Map or Sketch", statistics.type()));
+    }
+  }

Review Comment:
   After our offline discussion I understand your points better. I understand that the concept is:
   - that we collect the statistics
   - then use the statistics to create a partitioner

   and we try to stick to that concept.
   
   I still find it confusing that we currently collect 2 types of statistics, each tightly coupled to one of the 2 types of partitioners we have (`Type.Map` is always used by `MapRangePartitioner`, and `Type.Sketch` is always used by `SketchRangePartitioner`). We could instead create the `MapRangePartitioner` or the `SketchRangePartitioner` on the coordinator side and send it to a PartitionerExecutor, which would just deserialize and run it. That way the real logic would live in a single place (`DataStatisticsCoordinator`).
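
   To make this concrete, here is a minimal, self-contained sketch of the alternative using plain Java serialization. The names `ShippablePartitioner`, `ship`, and `runShipped` are hypothetical, standing in for Flink's `Partitioner` and the coordinator/executor wiring; the real implementation would reuse the existing serializers.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class PartitionerShippingSketch {

  // Stand-in for Flink's Partitioner<T>; it extends Serializable so the
  // coordinator can ship a fully configured instance to the subtasks.
  interface ShippablePartitioner<T> extends Serializable {
    int partition(T key, int numPartitions);
  }

  // Coordinator side: choose the concrete partitioner (Map- or Sketch-based)
  // and serialize it, instead of sending raw statistics.
  static byte[] ship(ShippablePartitioner<?> partitioner) throws Exception {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
      out.writeObject(partitioner);
    }
    return bytes.toByteArray();
  }

  // Executor side: knows nothing about statistics types; it only
  // deserializes whatever the coordinator sent and delegates to it.
  @SuppressWarnings("unchecked")
  static <T> int runShipped(byte[] shipped, T key, int numPartitions) throws Exception {
    try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(shipped))) {
      ShippablePartitioner<T> partitioner = (ShippablePartitioner<T>) in.readObject();
      return partitioner.partition(key, numPartitions);
    }
  }

  public static void main(String[] args) throws Exception {
    // A trivial hash partitioner standing in for MapRangePartitioner/SketchRangePartitioner.
    ShippablePartitioner<String> chosen = (key, n) -> Math.floorMod(key.hashCode(), n);
    byte[] shipped = ship(chosen);
    int partition = runShipped(shipped, "some-key", 4);
    System.out.println(partition == Math.floorMod("some-key".hashCode(), 4)); // prints true
  }
}
```

   The point is only that the executor side stays type-agnostic; the serialized payload would still carry the underlying statistics either way.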
   
   That said, this improvement is just a coding style preference: the partitioners would still need to serialize the underlying statistics, so the performance would be the same.
   
   So we can move forward with your proposed solution.
   Thanks for the discussion!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

