fvaleye commented on code in PR #1620:
URL: https://github.com/apache/iceberg-rust/pull/1620#discussion_r2303784947


##########
crates/integrations/datafusion/src/physical_plan/repartition.rs:
##########
@@ -0,0 +1,906 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+use std::any::Any;
+use std::sync::Arc;
+
+use datafusion::error::Result as DFResult;
+use datafusion::execution::{SendableRecordBatchStream, TaskContext};
+use datafusion::physical_expr::{EquivalenceProperties, PhysicalExpr};
+use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType};
+use datafusion::physical_plan::expressions::Column;
+use datafusion::physical_plan::repartition::RepartitionExec;
+use datafusion::physical_plan::{
+    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, PlanProperties,
+};
+use iceberg::spec::{SchemaRef, TableMetadata, TableMetadataRef, Transform};
+
+/// Iceberg-specific repartition execution plan that optimizes data distribution
+/// for parallel processing while respecting Iceberg table partitioning semantics.
+///
+/// This execution plan automatically determines the optimal partitioning strategy based on
+/// the table's partition specification and the configured write distribution mode:
+///
+/// ## Partitioning Strategies
+///
+/// - **Unpartitioned tables**: Uses round-robin distribution to ensure balanced load
+///   across all workers, maximizing parallelism for write operations.
+///
+/// - **Partitioned tables**: Uses hash partitioning on partition columns (identity transforms)
+///   and bucket columns to maintain data co-location. This ensures:
+///   - Better file clustering within partitions
+///   - Improved query pruning performance
+///   - Optimal join performance on partitioned columns
+///
+/// - **Range-distributed tables**: Approximates range distribution by hashing on sort order
+///   columns since DataFusion lacks native range exchange. Falls back to partition/bucket
+///   column hashing when available.
+///
+/// ## Write Distribution Modes
+///
+/// Respects the table's `write.distribution-mode` property:
+/// - `hash` (default): Distributes by partition and bucket columns
+/// - `range`: Distributes by sort order columns
+/// - `none`: Uses round-robin distribution
+///
+/// ## Performance notes
+///
+/// - Only repartitions when the input partitioning scheme differs from the desired strategy
+/// - Only repartitions when the input partition count differs from the target
+/// - Automatically detects optimal partition count from DataFusion's SessionConfig
+/// - Preserves column order (partitions first, then buckets) for consistent file layout
+#[derive(Debug)]
+pub struct IcebergRepartitionExec {

Review Comment:
   I understand your point better!
   
   This is mostly a code architecture decision, based on what we think is cleaner in terms of implementation.
   I created `IcebergRepartitionExec` to have a clear separation and identification (as a separate file and structure), and to follow (and integrate with) a common DataFusion physical plan chain:
   ```
      -> InputExec
      -> ProjectionExec
      -> RepartitionExec
      -> SortExec
      -> WriteExec
   ```
   Therefore, we can easily identify the `IcebergRepartitionExec` implementation (file and DataFusion integration) and extend it, so it can be reused across different operations (merge, compaction, ...).
   
   To sum up:
   - I thought of having a dedicated file for the DataFusion plan to make it explicit that we apply an additional physical plan execution for Iceberg. It makes it clearer that there is a dedicated node in the plan that everyone can reason about.
   - It's a safe, composable encapsulation: centralizing the partitioning logic improves the readability, maintainability, and evolution of the code.
   - Lastly, the logic could evolve rapidly: we could introduce optimizations, more complex behavior, or custom DataFusion logic for performance tuning. I avoided this for now to keep it simple.
   
   Of course, with your knowledge of the codebase, you know better than I do where this fits best.
   I'm thinking out loud, and we could move the logic to a more tailored implementation for now, wherever you think it makes more sense.
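To make the strategy selection concrete, here is a minimal, self-contained Rust sketch of the `write.distribution-mode` rules described in the quoted doc comment (`hash` → partition/bucket columns, `range` → sort-order columns with a fallback, `none` → round-robin). The `WriteDistributionMode` and `Strategy` enums and the `pick_strategy` function are hypothetical stand-ins for illustration only, not the types in this PR:

```rust
// Illustrative sketch only: these names are made up and do not match the
// actual `IcebergRepartitionExec` implementation.
#[derive(Debug, PartialEq)]
enum WriteDistributionMode {
    Hash,
    Range,
    None,
}

#[derive(Debug, PartialEq)]
enum Strategy {
    // Hash-partition on the given column names.
    HashColumns(Vec<String>),
    // Round-robin across `n` target partitions.
    RoundRobin(usize),
}

// Mirrors the rules from the doc comment: `hash` (the default) distributes by
// partition and bucket columns; `range` approximates range distribution by
// hashing sort-order columns (DataFusion has no native range exchange) and
// falls back to partition/bucket columns; `none`, or the absence of any usable
// columns, yields round-robin for balanced load.
fn pick_strategy(
    mode: WriteDistributionMode,
    partition_cols: Vec<String>,
    sort_cols: Vec<String>,
    target_partitions: usize,
) -> Strategy {
    match mode {
        WriteDistributionMode::Hash if !partition_cols.is_empty() => {
            Strategy::HashColumns(partition_cols)
        }
        WriteDistributionMode::Range if !sort_cols.is_empty() => {
            Strategy::HashColumns(sort_cols)
        }
        WriteDistributionMode::Range if !partition_cols.is_empty() => {
            Strategy::HashColumns(partition_cols)
        }
        _ => Strategy::RoundRobin(target_partitions),
    }
}

fn main() {
    // Partitioned table with the default `hash` mode: hash on partition columns.
    let s = pick_strategy(
        WriteDistributionMode::Hash,
        vec!["region".to_string()],
        vec![],
        8,
    );
    assert_eq!(s, Strategy::HashColumns(vec!["region".to_string()]));

    // Unpartitioned table with `none`: round-robin for balanced load.
    let s = pick_strategy(WriteDistributionMode::None, vec![], vec![], 8);
    assert_eq!(s, Strategy::RoundRobin(8));
}
```

The point of the sketch is only that the mode-to-strategy mapping is a small, pure decision that benefits from living in one dedicated place, which is the encapsulation argument made above.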



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

