jackye1995 commented on code in PR #9731:
URL: https://github.com/apache/iceberg/pull/9731#discussion_r1500225930


##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/RewriteManifestsSparkAction.java:
##########
@@ -250,12 +282,40 @@ private List<ManifestFile> writeUnpartitionedManifests(
   private List<ManifestFile> writePartitionedManifests(
       ManifestContent content, Dataset<Row> manifestEntryDF, int numManifests) {
 
+    // Extract desired clustering/sorting criteria into a dedicated column
+    Dataset<Row> clusteredManifestEntryDF;
+    String clusteringColumnName = "__clustering_column__";
+
+    if (partitionSortColumns != null) {
+      LOG.info(
+          "Sorting manifests for specId {} by partition columns in order of {}",
+          spec.specId(),
+          partitionSortColumns);
+
+      // Map the top level partition column names to the column name referenced within the manifest
+      // entry dataframe
+      Column[] actualPartitionColumns =
+          partitionSortColumns.stream()
+              .map(p -> col("data_file.partition." + p))
+              .toArray(Column[]::new);
+
+      // Form a new temporary column to sort/cluster manifests on, based on the custom sort
+      // order provided
+      clusteredManifestEntryDF =
+          manifestEntryDF.withColumn(
+              clusteringColumnName, functions.struct(actualPartitionColumns));
+    } else {
+      clusteredManifestEntryDF =
+          manifestEntryDF.withColumn(clusteringColumnName, col("data_file.partition"));
+    }
+
     return withReusableDS(
-        manifestEntryDF,
+        clusteredManifestEntryDF,
         df -> {
           WriteManifests<?> writeFunc = newWriteManifestsFunc(content, df.schema());
-          Column partitionColumn = df.col("data_file.partition");
-          Dataset<Row> transformedDF = repartitionAndSort(df, partitionColumn, numManifests);
+          Column partitionColumn = df.col(clusteringColumnName);
+          Dataset<Row> transformedDF =
+              repartitionAndSort(df, partitionColumn, numManifests).drop(clusteringColumnName);

Review Comment:
   The column here is used more for repartitioning than for sorting: sorting only happens among rows that share the same clustering column value. I am starting to wonder whether we should call the API `repartition(List<String> partitionFields)` rather than `sort(List<String> partitionFields)`.
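   For context, a minimal sketch of the semantics being discussed, assuming the helper is built on Spark's `repartitionByRange` plus `sortWithinPartitions` (the actual `repartitionAndSort` implementation in `RewriteManifestsSparkAction` may differ):

   ```java
   import org.apache.spark.sql.Column;
   import org.apache.spark.sql.Dataset;
   import org.apache.spark.sql.Row;

   public class RepartitionSketch {

     // Hypothetical helper mirroring the discussed semantics: the clustering column
     // mainly decides how manifest entries are distributed across the target number
     // of output partitions (one per manifest); sorting then only orders entries
     // that ended up in the same partition.
     static Dataset<Row> repartitionAndSort(
         Dataset<Row> df, Column clusteringColumn, int numManifests) {
       return df
           // Range-partition rows into numManifests partitions by the clustering column.
           .repartitionByRange(numManifests, clusteringColumn)
           // Order rows within each partition; this does not move rows across partitions.
           .sortWithinPartitions(clusteringColumn);
     }
   }
   ```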


