slfan1989 commented on code in PR #13956:
URL: https://github.com/apache/iceberg/pull/13956#discussion_r2315035269


##########
spark/v4.0/spark/src/test/java/org/apache/iceberg/spark/actions/TestRewriteTablePathsAction.java:
##########
@@ -902,36 +902,37 @@ public void testInvalidArgs() {
   }
 
   @Test
-  public void testPartitionStatisticFile() throws IOException {
+  public void testTableWithPartitionStatisticFile() throws IOException {
     String sourceTableLocation = newTableLocation();
     Map<String, String> properties = Maps.newHashMap();
     properties.put("format-version", "2");
     String tableName = "v2tblwithPartStats";
     Table sourceTable =
        createMetastoreTable(sourceTableLocation, properties, "default", tableName, 0);
 
+    sql("insert into hive.default.%s values (%s, 'AAAAAAAAAA', 'AAAA')", tableName, 0);
+    sourceTable.refresh();
+
     TableMetadata metadata = currentMetadata(sourceTable);
+    File statFile = new File(removePrefix(sourceTableLocation + "/stats/file.parquet"));
     TableMetadata withPartStatistics =
         TableMetadata.buildFrom(metadata)
             .setPartitionStatistics(
                 ImmutableGenericPartitionStatisticsFile.builder()
                     .snapshotId(11L)
-                    .path("/some/partition/stats/file.parquet")
+                    .path(statFile.toURI().toString())
                     .fileSizeInBytes(42L)
                     .build())
             .build();
-
     OutputFile file = sourceTable.io().newOutputFile(metadata.metadataFileLocation());
     TableMetadataParser.overwrite(withPartStatistics, file);
 
-    assertThatThrownBy(
-            () ->
-                actions()
-                    .rewriteTablePath(sourceTable)
-                    .rewriteLocationPrefix(sourceTableLocation, targetTableLocation())
-                    .execute())
-        .isInstanceOf(IllegalArgumentException.class)
-        .hasMessageContaining("Partition statistics files are not supported yet");
+    RewriteTablePath.Result result =
+        actions()
+            .rewriteTablePath(sourceTable)
+            .rewriteLocationPrefix(sourceTableLocation, targetTableLocation())
+            .execute();
+    checkFileNum(2, 1, 1, 6, result);

Review Comment:
   Thank you very much for your feedback. I have improved the unit test, focusing on the following aspects:
   
   - Creating a partitioned table and writing data into it.
   - Using the computePartitionStats method to generate the partition statistics file.
   - Verifying the file count after the path rewrite.
   - Analyzing the file-copy results and filtering the copied files down to the partition statistics file, to ensure it is correctly rewritten.
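   The filtering step in the last bullet can be sketched as below. This is an illustrative standalone helper, not the actual `RewriteTablePath.Result` API: the class name `PartitionStatsFilter`, the plain `List<String>` input, and the `partition-stats` file-name convention are all assumptions made for the sake of a self-contained example.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the verification step described above: given the
// paths of all files copied by the path rewrite, keep only the partition
// statistics file(s) so a test can assert they were rewritten to the
// target location. The "partition-stats" substring match is an assumed
// naming convention, not something guaranteed by the Iceberg API.
public class PartitionStatsFilter {

  public static List<String> filterPartitionStats(List<String> copiedFiles) {
    return copiedFiles.stream()
        .filter(path -> path.contains("partition-stats"))
        .collect(Collectors.toList());
  }
}
```

   A test could then assert that the filtered list is non-empty and that every remaining path starts with the target table location.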



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

