Fokko commented on code in PR #14163:
URL: https://github.com/apache/iceberg/pull/14163#discussion_r2372021034


##########
spark/v4.0/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestSnapshotTableProcedure.java:
##########
@@ -281,5 +298,61 @@ public void testSnapshotPartitionedWithParallelism() throws IOException {
         "Should have expected rows",
         ImmutableList.of(row("a", 1L), row("b", 2L)),
         sql("SELECT * FROM %s ORDER BY id", tableName));
+
+    Table createdTable = validationCatalog.loadTable(tableIdent);
+
+    for (ManifestFile manifest :
+        createdTable.currentSnapshot().dataManifests(new HadoopFileIO(new Configuration()))) {
+      try (AvroIterable<GenericData.Record> reader =
+          Avro.read(org.apache.iceberg.Files.localInput(manifest.path()))
+              .project(SNAPSHOT_ID_READ_SCHEMA)
+              .createResolvingReader(GenericAvroReader::create)
+              .build()) {
+
+        assertThat(reader.getMetadata().get("format-version")).isEqualTo("2");
+
+        List<GenericData.Record> records = Lists.newArrayList(reader.iterator());
+        for (GenericData.Record row : records) {
+          assertThat(row.get(0)).as("Field-ID should be inherited").isNull();
+        }
+      }
+    }
+  }
+
+  @TestTemplate
+  public void testSnapshotPartitionedWithParallelismV1() throws IOException {
+    String location = Files.createTempDirectory(temp, "junit").toFile().toString();
+    sql(
+        "CREATE TABLE %s (id bigint NOT NULL, data string) USING parquet 
PARTITIONED BY (id) LOCATION '%s'",
+        SOURCE_NAME, location);
+    sql("INSERT INTO TABLE %s (id, data) VALUES (1, 'a'), (2, 'b')", 
SOURCE_NAME);
+    List<Object[]> result =
+        sql(
+            "CALL %s.system.snapshot(source_table => '%s', table => '%s', 
parallelism => %d, properties => map('format-version', '1'))",

Review Comment:
   I had the same idea and tried this first as well, but I could not reproduce the underlying issue. It turns out that the code takes a different branch when the table is partitioned, and that branch is what triggers the issue:

   https://github.com/apache/iceberg/blob/6b80e5c42beb856be5c84c00b9f96d7ff268a7d7/spark/v4.0/spark/src/main/java/org/apache/iceberg/spark/SparkTableUtil.java#L590-L608

   With an unpartitioned table, I'm unable to reproduce the issue.
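
   For context, the unpartitioned variant looks roughly like the sketch below. It is only a sketch: it mirrors the partitioned test above with the PARTITIONED BY clause dropped, the constants and helpers (SOURCE_NAME, catalogName, tableName, temp, sql) are assumed to come from the surrounding test class, and the CALL arguments are illustrative.

   ```java
   // Sketch only: an unpartitioned version of the snapshot test above.
   // With this shape the import goes through the unpartitioned code path in
   // SparkTableUtil, which is why the issue does not show up here.
   @TestTemplate
   public void testSnapshotUnpartitionedWithParallelismV1() throws IOException {
     String location = Files.createTempDirectory(temp, "junit").toFile().toString();
     sql(
         "CREATE TABLE %s (id bigint NOT NULL, data string) USING parquet LOCATION '%s'",
         SOURCE_NAME, location);
     sql("INSERT INTO TABLE %s (id, data) VALUES (1, 'a'), (2, 'b')", SOURCE_NAME);
     // Snapshot the source table into an Iceberg v1 table, as in the partitioned test.
     sql(
         "CALL %s.system.snapshot(source_table => '%s', table => '%s', parallelism => %d, properties => map('format-version', '1'))",
         catalogName, SOURCE_NAME, tableName, 2);
   }
   ```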



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

