RussellSpitzer commented on code in PR #12319:
URL: https://github.com/apache/iceberg/pull/12319#discussion_r1960681428


##########
spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestAddFilesProcedure.java:
##########
@@ -476,6 +485,53 @@ public void addPartitionToPartitionedSnapshotIdInheritanceEnabledInTwoRuns() {
         sql("SELECT id, name, dept, subdept FROM %s WHERE id < 3 ORDER BY id", sourceTableName),
         sql("SELECT id, name, dept, subdept FROM %s ORDER BY id", tableName));
 
+    Table table = Spark3Util.loadIcebergTable(spark, tableName);
+    FileIO io = ((HasTableOperations) table).operations().io();
+
+    assertThat(
+            table.currentSnapshot().allManifests(io).stream()
+                .map(mf -> ManifestFiles.read(mf, io, null /* force reading spec from file */))
+                .collect(Collectors.toList()))
+        .allMatch(file -> file.spec().equals(table.spec()));
+
+    // verify the manifest file name contains a UUID
+    String manifestPath = (String) sql("select path from %s.manifests", tableName).get(0)[0];
+
+    Pattern uuidPattern = Pattern.compile("[a-f0-9]{8}(?:-[a-f0-9]{4}){4}[a-f0-9]{8}");
+
+    Matcher matcher = uuidPattern.matcher(manifestPath);
+    assertThat(matcher.find()).as("verify manifest path has uuid").isTrue();
+  }
+
+  @TestTemplate
+  public void addPartitionsFromHiveSnapshotInheritanceEnabled()
+      throws NoSuchTableException, ParseException {
+    createPartitionedHiveTable();
+    createIcebergTable(
+        "id Integer, name String, dept String, subdept String", "PARTITIONED 
BY (id)");
+
+    sql(
+        "ALTER TABLE %s SET TBLPROPERTIES ('%s' 'true')",
+        tableName, TableProperties.SNAPSHOT_ID_INHERITANCE_ENABLED);
+
+    sql("CALL %s.system.add_files('%s', '%s')", catalogName, tableName, 
sourceTableName);
+
+    assertEquals(
+        "Iceberg table contains correct data",
+        sql("SELECT id, name, dept, subdept FROM %s ORDER BY id", 
sourceTableName),
+        sql("SELECT id, name, dept, subdept FROM %s ORDER BY id", tableName));
+
+    Table table = Spark3Util.loadIcebergTable(spark, tableName);
+    FileIO io = ((HasTableOperations) table).operations().io();
+
+    // Check that the manifests written have the correct partition spec
+    assertThat(
+            table.currentSnapshot().allManifests(io).stream()
+                .map(mf -> ManifestFiles.read(mf, io, null /* force reading spec from file */))

Review Comment:
We need to pass null here because our native manifest readers take a mapping
of spec ID to partition spec and use it instead of reading the metadata from
the file. This means that as long as the spec ID is correct, the Java
implementation reads the correct spec even if an incorrect spec is stored in
the file itself.
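
For readers who hit this later, a minimal sketch of the difference (a hypothetical `SpecReadSketch` class with illustrative method names; the `ManifestFiles.read(ManifestFile, FileIO, Map<Integer, PartitionSpec>)` overload is the one the test uses):

```java
import java.io.IOException;
import java.util.Map;
import org.apache.iceberg.DataFile;
import org.apache.iceberg.ManifestFile;
import org.apache.iceberg.ManifestFiles;
import org.apache.iceberg.ManifestReader;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Table;
import org.apache.iceberg.io.FileIO;

class SpecReadSketch {
  // With a spec map, the reader resolves the spec by the manifest's spec ID,
  // so a wrong spec serialized inside the file would be masked.
  static PartitionSpec specFromMap(Table table, ManifestFile manifest, FileIO io)
      throws IOException {
    Map<Integer, PartitionSpec> specsById = table.specs();
    try (ManifestReader<DataFile> reader = ManifestFiles.read(manifest, io, specsById)) {
      return reader.spec(); // resolved from the map, not the file
    }
  }

  // With null, the reader must deserialize the spec from the manifest file's
  // own metadata, which is what the assertion in this test needs to exercise.
  static PartitionSpec specAsWritten(ManifestFile manifest, FileIO io) throws IOException {
    try (ManifestReader<DataFile> reader = ManifestFiles.read(manifest, io, null)) {
      return reader.spec(); // deserialized from the file itself
    }
  }
}
```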


