dramaticlly opened a new issue, #12134: URL: https://github.com/apache/iceberg/issues/12134
### Apache Iceberg version

1.7.1 (latest release)

### Query engine

Spark

### Please describe the bug 🐞

Before migrating to the REST catalog, we relied on the following `TableOperations.commit` API call to swap table metadata atomically:

```java
String desiredMetadataPath =
    "/var/newdb/table/metadata/00003-579b23d1-4ca5-4acf-85ec-081e1699cb83.metadata.json";
ops.commit(ops.current(), TableMetadataParser.read(ops.io(), desiredMetadataPath));
```

However, this no longer works with the REST catalog. I suspect it is related to [how the update types are modeled here](https://github.com/apache/iceberg/blob/a2b8008da7bc26e03248a35eeee60d1cc7e8499d/core/src/main/java/org/apache/iceberg/rest/RESTTableOperations.java#L116): `metadata.changes()` returns an empty list when the metadata is read back through the parser, so the update-table POST call ends up with an empty changeset (see the sketch at the end of this report).

This can be reproduced by adding the following test case to `org.apache.iceberg.catalog.CatalogTests`; every other catalog passes, and only `TestRESTCatalog` fails.

<img width="640" alt="Image" src="https://github.com/user-attachments/assets/bde2d284-18e9-415f-b544-0e4f921ec47a" />

Repro:

```java
@Test
public void testTableOperationCommit() {
  C catalog = catalog();

  if (requiresNamespaceCreate()) {
    catalog.createNamespace(TABLE.namespace());
  }

  Map<String, String> properties =
      ImmutableMap.of("user", "someone", "created-at", "2023-01-15T00:00:01");
  Table originalTable =
      catalog
          .buildTable(TABLE, SCHEMA)
          .withPartitionSpec(SPEC)
          .withSortOrder(WRITE_ORDER)
          .withProperties(properties)
          .create();

  TableOperations ops = ((BaseTable) originalTable).operations();
  String original = ops.current().metadataFileLocation();
  FileIO io = ops.io();

  originalTable.newFastAppend().appendFile(FILE_A).commit();
  originalTable.newFastAppend().appendFile(FILE_B).commit();

  String metadataLocation = ops.refresh().metadataFileLocation();
  System.out.println("After write, metadata location is: " + metadataLocation);

  ops.commit(ops.refresh(), TableMetadataParser.read(originalTable.io(), original));
  originalTable.refresh();

  TableMetadata actual = ((BaseTable) originalTable).operations().current();
  TableMetadata expected = new StaticTableOperations(original, io).current();

  assertThat(actual.properties())
      .as("Props must match")
      .containsAllEntriesOf(expected.properties());
  assertThat(actual.schema().asStruct())
      .as("Schema must match")
      .isEqualTo(expected.schema().asStruct());
  assertThat(actual.specs()).as("Specs must match").isEqualTo(expected.specs());
  assertThat(actual.sortOrders()).as("Sort orders must match").isEqualTo(expected.sortOrders());
  assertThat(actual.currentSnapshot())
      .as("Current snapshot must match")
      .isEqualTo(expected.currentSnapshot());
  assertThat(actual.snapshots()).as("Snapshots must match").isEqualTo(expected.snapshots());
  assertThat(actual.snapshotLog()).as("History must match").isEqualTo(expected.snapshotLog());
  TestHelpers.assertSameSchemaMap(actual.schemasById(), expected.schemasById());
  assertThat(actual)
      .isEqualToIgnoringGivenFields(
          expected,
          "metadataFileLocation",
          "schemas",
          "specs",
          "sortOrders",
          "properties",
          "schemasById",
          "specsById",
          "sortOrdersById",
          "snapshots");
}
```

### Willingness to contribute

- [ ] I can contribute a fix for this bug independently
- [x] I would be willing to contribute a fix for this bug with guidance from the Iceberg community
- [ ] I cannot contribute a fix for this bug at this time
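To make the suspected cause concrete, here is a minimal sketch of what I believe happens when metadata is re-read through the parser (the helper name `showEmptyChanges` is mine, and the path is the same one used above; this is an illustration of the suspicion, not a fix):

```java
import java.util.List;
import org.apache.iceberg.MetadataUpdate;
import org.apache.iceberg.TableMetadata;
import org.apache.iceberg.TableMetadataParser;
import org.apache.iceberg.io.FileIO;

// Hypothetical helper: metadata re-read from a file carries an empty change list,
// which I suspect leaves RESTTableOperations with no MetadataUpdates to send in the
// update-table POST call.
static void showEmptyChanges(FileIO io, String metadataPath) {
  TableMetadata parsed = TableMetadataParser.read(io, metadataPath);
  List<MetadataUpdate> changes = parsed.changes();
  // Expected to print 0 for parsed metadata.
  System.out.println("changes carried by parsed metadata: " + changes.size());
}

// Usage (path taken from the example above):
// showEmptyChanges(table.io(),
//     "/var/newdb/table/metadata/00003-579b23d1-4ca5-4acf-85ec-081e1699cb83.metadata.json");
```

By contrast, the file-based catalogs swap the metadata pointer directly in `TableOperations.commit`, which is presumably why the same call still works everywhere except `TestRESTCatalog`.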