amogh-jahagirdar commented on code in PR #10755:
URL: https://github.com/apache/iceberg/pull/10755#discussion_r1695462421


##########
core/src/main/java/org/apache/iceberg/TableMetadata.java:
##########
@@ -597,6 +597,24 @@ public TableMetadata replaceProperties(Map<String, String> rawProperties) {
         .build();
   }
 
+  /**
+   * Prune the unused partition specs from the table metadata.
+   *
+   * <p>Note: it's not safe for external client to call this directly, it's usually called by the

Review Comment:
   I'd suggest a more concise comment:
   
   ```
   External callers of this should ensure that the specs to remove are not active
   ```



##########
core/src/main/java/org/apache/iceberg/BaseRemoveUnusedSpecs.java:
##########
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import static org.apache.iceberg.TableProperties.COMMIT_MAX_RETRY_WAIT_MS;
+import static org.apache.iceberg.TableProperties.COMMIT_MAX_RETRY_WAIT_MS_DEFAULT;
+import static org.apache.iceberg.TableProperties.COMMIT_MIN_RETRY_WAIT_MS;
+import static org.apache.iceberg.TableProperties.COMMIT_MIN_RETRY_WAIT_MS_DEFAULT;
+import static org.apache.iceberg.TableProperties.COMMIT_NUM_RETRIES;
+import static org.apache.iceberg.TableProperties.COMMIT_NUM_RETRIES_DEFAULT;
+import static org.apache.iceberg.TableProperties.COMMIT_TOTAL_RETRY_TIME_MS;
+import static org.apache.iceberg.TableProperties.COMMIT_TOTAL_RETRY_TIME_MS_DEFAULT;
+
+import java.util.List;
+import java.util.Set;
+import java.util.stream.Collectors;
+import org.apache.iceberg.exceptions.CommitFailedException;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.apache.iceberg.util.Tasks;
+
+/**
+ * Implementation of RemoveUnusedSpecs API to remove unused partition specs.
+ *
+ * <p>When committing, these changes will be applied to the latest table metadata. Commit conflicts
+ * will be resolved by recalculating which specs are no longer in use again in the latest metadata
+ * and retrying.
+ */
+class BaseRemoveUnusedSpecs implements RemoveUnusedSpecs {
+  private final TableOperations ops;
+  private final Table table;
+
+  BaseRemoveUnusedSpecs(TableOperations ops, Table table) {
+    this.ops = ops;
+    this.table = table;
+  }
+
+  @Override
+  public List<PartitionSpec> apply() {
+    TableMetadata current = ops.refresh();
+    TableMetadata newMetadata = removeUnusedSpecs(current);
+    return newMetadata.specs();
+  }
+
+  @Override
+  public void commit() {
+    TableMetadata base = ops.refresh();
+    Tasks.foreach(ops)
+        .retry(base.propertyAsInt(COMMIT_NUM_RETRIES, COMMIT_NUM_RETRIES_DEFAULT))
+        .exponentialBackoff(
+            base.propertyAsInt(COMMIT_MIN_RETRY_WAIT_MS, COMMIT_MIN_RETRY_WAIT_MS_DEFAULT),
+            base.propertyAsInt(COMMIT_MAX_RETRY_WAIT_MS, COMMIT_MAX_RETRY_WAIT_MS_DEFAULT),
+            base.propertyAsInt(COMMIT_TOTAL_RETRY_TIME_MS, COMMIT_TOTAL_RETRY_TIME_MS_DEFAULT),
+            2.0 /* exponential */)
+        .onlyRetryOn(CommitFailedException.class)
+        .run(
+            taskOps -> {
+              TableMetadata current = ops.refresh();
+              TableMetadata newMetadata = removeUnusedSpecs(current);
+              taskOps.commit(current, newMetadata);
+            });
+  }
+
+  private TableMetadata removeUnusedSpecs(TableMetadata current) {
+    List<PartitionSpec> specs = current.specs();
+    int currentSpecId = current.defaultSpecId();
+
+    // Read ManifestLists and get all specId's in use
+    Set<Integer> specsInUse =
+        Sets.newHashSet(
+            CloseableIterable.transform(
+                MetadataTableUtils.createMetadataTableInstance(table, MetadataTableType.ALL_ENTRIES)
+                    .newScan()
+                    .planFiles(),
+                task -> ((BaseEntriesTable.ManifestReadTask) task).partitionSpecId()));
+
+    // add current spec id to the set of specs in use

Review Comment:
   Same as above; I don't think this comment is really required or useful.



##########
core/src/main/java/org/apache/iceberg/TableMetadata.java:
##########
@@ -597,6 +597,24 @@ public TableMetadata replaceProperties(Map<String, String> rawProperties) {
         .build();
   }
 
+  /**
+   * Prune the unused partition specs from the table metadata.
+   *
+   * <p>Note: it's not safe for external client to call this directly, it's usually called by the
+   * {@link Table#removeUnusedSpecs()} method. It's caller's responsibility to ensure that the
+   * toRemoveSpecs are indeed not in use by any existing manifests.
+   *
+   * @param toRemoveSpecs the partition specs to be removed
+   * @return the new table metadata with the unused partition specs removed
+   */
+  TableMetadata pruneUnusedSpecs(List<PartitionSpec> toRemoveSpecs) {

Review Comment:
   Can we call the parameter `specsToRemove`? 
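   
   For example, a minimal sketch with that rename (the body mirrors the loop shown in the last hunk of this review; the final `return builder.build();` is an assumption about how the method ends):
   
   ```java
   TableMetadata pruneUnusedSpecs(List<PartitionSpec> specsToRemove) {
     Builder builder = new Builder(this);
     for (PartitionSpec spec : specsToRemove) {
       builder.removePartitionSpec(spec);
     }
     // assumed: finish by building the updated metadata, as with the other builder-based methods here
     return builder.build();
   }
   ```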



##########
core/src/main/java/org/apache/iceberg/TableMetadata.java:
##########
@@ -1425,6 +1460,7 @@ private boolean hasChanges() {
           || (discardChanges && !changes.isEmpty())
           || metadataLocation != null
           || suppressHistoricalSnapshots
+          || hasRemovedSpecs

Review Comment:
   Hm, I'm trying to understand why we need this special flag. Is this a way to avoid having to add a metadata update type for REST (since then that would ultimately just be in the changes list)?



##########
core/src/test/java/org/apache/iceberg/TestRemoveUnusedSpecs.java:
##########
@@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import org.apache.iceberg.expressions.Expressions;
+import org.apache.iceberg.types.Types;
+import org.junit.jupiter.api.TestTemplate;
+
+public class TestRemoveUnusedSpecs extends TestBase {
+
+  @TestTemplate
+  public void testRemoveAllButCurrent() {
+    table
+        .updateSchema()
+        .addColumn("ts", Types.TimestampType.withoutZone())
+        .addColumn("category", Types.StringType.get())
+        .commit();
+    table.updateSpec().addField("id").commit();
+    table.updateSpec().addField("ts").commit();
+    table.updateSpec().addField("category").commit();
+    table.updateSpec().addField("data").commit();
+    assertThat(table.specs().size()).as("Added specs should be present").isEqualTo(5);
+
+    PartitionSpec currentSpec = table.spec();
+    table.removeUnusedSpecs().commit();
+
+    assertThat(table.specs().size()).as("All but current spec should be removed").isEqualTo(1);
+    assertThat(table.spec()).as("Current spec shall not change").isEqualTo(currentSpec);
+  }
+
+  @TestTemplate
+  public void testDontRemoveInUseSpecs() {
+    table
+        .updateSchema()
+        .addColumn("ts", Types.LongType.get())
+        .addColumn("category", Types.StringType.get())
+        .commit();
+
+    table.updateSpec().addField("id").commit(); // 1
+    table.newAppend().appendFile(newDataFile("data_bucket=0/id=5")).commit();
+
+    table.updateSpec().addField("ts").commit(); // 2
+
+    table.updateSpec().addField("category").commit(); // 3
+    if (formatVersion == 1) {
+      table.newAppend().appendFile(newDataFile("data_bucket=0/id=5/ts=100/category=fo")).commit();
+    } else {
+      table
+          .newRowDelta()
+          .addDeletes(newDeleteFile(table.spec().specId(), "data_bucket=0/id=5/ts=100/category=fo"))
+          .commit();
+    }
+
+    table.updateSpec().addField("data").commit(); // 4
+    assertThat(table.specs()).size().as("Added specs should be present").isEqualTo(5);
+
+    PartitionSpec currentSpec = table.spec();
+    table.removeUnusedSpecs().commit();
+    assertThat(table.specs().keySet()).as("Unused specs are removed").containsExactly(1, 3, 4);
+    assertThat(table.spec()).as("Current spec shall not change").isEqualTo(currentSpec);
+  }
+
+  @TestTemplate
+  public void testRemoveUnpartitionedSpec() {
+    // clean it first to reset to un-partitioned
+    cleanupTables();
+    this.table = create(SCHEMA, PartitionSpec.unpartitioned());
+    DataFile file =
+        DataFiles.builder(table.spec())
+            .withPath("/path/to/data-0.parquet")
+            .withFileSizeInBytes(10)
+            .withRecordCount(100)
+            .build();
+    table.newAppend().appendFile(file).commit();
+    // add a bucket partition

Review Comment:
   I don't think we need this inline comment.



##########
api/src/main/java/org/apache/iceberg/Table.java:
##########
@@ -211,6 +211,17 @@ default IncrementalChangelogScan newIncrementalChangelogScan() {
    */
   AppendFiles newAppend();
 
+  /**
+   * Remove any partition specs from the Metadata that are no longer used in any data files. Always
+   * preserves the current default spec even if it has not yet been used.

Review Comment:
   "Always preserves the current spec" should suffice for the second statement.



##########
core/src/main/java/org/apache/iceberg/BaseRemoveUnusedSpecs.java:
##########
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import static org.apache.iceberg.TableProperties.COMMIT_MAX_RETRY_WAIT_MS;
+import static org.apache.iceberg.TableProperties.COMMIT_MAX_RETRY_WAIT_MS_DEFAULT;
+import static org.apache.iceberg.TableProperties.COMMIT_MIN_RETRY_WAIT_MS;
+import static org.apache.iceberg.TableProperties.COMMIT_MIN_RETRY_WAIT_MS_DEFAULT;
+import static org.apache.iceberg.TableProperties.COMMIT_NUM_RETRIES;
+import static org.apache.iceberg.TableProperties.COMMIT_NUM_RETRIES_DEFAULT;
+import static org.apache.iceberg.TableProperties.COMMIT_TOTAL_RETRY_TIME_MS;
+import static org.apache.iceberg.TableProperties.COMMIT_TOTAL_RETRY_TIME_MS_DEFAULT;
+
+import java.util.List;
+import java.util.Set;
+import java.util.stream.Collectors;
+import org.apache.iceberg.exceptions.CommitFailedException;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.apache.iceberg.util.Tasks;
+
+/**
+ * Implementation of RemoveUnusedSpecs API to remove unused partition specs.
+ *
+ * <p>When committing, these changes will be applied to the latest table metadata. Commit conflicts
+ * will be resolved by recalculating which specs are no longer in use again in the latest metadata
+ * and retrying.
+ */
+class BaseRemoveUnusedSpecs implements RemoveUnusedSpecs {
+  private final TableOperations ops;
+  private final Table table;
+
+  BaseRemoveUnusedSpecs(TableOperations ops, Table table) {
+    this.ops = ops;
+    this.table = table;
+  }
+
+  @Override
+  public List<PartitionSpec> apply() {
+    TableMetadata current = ops.refresh();
+    TableMetadata newMetadata = removeUnusedSpecs(current);
+    return newMetadata.specs();
+  }
+
+  @Override
+  public void commit() {
+    TableMetadata base = ops.refresh();
+    Tasks.foreach(ops)
+        .retry(base.propertyAsInt(COMMIT_NUM_RETRIES, COMMIT_NUM_RETRIES_DEFAULT))
+        .exponentialBackoff(
+            base.propertyAsInt(COMMIT_MIN_RETRY_WAIT_MS, COMMIT_MIN_RETRY_WAIT_MS_DEFAULT),
+            base.propertyAsInt(COMMIT_MAX_RETRY_WAIT_MS, COMMIT_MAX_RETRY_WAIT_MS_DEFAULT),
+            base.propertyAsInt(COMMIT_TOTAL_RETRY_TIME_MS, COMMIT_TOTAL_RETRY_TIME_MS_DEFAULT),
+            2.0 /* exponential */)
+        .onlyRetryOn(CommitFailedException.class)
+        .run(
+            taskOps -> {
+              TableMetadata current = ops.refresh();
+              TableMetadata newMetadata = removeUnusedSpecs(current);
+              taskOps.commit(current, newMetadata);
+            });
+  }
+
+  private TableMetadata removeUnusedSpecs(TableMetadata current) {
+    List<PartitionSpec> specs = current.specs();
+    int currentSpecId = current.defaultSpecId();
+
+    // Read ManifestLists and get all specId's in use

Review Comment:
   Nit: I don't think this inline comment is required; it's a bit self-explanatory imo.
   



##########
core/src/test/java/org/apache/iceberg/TestRemoveUnusedSpecs.java:
##########
@@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import org.apache.iceberg.expressions.Expressions;
+import org.apache.iceberg.types.Types;
+import org.junit.jupiter.api.TestTemplate;
+
+public class TestRemoveUnusedSpecs extends TestBase {
+
+  @TestTemplate
+  public void testRemoveAllButCurrent() {
+    table
+        .updateSchema()
+        .addColumn("ts", Types.TimestampType.withoutZone())
+        .addColumn("category", Types.StringType.get())
+        .commit();
+    table.updateSpec().addField("id").commit();
+    table.updateSpec().addField("ts").commit();
+    table.updateSpec().addField("category").commit();
+    table.updateSpec().addField("data").commit();
+    assertThat(table.specs().size()).as("Added specs should be present").isEqualTo(5);
+
+    PartitionSpec currentSpec = table.spec();
+    table.removeUnusedSpecs().commit();
+
+    assertThat(table.specs().size()).as("All but current spec should be removed").isEqualTo(1);
+    assertThat(table.spec()).as("Current spec shall not change").isEqualTo(currentSpec);
+  }
+
+  @TestTemplate
+  public void testDontRemoveInUseSpecs() {
+    table
+        .updateSchema()
+        .addColumn("ts", Types.LongType.get())
+        .addColumn("category", Types.StringType.get())
+        .commit();
+
+    table.updateSpec().addField("id").commit(); // 1
+    table.newAppend().appendFile(newDataFile("data_bucket=0/id=5")).commit();
+
+    table.updateSpec().addField("ts").commit(); // 2
+
+    table.updateSpec().addField("category").commit(); // 3
+    if (formatVersion == 1) {
+      table.newAppend().appendFile(newDataFile("data_bucket=0/id=5/ts=100/category=fo")).commit();
+    } else {
+      table
+          .newRowDelta()
+          .addDeletes(newDeleteFile(table.spec().specId(), "data_bucket=0/id=5/ts=100/category=fo"))
+          .commit();
+    }
+
+    table.updateSpec().addField("data").commit(); // 4
+    assertThat(table.specs()).size().as("Added specs should be present").isEqualTo(5);
+
+    PartitionSpec currentSpec = table.spec();
+    table.removeUnusedSpecs().commit();
+    assertThat(table.specs().keySet()).as("Unused specs are removed").containsExactly(1, 3, 4);
+    assertThat(table.spec()).as("Current spec shall not change").isEqualTo(currentSpec);
+  }
+
+  @TestTemplate
+  public void testRemoveUnpartitionedSpec() {
+    // clean it first to reset to un-partitioned
+    cleanupTables();
+    this.table = create(SCHEMA, PartitionSpec.unpartitioned());
+    DataFile file =
+        DataFiles.builder(table.spec())
+            .withPath("/path/to/data-0.parquet")
+            .withFileSizeInBytes(10)
+            .withRecordCount(100)
+            .build();
+    table.newAppend().appendFile(file).commit();
+    // add a bucket partition
+    table.updateSpec().addField("data_bucket", Expressions.bucket("data", 16)).commit();
+
+    // removeUnusedPartitionSpec shall not remove the un-partitioned spec
+    table.removeUnusedSpecs().commit();
+    assertThat(table.specs().keySet())
+        .as("Un-partitioned spec is still used")
+        .containsExactly(0, 1);
+
+    table.newDelete().deleteFile(file).commit();
+    DataFile bucketFile =
+        DataFiles.builder(table.spec())
+            .withPath("/path/to/data-0.parquet")
+            .withFileSizeInBytes(10)
+            .withRecordCount(100)
+            .withPartitionPath("data_bucket=0")
+            .build();
+    table.newAppend().appendFile(bucketFile).commit();
+
+    table.expireSnapshots().expireOlderThan(System.currentTimeMillis()).commit();
+    // un-partitioned spec can be removed.
+    table.removeUnusedSpecs().commit();
+    assertThat(table.specs().keySet()).as("Un-partitioned spec is removed").containsExactly(1);
+
+    // bucket spec can reset to un-partitioned

Review Comment:
   Nit: instead of "un-partitioned", can we just call it "unpartitioned"? Also, let's double-check whether we really need a lot of these inline comments; some of them don't add much value imo.



##########
core/src/test/java/org/apache/iceberg/TestRemoveUnusedSpecs.java:
##########
@@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import org.apache.iceberg.expressions.Expressions;
+import org.apache.iceberg.types.Types;
+import org.junit.jupiter.api.TestTemplate;
+
+public class TestRemoveUnusedSpecs extends TestBase {
+
+  @TestTemplate
+  public void testRemoveAllButCurrent() {
+    table
+        .updateSchema()
+        .addColumn("ts", Types.TimestampType.withoutZone())
+        .addColumn("category", Types.StringType.get())
+        .commit();
+    table.updateSpec().addField("id").commit();
+    table.updateSpec().addField("ts").commit();
+    table.updateSpec().addField("category").commit();
+    table.updateSpec().addField("data").commit();
+    assertThat(table.specs().size()).as("Added specs should be present").isEqualTo(5);
+
+    PartitionSpec currentSpec = table.spec();
+    table.removeUnusedSpecs().commit();
+
+    assertThat(table.specs().size()).as("All but current spec should be removed").isEqualTo(1);
+    assertThat(table.spec()).as("Current spec shall not change").isEqualTo(currentSpec);
+  }
+
+  @TestTemplate
+  public void testDontRemoveInUseSpecs() {
+    table
+        .updateSchema()
+        .addColumn("ts", Types.LongType.get())
+        .addColumn("category", Types.StringType.get())
+        .commit();
+
+    table.updateSpec().addField("id").commit(); // 1
+    table.newAppend().appendFile(newDataFile("data_bucket=0/id=5")).commit();
+
+    table.updateSpec().addField("ts").commit(); // 2
+
+    table.updateSpec().addField("category").commit(); // 3
+    if (formatVersion == 1) {
+      table.newAppend().appendFile(newDataFile("data_bucket=0/id=5/ts=100/category=fo")).commit();
+    } else {
+      table
+          .newRowDelta()
+          .addDeletes(newDeleteFile(table.spec().specId(), "data_bucket=0/id=5/ts=100/category=fo"))
+          .commit();
+    }
+
+    table.updateSpec().addField("data").commit(); // 4
+    assertThat(table.specs()).size().as("Added specs should be present").isEqualTo(5);
+
+    PartitionSpec currentSpec = table.spec();
+    table.removeUnusedSpecs().commit();
+    assertThat(table.specs().keySet()).as("Unused specs are removed").containsExactly(1, 3, 4);
+    assertThat(table.spec()).as("Current spec shall not change").isEqualTo(currentSpec);
+  }
+
+  @TestTemplate
+  public void testRemoveUnpartitionedSpec() {
+    // clean it first to reset to un-partitioned
+    cleanupTables();
+    this.table = create(SCHEMA, PartitionSpec.unpartitioned());
+    DataFile file =
+        DataFiles.builder(table.spec())
+            .withPath("/path/to/data-0.parquet")
+            .withFileSizeInBytes(10)
+            .withRecordCount(100)
+            .build();
+    table.newAppend().appendFile(file).commit();
+    // add a bucket partition
+    table.updateSpec().addField("data_bucket", Expressions.bucket("data", 16)).commit();
+
+    // removeUnusedPartitionSpec shall not remove the un-partitioned spec
+    table.removeUnusedSpecs().commit();
+    assertThat(table.specs().keySet())
+        .as("Un-partitioned spec is still used")
+        .containsExactly(0, 1);
+
+    table.newDelete().deleteFile(file).commit();
+    DataFile bucketFile =
+        DataFiles.builder(table.spec())
+            .withPath("/path/to/data-0.parquet")
+            .withFileSizeInBytes(10)
+            .withRecordCount(100)
+            .withPartitionPath("data_bucket=0")
+            .build();
+    table.newAppend().appendFile(bucketFile).commit();
+
+    table.expireSnapshots().expireOlderThan(System.currentTimeMillis()).commit();
+    // un-partitioned spec can be removed.
+    table.removeUnusedSpecs().commit();
+    assertThat(table.specs().keySet()).as("Un-partitioned spec is removed").containsExactly(1);
+
+    // bucket spec can reset to un-partitioned
+    table.updateSpec().removeField("data_bucket").commit();
+    assertThat(table.spec().isUnpartitioned()).as("Should equal to un-partitioned").isTrue();
+    if (formatVersion == 1) {
+      assertThat(table.spec().fields().size()).as("Should have one void transform").isEqualTo(1);
+      assertThat(table.spec().specId())
+          .as("un-partitioned is evolved to use a new SpecId")
+          .isEqualTo(2);
+    } else {
+      assertThat(table.spec().fields().size()).as("Should have no fields").isEqualTo(0);
+      assertThat(table.spec().specId())
+          .as("un-partitioned is evolved to use a new SpecId")
+          .isEqualTo(2);
+    }

Review Comment:
   Is it possible to simplify this by having a separate variable for the expectation that depends on the format version? The rest of the assertion is the same, if I'm not missing anything.
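   
   For example (in the test as shown, the field count is actually the piece that differs, since v1 keeps a void transform, while both branches expect spec ID 2):
   
   ```java
   // sketch only: collapse the if/else into a single expectation variable
   int expectedFieldCount = formatVersion == 1 ? 1 : 0; // v1 keeps a void transform for the dropped field
   assertThat(table.spec().fields()).hasSize(expectedFieldCount);
   assertThat(table.spec().specId())
       .as("unpartitioned spec is evolved to use a new spec ID")
       .isEqualTo(2);
   ```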



##########
core/src/main/java/org/apache/iceberg/TableMetadata.java:
##########
@@ -1446,6 +1482,11 @@ public TableMetadata build() {
          "Cannot set metadata location with changes to table metadata: %s changes",
           changes.size());
 
+      if (hasRemovedSpecs) {
+        Preconditions.checkArgument(
+            changes.isEmpty(), "Cannot remove partition specs with other metadata update");

Review Comment:
   @advancedxy I'm not sure I follow; what's the intention of this check?



##########
core/src/main/java/org/apache/iceberg/TableMetadata.java:
##########
@@ -597,6 +597,24 @@ public TableMetadata replaceProperties(Map<String, String> rawProperties) {
         .build();
   }
 
+  /**
+   * Prune the unused partition specs from the table metadata.
+   *
+   * <p>Note: it's not safe for external client to call this directly, it's usually called by the
+   * {@link Table#removeUnusedSpecs()} method. It's caller's responsibility to ensure that the
+   * toRemoveSpecs are indeed not in use by any existing manifests.
+   *
+   * @param toRemoveSpecs the partition specs to be removed
+   * @return the new table metadata with the unused partition specs removed
+   */
+  TableMetadata pruneUnusedSpecs(List<PartitionSpec> toRemoveSpecs) {
+    Builder builder = new Builder(this);
+    for (PartitionSpec spec : toRemoveSpecs) {
+      builder.removePartitionSpec(spec);

Review Comment:
   Thanks @advancedxy, I definitely think it's an improvement!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

