HonahX commented on code in PR #245:
URL: https://github.com/apache/iceberg-python/pull/245#discussion_r1477690350
##########
pyiceberg/table/__init__.py:
##########
@@ -533,6 +551,39 @@ def _(update: SetCurrentSchemaUpdate, base_metadata: TableMetadata, context: _Ta
     return base_metadata.model_copy(update={"current_schema_id": new_schema_id})
 
 
+@_apply_table_update.register(AddPartitionSpecUpdate)
+def _(update: AddPartitionSpecUpdate, base_metadata: TableMetadata, context: _TableMetadataUpdateContext) -> TableMetadata:
+    for spec in base_metadata.partition_specs:
+        if spec.spec_id == update.spec_id:
+            raise ValueError(f"Partition spec with id {spec.spec_id} already exists: {spec}")
+
+    context.add_update(update)
+    return base_metadata.model_copy(
+        update={
+            "partition_specs": base_metadata.partition_specs + [update.spec],
+        }
+    )
+
+
+@_apply_table_update.register(SetDefaultSpecUpdate)
+def _(update: SetDefaultSpecUpdate, base_metadata: TableMetadata, context: _TableMetadataUpdateContext) -> TableMetadata:
+    new_spec_id = update.spec_id
+    if new_spec_id == base_metadata.default_spec_id:

Review Comment:
Thanks for the update! I think we also need to add some logic here, along the lines of
https://github.com/apache/iceberg-python/blob/7fbcc220263228618da8d6871a50a0b82e22a843/pyiceberg/table/__init__.py#L537-L541
so that other catalogs such as `hive` handle the `-1` spec id correctly. WDYT?

Java ref: https://github.com/apache/iceberg/blob/9921937d8285dec9a19fd16b0cd82d451a8aca9e/core/src/main/java/org/apache/iceberg/TableMetadata.java#L1079-L1084
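To make the suggestion concrete, a minimal sketch of that `-1` ("make the last added spec the default") handling, mirroring the linked `SetCurrentSchemaUpdate` logic. The `context.is_added_spec` helper is assumed by analogy with `is_added_schema` and may not exist under that name; treat this as an illustration, not the final implementation:

```python
@_apply_table_update.register(SetDefaultSpecUpdate)
def _(update: SetDefaultSpecUpdate, base_metadata: TableMetadata, context: _TableMetadataUpdateContext) -> TableMetadata:
    new_spec_id = update.spec_id
    if new_spec_id == -1:
        # -1 is the "last added" sentinel: resolve it to the spec added earlier in
        # this same set of updates, mirroring how schema_id == -1 is resolved.
        new_spec_id = max(spec.spec_id for spec in base_metadata.partition_specs)
        if not context.is_added_spec(new_spec_id):  # assumed helper, analogous to is_added_schema
            raise ValueError("Cannot set default spec to last added spec when no spec has been added")

    if new_spec_id == base_metadata.default_spec_id:
        return base_metadata

    if all(spec.spec_id != new_spec_id for spec in base_metadata.partition_specs):
        raise ValueError(f"Failed to find spec with id {new_spec_id}")

    context.add_update(update)
    return base_metadata.model_copy(update={"default_spec_id": new_spec_id})
```

Resolving the sentinel before validation means catalogs that apply the updates client-side (e.g. `hive`) never see `-1` as the default spec id, which is the intent of the linked Java code as well.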
##########
tests/test_integration_partition_evolution.py:
##########
@@ -0,0 +1,423 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint:disable=redefined-outer-name
+
+import pytest
+
+from pyiceberg.catalog import Catalog, load_catalog

Review Comment:
Recently, we moved the integration tests into `tests/integration`: #207. Shall we move this file into that folder too? We can name it `test_rest_partition_evolution.py` to indicate that these tests use the RestCatalog.

Since we've implemented `_commit_table` for Hive in #294, we could also refactor the `test_rest_*` tests to run against both the rest and hive catalogs. Going through a catalog helps exercise pyiceberg's `_commit_table` logic. But this can happen in a follow-up PR.

##########
pyiceberg/table/__init__.py:
##########
@@ -868,6 +919,12 @@ def sort_orders(self) -> Dict[int, SortOrder]:
         """Return a dict of the sort orders of this table."""
         return {sort_order.order_id: sort_order for sort_order in self.metadata.sort_orders}
 
+    def last_partition_id(self) -> Optional[int]:
+        """Return the highest assigned partition field ID across all specs for the table or None if the table is unpartitioned and there are no specs."""
+        if len(self.specs()) == 1 and self.spec().is_unpartitioned():
+            return None
+        return self.metadata.last_partition_id

Review Comment:
I think we probably should update https://github.com/apache/iceberg-python/blob/7fbcc220263228618da8d6871a50a0b82e22a843/pyiceberg/partitioning.py#L148-L152 to return `PARTITION_FIELD_ID_START - 1` for an unpartitioned spec. Then we can return `last_partition_id` from the metadata directly, because the metadata should have `last_partition_id=999` for an unpartitioned table.

The Java implementation uses `PARTITION_FIELD_ID_START - 1` for the unpartitioned spec:
https://github.com/apache/iceberg/blob/main/api/src/main/java/org/apache/iceberg/PartitionSpec.java#L344-L345
https://github.com/apache/iceberg/blob/9921937d8285dec9a19fd16b0cd82d451a8aca9e/api/src/main/java/org/apache/iceberg/PartitionSpec.java#L319-L321

I checked locally that unpartitioned tables created by spark-iceberg-runtime have `last_partition_id=999`, while those created by pyiceberg have `last_partition_id=1000`:
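For illustration, a rough sketch of the suggested change, assuming the linked lines are the `last_assigned_field_id` property on `PartitionSpec` and that it currently falls back to `PARTITION_FIELD_ID_START` (1000) when the spec has no fields; the exact names and decorator may differ from the PR:

```python
# Sketch only: inside PartitionSpec in pyiceberg/partitioning.py.
@property
def last_assigned_field_id(self) -> int:
    if self.fields:
        return max(pf.field_id for pf in self.fields)
    # Unpartitioned spec: report PARTITION_FIELD_ID_START - 1 (999), matching
    # Java's PartitionSpec.unpartitioned(), instead of PARTITION_FIELD_ID_START.
    return PARTITION_FIELD_ID_START - 1
```

With that fallback in place, `Table.last_partition_id` could return `self.metadata.last_partition_id` unconditionally, and unpartitioned tables created by pyiceberg would record `last_partition_id=999` just like those created by the Java implementation.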