amogh-jahagirdar commented on code in PR #245:
URL: https://github.com/apache/iceberg-python/pull/245#discussion_r1479265846
##########
pyiceberg/table/metadata.py:
##########
@@ -308,7 +308,8 @@ def construct_partition_specs(cls, data: Dict[str, Any]) -> Dict[str, Any]:
             data[PARTITION_SPECS] = [{"field-id": 0, "fields": ()}]

         data[LAST_PARTITION_ID] = max(
-            [field.get(FIELD_ID) for spec in data[PARTITION_SPECS] for field in spec[FIELDS]], default=PARTITION_FIELD_ID_START
+            [field.get(FIELD_ID) for spec in data[PARTITION_SPECS] for field in spec[FIELDS]],
+            default=PARTITION_FIELD_ID_START - 1,

Review Comment:
   This needs to be updated so that, when there are no partition fields, we return 999. It's insufficient to just update the `PartitionSpec#last_assigned_field_id` method. I do believe this is spec compliant, since the spec doesn't explicitly say what values these IDs should be, and it is also what Spark does when one creates an unpartitioned table. The spec does say that in v1 ids were assigned starting at 1000, which is still followed, so I think we're covered. @Fokko @HonahX
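   To make the intent concrete, here is a minimal sketch of the behavior described above. It is not the actual pyiceberg code: the `last_partition_id` helper and the literal `PARTITION_FIELD_ID_START = 1000` are assumed here purely for illustration. An unpartitioned table falls back to 999, while a partitioned table keeps the real maximum, with ids starting at 1000.

   ```python
   from typing import Any, Dict, List

   # Assumed from the Iceberg spec: partition field ids start at 1000 in v1.
   PARTITION_FIELD_ID_START = 1000


   def last_partition_id(partition_specs: List[Dict[str, Any]]) -> int:
       """Highest partition field id across all specs, or 999 when there are none."""
       return max(
           (field["field-id"] for spec in partition_specs for field in spec.get("fields", [])),
           default=PARTITION_FIELD_ID_START - 1,
       )


   # Unpartitioned table: only a dummy spec with no fields -> falls back to 999.
   assert last_partition_id([{"spec-id": 0, "fields": []}]) == 999

   # Partitioned table: the real maximum is kept; ids start at 1000.
   assert last_partition_id([{"spec-id": 0, "fields": [{"field-id": 1000}, {"field-id": 1001}]}]) == 1001
   ```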