myz540 commented on issue #1790:
URL: https://github.com/apache/iceberg-python/issues/1790#issuecomment-2719096727

> [@myz540](https://github.com/myz540) That's not in there today. However, if you pre-cluster the table before writing, it should maintain order.

Thanks for your reply; I've put a rough sketch of how I plan to pre-cluster below. On another matter, I am trying to write to a table that has `TruncateTransform`s on two columns as part of the partition spec. I mostly followed the documentation.

```python
def create_partitions_truncate():
    return PartitionSpec(
        PartitionField(
            source_id=1, field_id=1000, transform=TruncateTransform(width=7), name="sid_truncate"
        ),
        PartitionField(
            source_id=2, field_id=2000, transform=TruncateTransform(width=1), name="gene_truncate"
        ),
    )
```

However, when I try writing to it

```python
table = catalog.load_table((DATABASE, table_name))
smol_table = pa.Table.from_pandas(_df, schema=create_pa_schema())
with table.transaction() as transaction:
    transaction.append(smol_table)
```

I am hit with the following error

```
Not all partition types are supported for writes. Following partitions cannot be written using pyarrow: [PartitionField(source_id=1, field_id=1000, transform=TruncateTransform(width=7), name='sid_truncate'), PartitionField(source_id=2, field_id=1001, transform=TruncateTransform(width=1), name='gene_truncate')].
```

Is this simply not supported by `pyarrow` at the moment, or am I doing something wrong?
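
For reference, here is a minimal sketch of what I understand "pre-cluster the table before writing" to mean: sort the PyArrow table on the partition source columns before appending, so rows headed for the same partition sit next to each other. The column names `sid` and `gene` are my assumption based on the partition field names above.

```python
import pyarrow as pa

# Assumption: `sid` and `gene` are the source columns behind the
# `sid_truncate` and `gene_truncate` partition fields.
smol_table = pa.Table.from_pandas(_df, schema=create_pa_schema())

# Pre-cluster: sort so rows that land in the same partition are adjacent,
# which should let the write preserve their relative order.
smol_table = smol_table.sort_by([("sid", "ascending"), ("gene", "ascending")])

with table.transaction() as transaction:
    transaction.append(smol_table)
```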
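
On the error itself: it looks to me like the check is raised by pyiceberg's write path rather than by pyarrow, based on a per-transform capability flag. Here is how I poked at it locally; `supports_pyarrow_transform` is my guess at a pyiceberg internal and may not exist (or may behave differently) in other releases.

```python
from pyiceberg.transforms import IdentityTransform, TruncateTransform

# Guess: each transform advertises whether the pyarrow write path can handle it.
# Using getattr with a default in case the attribute is absent in this release.
for transform in (IdentityTransform(), TruncateTransform(width=7)):
    print(type(transform).__name__, getattr(transform, "supports_pyarrow_transform", "unknown"))
```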
> [@myz540](https://github.com/myz540) That's not in there today. However, if you pre-cluster the table before writing, it should maintain order. thanks for your reply. On another matter, I am trying to write to a table that has `TruncateTransform`s on two columns as part of the partition schema. I mostly followed the documentation. ```python def create_partitions_truncate(): return PartitionSpec( PartitionField( source_id=1, field_id=1000, transform=TruncateTransform(width=7), name="sid_truncate" ), PartitionField( source_id=2, field_id=2000, transform=TruncateTransform(width=1), name="gene_truncate" ), ) ``` However, when I try writing to it ```python table = catalog.load_table((DATABASE, table_name)) smol_table = pa.Table.from_pandas(_df, schema=create_pa_schema()) with table.transaction() as transaction: transaction.append(smol_table) ``` I am hit with the following error ``` Not all partition types are supported for writes. Following partitions cannot be written using pyarrow: [PartitionField(source_id=1, field_id=1000, transform=TruncateTransform(width=7), name='sid_truncate'), PartitionField(source_id=2, field_id=1001, transform=TruncateTransform(width=1), name='gene_truncate')]. ``` Is this simply not supported by `pyarrow` at the moment or am I doing something wrong? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org --------------------------------------------------------------------- To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org For additional commands, e-mail: issues-h...@iceberg.apache.org