Fokko commented on issue #522: URL: https://github.com/apache/iceberg-python/issues/522#issuecomment-1996627258
@sebpretzer Thanks for raising this, and it looks like you already found the solution. What PyIceberg does is: when you create a predicate, it accepts anything. Once you run the query, it tries to bind the predicate to the table schema. The input here is a string, `row_filter=f"uuid_col == '102cb62f-e6f8-4eb0-9973-d9b012ff0967'"`, but the actual column type is `fixed[16]`, so PyIceberg tries to convert the literal to that type and fails.

For completeness, could you also add a conversion `@to.register(BinaryType)`? Tests can be added to `test_literals.py`. It is tricky to cover this with an integration test since Spark has limited support for UUIDs.
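For anyone picking this up, here is a rough sketch of what those conversions could look like in `pyiceberg/expressions/literals.py`. The class layout, the `Literal` constructor, and the assumption that `FixedType` exposes its length via `len()` are from memory of the literals module, so please check them against the actual code; the point is just the `singledispatchmethod` registrations that turn a UUID value into its 16 raw bytes.

```python
# Sketch only: imports and base-class usage are how I remember literals.py,
# not a verified patch against the current main branch.
from functools import singledispatchmethod
from uuid import UUID

from pyiceberg.expressions.literals import BinaryLiteral, FixedLiteral, Literal
from pyiceberg.types import BinaryType, FixedType, IcebergType, UUIDType


class UUIDLiteral(Literal[UUID]):
    def __init__(self, value: UUID) -> None:
        super().__init__(value, UUID)

    @singledispatchmethod
    def to(self, type_var: IcebergType) -> Literal:
        raise TypeError(f"Cannot convert UUIDLiteral into {type_var}")

    @to.register(UUIDType)
    def _(self, _: UUIDType) -> Literal[UUID]:
        return self

    @to.register(FixedType)
    def _(self, type_var: FixedType) -> Literal[bytes]:
        # The reported column is fixed[16]; a UUID is exactly 16 bytes.
        # Assumes FixedType reports its length via len().
        if len(type_var) == 16:
            return FixedLiteral(self.value.bytes)
        raise TypeError(f"Cannot convert UUIDLiteral into {type_var}")

    @to.register(BinaryType)
    def _(self, _: BinaryType) -> Literal[bytes]:
        # The extra conversion asked for above: UUID -> its raw 16 bytes.
        return BinaryLiteral(self.value.bytes)
```

With something like that in place, a unit test in `test_literals.py` can simply assert that converting a UUID literal with `.to(BinaryType())` yields the UUID's 16 raw bytes, which avoids going through Spark entirely.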