bluzy opened a new issue, #9831: URL: https://github.com/apache/iceberg/issues/9831
### Apache Iceberg version

1.3.1

### Query engine

Spark

### Please describe the bug 🐞

Hi, I am currently using Spark 3.2 and am considering upgrading to Spark 3.4. While testing our current production queries on Spark 3.4, I encountered an error when aggregating a field with the `max()` function. After many tests, I concluded that the error occurs when `max()` is applied to a nested field that contains null values.

To reproduce the issue, I created a test table:

```
CREATE TABLE iceberg.test_db.aggregation_test (
    col0 int,
    col1 struct<col2:int>
) USING iceberg;

INSERT INTO iceberg.test_db.aggregation_test VALUES (1709168400, named_struct('col2', 1709168400));
INSERT INTO iceberg.test_db.aggregation_test VALUES (1709175600, named_struct('col2', 1709175600));
INSERT INTO iceberg.test_db.aggregation_test VALUES (1709172000, named_struct('col2', 1709172000));
INSERT INTO iceberg.test_db.aggregation_test VALUES (null, named_struct('col2', null));
```

Then running the SQL below produces the error:

```
SELECT max(col1.col2) FROM iceberg.test_db.aggregation_test;

java.lang.ClassCastException: java.lang.Integer cannot be cast to org.apache.iceberg.StructLike
```

With a top-level (depth-1) field, it works without any problem:

```
SELECT max(col0) FROM iceberg.test_db.aggregation_test;
```

Can you help me?

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
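A possible workaround sketch, under an assumption I have not verified against 1.3.1: the `ClassCastException` surfaces inside Iceberg code (`org.apache.iceberg.StructLike`), which suggests it happens while Iceberg pushes the `max()` aggregate down into the scan. If so, disabling aggregate push-down should make Spark evaluate `max()` itself. The property name `spark.sql.iceberg.aggregate-push-down.enabled` is taken from Iceberg's Spark runtime configuration and should be double-checked for your version.

```
-- Hypothetical workaround: disable Iceberg's aggregate push-down so Spark
-- computes max() itself instead of delegating it to the Iceberg scan.
-- Property name assumed from Iceberg's Spark SQL properties; verify for 1.3.1.
SET spark.sql.iceberg.aggregate-push-down.enabled = false;

SELECT max(col1.col2) FROM iceberg.test_db.aggregation_test;
```

If the query succeeds with push-down disabled, that would confirm the failure is in the push-down path rather than in Spark's own aggregation.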