ayushtkn opened a new issue, #15186: URL: https://github.com/apache/iceberg/issues/15186
### Apache Iceberg version

None

### Query engine

None

### Please describe the bug 🐞

For normal inserts, when data files don't physically contain the Row Lineage columns, the values are not calculated by the reader; instead, the file's `first row Id` is returned for `ROW_ID` on every row.

Per the official spec here: https://iceberg.apache.org/spec/?h=name+mapping#row-lineage-assignment

> **A data file with only new rows for the table may omit the `_last_updated_sequence_number` and `_row_id`. If the columns are missing, readers should treat both columns as if they exist and are set to null for all rows.**

So it is entirely valid for these columns not to be physically written in certain cases, and the values should be calculated by the readers.

For Avro this doesn't work because the "create missing field" reader returns the value from the `idToConstant` map directly instead of calculating it here:

https://github.com/apache/iceberg/blob/8072acd66e69bd0888ca67c8ffb9b213501be8fc/core/src/main/java/org/apache/iceberg/avro/ValueReaders.java#L299-L308

The `idToConstant` map contains the first row id for `ROW_ID`:

https://github.com/apache/iceberg/blob/8072acd66e69bd0888ca67c8ffb9b213501be8fc/core/src/main/java/org/apache/iceberg/util/PartitionUtil.java#L56-L60

The value needs to be calculated as firstRowId (from the `idToConstant` map) + the row's position in the file; see the sketch after the checklist below.

### Willingness to contribute

- [x] I can contribute a fix for this bug independently
- [ ] I would be willing to contribute a fix for this bug with guidance from the Iceberg community
- [ ] I cannot contribute a fix for this bug at this time
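For illustration, a minimal sketch of the expected assignment. The class and method names here are hypothetical, not Iceberg's actual reader interfaces; it only assumes the reader can track the row's 0-based position within the data file and has the file's first row id from `idToConstant`:

```java
// Hypothetical illustration of the row-id assignment described above; this is
// NOT Iceberg's actual reader API. When _row_id is missing from a data file,
// the reader should not return the constant first row id for every row, but
// firstRowId + the row's 0-based position within the file.
public class RowIdAssignmentSketch {

  /** Tracks the row position and derives _row_id from the file's first row id. */
  static class MissingRowIdReader {
    private final long firstRowId; // the constant supplied via idToConstant
    private long position = 0;     // 0-based position of the next row in the file

    MissingRowIdReader(long firstRowId) {
      this.firstRowId = firstRowId;
    }

    long readNext() {
      return firstRowId + position++;
    }
  }

  public static void main(String[] args) {
    MissingRowIdReader reader = new MissingRowIdReader(100L);
    // Expected row ids for a 3-row file whose first row id is 100: 100, 101, 102
    for (int i = 0; i < 3; i++) {
      System.out.println(reader.readNext());
    }
  }
}
```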
