mxm commented on code in PR #14728:
URL: https://github.com/apache/iceberg/pull/14728#discussion_r2584507940
##########
flink/v2.1/flink/src/main/java/org/apache/iceberg/flink/sink/dynamic/CompareSchemasVisitor.java:
##########
@@ -97,6 +101,11 @@ public Result struct(Types.StructType struct, Integer tableSchemaId, List<Result
     }
     if (struct.fields().size() != tableSchemaType.asStructType().fields().size()) {
+      if (dropUnusedColumns
+          && struct.fields().size() < tableSchemaType.asStructType().fields().size()) {
+        // We need to drop fields
+        return Result.SCHEMA_UPDATE_NEEDED;
Review Comment:
What you describe will happen. Once we have created S2 via R2 and then receive
R3, we iterate through all the table schemas and discover the old schema which
matches S1. We will continue to write to F1.
The branch here is triggered when receiving a record with a new schema S3
(column dropped). `SCHEMA_UPDATE_NEEDED` will then trigger the schema update
logic which does the column dropping. We don't change the RowData in this
branch (that case would be `DATA_CONVERSION_NEEDED`).
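To illustrate the decision the diff makes, here is a minimal standalone sketch. It is not the actual `CompareSchemasVisitor` API; the method name, parameters, and the `DATA_CONVERSION_NEEDED` fallback for the other mismatch cases are assumptions for illustration only.

```java
// Hypothetical sketch of the field-count branch discussed above.
// Not the real CompareSchemasVisitor; names and the fallback are assumed.
public class SchemaCompareSketch {
  enum Result { SAME, SCHEMA_UPDATE_NEEDED, DATA_CONVERSION_NEEDED }

  static Result compareFieldCounts(
      int incomingFields, int tableFields, boolean dropUnusedColumns) {
    if (incomingFields != tableFields) {
      if (dropUnusedColumns && incomingFields < tableFields) {
        // The incoming schema (e.g. S3) no longer carries some columns:
        // signal a schema update so the column-dropping logic runs.
        // The RowData itself is not changed in this branch.
        return Result.SCHEMA_UPDATE_NEEDED;
      }
      // Assumed fallback: other field-count mismatches require converting
      // the record to the table schema instead.
      return Result.DATA_CONVERSION_NEEDED;
    }
    return Result.SAME;
  }

  public static void main(String[] args) {
    // Incoming schema dropped a column and dropUnusedColumns is enabled:
    System.out.println(compareFieldCounts(2, 3, true));
    // Same shapes with dropUnusedColumns disabled:
    System.out.println(compareFieldCounts(2, 3, false));
  }
}
```

The key point mirrored from the review: the branch only *signals* that a schema update is needed; it never rewrites the record in place.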
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]