mxm commented on code in PR #13883:
URL: https://github.com/apache/iceberg/pull/13883#discussion_r2293319915
##########
flink/v2.0/flink/src/main/java/org/apache/iceberg/flink/sink/dynamic/TableUpdater.java:
##########
@@ -90,6 +109,53 @@ private void findOrCreateTable(TableIdentifier identifier, Schema schema, Partit
}
}
+  private void updateTablePropertiesIfNeeded(TableIdentifier identifier) {
+    if (tablePropertiesUpdater == null) {
+      return;
+    }
+
+    Map<String, String> currentProperties = cache.properties(identifier);
+    Map<String, String> updatedProperties =
+        tablePropertiesUpdater.apply(identifier.toString(), currentProperties);
+
+    if (updatedProperties == null || Objects.equals(currentProperties, updatedProperties)) {
+      return;
+    }
+
+    LOG.info(
+        "Updating table {} properties from {} to {}",
+        identifier,
+        currentProperties,
+        updatedProperties);
+    Table table = catalog.loadTable(identifier);
+    try {
+      UpdateProperties updateApi = table.updateProperties();
+
+      // Remove properties that are no longer present
Review Comment:
This feature is optional; when it is enabled, the job is the source of
truth for the table properties.

> In large-scale ingestion pipelines, if we need to add a new property, we
> will need to update the TablePropertiesUpdaterImpl and redeploy, right?
> This may not be feasible, as it may also put too much pressure on the
> catalog on startup.

This is no different from other operations (e.g. table creation, schema
changes) that the sink may perform.
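
For context, here is a minimal sketch of how the truncated block above
might continue, together with a hypothetical implementation of the
updater. The UpdateProperties calls (set, remove, commit) are the
standard Iceberg API; the BiFunction-based updater type, the class and
method names, and the diff-sync loops are illustrative assumptions
inferred from the apply(identifier.toString(), currentProperties) call
site in the diff, not the PR's actual code.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.BiFunction;
    import org.apache.iceberg.Table;
    import org.apache.iceberg.UpdateProperties;

    class TablePropertySyncSketch {

      // Hypothetical updater: the job is the source of truth for the
      // properties it manages. The PR's actual updater type may differ; a
      // BiFunction matches the two-argument apply(...) call in the diff.
      static final BiFunction<String, Map<String, String>, Map<String, String>>
          UPDATER =
              (tableName, current) -> {
                Map<String, String> updated = new HashMap<>(current);
                updated.put("commit.retry.num-retries", "10"); // example value
                return updated;
              };

      // Possible continuation of the try block: compute the difference
      // between the current and updated maps, then apply it in one commit.
      static void syncProperties(
          Table table, Map<String, String> current, Map<String, String> updated) {
        UpdateProperties updateApi = table.updateProperties();

        // Remove properties that are no longer present in the updated map.
        for (String key : current.keySet()) {
          if (!updated.containsKey(key)) {
            updateApi.remove(key);
          }
        }

        // Set properties that are new or whose values changed.
        for (Map.Entry<String, String> entry : updated.entrySet()) {
          if (!entry.getValue().equals(current.get(entry.getKey()))) {
            updateApi.set(entry.getKey(), entry.getValue());
          }
        }

        // Commit all pending property changes as a single metadata update.
        updateApi.commit();
      }
    }

Note that the early Objects.equals check in the diff already makes an
unchanged restart a no-op against the catalog (apart from the cached
properties lookup), which bounds the startup pressure raised in the
quoted question.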