swapna267 commented on code in PR #13883:
URL: https://github.com/apache/iceberg/pull/13883#discussion_r2291664427
##########
flink/v2.0/flink/src/main/java/org/apache/iceberg/flink/sink/dynamic/TableUpdater.java:
##########
@@ -90,6 +109,53 @@ private void findOrCreateTable(TableIdentifier identifier, Schema schema, Partit
}
}
+  private void updateTablePropertiesIfNeeded(TableIdentifier identifier) {
+    if (tablePropertiesUpdater == null) {
+      return;
+    }
+
+    Map<String, String> currentProperties = cache.properties(identifier);
+    Map<String, String> updatedProperties =
+        tablePropertiesUpdater.apply(identifier.toString(), currentProperties);
+
+    if (updatedProperties == null || Objects.equals(currentProperties, updatedProperties)) {
+      return;
+    }
+
+    LOG.info(
+        "Updating table {} properties from {} to {}",
+        identifier,
+        currentProperties,
+        updatedProperties);
+    Table table = catalog.loadTable(identifier);
+    try {
+      UpdateProperties updateApi = table.updateProperties();
+
+      // Remove properties that are no longer present
Review Comment:
So we consider the Flink job to be the source of truth for properties, and remove
properties from tables if they were added by external pipelines?
In large-scale ingestion pipelines, if we need to add a new property, we would
then need to update the TablePropertiesUpdaterImpl and redeploy, right? This may
not be feasible, and it may also put too much pressure on the catalog at startup.
But that also means there is no single source of truth :)
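For illustration, one alternative would be a merge-only updater that overlays just the keys the job manages and never deletes anything else. A rough sketch, assuming the updater is a `BiFunction<String, Map<String, String>, Map<String, String>>` as the `apply` call above suggests (`MergeOnlyPropertiesUpdater` and `managedProperties` are hypothetical names, not part of this PR):
```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

// A merge-only updater: external pipelines can still add properties,
// and the Flink job only enforces the keys it explicitly manages.
public class MergeOnlyPropertiesUpdater
    implements BiFunction<String, Map<String, String>, Map<String, String>> {

  // Hypothetical set of properties this job wants to enforce.
  private final Map<String, String> managedProperties;

  public MergeOnlyPropertiesUpdater(Map<String, String> managedProperties) {
    this.managedProperties = managedProperties;
  }

  @Override
  public Map<String, String> apply(String tableName, Map<String, String> currentProperties) {
    // Start from the table's current properties so externally added keys survive.
    Map<String, String> merged = new HashMap<>(currentProperties);
    merged.putAll(managedProperties);
    // If nothing the job manages has drifted, the result equals
    // currentProperties and the Objects.equals check above skips the commit.
    return merged;
  }
}
```
That way the job enforces only what it owns, and the no-op short-circuit in TableUpdater still avoids unnecessary catalog commits on startup.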