pvary commented on PR #14578:
URL: https://github.com/apache/iceberg/pull/14578#issuecomment-3526886663

   I’ve added a few comments on the PR, but the bigger question is deciding 
what functionality we actually want.
   Here are the ideas I’ve heard so far:
   - Apply the same table properties to every table at creation (current PR).
   - Set table properties at creation based on the DynamicRecord for each table 
individually.
   - Define the table location during creation (per table).
   - Update table properties to ensure a specific set of properties is always 
present.
   - Allow V2 → V3 migration (convert DVs) — related only in that operators 
need up-to-date knowledge of table properties.
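   To make the second and third options concrete, here is a rough sketch of the kind of per-table hook the sink could expose. Everything here is hypothetical: the interface name, its method, and the plain-string table identifier are illustrative placeholders, not part of the Iceberg API or of this PR.

```java
import java.util.Collections;
import java.util.Map;

// Hypothetical hook: resolve properties to apply when the Dynamic Sink
// creates a table. A real version would likely receive the DynamicRecord
// (or its schema/partition info) instead of a bare identifier.
interface TableCreatePropertiesProvider {

  // Properties to set on the table at creation time.
  Map<String, String> propertiesFor(String tableIdentifier);

  // Default implementation: create tables with no extra properties.
  static TableCreatePropertiesProvider none() {
    return id -> Collections.emptyMap();
  }
}
```

   A per-table location hook (the third bullet) would have the same shape, returning a location string instead of a property map.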
   
   We need to draw a clear line: what should the Dynamic Sink support, and what 
should be handled by external operators outside the Dynamic Iceberg Sink?
   
   To make this decision, I’d like input from actual users: @jordepic, @mxm, 
@Guosmilesmile, or anyone else interested. Maybe someone could even start a 
discussion on the dev list to reach a wider audience.
   
   If some changes are handled by external operators, we may need a mechanism 
for those operators to notify the Dynamic Iceberg Sink to refresh its caches. 
One option could be something like the Flink Kubernetes Operator's restart nonce 
(https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/job-management/#application-restarts-without-spec-change).
 If the sink detects a new nonce (greater than the current one), it could 
invalidate its table cache and reload the data.
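   The nonce-based invalidation described above could look roughly like the following. This is a minimal sketch, not the sink's actual cache: the class name, the generic map-backed cache, and the `observeNonce` entry point are all assumptions for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical table-metadata cache that invalidates itself whenever an
// externally bumped restart nonce advances past the last one it saw.
class NonceAwareTableCache<K, V> {
  private final Map<K, V> cache = new HashMap<>();
  private long currentNonce;

  NonceAwareTableCache(long initialNonce) {
    this.currentNonce = initialNonce;
  }

  // Called with the nonce observed from the external signal; a larger
  // value means an external operator changed table metadata, so the
  // cached entries are stale and must be dropped.
  void observeNonce(long nonce) {
    if (nonce > currentNonce) {
      cache.clear();
      currentNonce = nonce;
    }
  }

  V get(K key) {
    return cache.get(key);
  }

  void put(K key, V value) {
    cache.put(key, value);
  }

  long nonce() {
    return currentNonce;
  }
}
```

   With this shape, an equal or smaller nonce is a no-op, so replayed or out-of-order signals do not cause spurious cache drops.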


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
