sunxiaojian commented on PR #12754:
URL: https://github.com/apache/iceberg/pull/12754#issuecomment-2799961553

   > > But should this be done after the Flink implementation is completed, and the logic then extracted to the core uniformly?
   > 
   > I faced the exact same question when I implemented the DataFileRewrite, and the decision was to do the refactor first, then implement the Flink changes using the refactored code.
   > 
   > > Regarding ManifestFileBean, I initially wanted to keep it consistent with Spark, to make it easier to later extract the shared logic on both sides into core. In practice, however, IcebergSource can also scan the metadata table directly and use RowData.
   > 
   > Reusing the IcebergSource is a good idea. On the other hand, if we want to implement the feature so that it can be embedded in the Flink TableMaintenance infrastructure, we need operators instead of the IcebergSource.
   
   @pvary ok, I'll try doing the refactor first.
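   For context, a minimal sketch of the operator-based approach mentioned above, i.e. planning manifests in a dedicated Flink operator instead of going through IcebergSource. The `ManifestPlanner` class name and the plain `Boolean` trigger type are made up for illustration only and are not part of this PR or the existing TableMaintenance code:

   ```java
   import java.util.List;

   import org.apache.flink.streaming.api.functions.ProcessFunction;
   import org.apache.flink.util.Collector;
   import org.apache.iceberg.ManifestFile;
   import org.apache.iceberg.Table;
   import org.apache.iceberg.flink.TableLoader;

   /**
    * Hypothetical operator that emits the manifest paths of the current snapshot
    * whenever a trigger element arrives, so downstream operators can rewrite them.
    */
   public class ManifestPlanner extends ProcessFunction<Boolean, String> {
     private final TableLoader tableLoader;
     private transient Table table;

     public ManifestPlanner(TableLoader tableLoader) {
       this.tableLoader = tableLoader;
     }

     @Override
     public void processElement(Boolean trigger, Context ctx, Collector<String> out) {
       if (table == null) {
         // Lazily open the loader on the task manager
         tableLoader.open();
         table = tableLoader.loadTable();
       }

       table.refresh();
       if (table.currentSnapshot() == null) {
         return; // empty table, nothing to plan
       }

       List<ManifestFile> manifests = table.currentSnapshot().allManifests(table.io());
       for (ManifestFile manifest : manifests) {
         out.collect(manifest.path());
       }
     }

     @Override
     public void close() throws Exception {
       tableLoader.close();
     }
   }
   ```

   This keeps the planning step as a regular operator that the TableMaintenance trigger stream can feed into, rather than a separate IcebergSource scan of the metadata table.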


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

