mxm commented on code in PR #13340:
URL: https://github.com/apache/iceberg/pull/13340#discussion_r2162319680


##########
flink/v2.0/flink/src/main/java/org/apache/iceberg/flink/sink/dynamic/TableMetadataCache.java:
##########
@@ -220,37 +238,59 @@ SchemaInfo getSchemaInfo() {
    */
   static class SchemaInfo {
     private final Map<Integer, Schema> schemas;
-    private final Map<Schema, Tuple2<Schema, CompareSchemasVisitor.Result>> lastResults;
+    private final Cache<Schema, SchemaCompareInfo> lastResults;

Review Comment:
   I recall something similar, but I think the main issue was the time-based 
eviction of cache entries. Once we switched to evicting solely by a maximum 
number of entries, performance was much better. That said, we can easily 
reproduce this via `TestDynamicIcebergSinkPerf`, though the performance test 
needs some changes because it doesn't currently trigger the `RowDataConverter` 
path. I can have a look.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

