aiborodin commented on code in PR #13340:
URL: https://github.com/apache/iceberg/pull/13340#discussion_r2156145524


##########
flink/v2.0/flink/src/main/java/org/apache/iceberg/flink/sink/dynamic/DynamicRecordProcessor.java:
##########
@@ -142,10 +151,18 @@ private void emit(
       Schema schema,
       CompareSchemasVisitor.Result result,
       PartitionSpec spec) {
-    RowData rowData =
-        result == CompareSchemasVisitor.Result.SAME
-            ? data.rowData()
-            : RowDataEvolver.convert(data.rowData(), data.schema(), schema);
+    RowData rowData;
+    if (result == CompareSchemasVisitor.Result.SAME) {
+      rowData = data.rowData();
+    } else {
+      RowDataConverter rowDataConverter =
+          converterCache.get(
+              data.schema(),
+              dataSchema ->
+                  new RowDataConverter(
+                      FlinkSchemaUtil.convert(dataSchema), FlinkSchemaUtil.convert(schema)));

Review Comment:
   Yes, we did, please see the attached profile. According to the profile, 
the Schema -> RowType conversion takes approximately 51% of our converter's 
CPU time, while the static conversion in RowDataEvolver, which recomputes 
field indices for every record, accounts for about 45%. The profile makes it 
clear that caching schemas alone would not be sufficient; we also need 
quasi-code generation.
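   To make the second point concrete, here is a minimal, self-contained 
sketch of the idea (not the PR's actual `RowDataConverter`; the class name is 
hypothetical and plain `Object[]` rows with `String[]` field names stand in 
for Flink's `RowData`/`RowType`): field positions are resolved once per 
schema pair in the constructor, so converting each record is plain array 
indexing instead of per-record name lookups.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Illustrative sketch only (not the PR's RowDataConverter): resolve the
   // source-to-target field positions once per schema pair, so converting a
   // record is plain array indexing instead of per-record name lookups.
   class PositionMappingConverter {
     private final int[] sourcePosForTargetPos;

     PositionMappingConverter(String[] sourceFields, String[] targetFields) {
       Map<String, Integer> sourcePositions = new HashMap<>();
       for (int i = 0; i < sourceFields.length; i++) {
         sourcePositions.put(sourceFields[i], i);
       }

       this.sourcePosForTargetPos = new int[targetFields.length];
       for (int i = 0; i < targetFields.length; i++) {
         // -1 marks a target field that is missing from the source; it is
         // filled with null on conversion (a stand-in for schema evolution).
         this.sourcePosForTargetPos[i] =
             sourcePositions.getOrDefault(targetFields[i], -1);
       }
     }

     Object[] convert(Object[] sourceRow) {
       Object[] converted = new Object[sourcePosForTargetPos.length];
       for (int i = 0; i < converted.length; i++) {
         int sourcePos = sourcePosForTargetPos[i];
         converted[i] = sourcePos < 0 ? null : sourceRow[sourcePos];
       }

       return converted;
     }
   }
   ```

   Keeping one such precomputed converter per input schema behind a cache, as 
the diff above does with `converterCache.get(...)`, amortizes both the 
Schema -> RowType conversion and the field-index resolution to once per 
schema rather than once per record.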


