fqaiser94 commented on code in PR #9641:
URL: https://github.com/apache/iceberg/pull/9641#discussion_r1501880384


##########
kafka-connect/kafka-connect/src/main/java/org/apache/iceberg/connect/data/IcebergWriter.java:
##########
@@ -77,8 +78,47 @@ public void write(SinkRecord record) {
   }
 
   private Record convertToRow(SinkRecord record) {
-    // FIXME: update this when the record converter is added
-    return null;
+    if (!config.evolveSchemaEnabled()) {
+      return recordConverter.convert(record.value());
+    }
+
+    SchemaUpdate.Consumer updates = new SchemaUpdate.Consumer();
+    Record row = recordConverter.convert(record.value(), updates);
+
+    if (!updates.empty()) {
+      // complete the current file
+      flush();
+      // apply the schema updates; this will refresh the table
+      SchemaUtils.applySchemaUpdates(table, updates);
+      // initialize a new writer with the new schema
+      initNewWriter();
+      // convert the row again, this time using the new table schema
+      row = recordConverter.convert(record.value(), null);

Review Comment:
   Ahh, I see now: you convert the record again afterwards with the new schema, and presumably this second pass won't hit that branch, so the value ends up in the resulting row.
   
   Is the fundamental reason we need to convert twice that we don't know the new field's ID before the schema evolution is executed, and therefore can't add the new field to the `GenericRecord`?
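
   A minimal sketch of that constraint (the class name and the `new_col` field are illustrative, not from this PR):

   ```java
   // Hedged sketch: an Iceberg GenericRecord is bound to a schema, so a field
   // that is not yet in the table schema cannot be set on the record; its ID
   // exists only after the schema evolution commits.
   import org.apache.iceberg.Schema;
   import org.apache.iceberg.data.GenericRecord;
   import org.apache.iceberg.types.Types;

   public class FieldIdSketch {
     public static void main(String[] args) {
       // Current table schema; Iceberg assigned field "id" the ID 1.
       Schema oldSchema = new Schema(
           Types.NestedField.required(1, "id", Types.LongType.get()));

       GenericRecord row = GenericRecord.create(oldSchema);
       row.setField("id", 1L);

       // Fails: "new_col" has no field ID yet, so the record can't hold it.
       try {
         row.setField("new_col", "x");
       } catch (IllegalArgumentException e) {
         System.out.println(e.getMessage()); // e.g. "Cannot set unknown field named: new_col"
       }

       // Only after the evolution commits (and the table schema is refreshed)
       // can a record created from the new schema carry the value, which is
       // why convert() runs a second time.
     }
   }
   ```

   (If this reading is right, the first pass only collects the pending updates, and the second pass produces the actual row.)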
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
