javrasya commented on code in PR #9464:
URL: https://github.com/apache/iceberg/pull/9464#discussion_r1451079177


##########
flink/v1.18/flink/src/main/java/org/apache/iceberg/flink/source/split/IcebergSourceSplit.java:
##########
@@ -166,12 +166,19 @@ static IcebergSourceSplit deserializeV2(byte[] serialized, boolean caseSensitive
 
     List<FileScanTask> tasks = Lists.newArrayListWithCapacity(taskCount);
     for (int i = 0; i < taskCount; ++i) {
-      String taskJson = in.readUTF();
+      String taskJson = in.readLine();
       FileScanTask task = FileScanTaskParser.fromJson(taskJson, caseSensitive);
       tasks.add(task);
     }
 
     CombinedScanTask combinedScanTask = new BaseCombinedScanTask(tasks);
     return IcebergSourceSplit.fromCombinedScanTask(combinedScanTask, fileOffset, recordOffset);
   }
+
+  private static void writeBytes(DataOutputSerializer out, String s) throws IOException {
+    for (int i = 0; i < s.length(); i++) {
+      out.writeByte(s.charAt(i));
+    }
+    out.writeByte('\n');

Review Comment:
   The reason for this is that the deserialize method now uses `readLine` instead of `readUTF` (which no longer works with bytes written this way). Writing each task's JSON followed by a newline delimiter was the only way I could think of to still load the tasks one by one, in an iterator fashion.
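
   For illustration, a minimal, self-contained sketch of the write/read pairing being discussed, assuming Flink's `DataOutputSerializer`/`DataInputDeserializer`; the class name and the sample JSON strings are hypothetical:

   ```java
   import java.io.IOException;
   import org.apache.flink.core.memory.DataInputDeserializer;
   import org.apache.flink.core.memory.DataOutputSerializer;

   public class LineFramedRoundTrip {

     // Mirrors the writeBytes helper from the diff: one byte per char,
     // with '\n' as the record delimiter.
     private static void writeBytes(DataOutputSerializer out, String s) throws IOException {
       for (int i = 0; i < s.length(); i++) {
         out.writeByte(s.charAt(i));
       }
       out.writeByte('\n');
     }

     public static void main(String[] args) throws IOException {
       String[] taskJsons = {"{\"task\":1}", "{\"task\":2}"};

       DataOutputSerializer out = new DataOutputSerializer(64);
       for (String json : taskJsons) {
         writeBytes(out, json);
       }

       // readLine() consumes bytes up to the '\n' delimiter, so each task JSON
       // can be pulled back one record at a time. readUTF() would instead expect
       // the 2-byte length prefix that writeUTF() emits, which is no longer written.
       DataInputDeserializer in = new DataInputDeserializer(out.getCopyOfBuffer());
       for (int i = 0; i < taskJsons.length; i++) {
         System.out.println(in.readLine());
       }
     }
   }
   ```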



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

