huaxingao commented on code in PR #14652:
URL: https://github.com/apache/iceberg/pull/14652#discussion_r2572851702


##########
spark/v4.0/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java:
##########
@@ -67,17 +75,27 @@ protected CloseableIterable<ColumnarBatch> newBatchIterable(
       Expression residual,
       Map<Integer, ?> idToConstant,
       SparkDeleteFilter deleteFilter) {
+    CloseableIterable<ColumnarBatch> iterable;
     switch (format) {
       case PARQUET:
-        return newParquetIterable(inputFile, start, length, residual, idToConstant, deleteFilter);
-
-
+        iterable =
+            newParquetIterable(
+                inputFile,
+                start,
+                length,
+                residual,
+                idToConstant,
+                deleteFilter != null ? deleteFilter.requiredSchema() : expectedSchema());
+        break;
       case ORC:
-        return newOrcIterable(inputFile, start, length, residual, idToConstant);
-
-
+        iterable = newOrcIterable(inputFile, start, length, residual, idToConstant);
+        break;
       default:
         throw new UnsupportedOperationException(
             "Format: " + format + " not supported for batched reads");
     }
+
+    return CloseableIterable.transform(iterable, new BatchDeleteFilter(deleteFilter)::filterBatch);

Review Comment:
   Just to confirm: this refactor applies delete filtering to ORC vectorized 
reads as well (now that filtering is done in BaseBatchReader). If that’s the 
intent, could we add ORC tests mirroring the Parquet ones?
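   For context, the shape of the refactor under review can be sketched in plain Java. This is a hedged stand-in, not Iceberg's actual API: `CloseableIterable`, `BatchDeleteFilter`, and `filterBatch` are replaced here by ordinary collections and a hypothetical row-id delete filter, just to illustrate the "build the iterable per format, then wrap it once with the delete filter" pattern:

   ```java
   import java.util.Arrays;
   import java.util.List;
   import java.util.Set;
   import java.util.function.Function;
   import java.util.stream.Collectors;

   public class BatchFilterSketch {
     // Hypothetical stand-in for BatchDeleteFilter: drops rows flagged as deleted.
     static class BatchDeleteFilter {
       private final Set<Integer> deletedRowIds;

       BatchDeleteFilter(Set<Integer> deletedRowIds) {
         this.deletedRowIds = deletedRowIds;
       }

       // Analogue of filterBatch: returns the batch with deleted rows removed.
       List<Integer> filterBatch(List<Integer> batch) {
         return batch.stream()
             .filter(rowId -> !deletedRowIds.contains(rowId))
             .collect(Collectors.toList());
       }
     }

     // Analogue of CloseableIterable.transform: applies fn to each batch
     // (eagerly here for brevity; the real transform is lazy).
     static <T, R> List<R> transform(List<T> batches, Function<T, R> fn) {
       return batches.stream().map(fn).collect(Collectors.toList());
     }

     public static void main(String[] args) {
       // Two "batches" of row ids, with rows 2 and 5 marked deleted.
       List<List<Integer>> batches =
           Arrays.asList(Arrays.asList(1, 2, 3), Arrays.asList(4, 5, 6));
       BatchDeleteFilter filter = new BatchDeleteFilter(Set.of(2, 5));

       // The single wrap point, format-agnostic: both Parquet- and ORC-built
       // iterables would pass through the same filter here.
       List<List<Integer>> filtered = transform(batches, filter::filterBatch);
       System.out.println(filtered); // [[1, 3], [4, 6]]
     }
   }
   ```

   The point of the sketch is that the filtering step sits after the format switch, so it applies uniformly to whichever reader produced the batches.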



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

