aokolnychyi commented on code in PR #10943:
URL: https://github.com/apache/iceberg/pull/10943#discussion_r1838676561


##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseRowReader.java:
##########
@@ -48,6 +50,17 @@ abstract class BaseRowReader<T extends ScanTask> extends BaseReader<InternalRow,
     super(table, taskGroup, tableSchema, expectedSchema, caseSensitive);
   }
 
+  BaseRowReader(
+      Table table,
+      ScanTaskGroup<T> taskGroup,
+      Schema tableSchema,
+      Schema expectedSchema,
+      boolean caseSensitive,
+      Integer pushedLimit) {
+    this(table, taskGroup, tableSchema, expectedSchema, caseSensitive);
+    this.pushedLimit = pushedLimit;

Review Comment:
   Why do we push the limit all the way down to the reader instead of simply reading just enough files to satisfy the limit, based on the record counts we already have in the metadata for each data file?
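   
   Roughly, a sketch of what I mean (`LimitAwareTaskSelector` / `selectTasksForLimit` are illustrative names only, not part of this PR): pick a prefix of the planned tasks whose metadata record counts already cover the limit, so the remaining files never need to be opened.
   
   ```java
   import java.util.ArrayList;
   import java.util.List;
   
   import org.apache.iceberg.FileScanTask;
   
   class LimitAwareTaskSelector {
   
     // Returns a prefix of the planned tasks whose metadata record counts together
     // cover at least `limit` rows; tasks after that point never need to be read.
     static List<FileScanTask> selectTasksForLimit(Iterable<FileScanTask> plannedTasks, long limit) {
       List<FileScanTask> selected = new ArrayList<>();
       long coveredRows = 0L;
       for (FileScanTask task : plannedTasks) {
         selected.add(task);
         // recordCount() comes from the data file's metadata, so no file I/O is needed here
         coveredRows += task.file().recordCount();
         if (coveredRows >= limit) {
           break;
         }
       }
       return selected;
     }
   }
   ```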


