RussellSpitzer commented on code in PR #10943:
URL: https://github.com/apache/iceberg/pull/10943#discussion_r1779152923


##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkPartitioningAwareScan.java:
##########
@@ -78,7 +78,19 @@ abstract class SparkPartitioningAwareScan<T extends PartitionScanTask> extends S
       Schema expectedSchema,
       List<Expression> filters,
       Supplier<ScanReport> scanReportSupplier) {
-    super(spark, table, readConf, expectedSchema, filters, scanReportSupplier);
+    this(spark, table, scan, readConf, expectedSchema, filters, scanReportSupplier, null);
+  }
+
+  SparkPartitioningAwareScan(
+      SparkSession spark,
+      Table table,
+      Scan<?, ? extends ScanTask, ? extends ScanTaskGroup<?>> scan,
+      SparkReadConf readConf,
+      Schema expectedSchema,
+      List<Expression> filters,
+      Supplier<ScanReport> scanReportSupplier,
+      Integer pushedLimit) {

Review Comment:
   I'm wondering if we should include this in SparkReadConf rather than adding a separate argument; I have similar thoughts about SparkInputPartition. I'm not a big fan of plumbing new arguments all the way through the code base, but neither of those two options looks great either, since this isn't really a natural fit for them imho.
   
   Ideally I think I'd want something like a SparkReadContext, but I don't know how often more things like this will come up.
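   
   Just to illustrate what I mean, here is a rough, purely hypothetical sketch of a SparkReadContext-style holder (none of these names exist in the codebase today): a small immutable object that carries per-scan read state such as the pushed limit, so adding another field later doesn't mean threading a new constructor argument through every scan class.
   
   ```java
   // Hypothetical sketch only: bundles optional per-scan read state so scan
   // constructors can take a single context argument instead of growing a new
   // parameter for every feature. Names and shape are illustrative, not an API.
   class SparkReadContext {
     private final Integer pushedLimit; // null means no limit was pushed down
   
     private SparkReadContext(Integer pushedLimit) {
       this.pushedLimit = pushedLimit;
     }
   
     static SparkReadContext empty() {
       return new SparkReadContext(null);
     }
   
     SparkReadContext withPushedLimit(int limit) {
       return new SparkReadContext(limit);
     }
   
     Integer pushedLimit() {
       return pushedLimit;
     }
   }
   ```
   
   The constructor above would then take a SparkReadContext instead of the new pushedLimit argument, and SparkInputPartition could carry the same object.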



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

