RussellSpitzer commented on code in PR #7636:
URL: https://github.com/apache/iceberg/pull/7636#discussion_r1199298553


##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/SparkScanBuilder.java:
##########
@@ -432,6 +426,10 @@ private Scan buildBatchScan(Long snapshotId, Long asOfTimestamp, String branch,
             .filter(filterExpression())
             .project(expectedSchema);
 
+    if (withStats) {
+      scan = scan.includeColumnStats();

Review Comment:
   Unrelated to this PR, but something we should think about is how this can become overly expensive on extremely wide tables. @pvary is dealing with a similar issue; we may want to pull stats for just the columns relating to our aggregation to avoid memory issues. Not a blocker for this PR, just wanted to bring it up (a rough sketch of the idea follows below).
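
   For context, a minimal, hypothetical sketch of that idea, not code from this PR: it assumes a `Scan` overload along the lines of `includeColumnStats(Collection<String>)` that limits stats loading to a given set of columns, and a caller that knows which columns the pushed-down aggregation touches. Names like `aggregatedColumns` are made up for illustration.

   ```java
   import java.util.Set;
   import org.apache.iceberg.BatchScan;
   import org.apache.iceberg.Schema;
   import org.apache.iceberg.Table;
   import org.apache.iceberg.expressions.Expression;

   class ScopedStatsScanSketch {
     // Builds a batch scan that only materializes column stats for the columns
     // the aggregation actually reads, instead of stats for every column of a
     // very wide table.
     static BatchScan buildScan(
         Table table,
         Schema expectedSchema,
         Expression filter,
         Set<String> aggregatedColumns, // hypothetical: columns used by the pushed-down aggregates
         boolean withStats) {
       BatchScan scan = table.newBatchScan().filter(filter).project(expectedSchema);

       if (withStats) {
         // Hypothetical overload: request stats only for the aggregated columns.
         // The code quoted above uses the all-columns includeColumnStats().
         scan = scan.includeColumnStats(aggregatedColumns);
       }
       return scan;
     }
   }
   ```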



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
