huaxingao commented on code in PR #7636:
URL: https://github.com/apache/iceberg/pull/7636#discussion_r1199560264


##########
spark/v3.3/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java:
##########
@@ -288,29 +289,37 @@ public void testIncrementalScanOptions() throws IOException {
         });
 
     // test (1st snapshot, current snapshot] incremental scan.
-    List<SimpleRecord> result =
+    Dataset<Row> resultDf1 =
         spark
             .read()
             .format("iceberg")
             .option("start-snapshot-id", snapshotIds.get(3).toString())
-            .load(tableLocation)
-            .orderBy("id")
-            .as(Encoders.bean(SimpleRecord.class))
-            .collectAsList();
-    Assert.assertEquals("Records should match", expectedRecords.subList(1, 4), result);
+            .load(tableLocation);
+    List<SimpleRecord> result1 =

Review Comment:
   Yes, we can check whether pushdown is being used. I normally inspect the explain string to find out.
   For example, 
   ```
   SELECT min(data), max(data), count(data) FROM table;
   ```
   If the aggregate is pushed down, the physical plan contains
   ```
   LocalTableScan [min(data)#4461, max(data)#4462, count(data)#4463L]
   ```
   If the aggregate is not pushed down, the physical plan contains
   ```
   BatchScan default.table[data#4471]
   ```
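
   For reference, here is a minimal sketch of how that check can look in a Java test (the table name and the asserted plan fragment are illustrative assumptions, not the exact code in `TestAggregatePushDown`):
   ```
   // Run EXPLAIN and assert on the physical plan text (JUnit 4 style,
   // matching the Assert usage in this test class).
   String explain =
       spark
           .sql("EXPLAIN SELECT min(data), max(data), count(data) FROM table")
           .collectAsList()
           .get(0)
           .getString(0);
   Assert.assertTrue(
       "Aggregate should be pushed down", explain.contains("LocalTableScan"));
   ```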
   
   I didn't check the explain string in this test to verify that the aggregate is pushed down, but I added a test for incremental scan in `TestAggregatePushDown` and checked the explain string there to make sure the aggregate is pushed down.


