atifiu commented on PR #6252:
URL: https://github.com/apache/iceberg/pull/6252#issuecomment-1757848584

   @huaxingao Thanks for your response. Even for max on a non-filter 
column, aggregate pushdown is not working.
   
   In the explain plan below, the table is partitioned on initial_page_view_dtm 
and I am filtering on that same column. For this relatively small table, 
aggregate pushdown works without a filter, but as soon as I add a filter it 
stops working. For a large table, aggregate pushdown does not work at all, and 
it logs this message:
   
   > SparkScanBuilder: Skipping aggregate pushdown: detected row level deletes
   
   ```
   == Physical Plan ==
   AdaptiveSparkPlan isFinalPlan=false
   +- HashAggregate(keys=[], functions=[max(pageviewdate#465)])
      +- Exchange SinglePartition, ENSURE_REQUIREMENTS, [plan_id=86]
         +- HashAggregate(keys=[], functions=[partial_max(pageviewdate#465)])
            +- Project [pageviewdate#465]
               +- Filter (initial_page_view_dtm#468 >= 2023-01-01 00:00:00)
                   +- BatchScan[pageviewdate#465, initial_page_view_dtm#468] spark_catalog.schema.table1 (branch=null) [filters=initial_page_view_dtm IS NOT NULL, initial_page_view_dtm >= 1672549200000000, groupedBy=] RuntimeFilters: []
   ```
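
   For reference, the plan above corresponds to a query along these lines 
(table and column names are taken from the BatchScan node; the exact literal 
syntax is an assumption):

   ```sql
   -- max() on pageviewdate, filter on the partition column initial_page_view_dtm
   SELECT max(pageviewdate)
   FROM spark_catalog.schema.table1
   WHERE initial_page_view_dtm >= TIMESTAMP '2023-01-01 00:00:00';
   ```

   Note that the filter itself is pushed down (it appears in the BatchScan's 
`filters=` list), yet the max aggregate still runs as a HashAggregate on the 
Spark side rather than being pushed into the scan.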


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

