rluvaton commented on PR #21679:
URL: https://github.com/apache/datafusion/pull/21679#issuecomment-4280717141

   Looking at the code, how can we support implementing Spark's 
[`reduce`](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.reduce.html)
 lambda function, where the input of the second lambda (the `finish` function) 
depends on the output of the first lambda (the `merge` function)?
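   For context, the dependency in question can be sketched in plain Python (a hypothetical standalone model of the `reduce` semantics, not the actual Spark or DataFusion APIs): the `merge` lambda folds array elements into an accumulator, and the `finish` lambda then transforms that accumulator, so `finish`'s input type is `merge`'s output type.

   ```python
   # Hypothetical sketch of Spark's reduce higher-order function:
   # merge folds elements into an accumulator; finish transforms the
   # final accumulator, so its input type is merge's output type.
   def spark_style_reduce(arr, initial, merge, finish):
       acc = initial
       for x in arr:
           acc = merge(acc, x)
       return finish(acc)

   # Example: average via a (sum, count) accumulator, finished by division.
   result = spark_style_reduce(
       [1, 2, 3, 4],
       (0, 0),
       lambda acc, x: (acc[0] + x, acc[1] + 1),
       lambda acc: acc[0] / acc[1],
   )
   ```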
   
   How can we do that in SQL, and how can we do it when creating the 
physical expr ourselves (like Comet does)?
   
   I just want to know whether we can implement that without introducing breaking changes, 
and whether the current infrastructure supports it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

