mbutrovich opened a new issue, #4347:
URL: https://github.com/apache/datafusion-comet/issues/4347

   Related to #2723.
   
   Many Spark SQL tests don't put their source data in a format that Comet 
accelerates reading from; Comet generally wants data in Parquet or Iceberg. A 
number of Spark SQL suites (_e.g._, UDFSuite) don't write to Parquet, so their 
scans are just `LocalTableScanExec`, and Comet is not exercising UDF 
compatibility because the scan underneath isn't native. We can either do what 
#2723 suggests and enable converting to columnar, or try supporting 
`LocalTableScanExec` now that we have #2735. @andygrove tried the former in 
#2714, but the logs are gone now so I'm not sure how ugly it was.
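   A minimal sketch of the pattern described above, as it would appear inside a Spark SQL test (assumptions: a `SparkSession` named `spark`, a registered UDF `myUdf`, and a writable `path` — none of these names come from the actual suites):

   ```scala
   import spark.implicits._

   // Data built in memory is planned as LocalTableScanExec, so the
   // query never touches Comet's native scan path:
   val df = Seq((1, "a"), (2, "b")).toDF("id", "name")
   df.createOrReplaceTempView("t")
   spark.sql("SELECT myUdf(name) FROM t").collect()
   // physical plan leaf: LocalTableScanExec

   // Round-tripping the same data through Parquet first gives Comet a
   // scan it can replace, so UDF compatibility is actually exercised:
   df.write.parquet(path)
   spark.read.parquet(path).createOrReplaceTempView("t_parquet")
   spark.sql("SELECT myUdf(name) FROM t_parquet").collect()
   // physical plan leaf: a native Comet scan (when Comet is enabled)
   ```

   This is only to illustrate why the suites as written bypass Comet; the actual fix would be either the columnar conversion from #2723 or native `LocalTableScanExec` support.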


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
