gnehil commented on issue #223: URL: https://github.com/apache/doris-spark-connector/issues/223#issuecomment-2272750114
> > > > Which Spark connector version are you using?
> > >
> > > 1.3.1 — `org.apache.doris:spark-doris-connector-3.2_${spark.scala.version}:1.3.1`
> >
> > This is due to a query execution error in Doris, not the connector. The problem occurs in versions 1.2.8 and 1.2.9. You can upgrade to version 2.x to solve this problem.
>
> I ran into this problem too. `doris.filter.query` only takes effect on partition columns. If I append `.cache()` after `spark.read.format(...)...load()` and then call `createOrReplaceTempView`, a `WHERE` executed on the view filters the data correctly. If I remove `cache()`, the `WHERE` filter on the view still has no effect. My guess is that this happens because the `WHERE` condition is pushed down to the Doris engine for execution.

That's right. The connector correctly pushes the predicate down to the scan operator, and the exception occurs on the Doris side. Caching the DataFrame returned by `load` is a workaround: predicates applied to the cached DataFrame are evaluated by Spark instead of being pushed down to Doris.
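The workaround described above can be sketched as follows. This is a minimal example, not a verified fix: the FE address, credentials, table name, and filter column are placeholders, and it assumes the spark-doris-connector is on the classpath with a reachable Doris cluster.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("doris-filter-workaround")
  .getOrCreate()

val df = spark.read
  .format("doris")
  .option("doris.fenodes", "fe_host:8030")        // placeholder FE address
  .option("doris.table.identifier", "db.tbl")     // placeholder table
  .option("user", "root")                         // placeholder credentials
  .option("password", "")
  .load()
  .cache() // materialize in Spark; later predicates are applied to the
           // cached data instead of being pushed down to Doris

df.createOrReplaceTempView("t")

// This WHERE is now evaluated by Spark against the cached rows, sidestepping
// the query-execution error seen on the Doris side (1.2.8 / 1.2.9).
spark.sql("SELECT * FROM t WHERE non_partition_col = 'x'").show()
```

Note the trade-off: `cache()` forces a full scan of the table into Spark's memory, so this only trades resources for correctness; upgrading Doris to 2.x remains the proper fix.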