alamb commented on code in PR #21827:
URL: https://github.com/apache/datafusion/pull/21827#discussion_r3149953966
##########
datafusion/physical-plan/src/union.rs:
##########
@@ -612,13 +612,21 @@ impl ExecutionPlan for InterleaveExec {
self: Arc<Self>,
children: Vec<Arc<dyn ExecutionPlan>>,
) -> Result<Arc<dyn ExecutionPlan>> {
- // New children are no longer interleavable, which might be a bug of optimization rewrite.
- assert_or_internal_err!(
- can_interleave(children.iter()),
- "Can not create InterleaveExec: new children can not be interleaved"
- );
- check_if_same_properties!(self, children);
- Ok(Arc::new(InterleaveExec::try_new(children)?))
+ // After optimizer rewrites (e.g. join_selection changing join modes,
+ // or additional EnforceDistribution passes), children may no longer
+ // have consistent hash partitioning. Fall back to UnionExec instead
+ // of panicking — correctness is preserved, only the interleave
+ // optimization is lost.
+ if can_interleave(children.iter()) {
+ check_if_same_properties!(self, children);
+ Ok(Arc::new(InterleaveExec::try_new(children)?))
+ } else {
+ warn!(
Review Comment:
I agree with @jonahgao that flagging this with a warning only is non-ideal
because:
1. It will likely get ignored by many people
2. If someone sees this warning in their logs they can't do anything about
it (it is likely a bug in DataFusion)
I would personally suggest:
1. Revert this change (so the error is returned)
2. At the callsite where we may change an InterleaveExec so its children
are no longer interleavable (aka in EnforceDistribution), rewrite any
InterleaveExec back to a UnionExec
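To illustrate suggestion 2, here is a minimal sketch of the proposed
optimizer-side demotion. It uses simplified stand-in types (a toy `Plan`
enum with a hypothetical `hash_partitioning` tag), not DataFusion's real
`Arc<dyn ExecutionPlan>` tree or its `can_interleave` helper, so it only
models the idea: before `with_new_children` is called, rewrite any
InterleaveExec whose children no longer share consistent hash partitioning
into a UnionExec, leaving the strict internal error in place everywhere else.

```rust
// Stand-in plan representation for this sketch only; the real pass walks
// Arc<dyn ExecutionPlan> nodes inside EnforceDistribution.
#[derive(Debug, Clone, PartialEq)]
enum Plan {
    Interleave(Vec<Plan>),
    Union(Vec<Plan>),
    // Hash partitioning summarized as an optional tag (hypothetical).
    Scan { hash_partitioning: Option<u64> },
}

// Stand-in for DataFusion's can_interleave: every child must report the
// same (present) hash partitioning.
fn can_interleave(children: &[Plan]) -> bool {
    let mut parts = children.iter().map(|c| match c {
        Plan::Scan { hash_partitioning } => *hash_partitioning,
        _ => None,
    });
    match parts.next() {
        Some(Some(first)) => parts.all(|p| p == Some(first)),
        _ => false,
    }
}

// The suggested fix: demote InterleaveExec to UnionExec at the rewrite
// site, so with_new_children can keep returning an internal error for
// genuinely buggy plans.
fn demote_interleave(plan: Plan) -> Plan {
    match plan {
        Plan::Interleave(children) => {
            let children: Vec<Plan> =
                children.into_iter().map(demote_interleave).collect();
            if can_interleave(&children) {
                Plan::Interleave(children)
            } else {
                Plan::Union(children)
            }
        }
        Plan::Union(children) => {
            Plan::Union(children.into_iter().map(demote_interleave).collect())
        }
        leaf => leaf,
    }
}

fn main() {
    // Children still share hash partitioning: interleave is kept.
    let ok = Plan::Interleave(vec![
        Plan::Scan { hash_partitioning: Some(4) },
        Plan::Scan { hash_partitioning: Some(4) },
    ]);
    assert!(matches!(demote_interleave(ok), Plan::Interleave(_)));

    // A rewrite broke one child's partitioning: fall back to union.
    let broken = Plan::Interleave(vec![
        Plan::Scan { hash_partitioning: Some(4) },
        Plan::Scan { hash_partitioning: None },
    ]);
    assert!(matches!(demote_interleave(broken), Plan::Union(_)));
    println!("ok");
}
```

Correctness is preserved either way (interleave is only an optimization);
the point of doing the demotion here rather than in `with_new_children` is
that any remaining mismatch elsewhere still surfaces as a loud internal
error instead of a silent log line.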
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]