This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new b8e0246f3ef4 [SPARK-48466][SQL][UI][FOLLOWUP] Handle nested empty relation in SparkPlanInfo
b8e0246f3ef4 is described below

commit b8e0246f3ef43f49dd6804e5a583bbe0d09f5a81
Author: liuzqt <liuz...@hotmail.com>
AuthorDate: Mon Mar 24 18:51:55 2025 +0800

    [SPARK-48466][SQL][UI][FOLLOWUP] Handle nested empty relation in SparkPlanInfo
    
    ### What changes were proposed in this pull request?
    
    Fix `SparkPlanInfo.fromLogicalPlan` to handle nested empty relation.
    
    Before:
    <img width="294" alt="Screenshot 2025-03-21 at 11 30 56 AM" 
src="https://github.com/user-attachments/assets/d03f6b88-13ad-4b67-bc4d-18b532e4dea2";
 />
    
    After:
    
    <img width="390" alt="Screenshot 2025-03-20 at 5 51 21 PM" 
src="https://github.com/user-attachments/assets/0e4f775c-b9cf-4955-af17-5e47fa44e44b";
 />
    
    ### Why are the changes needed?
    
    A followup for https://github.com/apache/spark/pull/46830; in the original PR I forgot to handle nested empty relations.
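    
    For illustration, a minimal, self-contained sketch of the traversal issue (hypothetical `Plan`/`PlanInfo` stand-ins, not the real Spark classes): an empty-relation node is a leaf that only *wraps* the plan it replaced, so a children-only walk drops the wrapped subtree unless that node is matched explicitly, which is what the one-line change below restores.
    
    ```scala
    // Hypothetical model of the plan-info traversal (not Spark's actual classes).
    sealed trait Plan { def children: Seq[Plan] = Nil }
    case class Scan(name: String) extends Plan
    case class Join(left: Plan, right: Plan) extends Plan {
      override def children: Seq[Plan] = Seq(left, right)
    }
    // Leaf node: the wrapped plan is a field, not a child,
    // so a plain `children` traversal never sees it.
    case class EmptyRel(wrapped: Plan) extends Plan
    
    case class PlanInfo(node: String, children: Seq[PlanInfo])
    
    def toInfo(plan: Plan): PlanInfo = {
      val childrenInfo = plan match {
        // Without this case, the subtree wrapped by EmptyRel would
        // vanish from the resulting info tree (and hence from the UI).
        case EmptyRel(wrapped) => Seq(toInfo(wrapped))
        case _                 => plan.children.map(toInfo)
      }
      PlanInfo(plan.getClass.getSimpleName, childrenInfo)
    }
    
    // Nested case: an empty relation wrapping a join whose side was
    // itself replaced by another empty relation.
    val nested = EmptyRel(Join(Scan("t1"), EmptyRel(Scan("t2"))))
    println(toInfo(nested)) // both Scan nodes are retained in the tree
    ```
    
    In the real code, `case EmptyRelation(logical) => Seq(fromLogicalPlan(logical))` in the diff below plays the role of the `EmptyRel` case above.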
    
    ### Does this PR introduce _any_ user-facing change?
    Yes, a UI change.
    
    ### How was this patch tested?
    Verified in the Spark UI.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    No.
    
    Closes #50350 from liuzqt/SPARK-48466.
    
    Authored-by: liuzqt <liuz...@hotmail.com>
    Signed-off-by: Wenchen Fan <wenc...@databricks.com>
---
 .../src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala  | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala
index 615c8746a3e5..4410fe50912f 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala
@@ -18,7 +18,7 @@
 package org.apache.spark.sql.execution
 
 import org.apache.spark.annotation.DeveloperApi
-import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
+import org.apache.spark.sql.catalyst.plans.logical.{EmptyRelation, LogicalPlan}
 import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanExec, QueryStageExec}
 import org.apache.spark.sql.execution.adaptive.LogicalQueryStage
 import org.apache.spark.sql.execution.columnar.InMemoryTableScanExec
@@ -56,6 +56,7 @@ private[execution] object SparkPlanInfo {
   private def fromLogicalPlan(plan: LogicalPlan): SparkPlanInfo = {
     val childrenInfo = plan match {
       case LogicalQueryStage(_, physical) => Seq(fromSparkPlan(physical))
+      case EmptyRelation(logical) => Seq(fromLogicalPlan(logical))
       case _ => (plan.children ++ plan.subqueries).map(fromLogicalPlan)
     }
     new SparkPlanInfo(


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
