This is an automated email from the ASF dual-hosted git repository.
yao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new 36299f1b2bff [SPARK-47285][SQL] AdaptiveSparkPlanExec should always use the context.session
36299f1b2bff is described below
commit 36299f1b2bff8c892624bc5c0e4c00ea3f261532
Author: ulysses-you <[email protected]>
AuthorDate: Wed Mar 6 09:53:53 2024 +0800
[SPARK-47285][SQL] AdaptiveSparkPlanExec should always use the context.session
### What changes were proposed in this pull request?
Use `context.session` instead of `session` to avoid a potential issue. For
example, a cached plan may re-instantiate `AdaptiveSparkPlanExec` with a
different active session.
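For context only (not part of this commit): a minimal sketch, assuming a local `SparkSession`, of how the thread-local active session can diverge from the session that built a cached plan. The object name `ActiveSessionPitfall` is made up for the example.

```scala
// Minimal sketch (hypothetical, not Spark source): the thread-local "active"
// session can differ from the session a plan was created with.
import org.apache.spark.sql.SparkSession

object ActiveSessionPitfall {
  def main(args: Array[String]): Unit = {
    val s1 = SparkSession.builder().master("local[1]").appName("s1").getOrCreate()
    val s2 = s1.newSession()

    val df = s1.range(10) // plan built under s1
    df.cache()            // cached plan may later be reused

    // Switch the active session on this thread.
    SparkSession.setActiveSession(s2)

    // Code that resolves the session implicitly (e.g. SparkSession.active)
    // now sees s2, while the cached plan was built under s1. Reading the
    // session from an explicit execution context avoids this mismatch.
    assert(SparkSession.getActiveSession.contains(s2))
    df.count()

    s1.stop()
  }
}
```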
### Why are the changes needed?
To avoid the potential mismatch described above, where a cached plan is reused
under an active session that differs from the one that created it.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
No new tests were added; the patch only changes which session reference is read.
### Was this patch authored or co-authored using generative AI tooling?
no
Closes #45388 from ulysses-you/aqe.
Authored-by: ulysses-you <[email protected]>
Signed-off-by: Kent Yao <[email protected]>
---
.../apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala
index b2a1141f2f5b..2879aaca7215 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala
@@ -87,7 +87,7 @@ case class AdaptiveSparkPlanExec(
   // The logical plan optimizer for re-optimizing the current logical plan.
   @transient private val optimizer = new AQEOptimizer(conf,
-    session.sessionState.adaptiveRulesHolder.runtimeOptimizerRules)
+    context.session.sessionState.adaptiveRulesHolder.runtimeOptimizerRules)
   // `EnsureRequirements` may remove user-specified repartition and assume the query plan won't
   // change its output partitioning. This assumption is not true in AQE. Here we check the
@@ -103,7 +103,8 @@ case class AdaptiveSparkPlanExec(
   @transient private val costEvaluator =
     conf.getConf(SQLConf.ADAPTIVE_CUSTOM_COST_EVALUATOR_CLASS) match {
-      case Some(className) => CostEvaluator.instantiate(className, session.sparkContext.getConf)
+      case Some(className) =>
+        CostEvaluator.instantiate(className, context.session.sparkContext.getConf)
       case _ => SimpleCostEvaluator(conf.getConf(SQLConf.ADAPTIVE_FORCE_OPTIMIZE_SKEWED_JOIN))
     }
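A note on the pattern the diff follows (a sketch under assumptions, not Spark source): `AdaptiveSparkPlanExec` already carries an execution context holding the session that created the plan, so reading `context.session` is stable across threads, while an implicit `session`/active-session lookup depends on whichever session is active on the calling thread. The names `ExecContext` and `Operator` below are hypothetical.

```scala
// Hypothetical sketch of the pattern: carry the owning session in an explicit
// context instead of re-reading a thread-local "current" session at use time.
import org.apache.spark.sql.SparkSession

final case class ExecContext(session: SparkSession)

final class Operator(context: ExecContext) {
  // Stable: the session was captured when the context was created.
  def shufflePartitions: String =
    context.session.conf.get("spark.sql.shuffle.partitions")

  // Fragile: resolves whichever session is active on the calling thread,
  // which may not be the session that built this operator's plan.
  def shufflePartitionsFromActive: String =
    SparkSession.active.conf.get("spark.sql.shuffle.partitions")
}
```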
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]