Jiwon Park created SPARK-56274:
----------------------------------
Summary: Fix redundant SparkConf set() call in SparkClusterSubmissionWorker
Key: SPARK-56274
URL: https://issues.apache.org/jira/browse/SPARK-56274
Project: Spark
Issue Type: Improvement
Components: Kubernetes
Affects Versions: kubernetes-operator-0.9.0
Reporter: Jiwon Park
In {{SparkClusterSubmissionWorker.getResourceSpec()}}, each entry in
{{sparkConf}} is written to {{effectiveSparkConf}} twice per iteration:
{code:java}
for (Map.Entry<String, String> entry : confFromSpec.entrySet()) {
  effectiveSparkConf.set(entry.getKey(), entry.getValue()); // first set
  String value = entry.getValue();
  if ("spark.kubernetes.container.image".equals(entry.getKey())) {
    value = value.replace("{{SPARK_VERSION}}", sparkVersion);
  }
  effectiveSparkConf.set(entry.getKey(), value); // second set
}
{code}
For the {{spark.kubernetes.container.image}} key, the first {{set()}} stores
the raw (unsubstituted) value before the {{{{SPARK_VERSION}}}} placeholder is
replaced, and the second {{set()}} immediately overwrites it with the correct
value. For all other keys, both calls write the identical value. The first
{{set()}} is therefore always redundant.
*Fix:* Remove the first {{set()}} call and restructure the loop to call
{{set()}} exactly once per entry with the correctly substituted value.
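A minimal sketch of the restructured loop. This uses a plain {{HashMap}} in place of {{SparkConf}} and a hypothetical helper name, since only the loop body appears above; the point is that each key is written exactly once, after substitution:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class SubstituteOnce {
  // Sketch: copy confFromSpec into the effective conf, writing each key
  // exactly once with the {{SPARK_VERSION}} placeholder already substituted.
  // (Plain Map stands in for SparkConf; method name is illustrative.)
  static Map<String, String> buildEffectiveConf(
      Map<String, String> confFromSpec, String sparkVersion) {
    Map<String, String> effectiveSparkConf = new HashMap<>();
    for (Map.Entry<String, String> entry : confFromSpec.entrySet()) {
      String value = entry.getValue();
      if ("spark.kubernetes.container.image".equals(entry.getKey())) {
        value = value.replace("{{SPARK_VERSION}}", sparkVersion);
      }
      effectiveSparkConf.put(entry.getKey(), value); // single write per entry
    }
    return effectiveSparkConf;
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    conf.put("spark.kubernetes.container.image", "apache/spark:{{SPARK_VERSION}}");
    conf.put("spark.executor.instances", "2");
    Map<String, String> out = buildEffectiveConf(conf, "4.0.0");
    System.out.println(out.get("spark.kubernetes.container.image"));
    System.out.println(out.get("spark.executor.instances"));
  }
}
{code}

Behavior is unchanged for every key, including the image key, because the raw value was always overwritten before anyone could read it.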
--
This message was sent by Atlassian Jira
(v8.20.10#820010)