This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/spark-kubernetes-operator.git
The following commit(s) were added to refs/heads/main by this push:
     new e0e797f  [SPARK-52653] Fix `*.operatorContainer` to `*.operatorPod.operatorContainer` in `operations.md`
e0e797f is described below
commit e0e797f56a3ef472223bcfead699a27a1b807f8d
Author: Dongjoon Hyun <[email protected]>
AuthorDate: Wed Jul 2 07:49:54 2025 -0700
[SPARK-52653] Fix `*.operatorContainer` to `*.operatorPod.operatorContainer` in `operations.md`
### What changes were proposed in this pull request?
This PR aims to fix the following parameter paths.
- `operatorDeployment.operatorContainer.jvmArgs` -> `operatorDeployment.operatorPod.operatorContainer.jvmArgs`
- `operatorDeployment.operatorContainer.env` -> `operatorDeployment.operatorPod.operatorContainer.env`
- `operatorDeployment.operatorContainer.envFrom` -> `operatorDeployment.operatorPod.operatorContainer.envFrom`
- `operatorDeployment.operatorContainer.probes` -> `operatorDeployment.operatorPod.operatorContainer.probes`
- `operatorDeployment.operatorContainer.securityContext` -> `operatorDeployment.operatorPod.operatorContainer.securityContext`
- `operatorDeployment.operatorContainer.resources` -> `operatorDeployment.operatorPod.operatorContainer.resources`
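In `values.yaml` form, the corrected keys correspond to the following nesting (a sketch of the relevant fragment only, with the `jvmArgs` default taken from the chart's parameter table):

```yaml
# Relevant fragment of the chart values: operatorContainer nests
# under operatorPod, not directly under operatorDeployment.
operatorDeployment:
  operatorPod:
    operatorContainer:
      jvmArgs: "-Dfile.encoding=UTF8 -XX:+ExitOnOutOfMemoryError"  # chart default
```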
### Why are the changes needed?
The previous paths are wrong. With the old path, the setting is silently ignored and the default JVM args remain; the new path takes effect as expected.
```
$ helm install spark spark/spark-kubernetes-operator --set operatorDeployment.operatorContainer.jvmArgs="-XX:+PrintFlagsFinal"
NAME: spark
LAST DEPLOYED: Wed Jul 2 07:48:18 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1

$ kubectl get deploy spark-kubernetes-operator -oyaml | grep -C1 OPERATOR_JAVA_OPTS
          value: -Dlog4j.configurationFile=/opt/spark-operator/conf/log4j2.properties
        - name: OPERATOR_JAVA_OPTS
          value: -Dfile.encoding=UTF8
```
```
$ helm install spark spark/spark-kubernetes-operator --set operatorDeployment.operatorPod.operatorContainer.jvmArgs="-XX:+PrintFlagsFinal"
$ kubectl get deploy spark-kubernetes-operator -oyaml | grep -C1 OPERATOR_JAVA_OPTS
          value: -Dlog4j.configurationFile=/opt/spark-operator/conf/log4j2.properties
        - name: OPERATOR_JAVA_OPTS
          value: -XX:+PrintFlagsFinal
```
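Why the old path fails silently can be sketched with a toy model of how Helm expands a dotted `--set` path into the nested values tree (an illustration only, not Helm's actual implementation; `set_path` is a hypothetical helper):

```python
# Toy model of how a dotted --set path expands into a nested values tree.
# Illustration only -- not Helm's actual implementation.
def set_path(values: dict, dotted: str, value: str) -> dict:
    *parents, leaf = dotted.split(".")
    node = values
    for key in parents:
        node = node.setdefault(key, {})  # create intermediate maps as needed
    node[leaf] = value
    return values

# The chart template reads jvmArgs from under operatorPod.operatorContainer,
# so the old path writes the value to a location the template never reads.
old = set_path({}, "operatorDeployment.operatorContainer.jvmArgs", "-XX:+PrintFlagsFinal")
new = set_path({}, "operatorDeployment.operatorPod.operatorContainer.jvmArgs", "-XX:+PrintFlagsFinal")

print("operatorPod" in old["operatorDeployment"])  # False: template finds nothing, default applies
print(new["operatorDeployment"]["operatorPod"]["operatorContainer"]["jvmArgs"])
```

Because the template only looks up the `operatorPod.operatorContainer` subtree, a value written under the old key is simply never rendered into the Deployment, which matches the unchanged `OPERATOR_JAVA_OPTS` seen above.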
### Does this PR introduce _any_ user-facing change?
No, this is a documentation update.
### How was this patch tested?
Manual review.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #269 from dongjoon-hyun/SPARK-52653.
Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
---
docs/operations.md | 124 ++++++++++++++++++++++++++---------------------------
1 file changed, 62 insertions(+), 62 deletions(-)
diff --git a/docs/operations.md b/docs/operations.md
index 05b45a7..96a58b6 100644
--- a/docs/operations.md
+++ b/docs/operations.md
@@ -59,68 +59,68 @@ helm install spark-kubernetes-operator \
The configurable parameters of the Helm chart and their default values are detailed in the following table:
-| Parameters | Description | Default value |
-|------------|-------------|---------------|
-| image.repository | The image repository of spark-kubernetes-operator. | apache/spark-kubernetes-operator |
-| image.pullPolicy | The image pull policy of spark-kubernetes-operator. | IfNotPresent |
-| image.tag | The image tag of spark-kubernetes-operator. | 0.5.0-SNAPSHOT |
-| image.digest | The image digest of spark-kubernetes-operator. If set then it takes precedence and the image tag will be ignored. | |
-| imagePullSecrets | The image pull secrets of spark-kubernetes-operator. | |
-| operatorDeployment.replica | Operator replica count. Must be 1 unless leader election is configured. | 1 |
-| operatorDeployment.strategy.type | Operator pod upgrade strategy. Must be Recreate unless leader election is configured. | Recreate |
-| operatorDeployment.operatorPod.annotations | Custom annotations to be added to the operator pod | |
-| operatorDeployment.operatorPod.labels | Custom labels to be added to the operator pod | |
-| operatorDeployment.operatorPod.nodeSelector | Custom nodeSelector to be added to the operator pod. | |
-| operatorDeployment.operatorPod.topologySpreadConstraints | Custom topologySpreadConstraints to be added to the operator pod. | |
-| operatorDeployment.operatorPod.dnsConfig | DNS configuration to be used by the operator pod. | |
-| operatorDeployment.operatorPod.volumes | Additional volumes to be added to the operator pod. | |
-| operatorDeployment.operatorPod.priorityClassName | Priority class name to be used for the operator pod | |
-| operatorDeployment.operatorPod.securityContext | Security context overrides for the operator pod | |
-| operatorDeployment.operatorContainer.jvmArgs | JVM arg override for the operator container. | `"-Dfile.encoding=UTF8 -XX:+ExitOnOutOfMemoryError"` |
-| operatorDeployment.operatorContainer.env | Custom env to be added to the operator container. | |
-| operatorDeployment.operatorContainer.envFrom | Custom envFrom to be added to the operator container, e.g. for downward API. | |
-| operatorDeployment.operatorContainer.probes | Probe config for the operator container. | |
-| operatorDeployment.operatorContainer.securityContext | Security context overrides for the operator container. | run as non root for baseline security standard compliance |
-| operatorDeployment.operatorContainer.resources | Resources for the operator container. | memory 4Gi, ephemeral storage 2Gi and 1 cpu |
-| operatorDeployment.additionalContainers | Additional containers to be added to the operator pod, e.g. sidecar. | |
-| operatorRbac.serviceAccount.create | Whether to create service account for operator to use. | true |
-| operatorRbac.serviceAccount.name | Name of the operator Role. | `"spark-operator"` |
-| operatorRbac.clusterRole.create | Whether to create ClusterRole for operator to use. | true |
-| operatorRbac.clusterRole.name | Name of the operator ClusterRole. | `"spark-operator-clusterrole"` |
-| operatorRbac.clusterRoleBinding.create | Whether to create ClusterRoleBinding for operator to use. | true |
-| operatorRbac.clusterRoleBinding.name | Name of the operator ClusterRoleBinding. | `"spark-operator-clusterrolebinding"` |
-| operatorRbac.role.create | Whether to create Role for operator to use in each workload namespace(s). At least one of `clusterRole.create` or `role.create` should be enabled | false |
-| operatorRbac.role.name | Name of the operator Role | `"spark-operator-role"` |
-| operatorRbac.roleBinding.create | Whether to create RoleBinding for operator to use. At least one of `clusterRoleBinding.create` or `roleBinding.create` should be enabled | false |
-| operatorRbac.roleBinding.name | Name of the operator RoleBinding in each workload namespace(s). | `"spark-operator-rolebinding"` |
-| operatorRbac.roleBinding.roleRef | RoleRef for the created Operator RoleBinding. Override this when you want the created RoleBinding refer to ClusterRole / Role that's different from the default operator Role. | Refers to default `operatorRbac.role.name` |
-| operatorRbac.configManagement.create | Enable this to create a Role for operator configuration management (hot property loading and leader election). | true |
-| operatorRbac.configManagement.roleName | Role name for operator configuration management. | `spark-operator-config-role` |
-| operatorRbac.configManagement.roleBinding | RoleBinding name for operator configuration management. | `"spark-operator-config-monitor-role-binding"` |
-| operatorRbac.labels | Labels to be applied on all created `operatorRbac` resources. | `"app.kubernetes.io/component": "operator-rbac"` |
-| workloadResources.namespaces.create | Whether to create dedicated namespaces for Spark workload. | true |
-| workloadResources.namespaces.overrideWatchedNamespaces | When enabled, operator would by default only watch namespace(s) provided in data field. | true |
-| workloadResources.namespaces.data | List of namespaces to create for Spark workload. The chart namespace would be used if this is empty. | |
-| workloadResources.clusterRole.create | When enabled, a ClusterRole would be created for Spark workload to use. | true |
-| workloadResources.clusterRole.name | Name of the Spark workload ClusterRole. | "spark-workload-clusterrole" |
-| workloadResources.role.create | When enabled, a Role would be created in each namespace for Spark workload. At least one of `clusterRole.create` or `role.create` should be enabled. | false |
-| workloadResources.role.name | Name for Spark workload Role. | "spark-workload-role" |
-| workloadResources.roleBinding.create | When enabled, a RoleBinding would be created in each namespace for Spark workload. This shall be enabled unless access is configured from 3rd party. | true |
-| workloadResources.roleBinding.name | Name of the Spark workload RoleBinding. | "spark-workload-rolebinding" |
-| workloadResources.serviceAccounts.create | Whether to create a service account for Spark workload. | true |
-| workloadResources.serviceAccounts.name | The name of Spark workload service account. | `spark` |
-| workloadResources.labels | Labels to be applied for all workload resources. | `"app.kubernetes.io/component": "spark-workload"` |
-| workloadResources.annotations | Annotations to be applied for all workload resources. | `"helm.sh/resource-policy": keep` |
-| workloadResources.sparkApplicationSentinel.create | If enabled, sentinel resources will be created for operator to watch and reconcile for the health probe purpose. | false |
-| workloadResources.sparkApplicationSentinel.sentinelNamespaces | A list of namespaces where sentinel resources will be created in. Note that these namespaces have to be a subset of `workloadResources.namespaces.data`. | |
-| operatorConfiguration.append | If set to true, below conf file & properties would be appended to default conf. Otherwise, they would override default properties. | true |
-| operatorConfiguration.log4j2.properties | The default log4j2 configuration. | Refer default [log4j2.properties](../build-tools/helm/spark-kubernetes-operator/conf/log4j2.properties) |
-| operatorConfiguration.spark-operator.properties | The default operator configuration. | |
-| operatorConfiguration.metrics.properties | The default operator metrics (sink) configuration. | |
-| operatorConfiguration.dynamicConfig.create | If set to true, a config map would be created & watched by operator as source of truth for hot properties loading. | false |
-| operatorConfiguration.dynamicConfig.enable | If set to true, operator would honor the created config map as source of truth for hot properties loading. | false |
-| operatorConfiguration.dynamicConfig.annotations | Annotations to be applied for the dynamicConfig resources. | `"helm.sh/resource-policy": keep` |
-| operatorConfiguration.dynamicConfig.data | Data field (key-value pairs) that acts as hot properties in the config map. | `spark.kubernetes.operator.reconciler.intervalSeconds: "60"` |
+| Parameters | Description | Default value |
+|------------|-------------|---------------|
+| image.repository | The image repository of spark-kubernetes-operator. | apache/spark-kubernetes-operator |
+| image.pullPolicy | The image pull policy of spark-kubernetes-operator. | IfNotPresent |
+| image.tag | The image tag of spark-kubernetes-operator. | 0.5.0-SNAPSHOT |
+| image.digest | The image digest of spark-kubernetes-operator. If set then it takes precedence and the image tag will be ignored. | |
+| imagePullSecrets | The image pull secrets of spark-kubernetes-operator. | |
+| operatorDeployment.replica | Operator replica count. Must be 1 unless leader election is configured. | 1 |
+| operatorDeployment.strategy.type | Operator pod upgrade strategy. Must be Recreate unless leader election is configured. | Recreate |
+| operatorDeployment.operatorPod.annotations | Custom annotations to be added to the operator pod | |
+| operatorDeployment.operatorPod.labels | Custom labels to be added to the operator pod | |
+| operatorDeployment.operatorPod.nodeSelector | Custom nodeSelector to be added to the operator pod. | |
+| operatorDeployment.operatorPod.topologySpreadConstraints | Custom topologySpreadConstraints to be added to the operator pod. | |
+| operatorDeployment.operatorPod.dnsConfig | DNS configuration to be used by the operator pod. | |
+| operatorDeployment.operatorPod.volumes | Additional volumes to be added to the operator pod. | |
+| operatorDeployment.operatorPod.priorityClassName | Priority class name to be used for the operator pod | |
+| operatorDeployment.operatorPod.securityContext | Security context overrides for the operator pod | |
+| operatorDeployment.operatorPod.operatorContainer.jvmArgs | JVM arg override for the operator container. | `"-Dfile.encoding=UTF8 -XX:+ExitOnOutOfMemoryError"` |
+| operatorDeployment.operatorPod.operatorContainer.env | Custom env to be added to the operator container. | |
+| operatorDeployment.operatorPod.operatorContainer.envFrom | Custom envFrom to be added to the operator container, e.g. for downward API. | |
+| operatorDeployment.operatorPod.operatorContainer.probes | Probe config for the operator container. | |
+| operatorDeployment.operatorPod.operatorContainer.securityContext | Security context overrides for the operator container. | run as non root for baseline security standard compliance |
+| operatorDeployment.operatorPod.operatorContainer.resources | Resources for the operator container. | memory 4Gi, ephemeral storage 2Gi and 1 cpu |
+| operatorDeployment.additionalContainers | Additional containers to be added to the operator pod, e.g. sidecar. | |
+| operatorRbac.serviceAccount.create | Whether to create service account for operator to use. | true |
+| operatorRbac.serviceAccount.name | Name of the operator Role. | `"spark-operator"` |
+| operatorRbac.clusterRole.create | Whether to create ClusterRole for operator to use. | true |
+| operatorRbac.clusterRole.name | Name of the operator ClusterRole. | `"spark-operator-clusterrole"` |
+| operatorRbac.clusterRoleBinding.create | Whether to create ClusterRoleBinding for operator to use. | true |
+| operatorRbac.clusterRoleBinding.name | Name of the operator ClusterRoleBinding. | `"spark-operator-clusterrolebinding"` |
+| operatorRbac.role.create | Whether to create Role for operator to use in each workload namespace(s). At least one of `clusterRole.create` or `role.create` should be enabled | false |
+| operatorRbac.role.name | Name of the operator Role | `"spark-operator-role"` |
+| operatorRbac.roleBinding.create | Whether to create RoleBinding for operator to use. At least one of `clusterRoleBinding.create` or `roleBinding.create` should be enabled | false |
+| operatorRbac.roleBinding.name | Name of the operator RoleBinding in each workload namespace(s). | `"spark-operator-rolebinding"` |
+| operatorRbac.roleBinding.roleRef | RoleRef for the created Operator RoleBinding. Override this when you want the created RoleBinding refer to ClusterRole / Role that's different from the default operator Role. | Refers to default `operatorRbac.role.name` |
+| operatorRbac.configManagement.create | Enable this to create a Role for operator configuration management (hot property loading and leader election). | true |
+| operatorRbac.configManagement.roleName | Role name for operator configuration management. | `spark-operator-config-role` |
+| operatorRbac.configManagement.roleBinding | RoleBinding name for operator configuration management. | `"spark-operator-config-monitor-role-binding"` |
+| operatorRbac.labels | Labels to be applied on all created `operatorRbac` resources. | `"app.kubernetes.io/component": "operator-rbac"` |
+| workloadResources.namespaces.create | Whether to create dedicated namespaces for Spark workload. | true |
+| workloadResources.namespaces.overrideWatchedNamespaces | When enabled, operator would by default only watch namespace(s) provided in data field. | true |
+| workloadResources.namespaces.data | List of namespaces to create for Spark workload. The chart namespace would be used if this is empty. | |
+| workloadResources.clusterRole.create | When enabled, a ClusterRole would be created for Spark workload to use. | true |
+| workloadResources.clusterRole.name | Name of the Spark workload ClusterRole. | "spark-workload-clusterrole" |
+| workloadResources.role.create | When enabled, a Role would be created in each namespace for Spark workload. At least one of `clusterRole.create` or `role.create` should be enabled. | false |
+| workloadResources.role.name | Name for Spark workload Role. | "spark-workload-role" |
+| workloadResources.roleBinding.create | When enabled, a RoleBinding would be created in each namespace for Spark workload. This shall be enabled unless access is configured from 3rd party. | true |
+| workloadResources.roleBinding.name | Name of the Spark workload RoleBinding. | "spark-workload-rolebinding" |
+| workloadResources.serviceAccounts.create | Whether to create a service account for Spark workload. | true |
+| workloadResources.serviceAccounts.name | The name of Spark workload service account. | `spark` |
+| workloadResources.labels | Labels to be applied for all workload resources. | `"app.kubernetes.io/component": "spark-workload"` |
+| workloadResources.annotations | Annotations to be applied for all workload resources. | `"helm.sh/resource-policy": keep` |
+| workloadResources.sparkApplicationSentinel.create | If enabled, sentinel resources will be created for operator to watch and reconcile for the health probe purpose. | false |
+| workloadResources.sparkApplicationSentinel.sentinelNamespaces | A list of namespaces where sentinel resources will be created in. Note that these namespaces have to be a subset of `workloadResources.namespaces.data`. | |
+| operatorConfiguration.append | If set to true, below conf file & properties would be appended to default conf. Otherwise, they would override default properties. | true |
+| operatorConfiguration.log4j2.properties | The default log4j2 configuration. | Refer default [log4j2.properties](../build-tools/helm/spark-kubernetes-operator/conf/log4j2.properties) |
+| operatorConfiguration.spark-operator.properties | The default operator configuration. | |
+| operatorConfiguration.metrics.properties | The default operator metrics (sink) configuration. | |
+| operatorConfiguration.dynamicConfig.create | If set to true, a config map would be created & watched by operator as source of truth for hot properties loading. | false |
+| operatorConfiguration.dynamicConfig.enable | If set to true, operator would honor the created config map as source of truth for hot properties loading. | false |
+| operatorConfiguration.dynamicConfig.annotations | Annotations to be applied for the dynamicConfig resources. | `"helm.sh/resource-policy": keep` |
+| operatorConfiguration.dynamicConfig.data | Data field (key-value pairs) that acts as hot properties in the config map. | `spark.kubernetes.operator.reconciler.intervalSeconds: "60"` |
For more information, check the [Helm documentation](https://helm.sh/docs/helm/helm_install/).
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]