MonkeyCanCode commented on code in PR #3812:
URL: https://github.com/apache/polaris/pull/3812#discussion_r2825153386
##########
Makefile:
##########
@@ -287,6 +290,32 @@ helm-lint: check-dependencies ## Run Helm chart lint check
@ct lint --charts helm/polaris
@echo "--- Helm chart linting complete ---"
+helm-fixtures: DEPENDENCIES := kubectl
+.PHONY: helm-fixtures
+helm-fixtures: check-dependencies ## Create namespace and deploy fixtures for Helm chart testing
+ @echo "--- Creating namespace and deploying fixtures ---"
+ @kubectl create namespace polaris --dry-run=client -o yaml | kubectl apply -f -
+ @kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/
+ @echo "--- Waiting for database pods to be ready ---"
+ @kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s
+ @kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=mongodb --timeout=120s
+ @echo "--- Fixtures deployed and ready ---"
+
+helm-fixtures-cleanup: DEPENDENCIES := kubectl
+.PHONY: helm-fixtures-cleanup
+helm-fixtures-cleanup: check-dependencies ## Remove fixtures and namespace for Helm chart testing
+ @echo "--- Removing fixtures and namespace ---"
+ @kubectl delete --namespace polaris -f helm/polaris/ci/fixtures/ --ignore-not-found
+ @kubectl delete namespace polaris --ignore-not-found
Review Comment:
As we are deleting the namespace here already, we may as well just do `kubectl delete namespace polaris --wait=true --ignore-not-found` so it will cascade-delete the resources within the namespace?
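If the suggestion is adopted, the cleanup target could collapse to a single cascade delete; a sketch (not the committed Makefile):

```makefile
helm-fixtures-cleanup: DEPENDENCIES := kubectl
.PHONY: helm-fixtures-cleanup
helm-fixtures-cleanup: check-dependencies ## Remove fixtures and namespace for Helm chart testing
	@echo "--- Removing fixtures and namespace ---"
	# Deleting the namespace cascades to every resource inside it,
	# so the per-fixture delete becomes unnecessary.
	@kubectl delete namespace polaris --wait=true --ignore-not-found
```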
##########
site/content/in-dev/unreleased/helm-chart/production.md:
##########
@@ -0,0 +1,215 @@
+---
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+title: Production Configuration
+linkTitle: Production Configuration
+type: docs
+weight: 200
+---
+
+This guide provides instructions for configuring the Apache Polaris Helm chart for a production environment. For a full list of chart values, see the [Chart Reference]({{% relref "reference" %}}) page.
+
+## Prerequisites
+
+- A Kubernetes cluster (1.33+ recommended)
+- Helm 3.x or 4.x installed
+- `kubectl` configured to access your cluster
+- A PostgreSQL or MongoDB database
+
+## Adding the Helm Repository
+
+Add the official Apache Polaris Helm repository:
+
+```bash
+helm repo add polaris https://downloads.apache.org/incubator/polaris/helm-chart
+helm repo update
+```
+
+## Installation
+
+Create a `values.yaml` file with your production configuration. See the [Chart Reference]({{% relref "reference" %}}) page for all available configuration options.
+
+Create the target namespace and install the chart:
+
+```bash
+kubectl create namespace polaris
+helm install polaris polaris/polaris --namespace polaris --devel --values your-production-values.yaml
+```
+
+{{< alert note >}}
+The `--devel` flag is required when installing while Polaris is in the incubation phase. Helm treats the `-incubating` suffix as a pre-release under SemVer rules, and by default skips chart versions that are not stable releases.
+{{< /alert >}}
+
+Verify the installation:
+
+```bash
+helm test polaris --namespace polaris
+```
+
+## Production Configuration
+
+The default Helm chart values are suitable for development and testing, but they are not recommended for production. The following sections describe the key areas to configure for a production deployment.
+
+### Authentication
+
+Polaris supports internal authentication (with RSA key pairs or symmetric keys) and external authentication via OIDC with identity providers like Keycloak, Okta, Azure AD, and others.
+
+By default, the Polaris Helm chart uses internal authentication with auto-generated keys. In a multi-replica production environment, all Polaris pods must share the same token signing keys to avoid token validation failures.
+
+See the [Authentication]({{% relref "authentication" %}}) page for detailed configuration instructions.
+
+### Persistence
+
+By default, the Polaris Helm chart uses the `in-memory` metastore, which is not suitable for production. A persistent backend must be configured to ensure data is not lost when pods restart.
+
+Polaris supports PostgreSQL (JDBC) and MongoDB (NoSQL, beta) as production-ready persistence backends. See the [Persistence]({{% relref "persistence" %}}) page for detailed configuration instructions.
+
+### Networking
+
+For configuring external access to Polaris using the Gateway API or Ingress, see the [Services & Networking]({{% relref "networking" %}}) guide.
+
+### Resource Management
+
+For a production environment, it is crucial to define resource requests and limits for the Polaris pods. Resource requests ensure that pods are allocated enough resources to run, while limits prevent them from consuming too many resources on the node.
+
+Define resource requests and limits for the Polaris pods:
+
+```yaml
+resources:
+ requests:
+ memory: "8Gi"
+ cpu: "4"
+ limits:
+ memory: "8Gi"
+ cpu: "4"
+```
+
+Adjust these values based on expected workload and available cluster resources.
+
+### Scaling
+
+For high availability, multiple replicas of the Polaris server can be run. This requires a persistent backend to be configured as described above.
+
+#### Static Replicas
+
+`replicaCount` must be set to the desired number of pods:
+
+```yaml
+replicaCount: 3
+```
+
+#### Autoscaling
+
+Horizontal autoscaling can be enabled to define the minimum and maximum number of replicas, and CPU or memory utilization targets:
+
+```yaml
+autoscaling:
+ enabled: true
+ minReplicas: 2
+ maxReplicas: 5
+ targetCPUUtilizationPercentage: 80
+ targetMemoryUtilizationPercentage: 80
+```
+
+#### Pod Topology Spreading
+
+For better fault tolerance, `topologySpreadConstraints` can be used to distribute pods across different nodes, racks, or availability zones. This helps prevent a single infrastructure failure from taking down all Polaris replicas.
+
+Here is an example that spreads pods across different zones and keeps the number of pods in each zone from differing by more than one:
+
+```yaml
+topologySpreadConstraints:
+ - maxSkew: 1
+ topologyKey: "topology.kubernetes.io/zone"
+ whenUnsatisfiable: "DoNotSchedule"
+```
+
+### Pod Priority
+
+In a production environment, it is advisable to set a `priorityClassName` for the Polaris pods. This ensures that the Kubernetes scheduler gives them priority over less critical workloads, and helps prevent them from being evicted from a node that is running out of resources.
+
+First, a `PriorityClass` must be created in the cluster. For example:
+
+```yaml
+apiVersion: scheduling.k8s.io/v1
+kind: PriorityClass
+metadata:
+ name: polaris-high-priority
+value: 1000000
+globalDefault: false
+description: "This priority class should be used for Polaris service pods only."
+```
+
+Then, the `priorityClassName` can be set in the `values.yaml` file:
+
+```yaml
+priorityClassName: "polaris-high-priority"
+```
+
+## Bootstrapping Realms
+
+When installing Polaris for the first time, it is necessary to bootstrap each realm using the Polaris admin tool.
+
+For more information on bootstrapping realms, see the [Admin Tool]({{% relref "../admin-tool#bootstrapping-realms-and-principal-credentials" %}}) guide.
+
+Example for the PostgreSQL backend:
+
+```bash
+kubectl run polaris-bootstrap \
+ -n polaris \
+ --image=apache/polaris-admin-tool:latest \
+ --restart=Never \
+ --rm -it \
+ --env="polaris.persistence.type=relational-jdbc" \
+ --env="quarkus.datasource.username=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.username}' | base64 --decode)" \
+ --env="quarkus.datasource.password=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.password}' | base64 --decode)" \
+ --env="quarkus.datasource.jdbc.url=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.jdbcUrl}' | base64 --decode)" \
+ -- \
+ bootstrap -r POLARIS -c POLARIS,root,$ROOT_PASSWORD
Review Comment:
Should we add a statement here to clarify the relationship between `db-name` and `realm` (which we already stated in http://localhost:1313/in-dev/unreleased/realm/)? For people who may jump straight to `production.md` and `persistence.md`, the magic word "POLARIS" may be confusing and lead to issues.
##########
site/content/in-dev/unreleased/helm-chart/production.md:
##########
@@ -0,0 +1,215 @@
+---
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+title: Production Configuration
+linkTitle: Production Configuration
+type: docs
+weight: 200
+---
+
+This guide provides instructions for configuring the Apache Polaris Helm chart for a production environment. For a full list of chart values, see the [Chart Reference]({{% relref "reference" %}}) page.
+
+## Prerequisites
+
+- A Kubernetes cluster (1.33+ recommended)
+- Helm 3.x or 4.x installed
+- `kubectl` configured to access your cluster
+- A PostgreSQL or MongoDB database
+
+## Adding the Helm Repository
+
+Add the official Apache Polaris Helm repository:
+
+```bash
+helm repo add polaris https://downloads.apache.org/incubator/polaris/helm-chart
+helm repo update
+```
+
+## Installation
+
+Create a `values.yaml` file with your production configuration. See the [Chart Reference]({{% relref "reference" %}}) page for all available configuration options.
+
+Create the target namespace and install the chart:
+
+```bash
+kubectl create namespace polaris
+helm install polaris polaris/polaris --namespace polaris --devel --values your-production-values.yaml
+```
+
+{{< alert note >}}
+The `--devel` flag is required when installing while Polaris is in the incubation phase. Helm treats the `-incubating` suffix as a pre-release under SemVer rules, and by default skips chart versions that are not stable releases.
+{{< /alert >}}
+
+Verify the installation:
+
+```bash
+helm test polaris --namespace polaris
+```
+
+## Production Configuration
+
+The default Helm chart values are suitable for development and testing, but they are not recommended for production. The following sections describe the key areas to configure for a production deployment.
+
+### Authentication
+
+Polaris supports internal authentication (with RSA key pairs or symmetric keys) and external authentication via OIDC with identity providers like Keycloak, Okta, Azure AD, and others.
+
+By default, the Polaris Helm chart uses internal authentication with auto-generated keys. In a multi-replica production environment, all Polaris pods must share the same token signing keys to avoid token validation failures.
+
+See the [Authentication]({{% relref "authentication" %}}) page for detailed configuration instructions.
+
+### Persistence
+
+By default, the Polaris Helm chart uses the `in-memory` metastore, which is not suitable for production. A persistent backend must be configured to ensure data is not lost when pods restart.
+
+Polaris supports PostgreSQL (JDBC) and MongoDB (NoSQL, beta) as production-ready persistence backends. See the [Persistence]({{% relref "persistence" %}}) page for detailed configuration instructions.
+
+### Networking
+
+For configuring external access to Polaris using the Gateway API or Ingress, see the [Services & Networking]({{% relref "networking" %}}) guide.
+
+### Resource Management
+
+For a production environment, it is crucial to define resource requests and limits for the Polaris pods. Resource requests ensure that pods are allocated enough resources to run, while limits prevent them from consuming too many resources on the node.
+
+Define resource requests and limits for the Polaris pods:
+
+```yaml
+resources:
+ requests:
+ memory: "8Gi"
+ cpu: "4"
+ limits:
+ memory: "8Gi"
+ cpu: "4"
+```
+
+Adjust these values based on expected workload and available cluster resources.
+
+### Scaling
+
+For high availability, multiple replicas of the Polaris server can be run. This requires a persistent backend to be configured as described above.
+
+#### Static Replicas
+
+`replicaCount` must be set to the desired number of pods:
+
+```yaml
+replicaCount: 3
+```
+
+#### Autoscaling
+
+Horizontal autoscaling can be enabled to define the minimum and maximum number of replicas, and CPU or memory utilization targets:
+
+```yaml
+autoscaling:
+ enabled: true
+ minReplicas: 2
+ maxReplicas: 5
+ targetCPUUtilizationPercentage: 80
+ targetMemoryUtilizationPercentage: 80
+```
+
+#### Pod Topology Spreading
+
+For better fault tolerance, `topologySpreadConstraints` can be used to distribute pods across different nodes, racks, or availability zones. This helps prevent a single infrastructure failure from taking down all Polaris replicas.
+
+Here is an example that spreads pods across different zones and keeps the number of pods in each zone from differing by more than one:
+
+```yaml
+topologySpreadConstraints:
+ - maxSkew: 1
+ topologyKey: "topology.kubernetes.io/zone"
+ whenUnsatisfiable: "DoNotSchedule"
+```
+
+### Pod Priority
+
+In a production environment, it is advisable to set a `priorityClassName` for the Polaris pods. This ensures that the Kubernetes scheduler gives them priority over less critical workloads, and helps prevent them from being evicted from a node that is running out of resources.
+
+First, a `PriorityClass` must be created in the cluster. For example:
+
+```yaml
+apiVersion: scheduling.k8s.io/v1
+kind: PriorityClass
+metadata:
+ name: polaris-high-priority
+value: 1000000
+globalDefault: false
+description: "This priority class should be used for Polaris service pods only."
+```
+
+Then, the `priorityClassName` can be set in the `values.yaml` file:
+
+```yaml
+priorityClassName: "polaris-high-priority"
+```
+
+## Bootstrapping Realms
+
+When installing Polaris for the first time, it is necessary to bootstrap each realm using the Polaris admin tool.
+
+For more information on bootstrapping realms, see the [Admin Tool]({{% relref "../admin-tool#bootstrapping-realms-and-principal-credentials" %}}) guide.
+
+Example for the PostgreSQL backend:
+
+```bash
+kubectl run polaris-bootstrap \
+ -n polaris \
+ --image=apache/polaris-admin-tool:latest \
+ --restart=Never \
+ --rm -it \
+ --env="polaris.persistence.type=relational-jdbc" \
+ --env="quarkus.datasource.username=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.username}' | base64 --decode)" \
+ --env="quarkus.datasource.password=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.password}' | base64 --decode)" \
+ --env="quarkus.datasource.jdbc.url=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.jdbcUrl}' | base64 --decode)" \
+ -- \
+ bootstrap -r POLARIS -c POLARIS,root,$ROOT_PASSWORD
+```
+
+Example for the NoSQL (MongoDB) backend:
+
+```bash
+kubectl run polaris-bootstrap \
+ -n polaris \
+ --image=apache/polaris-admin-tool:latest \
+ --restart=Never \
+ --rm -it \
+ --env="polaris.persistence.type=nosql" \
+ --env="polaris.persistence.nosql.backend=MongoDb" \
+ --env="quarkus.mongodb.database=polaris" \
Review Comment:
I could be wrong here, but I thought the db name is case-sensitive in MongoDB. In that case, do we need to change `quarkus.mongodb.database`? Otherwise, we will need to change the realm on line 201 to `polaris` from `POLARIS` instead?
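For reference, MongoDB database names are indeed case-sensitive, so `polaris` and `POLARIS` would be two distinct databases. A quick way to check which database the bootstrap actually wrote to (a sketch; assumes `mongosh` is installed and `$CONNECTION_STRING` points at the fixture instance):

```bash
# Print every database name on the server; both "polaris" and "POLARIS"
# would appear if writes had gone to both.
mongosh "$CONNECTION_STRING" --quiet \
  --eval 'db.adminCommand({listDatabases: 1}).databases.forEach(d => print(d.name))'
```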
##########
site/content/in-dev/unreleased/helm-chart/dev.md:
##########
@@ -0,0 +1,264 @@
+---
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+title: Development & Testing
+linkTitle: Development & Testing
+type: docs
+weight: 800
+---
+
+This guide provides instructions for developers who want to work with the Polaris Helm chart locally, including setting up a local Kubernetes cluster, building from source, and running tests.
+
+## Local Development with Minikube
+
+### Prerequisites
+
+- [Minikube](https://minikube.sigs.k8s.io/docs/start/) installed
+- [Helm](https://helm.sh/docs/intro/install/) 3.x ou 4.x installed
+- [kubectl](https://kubernetes.io/docs/tasks/tools/) installed
+
+### Starting Minikube
+
+Start a local Minikube cluster:
+
+```bash
+minikube start
+```
+
+### Installing from the Official Repository
+
+Add the official Polaris Helm repository and install the chart:
+
+```bash
+helm repo add polaris https://downloads.apache.org/incubator/polaris/helm-chart
+helm repo update
+kubectl create namespace polaris
+helm install polaris polaris/polaris --namespace polaris --devel
+```
+
+{{< alert note >}}
+The `--devel` flag is required while Polaris is in the incubation phase.
+{{< /alert >}}
+
+Verify the installation:
+
+```bash
+helm test polaris --namespace polaris
+```
+
+## Building and Installing from Source
+
+This section assumes you have cloned the Polaris Git repository and set up prerequisites to build the project. See the [Install Dependencies]({{% relref "../getting-started/install-dependencies" %}}) guide for details.
+
+### Building Container Images
+
+Start Minikube and configure Docker to use Minikube's Docker daemon:
+
+```bash
+minikube start
+eval $(minikube docker-env)
+```
+
+Build the container images. The Polaris server image is required; the admin tool image is optional (useful for bootstrapping realms if necessary, see below):
+
+```bash
+./gradlew \
+ :polaris-server:assemble \
+ :polaris-server:quarkusAppPartsBuild --rerun \
+ :polaris-admin:assemble \
+ :polaris-admin:quarkusAppPartsBuild --rerun \
+ -Dquarkus.container-image.build=true
+```
+
+Alternatively, you can use Make to start Minikube and build the images:
+
+```bash
+make minikube-start-cluster
+make build
+make minikube-load-images
+```
+
+### Creating the Namespace and Fixtures
+
+Create the namespace and deploy fixtures:
+
+```bash
+kubectl create namespace polaris
+kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/
+```
+
+The fixtures deploy a PostgreSQL instance and a MongoDB instance that can be used for testing purposes. If you plan to test with persistence backends, wait for the database pods to be ready:
+
+```bash
+kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s
+kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=mongodb --timeout=120s
+```
+
+Alternatively, you can use Make to deploy the fixtures:
+
+```bash
+make helm-fixtures
+```
+
+{{< alert warning >}}
+The fixtures in `helm/polaris/ci/fixtures/` are intended for testing purposes only and are not suitable for production use. In particular, the PostgreSQL and MongoDB instances are configured without encryption or security and should never be deployed as is in production!
+{{< /alert >}}
+
+### Installing the Chart Manually
+
+Install the chart from the local source, using either the `values.yaml` file, or any of the values files in `helm/polaris/ci/`. Example:
+
+```bash
+# Non-persistent backend
+helm upgrade --install --namespace polaris polaris helm/polaris
+
+# Persistent backend
+helm upgrade --install --namespace polaris \
+ --values helm/polaris/ci/persistence-values.yaml \
+ polaris helm/polaris
+```
+
+### Bootstrapping Realms for Development
+
+When doing ad hoc testing with a persistent backend, you may want to bootstrap all the realms using the admin tool. For more information, see the [Admin Tool]({{% relref "../admin-tool#bootstrapping-realms-and-principal-credentials" %}}) guide.
+
+Example for the PostgreSQL backend:
+
+```bash
+kubectl run polaris-bootstrap \
+ -n polaris \
+ --image=apache/polaris-admin-tool:latest \
+ --restart=Never \
+ --rm -it \
+ --env="polaris.persistence.type=relational-jdbc" \
+ --env="quarkus.datasource.username=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.username}' | base64 --decode)" \
+ --env="quarkus.datasource.password=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.password}' | base64 --decode)" \
+ --env="quarkus.datasource.jdbc.url=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.jdbcUrl}' | base64 --decode)" \
+ -- \
+ bootstrap -r POLARIS -c POLARIS,root,s3cr3t
+```
+
+Example for the NoSQL (MongoDB) backend:
+
+```bash
+kubectl run polaris-bootstrap \
+ -n polaris \
+ --image=apache/polaris-admin-tool:latest \
+ --restart=Never \
+ --rm -it \
+ --env="polaris.persistence.type=nosql" \
+ --env="polaris.persistence.nosql.backend=MongoDb" \
+ --env="quarkus.mongodb.database=polaris" \
+ --env="quarkus.mongodb.connection-string=$(kubectl get secret polaris-nosql-persistence -n polaris -o jsonpath='{.data.connectionString}' | base64 --decode)" \
+ -- \
+ bootstrap -r POLARIS -c POLARIS,root,s3cr3t
+```
+
+Both commands above bootstrap a realm named `POLARIS` with root password `s3cr3t`.
+
+{{< alert note >}}
+The Helm integration tests (see below) do not require bootstrapping realms, as they do not (yet) exercise Polaris REST APIs.
+{{< /alert >}}
+
+## Running Chart Tests
+
+### Prerequisites
+
+Install the required tools:
+
+```bash
+brew install chart-testing
+brew install yamllint
+make helm-install-plugins
+```
+
+The following tools will be installed:
+
+* [Helm Unit Test](https://github.com/helm-unittest/helm-unittest)
+* [Helm JSON Schema](https://github.com/losisin/helm-values-schema-json)
+* [Chart Testing](https://github.com/helm/chart-testing)
+* [yamllint](https://github.com/adrienverge/yamllint)
+
+### Unit Tests
+
+Helm unit tests do not require a Kubernetes cluster. Run them from the Polaris repo root:
+
+```bash
+make helm-unittest
+```
+
+### Linting
+
+Lint the chart using the Chart Testing tool:
+
+```bash
+make helm-lint
+```
+
+### Making Changes to Documentation or Schema
+
+If you make changes to the Helm chart's `values.yaml` file, you need to regenerate the documentation and schema. Run from the Polaris repo root:
+
+```bash
+make helm-schema-generate helm-doc-generate
+```
+
+Alternatively, you can run:
+
+```bash
+make helm
+```
+
+This will run all Helm-related targets, including unit tests, linting, schema generation, and documentation generation (it will not run integration tests, though).
+
+### Integration Tests
+
+Integration tests are tests executed with the `ct install` tool. They require a Kubernetes cluster with fixtures deployed, as explained in the [Creating the Namespace and Fixtures](#creating-the-namespace-and-fixtures) section above.
+
+The simplest way to run the integration tests is to use Make:
+
+```bash
+make helm-integration-test
+```
+
+The above command will build and load the images into Minikube, deploy the fixtures, and run all `ct install` tests.
+
+## Cleanup
+
+Uninstall the chart, remove resources and delete the namespace:
+
+```bash
+helm uninstall --namespace polaris polaris
+kubectl delete --namespace polaris -f helm/polaris/ci/fixtures/
Review Comment:
Same as posted on the Makefile: should we just delete the namespace directly?
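If so, the cleanup section could be condensed along these lines (a sketch, assuming the cascade-delete suggestion from the Makefile comment is adopted):

```bash
helm uninstall --namespace polaris polaris
# Deleting the namespace cascades to the fixtures deployed inside it,
# so no separate per-fixture delete is needed.
kubectl delete namespace polaris --wait=true --ignore-not-found
```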
##########
site/content/in-dev/unreleased/helm-chart/dev.md:
##########
@@ -0,0 +1,264 @@
+---
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+title: Development & Testing
+linkTitle: Development & Testing
+type: docs
+weight: 800
+---
+
+This guide provides instructions for developers who want to work with the Polaris Helm chart locally, including setting up a local Kubernetes cluster, building from source, and running tests.
+
+## Local Development with Minikube
+
+### Prerequisites
+
+- [Minikube](https://minikube.sigs.k8s.io/docs/start/) installed
+- [Helm](https://helm.sh/docs/intro/install/) 3.x ou 4.x installed
Review Comment:
typo on `ou`
##########
site/content/in-dev/unreleased/helm-chart/dev.md:
##########
@@ -0,0 +1,264 @@
+---
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+title: Development & Testing
+linkTitle: Development & Testing
+type: docs
+weight: 800
+---
+
+This guide provides instructions for developers who want to work with the Polaris Helm chart locally, including setting up a local Kubernetes cluster, building from source, and running tests.
+
+## Local Development with Minikube
+
+### Prerequisites
+
+- [Minikube](https://minikube.sigs.k8s.io/docs/start/) installed
+- [Helm](https://helm.sh/docs/intro/install/) 3.x ou 4.x installed
+- [kubectl](https://kubernetes.io/docs/tasks/tools/) installed
+
+### Starting Minikube
+
+Start a local Minikube cluster:
+
+```bash
+minikube start
+```
+
+### Installing from the Official Repository
+
+Add the official Polaris Helm repository and install the chart:
+
+```bash
+helm repo add polaris https://downloads.apache.org/incubator/polaris/helm-chart
+helm repo update
+kubectl create namespace polaris
+helm install polaris polaris/polaris --namespace polaris --devel
+```
+
+{{< alert note >}}
+The `--devel` flag is required while Polaris is in the incubation phase.
+{{< /alert >}}
+
+Verify the installation:
+
+```bash
+helm test polaris --namespace polaris
+```
+
+## Building and Installing from Source
+
+This section assumes you have cloned the Polaris Git repository and set up prerequisites to build the project. See the [Install Dependencies]({{% relref "../getting-started/install-dependencies" %}}) guide for details.
+
+### Building Container Images
+
+Start Minikube and configure Docker to use Minikube's Docker daemon:
+
+```bash
+minikube start
+eval $(minikube docker-env)
+```
+
+Build the container images. The Polaris server image is required; the admin tool image is optional (useful for bootstrapping realms if necessary, see below):
+
+```bash
+./gradlew \
+ :polaris-server:assemble \
+ :polaris-server:quarkusAppPartsBuild --rerun \
+ :polaris-admin:assemble \
+ :polaris-admin:quarkusAppPartsBuild --rerun \
+ -Dquarkus.container-image.build=true
+```
+
+Alternatively, you can use Make to start Minikube and build the images:
+
+```bash
+make minikube-start-cluster
+make build
+make minikube-load-images
+```
+
+### Creating the Namespace and Fixtures
+
+Create the namespace and deploy fixtures:
+
+```bash
+kubectl create namespace polaris
+kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/
+```
+
+The fixtures deploy a PostgreSQL instance and a MongoDB instance that can be used for testing purposes. If you plan to test with persistence backends, wait for the database pods to be ready:
+
+```bash
+kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s
+kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=mongodb --timeout=120s
+```
+
+Alternatively, you can use Make to deploy the fixtures:
+
+```bash
+make helm-fixtures
+```
+
+{{< alert warning >}}
+The fixtures in `helm/polaris/ci/fixtures/` are intended for testing purposes only and are not suitable for production use. In particular, the PostgreSQL and MongoDB instances are configured without encryption or security and should never be deployed as is in production!
+{{< /alert >}}
+
+### Installing the Chart Manually
+
+Install the chart from the local source, using either the `values.yaml` file, or any of the values files in `helm/polaris/ci/`. Example:
+
+```bash
+# Non-persistent backend
+helm upgrade --install --namespace polaris polaris helm/polaris
+
+# Persistent backend
+helm upgrade --install --namespace polaris \
+ --values helm/polaris/ci/persistence-values.yaml \
+ polaris helm/polaris
+```
+
+### Bootstrapping Realms for Development
+
+When doing ad hoc testing with a persistent backend, you may want to bootstrap all the realms using the admin tool. For more information, see the [Admin Tool]({{% relref "../admin-tool#bootstrapping-realms-and-principal-credentials" %}}) guide.
+
+Example for the PostgreSQL backend:
+
+```bash
+kubectl run polaris-bootstrap \
+ -n polaris \
+ --image=apache/polaris-admin-tool:latest \
+ --restart=Never \
+ --rm -it \
+ --env="polaris.persistence.type=relational-jdbc" \
+ --env="quarkus.datasource.username=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.username}' | base64 --decode)" \
+ --env="quarkus.datasource.password=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.password}' | base64 --decode)" \
+ --env="quarkus.datasource.jdbc.url=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.jdbcUrl}' | base64 --decode)" \
+ -- \
+ bootstrap -r POLARIS -c POLARIS,root,s3cr3t
+```
+
+Example for the NoSQL (MongoDB) backend:
+
+```bash
+kubectl run polaris-bootstrap \
+ -n polaris \
+ --image=apache/polaris-admin-tool:latest \
+ --restart=Never \
+ --rm -it \
+ --env="polaris.persistence.type=nosql" \
+ --env="polaris.persistence.nosql.backend=MongoDb" \
+ --env="quarkus.mongodb.database=polaris" \
Review Comment:
Same here for the database name. Will lowercase `polaris` as the database name work for bootstrap on line 170?
##########
site/content/in-dev/unreleased/helm-chart/production.md:
##########
@@ -0,0 +1,215 @@
+---
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+title: Production Configuration
+linkTitle: Production Configuration
+type: docs
+weight: 200
+---
+
+This guide provides instructions for configuring the Apache Polaris Helm chart for a production environment. For a full list of chart values, see the [Chart Reference]({{% relref "reference" %}}) page.
+
+## Prerequisites
+
+- A Kubernetes cluster (1.33+ recommended)
+- Helm 3.x or 4.x installed
+- `kubectl` configured to access your cluster
+- A PostgreSQL or MongoDB database
+
+## Adding the Helm Repository
+
+Add the official Apache Polaris Helm repository:
+
+```bash
+helm repo add polaris https://downloads.apache.org/incubator/polaris/helm-chart
+helm repo update
+```
+
+## Installation
+
+Create a `values.yaml` file with your production configuration. See the Chart
[Values Reference]({{% relref "reference" %}}) for all available configuration
options.
+
+Create the target namespace and install the chart:
+
+```bash
+kubectl create namespace polaris
+helm install polaris polaris/polaris --namespace polaris --devel --values your-production-values.yaml
+```
+
+{{< alert note >}}
+The `--devel` flag is required while Polaris is in the incubation phase: Helm treats the `-incubating` suffix as a SemVer pre-release and, by default, skips chart versions that are not stable releases.
+{{< /alert >}}
+
+Verify the installation:
+
+```bash
+helm test polaris --namespace polaris
+```
+
+## Production Configuration
+
+The default Helm chart values are suitable for development and testing, but
they are not recommended for production. The following sections describe the
key areas to configure for a production deployment.
+
+### Authentication
+
+Polaris supports internal authentication (with RSA key pairs or symmetric
keys) and external authentication via OIDC with identity providers like
Keycloak, Okta, Azure AD, and others.
+
+By default, the Polaris Helm chart uses internal authentication with
auto-generated keys. In a multi-replica production environment, all Polaris
pods must share the same token signing keys to avoid token validation failures.
+
+See the [Authentication]({{% relref "authentication" %}}) page for detailed
configuration instructions.
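
As a sketch of that shared-key setup (the secret and file names below are illustrative, not mandated by the chart), an RSA key pair can be generated once and stored in a Kubernetes secret that every replica reads:

```shell
# Generate an RSA key pair for token signing (illustrative file names)
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem

# Then store both keys in a secret shared by all Polaris replicas, e.g.:
#   kubectl create secret generic polaris-token-signing \
#     --namespace polaris \
#     --from-file=private.pem --from-file=public.pem
```

How the chart consumes such a secret is described on the Authentication page.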
+
+### Persistence
+
+By default, the Polaris Helm chart uses the `in-memory` metastore, which is
not suitable for production. A persistent backend must be configured to ensure
data is not lost when pods restart.
+
+Polaris supports PostgreSQL (JDBC) and MongoDB (NoSQL, beta) as
production-ready persistence backends. See the [Persistence]({{% relref
"persistence" %}}) page for detailed configuration instructions.
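
As a sketch, chart values for a PostgreSQL backend might look like the following; the secret name and key names (`username`, `password`, `jdbcUrl`) mirror the `polaris-persistence` secret used by the bootstrap examples in this guide, but the exact value structure should be verified against the Chart Reference:

```yaml
persistence:
  type: relational-jdbc
  relationalJdbc:
    secret:
      name: polaris-persistence  # existing secret holding DB credentials
      username: username         # key within the secret
      password: password
      jdbcUrl: jdbcUrl
```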
+
+### Networking
+
+For configuring external access to Polaris using the Gateway API or Ingress,
see the [Services & Networking]({{% relref "networking" %}}) guide.
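
For a quick orientation, a hypothetical Ingress configuration following the common Helm chart convention could look like this (field names should be verified against the Chart Reference):

```yaml
ingress:
  enabled: true
  className: nginx            # assumes an NGINX ingress controller
  hosts:
    - host: polaris.example.com
      paths:
        - path: /
          pathType: Prefix
  tls: []                     # add a TLS secret for HTTPS in production
```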
+
+### Resource Management
+
+For a production environment, it is crucial to define resource requests and
limits for the Polaris pods. Resource requests ensure that pods are allocated
enough resources to run, while limits prevent them from consuming too many
resources on the node.
+
+Define resource requests and limits for the Polaris pods:
+
+```yaml
+resources:
+ requests:
+ memory: "8Gi"
+ cpu: "4"
+ limits:
+ memory: "8Gi"
+ cpu: "4"
+```
+
+Adjust these values based on expected workload and available cluster resources.
+
+### Scaling
+
+For high availability, multiple replicas of the Polaris server can be run.
This requires a persistent backend to be configured as described above.
+
+#### Static Replicas
+
+`replicaCount` must be set to the desired number of pods:
+
+```yaml
+replicaCount: 3
+```
+
+#### Autoscaling
+
Review Comment:
nit: extra newline
##########
site/content/in-dev/unreleased/helm-chart/production.md:
##########
@@ -0,0 +1,215 @@
+---
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+title: Production Configuration
+linkTitle: Production Configuration
+type: docs
+weight: 200
+---
+
+This guide provides instructions for configuring the Apache Polaris Helm chart for a production environment. For a full list of chart values, see the [Chart Reference]({{% relref "reference" %}}) page.
+
+## Prerequisites
+
+- A Kubernetes cluster (1.33+ recommended)
+- Helm 3.x or 4.x installed
+- `kubectl` configured to access your cluster
+- A PostgreSQL or MongoDB database
+
+## Adding the Helm Repository
+
+Add the official Apache Polaris Helm repository:
+
+```bash
+helm repo add polaris https://downloads.apache.org/incubator/polaris/helm-chart
+helm repo update
+```
+
+## Installation
+
+Create a `values.yaml` file with your production configuration. See the Chart
[Values Reference]({{% relref "reference" %}}) for all available configuration
options.
+
+Create the target namespace and install the chart:
+
+```bash
+kubectl create namespace polaris
+helm install polaris polaris/polaris --namespace polaris --devel --values your-production-values.yaml
+```
+
+{{< alert note >}}
+The `--devel` flag is required while Polaris is in the incubation phase: Helm treats the `-incubating` suffix as a SemVer pre-release and, by default, skips chart versions that are not stable releases.
+{{< /alert >}}
+
+Verify the installation:
+
+```bash
+helm test polaris --namespace polaris
+```
+
+## Production Configuration
+
+The default Helm chart values are suitable for development and testing, but
they are not recommended for production. The following sections describe the
key areas to configure for a production deployment.
+
+### Authentication
+
+Polaris supports internal authentication (with RSA key pairs or symmetric
keys) and external authentication via OIDC with identity providers like
Keycloak, Okta, Azure AD, and others.
+
+By default, the Polaris Helm chart uses internal authentication with
auto-generated keys. In a multi-replica production environment, all Polaris
pods must share the same token signing keys to avoid token validation failures.
+
+See the [Authentication]({{% relref "authentication" %}}) page for detailed
configuration instructions.
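
As a sketch of that shared-key setup (the secret and file names below are illustrative, not mandated by the chart), an RSA key pair can be generated once and stored in a Kubernetes secret that every replica reads:

```shell
# Generate an RSA key pair for token signing (illustrative file names)
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem

# Then store both keys in a secret shared by all Polaris replicas, e.g.:
#   kubectl create secret generic polaris-token-signing \
#     --namespace polaris \
#     --from-file=private.pem --from-file=public.pem
```

How the chart consumes such a secret is described on the Authentication page.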
+
+### Persistence
+
+By default, the Polaris Helm chart uses the `in-memory` metastore, which is
not suitable for production. A persistent backend must be configured to ensure
data is not lost when pods restart.
+
+Polaris supports PostgreSQL (JDBC) and MongoDB (NoSQL, beta) as
production-ready persistence backends. See the [Persistence]({{% relref
"persistence" %}}) page for detailed configuration instructions.
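
As a sketch, chart values for a PostgreSQL backend might look like the following; the secret name and key names (`username`, `password`, `jdbcUrl`) mirror the `polaris-persistence` secret used by the bootstrap examples in this guide, but the exact value structure should be verified against the Chart Reference:

```yaml
persistence:
  type: relational-jdbc
  relationalJdbc:
    secret:
      name: polaris-persistence  # existing secret holding DB credentials
      username: username         # key within the secret
      password: password
      jdbcUrl: jdbcUrl
```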
+
+### Networking
+
+For configuring external access to Polaris using the Gateway API or Ingress,
see the [Services & Networking]({{% relref "networking" %}}) guide.
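
For a quick orientation, a hypothetical Ingress configuration following the common Helm chart convention could look like this (field names should be verified against the Chart Reference):

```yaml
ingress:
  enabled: true
  className: nginx            # assumes an NGINX ingress controller
  hosts:
    - host: polaris.example.com
      paths:
        - path: /
          pathType: Prefix
  tls: []                     # add a TLS secret for HTTPS in production
```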
+
+### Resource Management
+
+For a production environment, it is crucial to define resource requests and
limits for the Polaris pods. Resource requests ensure that pods are allocated
enough resources to run, while limits prevent them from consuming too many
resources on the node.
+
+Define resource requests and limits for the Polaris pods:
+
+```yaml
+resources:
+ requests:
+ memory: "8Gi"
+ cpu: "4"
+ limits:
+ memory: "8Gi"
+ cpu: "4"
+```
+
+Adjust these values based on expected workload and available cluster resources.
+
+### Scaling
+
+For high availability, multiple replicas of the Polaris server can be run.
This requires a persistent backend to be configured as described above.
+
+#### Static Replicas
+
+`replicaCount` must be set to the desired number of pods:
+
+```yaml
+replicaCount: 3
+```
+
+#### Autoscaling
+
+Horizontal autoscaling can be enabled to define the minimum and maximum number
of replicas, and CPU or memory utilization targets:
+
+```yaml
+autoscaling:
+ enabled: true
+ minReplicas: 2
+ maxReplicas: 5
+ targetCPUUtilizationPercentage: 80
+ targetMemoryUtilizationPercentage: 80
+```
+
+#### Pod Topology Spreading
+
+For better fault tolerance, `topologySpreadConstraints` can be used to
distribute pods across different nodes, racks, or availability zones. This
helps prevent a single infrastructure failure from taking down all Polaris
replicas.
+
+Here is an example that spreads pods across different zones and keeps the
number of pods in each zone from differing by more than one:
+
+```yaml
+topologySpreadConstraints:
+ - maxSkew: 1
+ topologyKey: "topology.kubernetes.io/zone"
+ whenUnsatisfiable: "DoNotSchedule"
+```
+
+### Pod Priority
+
+In a production environment, it is advisable to set a `priorityClassName` for the Polaris pods. This ensures that the Kubernetes scheduler prioritizes them over less critical workloads and helps prevent them from being evicted from a node that is running out of resources.
+
+First, a `PriorityClass` must be created in the cluster. For example:
+
+```yaml
+apiVersion: scheduling.k8s.io/v1
+kind: PriorityClass
+metadata:
+ name: polaris-high-priority
+value: 1000000
+globalDefault: false
+description: "This priority class should be used for Polaris service pods only."
+```
+
+Then, the `priorityClassName` can be set in the `values.yaml` file:
+
+```yaml
+priorityClassName: "polaris-high-priority"
+```
+
+## Bootstrapping Realms
+
+When installing Polaris for the first time, it is necessary to bootstrap each
realm using the Polaris admin tool.
+
+For more information on bootstrapping realms, see the [Admin Tool]({{% relref
"../admin-tool#bootstrapping-realms-and-principal-credentials" %}}) guide.
+
+Example for the PostgreSQL backend:
+
+```bash
+kubectl run polaris-bootstrap \
+ -n polaris \
+ --image=apache/polaris-admin-tool:latest \
+ --restart=Never \
+ --rm -it \
+ --env="polaris.persistence.type=relational-jdbc" \
+ --env="quarkus.datasource.username=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.username}' | base64 --decode)" \
+ --env="quarkus.datasource.password=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.password}' | base64 --decode)" \
+ --env="quarkus.datasource.jdbc.url=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.jdbcUrl}' | base64 --decode)" \
+ -- \
+ bootstrap -r POLARIS -c POLARIS,root,$ROOT_PASSWORD
+```
+
+Example for the NoSQL (MongoDB) backend:
Review Comment:
As NoSQL is still in beta (but in the values file we stated `Only MongoDb is supported for production use.`), do we need to clarify that this is beta only? persistence.md already clarifies this, so I am wondering whether it is worth mentioning here again.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]