Deploy Helm Charts
Helm Chart Deployment
The ClusterProfile spec.helmCharts field lists the Helm charts to deploy to the managed clusters matching a specific label selector.
Note
Sveltos will deploy the Helm charts in the exact order they are defined (top-down approach).
Example: Single Helm chart
---
apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: kyverno
spec:
  clusterSelector:
    matchLabels:
      env: prod
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.3.3
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
In the above YAML definition, we install Kyverno on a managed cluster with the label selector set to env=prod.
Example: Multiple Helm charts
---
apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: prometheus-grafana
spec:
  clusterSelector:
    matchLabels:
      env: fv
  helmCharts:
  - repositoryURL: https://prometheus-community.github.io/helm-charts
    repositoryName: prometheus-community
    chartName: prometheus-community/prometheus
    chartVersion: 26.0.0
    releaseName: prometheus
    releaseNamespace: prometheus
    helmChartAction: Install
  - repositoryURL: https://grafana.github.io/helm-charts
    repositoryName: grafana
    chartName: grafana/grafana
    chartVersion: 8.6.4
    releaseName: grafana
    releaseNamespace: grafana
    helmChartAction: Install
In the above YAML definition, we initially install the Prometheus community Helm chart and afterwards the Grafana Helm chart. The two defined Helm charts will get deployed on a managed cluster with the label selector set to env=fv.
Example: Update Helm Chart Values
---
apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: kyverno
spec:
  clusterSelector:
    matchLabels:
      env: fv
  syncMode: Continuous
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.3.3
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
    values: |
      admissionController:
        replicas: 1
The values field overrides the chart's default values, like a values file passed directly to Helm.
Example: Update Helm Chart Values From Referenced ConfigMap/Secret
Sveltos allows us to manage Helm chart values using ConfigMaps/Secrets.
Note
Referenced Secrets must be of type addons.projectsveltos.io/cluster-profile
For example, we can create a file named cleanup-controller.yaml with the desired cleanup-controller values and create a ConfigMap resource from it. Then, we can create another file named admission_controller.yaml with the admission-controller values and create a second ConfigMap resource.
Within the Sveltos ClusterProfile resource, define the helmCharts section. We can specify the Helm chart details and leverage the valuesFrom field to reference the ConfigMaps. This injects the probe configurations from the ConfigMaps into the Helm chart values during deployment.
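The original file contents are not reproduced here; as an illustrative sketch, the two ConfigMaps could carry the controllers' probe overrides in their data section (the data key name values and the probe settings below are assumptions, not the original content):

```yaml
# cleanup-controller.yaml (illustrative content only)
apiVersion: v1
kind: ConfigMap
metadata:
  name: cleanup-controller
  namespace: default
data:
  values: |
    cleanupController:
      livenessProbe:
        initialDelaySeconds: 10
---
# admission_controller.yaml (illustrative content only)
apiVersion: v1
kind: ConfigMap
metadata:
  name: admission-controller
  namespace: default
data:
  values: |
    admissionController:
      readinessProbe:
        initialDelaySeconds: 5
```

Both resources can then be created with kubectl apply -f <file>.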
---
apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: kyverno
spec:
  clusterSelector:
    matchLabels:
      env: fv
  syncMode: Continuous
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.3.3
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
    values: |
      admissionController:
        replicas: 1
    valuesFrom:
    - kind: ConfigMap
      name: cleanup-controller
      namespace: default
    - kind: ConfigMap
      name: admission-controller
      namespace: default
Example: Template-based Referencing for ValuesFrom
In the valuesFrom section, we can express ConfigMap and Secret names as templates. This allows us to generate them dynamically based on the available cluster information, simplifying management and reducing repetition.
Available cluster information:
- cluster namespace: .Cluster.metadata.namespace
- cluster name: .Cluster.metadata.name
- cluster type: .Cluster.kind
Consider two SveltosCluster instances in the civo namespace.
$ kubectl get sveltoscluster -n civo --show-labels
NAME READY VERSION LABELS
pre-production true v1.29.2+k3s1 env=civo,projectsveltos.io/k8s-version=v1.29.2
production true v1.28.7+k3s1 env=civo,projectsveltos.io/k8s-version=v1.28.7
Four ConfigMaps are available within the same namespace.
$ kubectl get configmap -n civo
NAME DATA AGE
admission-controller-pre-production 1 8m31s
admission-controller-production 1 7m49s
cleanup-controller-pre-production 1 8m48s
cleanup-controller-production 1 8m1s
The only difference between the ConfigMaps is the admissionController and cleanupController replicas setting: 1 for pre-production and 3 for production.
The ClusterProfile below:
- Matches both SveltosClusters
- Selects the ConfigMaps dynamically:
  - For the pre-production cluster, it uses the admission-controller-pre-production and cleanup-controller-pre-production ConfigMaps.
  - For the production cluster, it uses the admission-controller-production and cleanup-controller-production ConfigMaps.
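For illustration, the production ConfigMap might look like the following (the data key name and exact value structure are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: admission-controller-production
  namespace: civo
data:
  values: |
    admissionController:
      replicas: 3
```

The pre-production counterpart would be identical except for replicas: 1.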
---
apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: kyverno
spec:
  clusterSelector:
    matchLabels:
      env: civo
  syncMode: Continuous
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.3.3
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
    values: |
      backgroundController:
        replicas: 3
    valuesFrom:
    - kind: ConfigMap
      name: cleanup-controller-{{ .Cluster.metadata.name }}
      namespace: civo
    - kind: ConfigMap
      name: admission-controller-{{ .Cluster.metadata.name }}
      namespace: civo
Example: Express Helm Values as Templates
Both the values section and the content stored in referenced ConfigMaps and Secrets can be written using templates. Sveltos will instantiate the templates using resources in the management cluster.
Sveltos deploys the Helm chart with the final, resolved values. See the template section for more details.
---
apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: deploy-calico
spec:
  clusterSelector:
    matchLabels:
      env: prod
  helmCharts:
  - repositoryURL: https://projectcalico.docs.tigera.io/charts
    repositoryName: projectcalico
    chartName: projectcalico/tigera-operator
    chartVersion: v3.24.5
    releaseName: calico
    releaseNamespace: tigera-operator
    helmChartAction: Install
    values: |
      installation:
        calicoNetwork:
          ipPools:
          {{ range $cidr := .Cluster.spec.clusterNetwork.pods.cidrBlocks }}
          - cidr: {{ $cidr }}
            encapsulation: VXLAN
          {{ end }}
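For instance, for a hypothetical cluster whose spec.clusterNetwork.pods.cidrBlocks contains the single entry 192.168.0.0/16, the values template above resolves to:

```yaml
installation:
  calicoNetwork:
    ipPools:
    - cidr: 192.168.0.0/16
      encapsulation: VXLAN
```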
---
apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: deploy-cilium-v1-26
spec:
  clusterSelector:
    matchLabels:
      env: fv
  helmCharts:
  - chartName: cilium/cilium
    chartVersion: 1.12.12
    helmChartAction: Install
    releaseName: cilium
    releaseNamespace: kube-system
    repositoryName: cilium
    repositoryURL: https://helm.cilium.io/
    values: |
      k8sServiceHost: "{{ .Cluster.spec.controlPlaneEndpoint.host }}"
      k8sServicePort: "{{ .Cluster.spec.controlPlaneEndpoint.port }}"
      hubble:
        enabled: false
      nodePort:
        enabled: true
      kubeProxyReplacement: strict
      operator:
        replicas: 1
      updateStrategy:
        rollingUpdate:
          maxSurge: 0
          maxUnavailable: 1
Example: OCI Registry
---
apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: vault
spec:
  clusterSelector:
    matchLabels:
      env: fv
  syncMode: Continuous
  helmCharts:
  - repositoryURL: oci://registry-1.docker.io/bitnamicharts
    repositoryName: oci-vault
    chartName: vault
    chartVersion: 0.7.2
    releaseName: vault
    releaseNamespace: vault
    helmChartAction: Install
Example: Private Registry
Docker Hub
Create a file named secret_content.yaml with the below content. Remember to replace the redacted values with the actual Docker Hub username and password/token.
{"auths":{"https://registry-1.docker.io/v1/":{"username":"REDACTED","password":"REDACTED","auth":"username:password base64 encoded"}}}
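The auth field is the base64 encoding of the username:password pair. It can be generated like this (user and pass below are placeholder credentials):

```shell
# Encode the placeholder pair "user:pass" for the auth field of .dockerconfigjson.
printf '%s' 'user:pass' | base64
```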
Use the kubectl command to create a Secret named regcred in the default namespace. The command references the secret_content.yaml file and sets the type to kubernetes.io/dockerconfigjson.
$ kubectl create secret generic regcred --from-file=.dockerconfigjson=secret_content.yaml --type=kubernetes.io/dockerconfigjson
Then we can configure the ClusterProfile to use the newly created Secret to authenticate with Docker Hub.
---
apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: projectsveltos
spec:
  clusterSelector:
    matchLabels:
      env: fv
  syncMode: Continuous
  helmCharts:
  - repositoryURL: oci://registry-1.docker.io/gianlucam76
    repositoryName: projectsveltos
    chartName: projectsveltos
    chartVersion: 0.46.0
    releaseName: projectsveltos
    releaseNamespace: projectsveltos
    helmChartAction: Install
    registryCredentialsConfig:
      credentials:
        name: regcred
        namespace: default
In this example, the registryCredentialsConfig section references the regcred Secret stored in the default namespace. This ensures the Helm chart can access the private registry during deployment.
Harbor
Another example uses Harbor (on a Civo cluster) as the registry. Create a file named secret_harbor_content.yaml with the below content. Remember to replace the base64 encoded string with the actual Harbor credentials.
Create a Secret named credentials in the default namespace using the secret_harbor_content.yaml file.
$ kubectl create secret generic credentials --from-file=.dockerconfigjson=secret_harbor_content.yaml --type=kubernetes.io/dockerconfigjson
---
apiVersion: config.projectsveltos.io/v1beta1
kind: ClusterProfile
metadata:
  name: projectsveltos
spec:
  clusterSelector:
    matchLabels:
      env: fv
  syncMode: Continuous
  helmCharts:
  - repositoryURL: oci://harbor.4fc01642-cfc0-4c55-a139-d593c92b232f.k8s.civo.com/library
    repositoryName: projectsveltos
    chartName: projectsveltos
    chartVersion: 0.38.1
    releaseName: projectsveltos
    releaseNamespace: projectsveltos
    helmChartAction: Install
    registryCredentialsConfig:
      insecureSkipTLSVerify: true
      credentials:
        name: credentials
        namespace: default
The Harbor credentials can also be stored as BasicAuth, as in the example below.
The profile's helmCharts section can then reference the Secret, as shown in the snippet below.
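The original snippets are not reproduced here; as a sketch of that approach, assuming a kubernetes.io/basic-auth Secret and the same registryCredentialsConfig layout used earlier (the credentials below are placeholders):

```yaml
# Illustrative only: a basic-auth Secret holding the Harbor credentials.
apiVersion: v1
kind: Secret
metadata:
  name: credentials
  namespace: default
type: kubernetes.io/basic-auth
stringData:
  username: admin        # placeholder
  password: Harbor12345  # placeholder
---
# Referencing the Secret from a helmCharts entry:
registryCredentialsConfig:
  credentials:
    name: credentials
    namespace: default
```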
Note
The insecureSkipTLSVerify option should only be used if the private registry does not support TLS verification. It is generally recommended to use a secure TLS connection and set the CASecretRef field in the registryCredentialsConfig section instead.
Upgrade CRDs
Helm doesn't currently offer built-in support for upgrading CRDs. This was a deliberate decision to avoid potential data loss. There's also ongoing discussion within the Helm community about the ideal way to manage CRD lifecycles. Future Helm versions might address this.
For custom Helm charts, you can work around this limitation by:
- Placing CRDs in templates: Instead of the crds/ directory, include your CRDs within the chart's templates folder. This allows them to be upgraded during the chart update process.
- Separate Helm chart: As suggested by the official Helm documentation, consider creating a separate Helm chart specifically for your CRDs. This allows independent management of those resources.
However, using third-party Helm charts can be problematic as upgrading their CRDs might not be possible by default. Here's where Sveltos comes in.
Sveltos allows you to control CRD upgrades for third-party charts through the upgradeCRDs field in your ClusterProfile configuration. When upgradeCRDs is set to true, Sveltos first patches all Custom Resource Definitions (CRDs) found in the Helm chart's crds/ directory. Once the CRDs are updated, Sveltos proceeds with the Helm upgrade process.
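As a sketch, reusing the Kyverno chart from the earlier examples (the exact placement of the upgradeCRDs field may differ across Sveltos versions; consult the ClusterProfile CRD):

```yaml
helmCharts:
- repositoryURL: https://kyverno.github.io/kyverno/
  repositoryName: kyverno
  chartName: kyverno/kyverno
  chartVersion: v3.3.3
  releaseName: kyverno-latest
  releaseNamespace: kyverno
  helmChartAction: Install
  options:
    upgradeOptions:
      upgradeCRDs: true
```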
Options
Sveltos allows you to configure Helm chart options during deployment. For a complete list of Helm options, refer to the CRD.