Live Migrate Production Workloads from OpenShift Templates to Helm Charts
Table of Contents
- Introduction
- The Challenge
- The Workaround
- Existing ConfigMap
- Existing Helm Chart
- Try Installing the Helm Chart
- Add Label & Annotations
- Create an Empty Helm Release
- Adopt Existing Objects
- Helm Upgrade
- Helm Uninstall
- Conclusion
- About the Author
Introduction
In an earlier post, we looked at how to convert OpenShift templates to Helm charts. In this post, we'll see how to live migrate running production customer workloads from being managed by OpenShift templates to being managed by Helm charts.
The Challenge
Helm is not designed to be a migration target for another deployment-management system such as OpenShift (OCP) templates. Helm assumes that when a chart is installed, every Kubernetes object in the chart is created from scratch.
In our case, many Kubernetes objects that are now packaged as Helm charts were already created in the OCP cluster using OCP templates. When you install such a chart, Helm simply tries to create all of its objects & fails, because objects with the same names already exist in the cluster.
This document explores a workaround for this issue.
The Workaround
Helm uses a label & 2 annotations to keep track of which objects in the cluster are managed by Helm & which Helm release created them. Specifically, every Helm-managed object has:
metadata:
  name: ...
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: ...
    meta.helm.sh/release-namespace: ...
It's possible to make Helm "adopt" existing cluster objects by adding this label & these annotations manually.
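To check whether an object already carries this ownership metadata, a quick query helps (a sketch using kubectl's jsonpath output; my-cm & my-ns are the example names used later in this post):
$ kubectl -n my-ns get cm my-cm -o jsonpath='{.metadata.labels.app\.kubernetes\.io/managed-by}{"\n"}'
$ kubectl -n my-ns get cm my-cm -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-name}{"\n"}'
An empty result means the object is not (yet) associated with any Helm release.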
The steps are as follows:
- Create an "empty" Helm release of the chart whose objects you wish to adopt (with all templates removed, so installing it creates nothing)
- Label & annotate all existing objects
- Run helm upgrade using the full chart
Helm has now adopted the existing cluster objects! All subsequent helm upgrades will work as if these objects had been created by Helm itself.
Let us now explore this workaround's implementation using a simple example.
Existing ConfigMap
Say you have a ConfigMap deployed in a cluster:
apiVersion: v1
data:
  key: value
kind: ConfigMap
metadata:
  name: my-cm
  namespace: my-ns
Existing Helm Chart
Say you also have a Helm chart with the same ConfigMap:
$ tree my-chart
my-chart
├── Chart.yaml
└── templates
    └── config-map.yaml
$ cat my-chart/templates/config-map.yaml
apiVersion: v1
data:
  key: value
kind: ConfigMap
metadata:
  name: my-cm
Try Installing the Helm Chart
Now, if you try to install this chart, you get an error:
$ helm install my-release my-chart
Error: INSTALLATION FAILED:
rendered manifests contain a resource that already exists.
Unable to continue with install:
ConfigMap "my-cm" in namespace "my-ns" exists and cannot be imported into the current release:
invalid ownership metadata;
label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm";
annotation validation error: missing key "meta.helm.sh/release-name": must be set to "my-release";
annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "my-ns"
Add Label & Annotations
The last 3 lines of the error above tell us exactly what we need to do to make our ConfigMap adoptable by Helm. So let's do that:
$ kubectl label cm my-cm app.kubernetes.io/managed-by=Helm
configmap/my-cm labeled
$ kubectl annotate cm my-cm meta.helm.sh/release-name=my-release
configmap/my-cm annotated
$ kubectl annotate cm my-cm meta.helm.sh/release-namespace=my-ns
configmap/my-cm annotated
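In a real migration there will usually be many template-created objects, not just one ConfigMap. Here is a bash sketch for applying the metadata in bulk; the object list below is hypothetical, so substitute whatever your OCP template actually created:
RELEASE=my-release
NS=my-ns
for obj in configmap/my-cm secret/my-secret deployment/my-app; do
  # Mark the object as Helm-managed and point it at the adopting release
  kubectl -n "$NS" label "$obj" app.kubernetes.io/managed-by=Helm --overwrite
  kubectl -n "$NS" annotate "$obj" meta.helm.sh/release-name="$RELEASE" --overwrite
  kubectl -n "$NS" annotate "$obj" meta.helm.sh/release-namespace="$NS" --overwrite
done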
The ConfigMap now looks like this:
$ kubectl get cm my-cm -o yaml
apiVersion: v1
data:
  key: value
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: my-release
    meta.helm.sh/release-namespace: my-ns
  labels:
    app.kubernetes.io/managed-by: Helm
  name: my-cm
  namespace: my-ns
At this point, even though the required label & annotations exist on the object, it's not being managed by Helm at all, because there is no Helm release in the cluster yet that can adopt this ConfigMap.
Create an Empty Helm Release
So the next step is to create an "empty" Helm release. It has to be empty in the sense that the chart must not contain any objects; if it does, we will get the same error as before.
So first, create an empty chart:
$ tree my-chart
my-chart
└── Chart.yaml
$ cat my-chart/Chart.yaml
apiVersion: v2
name: my-chart
type: application
version: 0.1.0
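One simple way to get from the original chart to this empty layout (and back again later) is to temporarily move the templates directory aside; a sketch, using an arbitrary temporary location:
$ mv my-chart/templates /tmp/my-chart-templates
We'll move it back just before the adopting upgrade below.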
Now, install this chart:
$ helm list | grep my-chart
<no output>
$ helm install my-release my-chart
NAME: my-release
NAMESPACE: my-ns
STATUS: deployed
REVISION: 1
TEST SUITE: None
At this point, even though the chart is installed, it hasn't yet adopted our ConfigMap. You can verify this by running:
$ helm get all my-release
NAME: my-release
NAMESPACE: my-ns
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES: null
COMPUTED VALUES: {}
HOOKS:
MANIFEST:
Adopt Existing Objects
The next step is to use the original Helm chart, with the ConfigMap back in its templates, to finally adopt the ConfigMap that already exists in the cluster. We do this by running helm upgrade using the full chart:
$ tree my-chart
my-chart
├── Chart.yaml
└── templates
    └── config-map.yaml
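When doing this against real production workloads, it is worth previewing what Helm is about to apply before committing; a sketch using the upgrade command's built-in dry run (output omitted):
$ helm upgrade my-release my-chart --dry-run
Once the rendered output looks right, run the real upgrade: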
$ helm upgrade my-release my-chart
Release "my-release" has been upgraded. Happy Helming!
NAME: my-release
NAMESPACE: my-ns
STATUS: deployed
REVISION: 2
TEST SUITE: None
This is when the cluster objects are actually adopted by Helm. We can verify this:
$ helm get all my-release
NAME: my-release
NAMESPACE: my-ns
STATUS: deployed
REVISION: 2
TEST SUITE: None
USER-SUPPLIED VALUES: null
COMPUTED VALUES: {}
HOOKS:
MANIFEST:
# Source: my-chart/templates/config-map.yaml
apiVersion: v1
data:
  key: value
kind: ConfigMap
metadata:
  name: my-cm
Helm Upgrade
From this point on, helm upgrade will work as expected. Let's try this by adding some data to the ConfigMap template in the chart:
apiVersion: v1
data:
  key: value
  new-key: new-value
kind: ConfigMap
metadata:
  name: my-cm
$ helm upgrade my-release my-chart
Release "my-release" has been upgraded. Happy Helming!
NAME: my-release
NAMESPACE: my-ns
STATUS: deployed
REVISION: 3
TEST SUITE: None
$ kubectl get cm my-cm -o yaml
apiVersion: v1
data:
  key: value
  new-key: new-value
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: my-release
    meta.helm.sh/release-namespace: my-ns
  labels:
    app.kubernetes.io/managed-by: Helm
  name: my-cm
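Since the release now behaves like any other Helm release, the usual lifecycle tooling applies as well; a sketch (commands shown without their output):
$ helm history my-release      # lists revisions 1 (empty), 2 (adoption) & 3 (this upgrade)
$ helm rollback my-release 2   # would restore the ConfigMap to its pre-upgrade contents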
Helm Uninstall
As the objects are now managed by Helm, helm uninstall will delete them as well:
$ helm uninstall my-release
release "my-release" uninstalled
$ kubectl get cm my-cm
Error from server (NotFound): configmaps "my-cm" not found
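One caveat when adopting pre-existing production objects: if some of them should be managed by Helm but never deleted by it, Helm supports a resource-policy annotation that tells it to leave the object in place on uninstall. A sketch of how config-map.yaml could declare this:
apiVersion: v1
data:
  key: value
kind: ConfigMap
metadata:
  name: my-cm
  annotations:
    # Helm will skip deleting this object on helm uninstall
    helm.sh/resource-policy: keep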
Conclusion
This article was an elaborate walkthrough of how you can accomplish a safe & zero-downtime migration of workloads from being managed by OpenShift templates to Helm charts. Head on over to part 3 of this blog series to learn how to migrate DeploymentConfigs to Deployments & Routes to Ingresses.
About the Author
Harish KM is a Principal DevOps Engineer at QloudX & a top-ranked AWS Ambassador since 2020.
With over a decade of industry experience as everything from a full-stack engineer to a cloud architect, Harish has built many world-class solutions for clients around the world!
With over 20 certifications in cloud (AWS, Azure, GCP), containers (Kubernetes, Docker) & DevOps (Terraform, Ansible, Jenkins), Harish is an expert in a multitude of technologies.
These days, his focus is on the fascinating world of DevOps & how it can transform the way we do things!