Migrate DeploymentConfigs & Routes from OpenShift Templates to Helm Charts
Table of Contents
- Introduction
- Migrate OCP Objects to Helm Objects
- Migrate DeploymentConfigs to Deployments
- Migrate Routes to Ingresses
- Conclusion
- About the Author
Introduction
In an earlier blog post, we looked at how to live-migrate running production customer workloads from OpenShift templates to Helm charts. In this post, we will focus on DeploymentConfigs & Routes.
Migrate OCP Objects to Helm Objects
When existing OCP customers migrate from OCP templates to Helm charts, most cluster objects like Services & ConfigMaps can be adopted by Helm charts as is, without being deleted & recreated.
However, other cluster objects like DeploymentConfigs & Routes should not be adopted by Helm charts. Instead, the Helm charts should replace DeploymentConfigs with Deployments & Routes with Ingresses.
Migrate DeploymentConfigs to Deployments
Consider a typical service deployed in OCP. It has a Service & a DeploymentConfig object:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
Assume these are deployed to the cluster using OCP templates.
Once these manifests are deployed to our cluster, there will be one Nginx pod running & the Nginx Service will route all traffic to it:
$ kubectl get all | grep nginx
service/nginx
deploymentconfig.apps.openshift.io/nginx
replicationcontroller/nginx-1
pod/nginx-1-k48kj
Nginx Helm Chart
Now, our equivalent Helm chart for Nginx will contain a Service & a Deployment (instead of a DeploymentConfig).
As described in part 2, we will label & annotate the existing Nginx Service object in the cluster so that it gets adopted by Helm without destroying & recreating it.
But we will not do the same for the Nginx DeploymentConfig object, because we intend to replace it with our Helm chart's Deployment object, sketched below.
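For reference, a minimal deployment.yaml for the chart could look like the following sketch. It assumes we keep the same name & labels as before; the pod labels must match the Service's selector (app: nginx) so that the Service can route traffic to the new pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx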
Avoiding Downtime
However, to avoid downtime, we need to run both the DeploymentConfig & the Deployment in parallel, and gradually shift all incoming traffic from the DeploymentConfig's pods to the Deployment's pods.
Let's begin by releasing an empty chart:
$ helm install nginx nginx
NAME: nginx
NAMESPACE: my-ns
STATUS: deployed
REVISION: 1
TEST SUITE: None
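An empty chart here is simply a chart with no templates yet, only the chart metadata. A minimal Chart.yaml could look like this (the version is illustrative):
apiVersion: v2
name: nginx
version: 0.1.0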
Now, label & annotate the Nginx Service:
$ kubectl label service nginx app.kubernetes.io/managed-by=Helm
service/nginx labeled
$ kubectl annotate service nginx meta.helm.sh/release-name=nginx
service/nginx annotated
$ kubectl annotate service nginx meta.helm.sh/release-namespace=my-ns
service/nginx annotated
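Before upgrading, you can verify that the label & annotations are in place, for example:
$ kubectl get service nginx -o jsonpath='{.metadata.labels}'
$ kubectl get service nginx -o jsonpath='{.metadata.annotations}'
The output should include the app.kubernetes.io/managed-by label & the two meta.helm.sh annotations set above.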
Next, add the Service & Deployment objects to the Helm chart & upgrade it:
$ tree nginx
nginx
├── Chart.yaml
└── templates
    ├── deployment.yaml
    └── service.yaml
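For Helm to adopt the existing Service instead of creating a new one, templates/service.yaml must render a Service with the same name in the same namespace. A minimal sketch, identical to the manifest deployed earlier (it can be parameterized with Helm values later):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80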
$ helm upgrade nginx nginx
Release "nginx" has been upgraded. Happy Helming!
NAME: nginx
NAMESPACE: my-ns
STATUS: deployed
REVISION: 2
TEST SUITE: None
Running DeploymentConfig & Deployment Together
Now the cluster has one Nginx DeploymentConfig & one Nginx Deployment, each of which has created a pod of its own:
$ kubectl get all | grep nginx
service/nginx
deploymentconfig.apps.openshift.io/nginx
replicationcontroller/nginx-1
pod/nginx-1-k48kj
deployment.apps/nginx
replicaset.apps/nginx-6799fc88d8
pod/nginx-6799fc88d8-9267r
But since the DeploymentConfig & the Deployment use the same label selector (app: nginx), the Nginx Service is now routing traffic to pods created by both:
$ kubectl describe service nginx | grep Endpoints
Endpoints: 10.131.1.225:80, 10.131.1.250:80
We can now safely delete the DeploymentConfig without causing a service disruption. The Nginx Service will simply start sending all traffic to the Deploymentβs pod after this.
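Assuming the names used above, the cleanup looks like this:
$ kubectl delete deploymentconfig.apps.openshift.io nginx
deploymentconfig.apps.openshift.io "nginx" deleted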
Migrate Routes to Ingresses
Migrating existing customers from Routes created by OCP templates to Ingresses created by Helm charts is fairly straightforward. Routes & Ingresses don't conflict with each other, so the Helm charts can come in & install their Ingresses while the Routes still exist in the cluster. The Routes can then be cleaned up later.
Consider a simple Route deployed in a cluster:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: nginx
spec:
  host: nginx
  to:
    kind: Service
    name: nginx
And a Helm chart with an equivalent Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: nginx
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
This Helm chart can directly be installed onto the cluster without causing any disruptions to the existing Route.
Once the Helm chart is installed, the older Route can safely be deleted.
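Again, assuming the names used above:
$ kubectl delete route nginx
route.route.openshift.io "nginx" deleted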
Routes Created by Ingresses
Note that OCP automatically creates a Route for every Ingress deployed onto the cluster. However, this auto-generated Route has a name with random characters & thus doesn't conflict with the existing Route object.
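For example, listing the Routes after the Helm chart is installed might show something like this (the random suffix below is illustrative):
$ kubectl get routes -o name
route.route.openshift.io/nginx
route.route.openshift.io/nginx-5s8rx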
Conclusion
This article walked through how you can achieve a safe, zero-downtime migration of workloads managed by OpenShift templates to Helm charts. If you found this useful, please share it with your community.
About the Author
Harish KM is a Principal DevOps Engineer at QloudX & a top-ranked AWS Ambassador since 2020.
With over a decade of industry experience as everything from a full-stack engineer to a cloud architect, Harish has built many world-class solutions for clients around the world!
With over 20 certifications in cloud (AWS, Azure, GCP), containers (Kubernetes, Docker) & DevOps (Terraform, Ansible, Jenkins), Harish is an expert in a multitude of technologies.
These days, his focus is on the fascinating world of DevOps & how it can transform the way we do things!