Basic Kubernetes Resource Change Tracking using Metadata Managed Fields
Table of Contents
- Introduction
- Managed Fields
- Inspect Change History (Summary)
- Inspect Change History (Details)
- Conclusion
- About the Author ✍🏻
Introduction
There is often a need, especially while investigating an issue, to see the modification history of a Kubernetes resource. This article demonstrates a rudimentary way to inspect the change history of a resource in the absence of full-fledged audit logging mechanisms.
Managed Fields
metadata.managedFields in a Kubernetes resource records information about how different tools & controllers have modified the resource. This includes details like the name of the tool (kubectl, for example), the fields it changed & the operation performed: Apply, Update, etc.
Inspect the YAML manifest of any Kubernetes resource in the cluster (recent kubectl versions hide them unless you pass --show-managed-fields) & you'll see managed fields:
metadata:
  managedFields:
  - apiVersion: # API version of this resource
    fieldsType: FieldsV1
    fieldsV1: # What changed?
      ...
    manager: # Who made the change? Which tool or controller?
    operation: Apply / Update / ...
    time: # When was the change made?
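You can dump them for any object with a plain kubectl get; for instance, for the NodePool shown in the example below:
kubectl get nodepool/default -o yaml --show-managed-fields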
Here’s an example:
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
  managedFields:
  - apiVersion: karpenter.sh/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:kustomize.toolkit.fluxcd.io/name: {}
          f:kustomize.toolkit.fluxcd.io/namespace: {}
    manager: kustomize-controller
    operation: Apply
    time: '2024-05-26T13:02:59Z'
  - apiVersion: karpenter.sh/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:karpenter.sh/nodepool-hash: {}
    manager: karpenter
    operation: Update
    time: '2024-04-16T12:30:23Z'
  - apiVersion: karpenter.sh/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:resources:
          f:cpu: {}
          f:ephemeral-storage: {}
          f:memory: {}
          f:pods: {}
    manager: karpenter
    operation: Update
    subresource: status
    time: '2024-05-26T13:17:19Z'
  - apiVersion: karpenter.sh/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:disruption:
          f:budgets: {}
          f:consolidationPolicy: {}
    manager: kubectl-edit
    operation: Update
    time: '2024-05-26T13:42:00Z'
Inspect Change History (Summary)
To make it easier to inspect the change history of an object, let’s use this utility script that presents the managed fields information in a concise manner:
#!/usr/bin/env bash
# Summarize the managed-fields history of a resource:
# one line per entry (time, operation, manager), newest first.
RESOURCE=$1
kubectl get "$RESOURCE" -o jsonpath='{.metadata.managedFields}' | jq -r \
  'sort_by(.time) | reverse | .[] | [.time, .operation, .manager] | @tsv'
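The usage below assumes the script is saved as kube-history somewhere on your PATH; one way to set that up (the name & location are just an example):
chmod +x kube-history
sudo mv kube-history /usr/local/bin/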
Using the utility to view change history:
$ alias kh=kube-history
$ kh nodepool/default
2024-05-26T13:42:00Z Update kubectl-edit
2024-05-26T13:17:19Z Update karpenter
2024-05-26T13:02:59Z Apply kustomize-controller
2024-04-16T12:30:23Z Update karpenter
This tells us a lot about what's happened. We can see that someone has recently edited this object (NodePool) manually using kubectl edit. The updates by Karpenter are expected in this case. We also see that this object is managed via GitOps with Flux CD (kustomize-controller).
Let’s take another example:
$ kubens flux-system
Context "..." modified.
Active namespace is "flux-system".
$ kh kustomization/karpenter
2024-05-26T13:10:48Z Update gotk-kustomize-controller
2024-05-26T13:10:48Z Update flux
2024-04-16T12:24:52Z Apply kustomize-controller
Here again, we see that Flux created & updated this Karpenter Kustomization.
Inspect Change History (Details)
When investigating an issue, finding a suspicious change event would be the first step. That’s what we did above. Once you have identified an event, use this utility to view what changed in the event:
#!/usr/bin/env bash
# Show the fields changed by a given manager on a resource.
# Note: if the same manager has multiple managed-fields entries, all of them are printed.
RESOURCE=$1
MANAGER=$2
kubectl get "$RESOURCE" --show-managed-fields -o yaml | yq \
  "(.metadata.managedFields[] | select(.manager == \"$MANAGER\")).fieldsV1"
Trying this on one of the change events, we get:
$ alias khd=kube-history-details
$ khd nodepool/default kubectl-edit
f:spec:
  f:disruption:
    f:budgets: {}
    f:consolidationPolicy: {}
This tells us that the consolidation policy & budget were manually modified (in a Flux-managed object).
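If you want the details of every entry at once, for example when the same manager appears more than once (as karpenter does above), a small variation of the scripts above can print each entry with its time, operation, manager & changed fields. This is just a sketch along the same lines, not one of the original utilities:
#!/usr/bin/env bash
# Sketch: dump every managed-fields entry of a resource, newest first,
# with its time, operation, manager & the fields it touched (as JSON).
RESOURCE=$1
kubectl get "$RESOURCE" -o jsonpath='{.metadata.managedFields}' | jq -r \
  'sort_by(.time) | reverse | .[]
   | "--- \(.time)  \(.operation)  \(.manager)\n\(.fieldsV1)"'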
Conclusion
What we explored here is a simple way to see what kind of change occurred on a resource & when. Managed fields are not meant to be an audit mechanism. For full audit capabilities, see https://kubernetes.io/docs/tasks/debug/debug-cluster/audit.
If you’re on a managed Kubernetes platform like Amazon EKS, enable audit logging to CloudWatch from the Observability tab of your EKS console & use CloudWatch Logs Insights to explore resource modification histories in detail:
See Amazon EKS control plane logging
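As a starting point, a Logs Insights query along these lines, run against the cluster's /aws/eks/<cluster-name>/cluster log group, lists who changed a resource & how. The field names follow the Kubernetes audit event schema & the resource filter is only an example:
fields @timestamp, verb, user.username, objectRef.namespace, objectRef.name
| filter @logStream like /kube-apiserver-audit/
| filter objectRef.resource = "nodepools" and objectRef.name = "default"
| filter verb in ["create", "update", "patch", "delete"]
| sort @timestamp desc
| limit 50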
About the Author ✍🏻
Harish KM is a Principal DevOps Engineer at QloudX & a top-ranked AWS Ambassador since 2020. 👨🏻‍💻
With over a decade of industry experience as everything from a full-stack engineer to a cloud architect, Harish has built many world-class solutions for clients around the world! 👷🏻‍♂️
With over 20 certifications in cloud (AWS, Azure, GCP), containers (Kubernetes, Docker) & DevOps (Terraform, Ansible, Jenkins), Harish is an expert in a multitude of technologies. 📚
These days, his focus is on the fascinating world of DevOps & how it can transform the way we do things! 🚀