How I Upgraded a Production-Grade Kubernetes Cluster from 1.22 to 1.23.17 Using Kubeadm
(Image credit: thelinuxnotes)
In this article, I will share my experience upgrading a bare-metal, production-grade Kubernetes cluster from 1.22 to 1.23.17 using kubeadm. The process is quite straightforward; however, I hit a few gotchas along the way and have documented them here in case they are helpful to you.
The upgrade has two parts:
- Upgrade the Control Plane Nodes
- Upgrade the Worker Nodes
First, run kubectl get nodes -o wide to see all the nodes.
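It also helps to record the exact versions you are starting from, for example:
kubectl version --short
kubeadm version -o short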
Kubernetes Control Plane Upgrade: 1.22 -> 1.23.17
- Drain the node:
kubectl drain --ignore-daemonsets --delete-emptydir-data <node>
- ssh into the node:
ssh -i ~/.ssh/key <user>@<ip>
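One gotcha worth knowing before draining anything: a PodDisruptionBudget with no disruption headroom can make kubectl drain hang indefinitely. A quick, optional check from the machine with kubectl access:
kubectl get pdb -A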
For The First Master Node
- Update the package index and list the available Kubernetes versions:
apt-get update && apt-cache madison kubeadm | grep 1.23
- Upgrade kubeadm:
apt-mark unhold kubeadm && apt-get install -y kubeadm='1.23.17-00' && apt-mark hold kubeadm
- You may need to pull the current kubeadm ClusterConfiguration from the cluster and save it to a file:
kubectl get cm/kubeadm-config -o yaml -n=kube-system
# ~/kubeadm-config-1.23.yaml
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: <ip>:<port>
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# imageRepository: k8s.gcr.io    # old registry; Kubernetes images moved to registry.k8s.io
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: v1.22.0
networking:
  dnsDomain: cluster.local
  podSubnet: <ip>/16
  serviceSubnet: <ip>/12
scheduler: {}
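Before applying, kubeadm can preview the whole upgrade, including the target component versions; I'd suggest running it as a sanity check:
kubeadm upgrade plan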
- Run the command to upgrade the node:
kubeadm upgrade apply v1.23.17 --config ~/kubeadm-config-1.23.yaml --ignore-preflight-errors=CoreDNSUnsupportedPlugins
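Once the apply finishes, I like to confirm the control plane static pods are back and healthy before moving on (kubeadm labels them tier=control-plane):
kubectl get pods -n kube-system -l tier=control-plane -o wide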
- Upgrade kubelet and kubectl, then restart the kubelet:
apt-mark unhold kubelet kubectl && \
apt-get install -y kubelet='1.23.17-00' kubectl='1.23.17-00' && \
apt-mark hold kubelet kubectl && \
systemctl daemon-reload && systemctl restart kubelet
- Uncordon the node:
kubectl uncordon <node>
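Before moving on, I'd suggest confirming the node now reports the new version:
kubectl get nodes <node>
The VERSION column should read v1.23.17.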
For Each Of The Remaining Master Nodes
- Drain the node:
kubectl drain --ignore-daemonsets --delete-emptydir-data <node>
- ssh into the node:
ssh -i ~/.ssh/key <user>@<ip>
- Check the available versions (a slightly older patch version is fine too, e.g. 1.23.14):
apt-cache madison kubeadm | grep 1.23
- Upgrade kubeadm:
apt-mark unhold kubeadm && \
apt-get install -y kubeadm='1.23.17-00' && \
apt-mark hold kubeadm
- Run the command to upgrade the node:
kubeadm upgrade node --ignore-preflight-errors=CoreDNSUnsupportedPlugins
- Upgrade kubelet and kubectl, then restart the kubelet:
apt-mark unhold kubelet kubectl && apt-get update && \
apt-get install -y kubelet='1.23.17-00' kubectl='1.23.17-00' && \
apt-mark hold kubelet kubectl && \
systemctl daemon-reload && systemctl restart kubelet
- Uncordon the node:
kubectl uncordon <node>
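Since these steps are identical on every remaining master, the package-and-kubeadm part can be scripted if your nodes are uniform. A rough sketch, assuming hypothetical hostnames master-2 and master-3, the same SSH key, and passwordless sudo; I would still drain and uncordon each node one at a time from the admin machine:
for node in master-2 master-3; do
  ssh -i ~/.ssh/key <user>@"$node" \
    "sudo apt-mark unhold kubeadm && \
     sudo apt-get install -y kubeadm='1.23.17-00' && \
     sudo apt-mark hold kubeadm && \
     sudo kubeadm upgrade node --ignore-preflight-errors=CoreDNSUnsupportedPlugins"
done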
Kubernetes Worker Upgrade: 1.22 -> 1.23.17
For Each Of The Worker Nodes
- Drain the node (from a machine with kubectl access):
kubectl drain --ignore-daemonsets --delete-emptydir-data <node>
- ssh into the node:
ssh -i ~/.ssh/key <user>@<ip>
- Run the following command to list the available k8s versions:
apt-cache madison kubeadm | grep 1.23
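Optionally, confirm that the kube packages are currently pinned before unholding them; apt-mark showhold lists every package on hold:
apt-mark showhold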
- Run the command to upgrade kubeadm, kubelet, and kubectl, then restart the kubelet:
apt-mark unhold kubeadm && \
apt-get install -y kubeadm='1.23.17-00' && \
apt-mark hold kubeadm && \
kubeadm upgrade node && \
apt-mark unhold kubelet kubectl && \
apt-get install -y kubelet='1.23.17-00' kubectl='1.23.17-00' && \
apt-mark hold kubelet kubectl && \
systemctl daemon-reload && systemctl restart kubelet
- Uncordon the node:
kubectl uncordon <node>
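A final check from a machine with kubectl access; the VERSION column should now read v1.23.17 on every node:
kubectl get nodes -o wide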
At this point, we have successfully upgraded the Kubernetes cluster from 1.22 to 1.23.17.
Tags: kubernetes, bare-metal, upgrade