Upgrade EKS cluster from v1.18 to v1.19
Running an EKS cluster with managed node groups relieves SREs of much of the maintenance work involved in a Kubernetes version upgrade. However, it still requires attention and a few checks before moving to a new version.
I thought it would be worth noting down the details of the upgrade procedure for our company's production EKS cluster and sharing them with others.
Before you go
Read the official documentation before you go. Since cloud-native services keep evolving, it's important to check the latest docs before you do any upgrade.
AWS docs
- Planning Kubernetes Upgrades with Amazon EKS
- Updating a cluster
- Updating a managed node group
- Self-managed node updates
- Amazon-eks-user-guide update-cluster
- Amazon EKS add-on configuration
- Managing the Amazon VPC CNI add-on
- Managing the CoreDNS add-on
- Managing the kube-proxy add-on
Eksctl docs
Other references
- Upgrade AWS EKS Cluster with Zero Downtime
- Amazon EKS Upgrade Journey From 1.18 to 1.19
- Amazon EKS Upgrade Journey From 1.19 to 1.20
- Upgrading Kubernetes Worker Nodes in GKE, AKS, and EKS
Version information
Note from official docs
The managed control plane and managed node groups can be upgraded automatically with a simple command.
Additionally, you may be several releases behind your target release. In-place upgrades are incremental to the next highest Kubernetes minor version, which means if you have several releases between your current and target versions, **you’ll need to move through each one in sequence.** In this scenario, it might be best to consider a blue/green infrastructure deployment strategy, which is a complex undertaking in its own right.
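Before planning the jump, it's worth confirming exactly which versions the control plane and nodes are running today, so you know how many incremental hops you need. A quick check, assuming kubectl and the AWS CLI are already configured for the cluster:
# client and control plane versions
kubectl version --short
# kubelet version of every node, in the VERSION column
kubectl get nodes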
Instruction
In general, there are four steps:
- Upgrade the cluster control plane
- Upgrade the nodes in your cluster
- Update your Kubernetes add-ons and custom controllers, as required
- Update your Kubernetes manifests, as required
In this article we are upgrading from v1.18 to v1.19, and I use a cluster.yaml config file to manage our EKS cluster.
You can find an example config file in the official EKS documentation.
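For reference, here is a minimal sketch of such a config file; the name, region, and node group settings below are placeholders, not our production values:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster        # placeholder
  region: us-west-2
  version: "1.18"         # the version before the upgrade

managedNodeGroups:
  - name: workers         # placeholder
    instanceType: m5.large
    desiredCapacity: 3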
Upgrade control plane
1. Update the EKS version in cluster.yaml
metadata:
  name: ${CLUSTER_NAME}
  region: us-west-2
  version: "1.19"
2. Plan and apply changes
Plan before you apply your changes
# plan change
eksctl upgrade cluster --config-file cluster.yaml
You should only see this change in the plan result:
2021-09-24 13:33:55 [ℹ]  (plan) would upgrade cluster "${CLUSTER_NAME}" control plane from current version "1.18" to "1.19"
Then apply changes
# apply change
eksctl upgrade cluster --config-file cluster.yaml --approve
Your AWS EKS dashboard status should become Updating, and it takes about 40 minutes to upgrade the control plane.
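Once the status is back to Active, you can double-check the control plane version before moving on to the nodes, for example:
# should print 1.19 after the upgrade completes
aws eks describe-cluster --name <cluster-name> --query 'cluster.version' --output text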

Upgrade managed node group
1. Upgrade the managed node group with eksctl upgrade nodegroup
What eksctl upgrade nodegroup does is create new nodes (with the latest AMI), move the workload (Pods) to the new nodes, and delete the old nodes.
eksctl upgrade nodegroup --name=<node-group-name> --cluster=<cluster-name> --kubernetes-version=1.19
Note: the default timeout for eksctl upgrade nodegroup is 45 minutes. The process may fail with the following error when the timeout is hit; you can add --timeout 120m to increase it.
Error: error updating nodegroup stack: waiting for CloudFormation stack "eksctl-${CLUSTER_NAME}-nodegroup-${NODE_GROUP_NAME}": RequestCanceled: waiter context canceled
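For example, here is the same upgrade rerun with a longer timeout, followed by a quick check that every node is running the new kubelet (the placeholders match the command above):
# rerun the upgrade with a 120-minute timeout
eksctl upgrade nodegroup --name=<node-group-name> --cluster=<cluster-name> --kubernetes-version=1.19 --timeout=120m
# every node should report v1.19.x in the VERSION column
kubectl get nodes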
Upgrade add-ons
Replace default add-ons with AWS managed add-ons
If you are upgrading from v1.17 to v1.18, it's a good opportunity to replace the default add-ons with AWS managed EKS add-ons as well.
Follow the steps below to use AWS managed add-ons:
1. Add the following config to your cluster.yaml file
# Use the versions that suit your EKS cluster version
addons:
  - name: kube-proxy
    version: v1.19.6-eksbuild.2
  - name: coredns
    version: v1.8.0-eksbuild.1
  - name: vpc-cni
    version: v1.9.1-eksbuild.1
2. Apply the changes with eksctl create addon, and don't forget to use --force to replace the default add-ons' config
$ eksctl create addon --force --config-file cluster.yaml
3. Check your current add-ons.
eksctl get addons --cluster <cluster-name>
Upgrade AWS managed add-ons
1. Update the add-on versions in your cluster.yaml config file
addons:
  - name: kube-proxy
    version: v1.19.6-eksbuild.2
  - name: coredns
    version: v1.8.0-eksbuild.1
  - name: vpc-cni
    version: v1.9.1-eksbuild.1
2. Update with the following command
eksctl update addon --config-file cluster.yaml
Note: sometimes the add-on Pods are already running the updated image, but eksctl get addons --cluster <cluster-name> still reports the old version. Just apply the update again and check whether it clears up.
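If you want to see which image an add-on Pod is actually running, you can inspect the workload directly; for example, kube-proxy runs as a DaemonSet in kube-system:
# the image tag should match the add-on version you requested
kubectl -n kube-system get daemonset kube-proxy -o jsonpath='{.spec.template.spec.containers[0].image}'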
Upgrade default add-ons
If you want to keep using the default add-ons, you need to use eksctl utils update-${add-on-name} for each of them instead. Refer to the following links for more information:
- Eksctl doc: Cluster upgrades
- Eksctl doc: Updating default add-ons
- Backup and upgrade your EKS cluster from version 1.17 to 1.18 with Velero and eksctl.
- Difference between pre and post 1.18 addons
# plan change
eksctl utils update-kube-proxy --cluster=<cluster-name>
eksctl utils update-coredns --cluster=<cluster-name>
eksctl utils update-aws-node --cluster=<cluster-name>

# apply change
eksctl utils update-kube-proxy --cluster=<cluster-name> --approve
eksctl utils update-coredns --cluster=<cluster-name> --approve
eksctl utils update-aws-node --cluster=<cluster-name> --approve
Upgrade manifests
From v1.18 to v1.19
In v1.19, Kubernetes promoted the Ingress API to networking.k8s.io/v1, which makes pathType a required field, so you need to update all your Ingress manifests.
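As an illustration, here is a minimal Ingress manifest in the v1 API with an explicit pathType; the host and service names are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # placeholder
spec:
  rules:
    - host: app.example.com        # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix       # required in networking.k8s.io/v1
            backend:
              service:
                name: example-service   # placeholder
                port:
                  number: 80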
That's all. I hope this helps people who are doing the same thing as me. Please share your experience in the comments and correct me if I've made any mistakes.