“Instances failed to join the kubernetes cluster” error when creating AWS EKS node-group

Kun-Hung Tsai
1 min read · Dec 13, 2020


I was building an AWS EKS cluster for our team recently. When I tried to create a node group for the cluster, it took a very long time (more than 15 minutes) and in the end failed with this error: “Instances failed to join the kubernetes cluster”.

Even after checking the AWS troubleshooting page for EKS, I still couldn’t figure out the root cause of this error.

After consulting the AWS support team, it turned out that I had accidentally placed my NAT gateway in a private subnet.

The feedback from the AWS support team:

As I checked the EKS resources, I saw that the API server endpoint access for your EKS cluster has public access enabled and private access disabled, which means that when the EKS worker nodes communicate with the control plane, the request must leave the VPC to reach the endpoint.

Therefore, these node instances cannot communicate with the API server endpoint via public access, as the request cannot leave the VPC with the current VPC configuration.

To solve the problem, I moved the NAT gateway back to a public subnet. A NAT gateway only provides outbound connectivity if the subnet it sits in routes 0.0.0.0/0 to an internet gateway; placed in a private subnet, it has no path to the internet itself. Now the node group instances in the private subnets can reach the control plane over the internet, and I can create the node group without any problem.
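The routing rule behind this fix can be sketched as a tiny check. Everything below is illustrative: the subnet IDs, target prefixes, and dict layout are made up for this post, not real output from the EC2 describe-route-tables API.

```python
def subnet_is_public(subnet_id, route_tables):
    """A subnet is 'public' if its route table sends 0.0.0.0/0 to an
    internet gateway (targets starting with 'igw-' in this sketch)."""
    routes = route_tables.get(subnet_id, [])
    return any(
        r["destination"] == "0.0.0.0/0" and r["target"].startswith("igw-")
        for r in routes
    )

def check_nat_placement(nat_subnet_id, route_tables):
    """Diagnose the misconfiguration described in this post."""
    if subnet_is_public(nat_subnet_id, route_tables):
        return "OK: NAT gateway is in a public subnet"
    return "BROKEN: NAT gateway is in a private subnet; nodes have no egress"

# Hypothetical VPC layout: the private subnet defaults to the NAT gateway,
# and the public subnet defaults to an internet gateway.
route_tables = {
    "subnet-public": [{"destination": "0.0.0.0/0", "target": "igw-123"}],
    "subnet-private": [{"destination": "0.0.0.0/0", "target": "nat-456"}],
}

print(check_nat_placement("subnet-private", route_tables))  # my mistake
print(check_nat_placement("subnet-public", route_tables))   # the fix
```

The point of the sketch: the private subnet’s default route points at the NAT gateway, so if the NAT gateway itself lives in that private subnet, its own traffic has nowhere to go.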
