Install K8S v1.22.2 on Ubuntu 18.04 with Kubeadm
Host environment
- Ubuntu 18.04
- K8S v1.22.2
First, you will need to have root permission on the host.
Installation
Install packages
Follow the installation instructions in the official documentation to set up and install the essentials: kubeadm, kubectl and kubelet.
Don't forget to use apt-mark hold to pin the package versions and prevent unexpected upgrades during system updates.
$ sudo apt-mark hold kubelet kubeadm kubectl
Test your installation.
$ kubeadm version
$ kubectl version
$ service kubelet status
Don't panic if the kubelet service is failing at this point.
The kubelet is now restarting every few seconds, waiting in a crash loop for kubeadm to tell it what to do.
Install CRI
We use containerd as the container runtime in this article.
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get install containerd.io
Then, we are going to use systemd as the cgroup driver. Per the official documentation, the cgroup driver settings must be the same for the CRI and the kubelet.
Configure cgroup driver for kubelet
Follow the documentation and include the cgroup driver configuration in the kubeadm init configuration file.
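A minimal sketch of such a configuration file, assuming the file name kubeadm.yaml and a pod subnet of 172.16.0.0/16 (both are example values; adjust them to your environment):

```yaml
# kubeadm.yaml -- minimal sketch; kubernetesVersion and podSubnet are example values
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.2
networking:
  podSubnet: 172.16.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

The KubeletConfiguration document is what tells the kubelet to use systemd as its cgroup driver, matching the containerd setting below.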
Configure cgroup driver for containerd
Follow the documentation and update the configuration file for containerd (/etc/containerd/config.toml) to use systemd as the cgroup driver.
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    systemd_cgroup = true
    [plugins."io.containerd.grpc.v1.cri".containerd]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
Then restart containerd service.
$ sudo systemctl restart containerd
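After restarting, it's worth double-checking that the setting actually landed in the file. A small sketch, using a sample copy under /tmp so it runs anywhere; on a real host, point grep at /etc/containerd/config.toml instead:

```shell
# Write a sample config to a temp file for demonstration; on a real host,
# run: grep 'SystemdCgroup = true' /etc/containerd/config.toml
cfg=/tmp/containerd-config-sample.toml
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
grep 'SystemdCgroup = true' "$cfg"
```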
Init cluster
Initialize your K8S cluster with kubeadm init. You can reference the official documentation to create a configuration file for your environment.
$ sudo kubeadm init --config ${your_kubeadm_yaml_path}
Problems you may encounter
Kubeadm unknown service runtime.v1alpha2.RuntimeService
You may encounter the following problem during initialization after configuring the containerd cgroup driver:
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2021-09-19T01:15:45+08:00" level=fatal msg="getting status of runtime failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
From this GitHub issue, I removed the containerd configuration and then ran kubeadm init again. (The default config.toml shipped with the containerd.io package lists "cri" under disabled_plugins, which is why the CRI service is reported as unknown; deleting the file lets containerd fall back to its built-in defaults with the CRI plugin enabled.)
$ sudo rm /etc/containerd/config.toml
$ sudo systemctl restart containerd
$ sudo kubeadm init --config ${your_kubeadm_yaml_path}
Unable to deploy pod on control plane
Kubeadm taints control-plane nodes by default to prevent users from accidentally deploying pods on them. Since I am using a single node here, I have to remove the taint to make things work.
$ kubectl taint nodes --all node-role.kubernetes.io/master-

# test
$ kubectl get nodes -o json | jq '.items[].spec.taints'
Unable to connect to the server: x509: certificate signed by unknown authority
Remove the whole .kube folder, recreate it and copy the configuration again.
$ rm -rf $HOME/.kube
Install network add-on(CNI)
There are several choices of CNI provider in the official documentation; we are using Calico here.
Please note that the default pod subnet setting is 192.168.0.0/16; you need to change it according to your kubeadm.yaml.
$ curl https://docs.projectcalico.org/manifests/calico.yaml -O
$ sed -i 's/192.168.0.0\/16/172.16.0.0\/16/g' calico.yaml
$ kubectl apply -f calico.yaml
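The sed call above rewrites every occurrence of the default subnet in calico.yaml. A quick sanity check of the same expression on a sample line (172.16.0.0/16 is just an example subnet; use whatever podSubnet your kubeadm.yaml declares):

```shell
# Same substitution as above, demonstrated on a sample line instead of calico.yaml
echo "cidr: 192.168.0.0/16" | sed 's/192.168.0.0\/16/172.16.0.0\/16/g'
# prints "cidr: 172.16.0.0/16"
```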
Test
Copy the kubectl configuration from /etc/kubernetes/admin.conf to your home directory and start testing.
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
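Alternatively, if you are working in a root shell, you can point kubectl at the admin config directly instead of copying it (this is the standard kubeadm alternative to the copy above):

```shell
# Point kubectl at the kubeadm-generated admin config without copying it
export KUBECONFIG=/etc/kubernetes/admin.conf
```

Note this only lasts for the current shell session, while the copy to $HOME/.kube/config is permanent.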
Check if node is ready
$ kubectl get node

# expected result
NAME STATUS ROLES AGE VERSION
${your-hostname} Ready control-plane,master 36m v1.22.2
Check system related Pod
Check Pods in kube-system namespace are running as expected.
$ kubectl get pod -n kube-system

# expected result
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-xxx 1/1 Running 0 26m
calico-node-xxx 1/1 Running 0 26m
coredns-xxx 1/1 Running 0 28m
etcd-${your-hostname} 1/1 Running 0 28m
kube-apiserver-${your-hostname} 1/1 Running 0 28m
kube-controller-manager-${your-hostname} 1/1 Running 0 28m
kube-proxy-xxx 1/1 Running 0 28m
kube-scheduler-${your-hostname} 1/1 Running 0 28m
Check network connection
Start a Pod with curl to test whether the network (CNI) is working.
$ kubectl create namespace test
$ kubectl run -n test -i --tty --rm curl --image=radial/busyboxplus:curl

# In Pod
[ root@curl:/ ]$ curl https://www.google.com/
Reset cluster
If you need to reset the whole cluster due to a configuration mistake or other reasons, use kubeadm reset to reset the cluster. Note that kubeadm reset does not remove CNI configuration under /etc/cni/net.d or iptables rules; clean those up manually if needed.
$ sudo kubeadm reset
$ rm -rf $HOME/.kube
Reference
Deploying K8S with kubeadm using containerd as the runtime (in Chinese)
K8s study notes: manual installation with kubeadm (in Chinese)
That's all. I hope this helps, and feel free to share your questions and thoughts in the comments.