Highly Available Kubernetes Control Plane
Introduction
This tutorial demonstrates how to create a highly-available (HA) Kubernetes control plane. High availability ensures that your cluster remains operational even if one or more control plane nodes fail. This setup uses kubeadm for bootstrapping the cluster and a load balancer to distribute traffic across multiple control plane nodes.
Prerequisites:
- Basic understanding of Kubernetes concepts.
- Three or more virtual machines (VMs) or physical servers with identical configurations to act as control plane nodes. Example: controlplane-0, controlplane-1, controlplane-2.
- One additional VM or server to act as a load balancer. Example: loadbalancer.
- A static IP address on each VM or server.
- Network connectivity between all VMs.
- kubeadm, kubelet, and kubectl installed on all control plane nodes (see installation steps below).
- A container runtime (e.g., Docker, containerd) installed on all control plane nodes.
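Before moving on, it can help to sanity-check the static addresses you plan to assign. A minimal sketch, purely illustrative: the valid_ip helper and the 192.0.2.x addresses are hypothetical examples, not part of this tutorial's required setup.

```shell
# Hypothetical preflight sketch: check that each address you intend to use
# is a syntactically valid IPv4 address before writing it into the configs
# below. Substitute your own node addresses.

valid_ip() {
  # Require four dot-separated groups of 1-3 digits...
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' || return 1
  # ...and check each octet is in the 0-255 range.
  for octet in $(echo "$1" | tr '.' ' '); do
    [ "$octet" -le 255 ] || return 1
  done
}

# Example node list: three control plane nodes plus the load balancer.
for ip in 192.0.2.10 192.0.2.11 192.0.2.12 192.0.2.20; do
  if valid_ip "$ip"; then
    echo "$ip: ok"
  else
    echo "$ip: INVALID"
  fi
done
```

This catches typos such as a missing octet before they end up baked into certificates and load balancer configs.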
Task 1: Install Kubernetes Components
On all control plane nodes (controlplane-0, controlplane-1, controlplane-2), perform the following steps. The load balancer (loadbalancer) does not need the Kubernetes components; it only needs HAProxy, installed in Task 2.
- Install Container Runtime (Docker example):

```bash
apt-get update
apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
systemctl enable docker
systemctl start docker
```
- Install kubeadm, kubelet, and kubectl:

```bash
apt-get update
apt-get install -y apt-transport-https ca-certificates curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
```

Ensure the versions of kubeadm, kubelet, and kubectl are the same across all nodes. Use apt-cache madison kubeadm to see available versions. If necessary, pin a version like this: apt-get install -y kubelet=1.29.0-00 kubeadm=1.29.0-00 kubectl=1.29.0-00. Note that the packages.cloud.google.com repository has been frozen; newer Kubernetes versions are published at pkgs.k8s.io, so consult the official installation documentation for current repository instructions.
- Configure the systemd cgroup driver:

Edit /etc/containerd/config.toml (if using containerd) or /etc/docker/daemon.json (if using Docker) to set the cgroup driver to systemd. For containerd the file may not exist; if not, create it and add the following:

```toml
# /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```

For Docker, /etc/docker/daemon.json should look like this:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Restart the container runtime after making the change:

```bash
systemctl restart containerd   # if using containerd
systemctl restart docker       # if using Docker
systemctl restart kubelet
```
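To confirm the cgroup driver change actually landed, you can grep the config for the setting. A hedged sketch, demonstrated against a temporary copy rather than the live file; on a real node you would point CONFIG at /etc/containerd/config.toml:

```shell
# Sketch: verify that the containerd config enables the systemd cgroup
# driver. CONFIG points at a throwaway temp file here for illustration.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

if grep -Eq '^[[:space:]]*SystemdCgroup[[:space:]]*=[[:space:]]*true' "$CONFIG"; then
  echo "systemd cgroup driver enabled"
else
  echo "WARNING: SystemdCgroup not set to true" >&2
fi
rm -f "$CONFIG"
```

A mismatched cgroup driver between the kubelet and the container runtime is a common cause of nodes flapping between Ready and NotReady, so checking it on every node before bootstrapping saves debugging later.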
Task 2: Configure the Load Balancer
On the load balancer node (loadbalancer), install and configure a load balancer to distribute traffic to the control plane nodes. This example uses HAProxy. Replace the IP addresses below with the static IPs of your control plane nodes.
- Install HAProxy:

```bash
apt-get update
apt-get install -y haproxy
```
- Configure HAProxy:

Edit /etc/haproxy/haproxy.cfg and add the following configuration, replacing <controlplane-0-ip>, <controlplane-1-ip>, and <controlplane-2-ip> with the actual IP addresses of your control plane nodes:

```cfg
frontend kubernetes-frontend
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server controlplane-0 <controlplane-0-ip>:6443 check
    server controlplane-1 <controlplane-1-ip>:6443 check
    server controlplane-2 <controlplane-2-ip>:6443 check
```
- Enable and start HAProxy:

```bash
systemctl enable haproxy
systemctl start haproxy
```
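If you would rather script the backend stanza than hand-edit three near-identical server lines, the generation can be sketched as follows. The node names and 192.0.2.x addresses are examples; substitute your own:

```shell
# Sketch: generate the backend "server" lines of haproxy.cfg from a
# name:ip list, so all three stanzas stay consistent.
NODES="controlplane-0:192.0.2.10 controlplane-1:192.0.2.11 controlplane-2:192.0.2.12"

for node in $NODES; do
  name=${node%%:*}   # text before the first colon
  ip=${node##*:}     # text after the last colon
  printf '    server %s %s:6443 check\n' "$name" "$ip"
done
```

Redirect the output into the backend section of /etc/haproxy/haproxy.cfg, then reload HAProxy.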
Task 3: Initialize the First Control Plane Node
On the first control plane node (controlplane-0), initialize the Kubernetes cluster.
- Create a kubeadm-config.yaml file:

Create a file named kubeadm-config.yaml with the following content. Replace <loadbalancer-ip> and <controlplane-0-ip> with the actual IP addresses, and ensure kubernetesVersion matches the version of kubeadm, kubelet, and kubectl you installed.

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.0  # or your desired version
controlPlaneEndpoint: "<loadbalancer-ip>:6443"
apiServer:
  certSANs:
    - "<loadbalancer-ip>"
    - "<controlplane-0-ip>"
networking:
  podSubnet: "10.244.0.0/16"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```
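The placeholder substitution can be scripted with sed instead of hand-editing. A sketch against a minimal template; the 192.0.2.20 address is an example, and on a real node you would run the substitution over your full kubeadm-config.yaml:

```shell
# Sketch: fill the <loadbalancer-ip> placeholder into the kubeadm config
# with sed, shown here on a throwaway template fragment.
LB_IP=192.0.2.20
TEMPLATE=$(mktemp)
cat > "$TEMPLATE" <<'EOF'
controlPlaneEndpoint: "<loadbalancer-ip>:6443"
apiServer:
  certSANs:
    - "<loadbalancer-ip>"
EOF

# Substitute every occurrence so the endpoint and certSANs stay in sync.
sed "s/<loadbalancer-ip>/$LB_IP/g" "$TEMPLATE"
rm -f "$TEMPLATE"
```

Scripting the substitution avoids the classic mistake of updating controlPlaneEndpoint but forgetting the matching certSANs entry.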
- Initialize the cluster:

```bash
kubeadm init --config kubeadm-config.yaml --upload-certs
```

This process may take several minutes. The --upload-certs flag uploads the control plane certificates to the cluster so the other control plane nodes can fetch them when they join. Make sure to save the control-plane kubeadm join command printed at the end of the output; you will need it to join the other control plane nodes.

Example kubeadm join command output:

```
kubeadm join <loadbalancer-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>
```
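If you want to store the token and CA cert hash separately from the full command, they can be parsed back out of the saved text. A sketch using placeholder-shaped example values, not real credentials:

```shell
# Sketch: extract the token and discovery hash from a saved join command.
# JOIN_CMD below is a placeholder-shaped example.
JOIN_CMD='kubeadm join 192.0.2.20:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1234abcd'

# sed -n with a capture group prints only the value after each flag.
TOKEN=$(echo "$JOIN_CMD" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
HASH=$(echo "$JOIN_CMD" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token: $TOKEN"
echo "hash:  $HASH"
```

Keep in mind that bootstrap tokens expire (24 hours by default), so a stored token may need to be regenerated with kubeadm token create.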
- Configure kubectl:

```bash
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
- Install a pod network add-on (e.g., Calico):

```bash
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
```

The Calico version above is an example. Consult the Calico documentation for the latest recommended version.
Task 4: Join Additional Control Plane Nodes
On the remaining control plane nodes (controlplane-1, controlplane-2), join the cluster using the control-plane kubeadm join command you saved in the previous task.
- Join the cluster:

```bash
kubeadm join <loadbalancer-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>
```

The --control-plane flag is what makes the node join as a control plane node rather than a worker. If you lost the kubeadm join command, you can regenerate it on controlplane-0:

```bash
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs   # prints a fresh certificate key
```

Append --control-plane --certificate-key <certificate-key> to the printed join command, using the key from the second command.
- Configure kubectl on the new nodes:

When a node joins with --control-plane, kubeadm generates /etc/kubernetes/admin.conf on that node, along with the kubeconfig files that kube-scheduler and kube-controller-manager need; no manual copying from controlplane-0 is required. To use kubectl from controlplane-1 and controlplane-2, repeat the kubectl configuration step from Task 3 on each of them.
Task 5: Verify the Control Plane
On any of the control plane nodes, verify that the control plane is highly available.
- Check the status of the nodes:

```bash
kubectl get nodes
```

```
NAME             STATUS   ROLES           AGE   VERSION
controlplane-0   Ready    control-plane   10m   v1.29.0
controlplane-1   Ready    control-plane   5m    v1.29.0
controlplane-2   Ready    control-plane   5m    v1.29.0
```
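The same check can be automated by counting Ready control-plane rows. A sketch run against a captured sample; on a live cluster you would pipe in `kubectl get nodes --no-headers` instead:

```shell
# Sketch: count Ready control-plane nodes from `kubectl get nodes` output.
# SAMPLE stands in for live output from `kubectl get nodes --no-headers`.
SAMPLE='controlplane-0   Ready   control-plane   10m   v1.29.0
controlplane-1   Ready   control-plane   5m    v1.29.0
controlplane-2   Ready   control-plane   5m    v1.29.0'

# Column 2 is STATUS and column 3 is ROLES in the default output format.
READY=$(echo "$SAMPLE" | awk '$2 == "Ready" && $3 == "control-plane"' | wc -l)
echo "ready control-plane nodes: $READY"
```

A count below three (in this setup) means a control plane node has not joined or is unhealthy, which is worth investigating before relying on the cluster's availability.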
- Check the status of the control plane pods:

```bash
kubectl get pods -n kube-system
```

You should see one instance of kube-apiserver, kube-scheduler, and kube-controller-manager per control plane node: three of each in this setup.
- Simulate a control plane node failure:

Power off or disconnect one of the control plane nodes. Verify that the cluster remains operational and that pods can still be created and accessed.
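A simple way to watch availability while you power the node off is to retry a health probe in a loop. A sketch with a stand-in probe; the check_with_retries name is hypothetical, and on a real cluster you might substitute `kubectl get --raw /readyz` or a curl against the load balancer's :6443 endpoint:

```shell
# Sketch: retry an API server health probe a few times before declaring
# the control plane unreachable.
probe() {
  # Stand-in check that always succeeds; replace with a real probe such
  # as: kubectl get --raw /readyz >/dev/null 2>&1
  true
}

check_with_retries() {
  attempts=$1
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if probe; then
      echo "control plane reachable"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "control plane UNREACHABLE after $attempts attempts" >&2
  return 1
}

check_with_retries 3
```

Running this in a loop during the failure test makes it easy to see whether HAProxy fails over to the surviving API servers without an outage.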
Task 6: Add Worker Nodes (Optional)
If you have worker nodes, you can join them to the cluster with the worker kubeadm join command that kubeadm init also printed: the variant without the --control-plane flag.
Conclusion
You have successfully configured a highly-available Kubernetes control plane using kubeadm and HAProxy. This setup provides increased resilience and ensures that your cluster remains operational even in the event of control plane node failures. You learned how to install Kubernetes components, configure a load balancer, initialize the cluster, and join additional control plane nodes.