Managing Kubernetes Lifecycles
Introduction
This tutorial will guide you through managing the lifecycle of Kubernetes clusters. You’ll learn how to create, update, scale, and delete clusters. Familiarity with basic Kubernetes concepts (pods, deployments, services) and command-line tools like kubectl and kops is recommended. We will be focusing on using kops to provision clusters on AWS, but the principles apply broadly.
Setting Up Your Environment
Before you begin, you need to set up your environment with the necessary tools:
- AWS Account: Ensure you have an active AWS account with sufficient permissions.
- kubectl: Install kubectl, the Kubernetes command-line tool.
- kops: Install kops, the Kubernetes Operations tool. We'll use it to create and manage our cluster.
- awscli: Install and configure the AWS CLI tool.
- terraform: Install Terraform if you plan to manage supporting resources as code (kops can also emit Terraform configuration via --target=terraform).
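Once the tools are installed, a quick sanity check confirms each binary is on your PATH. This is a minimal sketch; `check_tools` is a hypothetical helper, not part of any of these CLIs:

```shell
# check_tools: report any command-line tools that are not on PATH
# (hypothetical helper; the tool list in the usage note matches this tutorial's prerequisites)
check_tools() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool"
      missing=1
    fi
  done
  return "$missing"
}

# Usage:
# check_tools kubectl kops aws terraform
```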
```bash
# Example using apt-get (assumes the relevant apt repositories are configured;
# kubectl, kops, and terraform are not in Ubuntu's default repositories)
sudo apt-get update
sudo apt-get install kubectl kops awscli terraform
```

Creating a Kubernetes Cluster with kops
kops simplifies the process of creating and managing Kubernetes clusters.
- Create an S3 Bucket for kops State: kops needs a place to store the state of your cluster. Create an S3 bucket for this purpose. The name must be globally unique.

```bash
aws s3api create-bucket --bucket kops-state-store-example --region us-east-1
export KOPS_STATE_STORE=s3://kops-state-store-example
```

Replace kops-state-store-example with a unique bucket name. The region must match the one you intend to deploy to.
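Because S3 bucket names are globally unique, a timestamp-plus-random suffix is one way to avoid collisions. A minimal sketch (the `kops-state` prefix and the naming scheme are arbitrary choices):

```shell
# Build a bucket name that is unlikely to collide with existing buckets
# (arbitrary scheme: fixed prefix + epoch seconds + random number; bash-specific $RANDOM)
bucket_name="kops-state-$(date +%s)-$RANDOM"
echo "$bucket_name"

# Then create the bucket and point kops at it:
# aws s3api create-bucket --bucket "$bucket_name" --region us-east-1
# export KOPS_STATE_STORE="s3://$bucket_name"
```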
- Create a Kubernetes Cluster: Use kops create cluster to define your cluster configuration. This example defines a cluster in the us-east-1 region.

```bash
kops create cluster \
  --name=my-cluster.k8s.local \
  --zones=us-east-1a \
  --node-count=2 \
  --node-size=t2.micro \
  --master-size=t2.micro
```

  - --name: The name of your cluster. A name ending in .k8s.local tells kops to use gossip-based discovery, so you don't need to own a real DNS domain.
  - --zones: The AWS availability zones where your nodes will run.
  - --node-count: The number of worker nodes in your cluster.
  - --node-size: The AWS instance type for your worker nodes.
  - --master-size: The AWS instance type for the master node.

t2.micro is suitable for testing. Without --yes, this command only writes the cluster configuration to the state store; the next step actually creates the AWS resources.
- Update the Cluster: Apply the configuration to AWS and create the cluster.

```bash
kops update cluster my-cluster.k8s.local --yes
```
- Validate the Cluster: Wait for the cluster to become ready and validate its status.

```bash
kops validate cluster my-cluster.k8s.local
```

```
Validating cluster my-cluster.k8s.local

INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-us-east-1a       Master  t2.micro        1       1       us-east-1a
nodes-us-east-1a        Node    t2.micro        2       2       us-east-1a

NODE STATUS
NAME                            ROLE    READY
ip-172-20-34-65.ec2.internal    node    True
ip-172-20-42-210.ec2.internal   node    True
ip-172-20-57-205.ec2.internal   master  True

Your cluster my-cluster.k8s.local is ready
```
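Validation usually fails for several minutes while instances boot, so it is common to retry in a loop. A minimal sketch, where `wait_ready` is a hypothetical retry helper:

```shell
# wait_ready: run a command until it succeeds, up to a maximum number of attempts,
# sleeping a fixed delay between tries (hypothetical helper)
wait_ready() {
  attempts=$1
  delay=$2
  shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# Usage (retry validation every 10 seconds, up to 30 times):
# wait_ready 30 10 kops validate cluster my-cluster.k8s.local
```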
Scaling Your Kubernetes Cluster
Scaling your cluster involves adjusting the number of nodes based on your application’s resource demands.
- Edit Instance Group: Modify the instance group to change the desired number of nodes.

```bash
kops edit ig nodes-us-east-1a --name my-cluster.k8s.local
```

This opens the instance group configuration in your default editor. Modify the minSize and maxSize fields. For example, to scale to 3 nodes:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-10-27T14:33:00Z"
  labels:
    kops.k8s.io/cluster: my-cluster.k8s.local
  name: nodes-us-east-1a
spec:
  image: kope.io/k8s-1.27-debian-bookworm-amd64-hvm-20231026
  machineType: t2.micro
  maxSize: 3
  minSize: 3
  nodeLabels:
    kops.k8s.io/instance-group: nodes-us-east-1a
  role: Node
  subnets:
  - us-east-1a
```
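If you would rather script the change than use an interactive editor, kops also supports exporting and re-importing the spec with kops get and kops replace. Below, `bump_size` is a hypothetical sed-based helper; it assumes the minSize/maxSize fields each appear exactly once with two-space indentation, so prefer a YAML-aware tool for anything important:

```shell
# bump_size: rewrite the minSize/maxSize fields in an InstanceGroup YAML file
# (fragile sketch: assumes each field appears once, indented with two spaces)
bump_size() {
  file=$1
  min=$2
  max=$3
  sed -i -e "s/^  minSize: .*/  minSize: $min/" \
         -e "s/^  maxSize: .*/  maxSize: $max/" "$file"
}

# Usage against a live cluster:
# kops get ig nodes-us-east-1a --name my-cluster.k8s.local -o yaml > ig.yaml
# bump_size ig.yaml 3 3
# kops replace -f ig.yaml
```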
- Update Cluster: Apply the changes.

```bash
kops update cluster my-cluster.k8s.local --yes
kops rolling-update cluster --yes
```

kops rolling-update cluster performs a rolling update, gradually replacing old nodes with new ones to minimize downtime; without --yes it only previews which nodes would be replaced.
- Verify Scaling: Check the number of nodes in your cluster.

```bash
kubectl get nodes
```

```
NAME                            STATUS   ROLES    AGE   VERSION
ip-172-20-34-65.ec2.internal    Ready    node     24h   v1.27.4
ip-172-20-42-210.ec2.internal   Ready    node     24h   v1.27.4
ip-172-20-57-205.ec2.internal   Ready    master   24h   v1.27.4
ip-172-20-61-100.ec2.internal   Ready    node     10m   v1.27.4
```
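For scripting, it helps to count only nodes that are actually Ready rather than eyeballing the table. A minimal sketch; `count_ready` is a hypothetical helper that parses the STATUS column:

```shell
# count_ready: count input lines whose second column (STATUS) is exactly "Ready"
# (hypothetical helper; expects `kubectl get nodes --no-headers` style input)
count_ready() {
  awk '$2 == "Ready" { n++ } END { print n + 0 }'
}

# Usage:
# kubectl get nodes --no-headers | count_ready
```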
Updating Your Kubernetes Cluster
Keeping your Kubernetes cluster up-to-date is crucial for security and stability.
- Check Available Updates: Determine if any updates are available.

```bash
kops update cluster my-cluster.k8s.local
```

Run without --yes, kops shows the changes it plans to make. Pay close attention to Kubernetes version upgrades; to move to a newer Kubernetes version, run kops upgrade cluster first so the desired version is recorded in the cluster spec.
- Apply Updates: Apply the updates to your cluster.

```bash
kops update cluster my-cluster.k8s.local --yes
kops rolling-update cluster --yes
```
- Verify Update: Check the Kubernetes version of your nodes after the update.

```bash
kubectl get nodes -o wide
```

```
NAME                            STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
ip-172-20-34-65.ec2.internal    Ready    node     24h   v1.28.0   172.20.34.65    3.21.123.45     Ubuntu 22.04.3 LTS   5.15.0-86-generic   docker://20.10.12
ip-172-20-42-210.ec2.internal   Ready    node     24h   v1.28.0   172.20.42.210   44.23.234.123   Ubuntu 22.04.3 LTS   5.15.0-86-generic   docker://20.10.12
ip-172-20-57-205.ec2.internal   Ready    master   24h   v1.28.0   172.20.57.205   54.24.135.67    Ubuntu 22.04.3 LTS   5.15.0-86-generic   docker://20.10.12
ip-172-20-61-100.ec2.internal   Ready    node     10m   v1.28.0   172.20.61.100   3.22.146.78     Ubuntu 22.04.3 LTS   5.15.0-86-generic   docker://20.10.12
```
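After a rolling update you generally want every node on the same kubelet version. A sketch using kubectl's JSONPath output, where `single_version` is a hypothetical helper:

```shell
# single_version: succeed only if all whitespace-separated version strings on
# stdin are identical (hypothetical helper)
single_version() {
  awk '{ for (i = 1; i <= NF; i++) if (!($i in seen)) { seen[$i] = 1; n++ } }
       END { exit n == 1 ? 0 : 1 }'
}

# Usage:
# kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}' | single_version
```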
Deleting Your Kubernetes Cluster
When you no longer need your cluster, it’s important to delete it to avoid incurring unnecessary costs.
- Delete Cluster: Use kops delete cluster to remove the cluster.

```bash
kops delete cluster my-cluster.k8s.local --yes
```

Deleting a cluster is irreversible. Make sure you have backed up any important data before proceeding.
- Verify Deletion: Confirm that the resources associated with the cluster have been removed from your AWS account. This may take some time.
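One way to double-check is to ask AWS for instances still tagged with the cluster name; kops tags the resources it creates. The filter and tag below are assumptions to adapt for your account, and `assert_no_output` is a hypothetical helper:

```shell
# assert_no_output: fail if stdin contains any non-empty line
# (hypothetical helper)
assert_no_output() {
  ! grep -q .
}

# Usage (assumes kops-created instances carry the KubernetesCluster tag):
# aws ec2 describe-instances \
#   --filters "Name=tag:KubernetesCluster,Values=my-cluster.k8s.local" \
#             "Name=instance-state-name,Values=pending,running" \
#   --query 'Reservations[].Instances[].InstanceId' --output text | assert_no_output
```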
Conclusion
In this tutorial, you learned how to manage the lifecycle of Kubernetes clusters using kops. You created a cluster, scaled it, performed an update, and finally, deleted it. This comprehensive approach ensures you can effectively manage your Kubernetes infrastructure from creation to deletion.