Managing Kubernetes Lifecycles

Introduction

This tutorial will guide you through managing the lifecycle of Kubernetes clusters. You’ll learn how to create, update, scale, and delete clusters. Familiarity with basic Kubernetes concepts (pods, deployments, services) and command-line tools like kubectl and kops is recommended. We will be focusing on using kops to provision clusters on AWS, but the principles apply broadly.

Setting Up Your Environment

Before you begin, you need to set up your environment with the necessary tools:

  1. AWS Account: Ensure you have an active AWS account with sufficient permissions.
  2. kubectl: Install kubectl, the Kubernetes command-line tool.
  3. kops: Install kops, the Kubernetes Operations tool. We’ll use it to create and manage our cluster.
  4. awscli: Install and configure the AWS CLI tool.
  5. terraform (optional): kops can emit Terraform configuration (via kops update cluster --target=terraform) if you prefer to manage the underlying resources with Terraform.
NODE_TYPE // bash
# Example on Debian/Ubuntu: awscli is in the distribution repositories;
# kubectl and kops are installed from their upstream releases.
sudo apt-get update && sudo apt-get install -y awscli
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://github.com/kubernetes/kops/releases/latest/download/kops-linux-amd64"
sudo install -m 0755 kubectl /usr/local/bin/kubectl && sudo install -m 0755 kops-linux-amd64 /usr/local/bin/kops
# terraform (optional) is available from HashiCorp's own apt repository
Check the official documentation for each tool for the most up-to-date installation instructions.
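After installing, a quick sanity check confirms that each tool is on your PATH and reports its version:

```bash
# Verify each tool is installed and print its version
kubectl version --client
kops version
aws --version
```

If any of these commands fails, revisit the installation step for that tool before continuing.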

Creating a Kubernetes Cluster with kops

kops simplifies the process of creating and managing Kubernetes clusters.

  1. Create an S3 Bucket for kops State: kops needs a place to store the state of your cluster. Create an S3 bucket for this purpose. The name must be globally unique.

    NODE_TYPE // bash
    aws s3api create-bucket --bucket kops-state-store-example --region us-east-1
    export KOPS_STATE_STORE=s3://kops-state-store-example
    Replace kops-state-store-example with a unique bucket name. The region must match the one you intend to deploy to.
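    The kops documentation also recommends enabling versioning on the state bucket, so earlier cluster specifications can be recovered if the state is ever corrupted:

    ```bash
    # Enable versioning so previous versions of the kops state are recoverable
    aws s3api put-bucket-versioning \
      --bucket kops-state-store-example \
      --versioning-configuration Status=Enabled
    ```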
  2. Create a Kubernetes Cluster: Use kops create cluster to define your cluster configuration. This example creates a cluster in the us-east-1 region.

    NODE_TYPE // bash
    kops create cluster --name=my-cluster.k8s.local --zones=us-east-1a --node-count=2 --node-size=t2.micro --master-size=t2.micro
    • --name: The name of your cluster. Names ending in .k8s.local create a gossip-based cluster, so no real DNS domain is required.
    • --zones: The AWS availability zones where your nodes will run.
    • --node-count: The number of worker nodes in your cluster.
    • --node-size: The AWS instance type for your worker nodes.
    • --master-size: The AWS instance type for the master node.
    This command only records the cluster specification in the state store; the next step applies it to AWS.
Choose instance sizes appropriate to your workload. t2.micro keeps costs low for testing, though the control plane may need a larger type (e.g. t3.medium) to run reliably.
  3. Update the Cluster: Apply the configuration to AWS and create the cluster.

    NODE_TYPE // bash
    kops update cluster my-cluster.k8s.local --yes
  4. Validate the Cluster: Wait for the cluster to become ready and validate its status.

    NODE_TYPE // bash
    kops validate cluster my-cluster.k8s.local
    NODE_TYPE // output
    Validating cluster my-cluster.k8s.local
    
    INSTANCE GROUPS
    NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
    master-us-east-1a	Master	t2.micro	1	1	us-east-1a
    nodes-us-east-1a	Node	t2.micro	2	2	us-east-1a
    
    NODE STATUS
    NAME							ROLE	READY
    ip-172-20-34-65.ec2.internal		node	True
    ip-172-20-42-210.ec2.internal		node	True
    ip-172-20-57-205.ec2.internal		master	True
    
    Your cluster my-cluster.k8s.local is ready
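Once validation succeeds, it is worth confirming that kubectl can reach the cluster. On recent kops releases the admin credentials are not written to your kubeconfig automatically, so export them first:

```bash
# Write admin credentials for the cluster into ~/.kube/config (kops >= 1.19)
kops export kubecfg my-cluster.k8s.local --admin
# The API server should now respond
kubectl get nodes
```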

Scaling Your Kubernetes Cluster

Scaling your cluster involves adjusting the number of nodes based on your application’s resource demands.
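Before changing the node count, it helps to look at current utilization and scheduling pressure. The first command below assumes the metrics-server addon is installed; the second lists pods the scheduler could not place, a common signal that the cluster needs more capacity:

```bash
# Per-node CPU and memory usage (requires the metrics-server addon)
kubectl top nodes
# Pods stuck in Pending often indicate insufficient node capacity
kubectl get pods --all-namespaces --field-selector=status.phase=Pending
```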

  1. Edit Instance Group: Modify the instance group to change the desired number of nodes.

    NODE_TYPE // bash
    kops edit ig nodes-us-east-1a --name my-cluster.k8s.local

    This opens the instance group configuration in your default editor. Modify the minSize and maxSize fields. For example, to scale to 3 nodes:

    NODE_TYPE // yaml
    apiVersion: kops.k8s.io/v1alpha2
    kind: InstanceGroup
    metadata:
      creationTimestamp: "2023-10-27T14:33:00Z"
      labels:
        kops.k8s.io/cluster: my-cluster.k8s.local
      name: nodes-us-east-1a
    spec:
      image: kope.io/k8s-1.27-debian-bookworm-amd64-hvm-20231026
      machineType: t2.micro
      maxSize: 3
      minSize: 3
      nodeLabels:
        kops.k8s.io/instance-group: nodes-us-east-1a
      role: Node
      subnets:
      - us-east-1a
  2. Update Cluster: Apply the changes.

    NODE_TYPE // bash
    kops update cluster my-cluster.k8s.local --yes
    kops rolling-update cluster my-cluster.k8s.local --yes

    kops rolling-update cluster performs a rolling update, gradually replacing old nodes with new ones to minimize downtime; without --yes it only prints a preview. A pure scale-out like this one takes effect as soon as kops update cluster resizes the Auto Scaling group; a rolling update is needed when existing nodes must be replaced, for example after an instance type or image change.

  3. Verify Scaling: Check the number of nodes in your cluster.

    NODE_TYPE // bash
    kubectl get nodes
    NODE_TYPE // output
    NAME                            STATUS   ROLES    AGE   VERSION
    ip-172-20-34-65.ec2.internal    Ready    node     24h   v1.27.4
    ip-172-20-42-210.ec2.internal   Ready    node     24h   v1.27.4
    ip-172-20-57-205.ec2.internal   Ready    master   24h   v1.27.4
    ip-172-20-61-100.ec2.internal   Ready    node     10m   v1.27.4

Updating Your Kubernetes Cluster

Keeping your Kubernetes cluster up-to-date is crucial for security and stability.

  1. Check Available Updates: Determine if any updates are available.

    NODE_TYPE // bash
    kops update cluster my-cluster.k8s.local

    kops will show you the changes it plans to make. Pay close attention to Kubernetes version upgrades.
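    Note that kops update only reconciles your existing specification. To move to a newer Kubernetes release, kops provides a separate upgrade command that bumps the version in the cluster spec; run it without --yes first to preview the change:

    ```bash
    # Preview the proposed Kubernetes version bump, then apply it to the spec
    kops upgrade cluster my-cluster.k8s.local
    kops upgrade cluster my-cluster.k8s.local --yes
    # The new version still has to be rolled out with update + rolling-update
    ```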

  2. Apply Updates: Apply the updates to your cluster.

    NODE_TYPE // bash
    kops update cluster my-cluster.k8s.local --yes
    kops rolling-update cluster my-cluster.k8s.local --yes
  3. Verify Update: Check the Kubernetes version of your nodes after the update.

    NODE_TYPE // bash
    kubectl get nodes -o wide
    NODE_TYPE // output
    NAME                            STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
    ip-172-20-34-65.ec2.internal    Ready    node     24h   v1.28.0   172.20.34.65    3.21.123.45     Debian GNU/Linux 12 (bookworm)   6.1.0-13-amd64   containerd://1.6.20
    ip-172-20-42-210.ec2.internal   Ready    node     24h   v1.28.0   172.20.42.210   44.23.234.123   Debian GNU/Linux 12 (bookworm)   6.1.0-13-amd64   containerd://1.6.20
    ip-172-20-57-205.ec2.internal   Ready    master   24h   v1.28.0   172.20.57.205   54.24.135.67    Debian GNU/Linux 12 (bookworm)   6.1.0-13-amd64   containerd://1.6.20
    ip-172-20-61-100.ec2.internal   Ready    node     10m   v1.28.0   172.20.61.100   3.22.146.78     Debian GNU/Linux 12 (bookworm)   6.1.0-13-amd64   containerd://1.6.20

Deleting Your Kubernetes Cluster

When you no longer need your cluster, it’s important to delete it to avoid incurring unnecessary costs.

  1. Delete Cluster: Use kops delete cluster to remove the cluster.

    NODE_TYPE // bash
    kops delete cluster my-cluster.k8s.local --yes
    Deleting a cluster is irreversible. Make sure you have backed up any important data before proceeding.
  2. Verify Deletion: Confirm that the resources associated with the cluster have been removed from your AWS account. This may take some time.
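This check can be scripted. kops tags the EC2 instances it creates with a KubernetesCluster tag, so a query like the following (using the example cluster name from above) should return nothing once deletion has finished:

```bash
# The cluster should no longer appear in the state store
kops get clusters
# List any surviving EC2 instances tagged with the cluster name
aws ec2 describe-instances \
  --filters "Name=tag:KubernetesCluster,Values=my-cluster.k8s.local" \
            "Name=instance-state-name,Values=pending,running" \
  --query "Reservations[].Instances[].InstanceId" --output text
```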

Conclusion

In this tutorial, you learned how to manage the lifecycle of Kubernetes clusters using kops. You created a cluster, scaled it, performed an update, and finally, deleted it. This comprehensive approach ensures you can effectively manage your Kubernetes infrastructure from creation to deletion.
