
Kubernetes Service Types and Endpoints

Introduction

This tutorial explains Kubernetes Services, focusing on the ClusterIP, NodePort, and LoadBalancer types and on how Services interact with Endpoints. The table below summarizes the three types.

Service Type   Reachability                  Typical Use Case
ClusterIP      Only inside the cluster       Internal communication (e.g., a frontend app talking to a backend API)
NodePort       External (via Node IP)        Development/testing, or when using an external load balancer you manage yourself
LoadBalancer   External (via dedicated IP)   Production applications requiring stable, external (internet-facing) access

Prerequisites

  • A running Kubernetes cluster (e.g., Minikube, Kind, or a cloud-based cluster).
  • kubectl configured to interact with your cluster.
  • Basic knowledge of Kubernetes Pods and Deployments.
  • Familiarity with YAML syntax.

Task 1: Deploying a Sample Application

First, let’s deploy a simple application that our Services will expose. We’ll use a basic Nginx deployment.

  1. Create a file named nginx-deployment.yaml with the following content:

    NODE_TYPE // yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
  2. Apply the deployment:

    NODE_TYPE // bash
    kubectl apply -f nginx-deployment.yaml
    NODE_TYPE // output
    deployment.apps/nginx-deployment created
  3. Verify the deployment:

    NODE_TYPE // bash
    kubectl get deployments
    NODE_TYPE // output
    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   2/2     2            2           1m
  4. Verify the pods are running:

    NODE_TYPE // bash
    kubectl get pods -l app=nginx
    NODE_TYPE // output
    NAME                               READY   STATUS    RESTARTS   AGE
    nginx-deployment-6d6d84cf67-64t9b   1/1     Running   0          1m
    nginx-deployment-6d6d84cf67-j7bxt   1/1     Running   0          1m

Task 2: Creating a ClusterIP Service

ClusterIP is the default Service type. It exposes the Service on a cluster-internal IP address that is reachable only from within the cluster.

graph TD
    %% Define Nodes and Pods
    subgraph K8s_Cluster["Kubernetes Cluster"]
        direction TB
        
        ClientPod1["Client Pod A\n(wants Backend)"]:::clientBox
        ClientPod2["Client Pod B\n(wants Backend)"]:::clientBox

        %% The Service (Updated professional styling)
        Service_CIP["Service: backend\nType: ClusterIP\nInternal IP: 10.96.0.100\nPort: 80"]:::serviceBox

        %% Backend Pods
        subgraph BackendNodes["Worker Nodes (Backend)"]
            direction LR
            Pod1["Backend Pod 1\nIP: 10.244.1.5:8080"]:::podBox
            Pod2["Backend Pod 2\nIP: 10.244.2.8:8080"]:::podBox
        end
    end

    %% Define Flow
    ClientPod1 ==>|10.96.0.100:80| Service_CIP
    ClientPod2 ==>|10.96.0.100:80| Service_CIP

    %% Load Balancing
    Service_CIP -.->|Balancing to| Pod1
    Service_CIP -.->|Balancing to| Pod2

    %% Styling
    classDef clientBox fill:#E8F0F7,stroke:#5A91BF,stroke-width:2px,rx:8,ry:8,color:#222;
    classDef podBox fill:#D8FDF8,stroke:#4ED8C7,stroke-width:2px,rx:8,ry:8,color:#222;
    classDef serviceBox fill:#C7E6E2,stroke:#009688,stroke-width:3px,color:#004D40,rx:10,ry:10,font-weight:bold;
    style BackendNodes fill:#F1F1F1,stroke:#C6C6C6,stroke-width:1px,color:#666;
    style K8s_Cluster fill:#F9F9F9,stroke:#666,stroke-width:2px,color:#222;
    linkStyle 0,1 stroke:#5A91BF,stroke-width:3px,fill:none;
    linkStyle 2,3 stroke:#009688,stroke-width:2px,stroke-dasharray: 5 5;
  1. Create a file named nginx-clusterip-service.yaml with the following content:

    NODE_TYPE // yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-clusterip-service
    spec:
      type: ClusterIP
      selector:
        app: nginx
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
  2. Apply the service:

    NODE_TYPE // bash
    kubectl apply -f nginx-clusterip-service.yaml
    NODE_TYPE // output
    service/nginx-clusterip-service created
  3. Describe the service to find its ClusterIP:

    NODE_TYPE // bash
    kubectl describe service nginx-clusterip-service
    NODE_TYPE // output
    Name:              nginx-clusterip-service
    Namespace:         default
    Labels:            <none>
    Annotations:       <none>
    Selector:          app=nginx
    Type:              ClusterIP
    IP Family Policy:  SingleStack
    IP Families:       IPv4
    IP:                10.96.124.144 # This is the ClusterIP
    IPs:               10.96.124.144
    Port:              <unset>  80/TCP
    TargetPort:        80/TCP
    Endpoints:         10.244.0.5:80,10.244.0.6:80
    Session Affinity:  None
    Events:            <none>
    The IP field in the output is the ClusterIP assigned to the service. The Endpoints field shows the IP addresses of the Pods that this service is routing traffic to.
  4. Verify connectivity from within the cluster. Create a file named curl-pod.yaml that launches a pod in the same namespace with curl preinstalled (the curlimages/curl image ships with it):

    NODE_TYPE // yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: curl-pod
    spec:
      containers:
      - name: curl
        image: curlimages/curl
        command: ["sleep", "infinity"] # Keep the pod running
  5. Apply the pod:

    NODE_TYPE // bash
    kubectl apply -f curl-pod.yaml
    NODE_TYPE // output
    pod/curl-pod created
  6. Exec into the pod.

    NODE_TYPE // bash
    kubectl exec -it curl-pod -- /bin/sh
  7. Within the curl-pod terminal, use curl to access the ClusterIP:

    NODE_TYPE // bash
    curl nginx-clusterip-service:80
    NODE_TYPE // output
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    The curlimages/curl image ships with curl preinstalled, so this should work out of the box. If you used a different image and get a “command not found” error, install curl with that image’s package manager (for example, apk add curl on Alpine, or apt-get update && apt-get install -y curl on Debian-based images).
  8. Exit the curl pod:

    NODE_TYPE // bash
    exit
  9. Clean up the curl pod:

    NODE_TYPE // bash
    kubectl delete pod curl-pod
    NODE_TYPE // output
    pod "curl-pod" deleted
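The name used with curl above resolves through cluster DNS, which gives every Service a DNS record. The fully qualified name follows a fixed pattern; the helper below is only an illustration of how that name is composed (the function name is made up, and cluster.local is the common default cluster domain, which may differ in your cluster):

```shell
# Sketch: compose the fully qualified DNS name cluster DNS resolves
# for a Service: <service>.<namespace>.svc.<cluster-domain>
service_fqdn() {
  svc="$1"; ns="${2:-default}"; domain="${3:-cluster.local}"
  echo "${svc}.${ns}.svc.${domain}"
}

service_fqdn nginx-clusterip-service
# nginx-clusterip-service.default.svc.cluster.local
```

The short name used inside the curl pod works because the pod’s resolver search path appends the namespace and cluster domain automatically.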

Task 3: Creating a NodePort Service

NodePort exposes the service on each Node’s IP at a static port, making it accessible from outside the cluster at <NodeIP>:<NodePort>.

graph TD
    %% Define External Source (Dark grey for external entities)
    UserBrowser["External User\n(e.g., Browser)"]:::externalBox

    %% Define Cluster (Light grey background)
    subgraph K8s_Cluster["Kubernetes Cluster"]
        direction TB

        %% Nodes (Updated styling)
        subgraph Node1["Worker Node 1\nIP: 192.168.1.10"]
            Proxy1["kube-proxy"]:::proxyBox
            PodA["App Pod 1\nIP: 10.244.1.5"]:::podBox
        end

        subgraph Node2["Worker Node 2\nIP: 192.168.1.11"]
            Proxy2["kube-proxy"]:::proxyBox
            PodB["App Pod 2\nIP: 10.244.2.8"]:::podBox
        end

        %% The Service (NodePort/Muted Teal)
        Service_NP["Service: frontend\nType: NodePort\nNodePort: 32000\nClusterIP: 10.96.0.200"]:::serviceBox
    end

    %% External Flow (User accesses NodePort on ANY Node)
    UserBrowser ==>|192.168.1.10:32000| Proxy1
    UserBrowser ==>|192.168.1.11:32000| Proxy2

    %% Internal Forwarding (kube-proxy routes to ClusterIP)
    Proxy1 -->|Routes to ClusterIP| Service_NP
    Proxy2 -->|Routes to ClusterIP| Service_NP

    %% Internal Load Balancing
    Service_NP -.->|Load Balances to| PodA
    Service_NP -.->|Load Balances to| PodB

    %% Styling
    classDef externalBox fill:#37474F,stroke:#263238,stroke-width:2px,rx:8,ry:8,color:#FFF;
    classDef podBox fill:#D8FDF8,stroke:#4ED8C7,stroke-width:2px,rx:8,ry:8,color:#222;
    classDef proxyBox fill:#E8EAF6,stroke:#7986CB,stroke-width:1px,color:#222,rx:8,ry:8;
    classDef serviceBox fill:#C7E6E2,stroke:#009688,stroke-width:3px,color:#004D40,rx:10,ry:10,font-weight:bold;
    style Node1 fill:#F1F1F1,stroke:#C6C6C6,stroke-width:1px,color:#666;
    style Node2 fill:#F1F1F1,stroke:#C6C6C6,stroke-width:1px,color:#666;
    style K8s_Cluster fill:#F9F9F9,stroke:#666,stroke-width:2px,color:#222;
    linkStyle 0,1 stroke:#263238,stroke-width:3px,fill:none;
    linkStyle 2,3 stroke:#7986CB,stroke-width:2px,stroke-dasharray: 3 3;
    linkStyle 4,5 stroke:#009688,stroke-width:2px,stroke-dasharray: 5 5;
  1. Create a file named nginx-nodeport-service.yaml with the following content:

    NODE_TYPE // yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-nodeport-service
    spec:
      type: NodePort
      selector:
        app: nginx
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
        nodePort: 30080 # Choose a port between 30000-32767
    The nodePort must be within the range of 30000-32767. If omitted, Kubernetes will automatically assign a port.
  2. Apply the service:

    NODE_TYPE // bash
    kubectl apply -f nginx-nodeport-service.yaml
    NODE_TYPE // output
    service/nginx-nodeport-service created
  3. Describe the service to find the assigned NodePort:

    NODE_TYPE // bash
    kubectl describe service nginx-nodeport-service
    NODE_TYPE // output
    Name:                     nginx-nodeport-service
    Namespace:                default
    Labels:                   <none>
    Annotations:              <none>
    Selector:                 app=nginx
    Type:                     NodePort
    IP Family Policy:         SingleStack
    IP Families:              IPv4
    IP:                       10.108.224.197
    IPs:                      10.108.224.197
    Port:                     <unset>  80/TCP
    TargetPort:               80/TCP
    NodePort:                 <unset>  30080/TCP  # This is the NodePort
    Endpoints:                10.244.0.5:80,10.244.0.6:80
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:                   <none>
  4. Access the service from outside the cluster. You’ll need the IP address of one of your Kubernetes nodes.

    NODE_TYPE // bash
    # For Minikube:
    minikube ip
    NODE_TYPE // bash
    # For other clusters, find the IP of one of your nodes.
    kubectl get nodes -o wide
  5. Open a web browser and navigate to <NodeIP>:<NodePort> (e.g., 192.168.64.2:30080). You should see the Nginx welcome page.
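The 30000-32767 restriction mentioned in step 1 comes from the kube-apiserver’s default --service-node-port-range setting. As a rough sketch, the check the API server effectively applies to an explicit nodePort looks like this (the helper name is made up for illustration):

```shell
# Sketch: is a requested nodePort inside the default
# --service-node-port-range of 30000-32767?
valid_node_port() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

valid_node_port 30080 && echo "30080: accepted"
valid_node_port 8080  || echo "8080: rejected (outside range)"
```

Requesting a port outside the range causes `kubectl apply` to fail with a validation error, which is why this tutorial uses 30080.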

Task 4: Creating a LoadBalancer Service

LoadBalancer exposes the service externally using a cloud provider’s load balancer.

This type typically works out of the box in cloud environments (AWS, GCP, Azure), where the provider provisions the load balancer automatically. On Minikube, run minikube tunnel in a separate terminal to assign an external IP; on bare-metal clusters, MetalLB can fill the same role.
graph TD
    %% Define External Source
    UserBrowser["External User\n(e.g., Browser)"]:::externalBox

    %% Cloud Infrastructure
    subgraph CloudProvider["Cloud Provider"]
        direction TB
        ExternalLB["External Load Balancer\nPublic IP: 35.1.1.1"]:::cloudLBBox
    end

    %% Kubernetes Cluster
    subgraph K8s_Cluster["Kubernetes Cluster"]
        direction TB

        %% The Service (The Logical Manager)
        Service_LB["Service: prod-app\n(Type: LoadBalancer)\nNodePort: 31500"]:::serviceBox

        %% Nodes
        subgraph Node1["Worker Node 1"]
            Kernel1["Linux Kernel\n(DNAT Rules)"]:::proxyBox
            PodA["App Pod 1"]:::podBox
        end

        subgraph Node2["Worker Node 2"]
            Kernel2["Linux Kernel\n(DNAT Rules)"]:::proxyBox
            PodB["App Pod 2"]:::podBox
        end
    end

    %% RELATIONSHIPS (Control Plane)
    Service_LB -.->|1. Configures| ExternalLB
    Service_LB -.->|2. Programs| Kernel1
    Service_LB -.->|2. Programs| Kernel2

    %% TRAFFIC FLOW (Data Plane)
    UserBrowser ==>|Traffic| ExternalLB
    ExternalLB ==>|via NodePort 31500| Kernel1
    ExternalLB ==>|via NodePort 31500| Kernel2

    %% Redirection
    Kernel1 ==>|Direct to Pod| PodA
    Kernel2 ==>|Direct to Pod| PodB

    %% STYLING
    classDef externalBox fill:#37474F,stroke:#263238,stroke-width:2px,rx:8,ry:8,color:#FFF;
    classDef cloudLBBox fill:#E3F2FD,stroke:#1E88E5,stroke-width:3px,color:#0D47A1,rx:12,ry:12;
    classDef podBox fill:#D8FDF8,stroke:#4ED8C7,stroke-width:2px,rx:8,ry:8,color:#222;
    classDef proxyBox fill:#FFF,stroke:#009688,stroke-width:1px,color:#222,rx:5,ry:5;
    classDef serviceBox fill:#C7E6E2,stroke:#009688,stroke-width:3px,color:#004D40,rx:10,ry:10,font-weight:bold;
    style CloudProvider fill:#E1F5FE,stroke:#03A9F4,stroke-width:2px;
    style K8s_Cluster fill:#F9F9F9,stroke:#666,stroke-width:2px;
    linkStyle 0,1,2 stroke:#009688,stroke-width:1px,stroke-dasharray: 5 5;
    linkStyle 3,4,5,6,7 stroke:#1E88E5,stroke-width:3px;
  1. Create a file named nginx-loadbalancer-service.yaml with the following content:

    NODE_TYPE // yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-loadbalancer-service
    spec:
      type: LoadBalancer
      selector:
        app: nginx
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
  2. Apply the service:

    NODE_TYPE // bash
    kubectl apply -f nginx-loadbalancer-service.yaml
    NODE_TYPE // output
    service/nginx-loadbalancer-service created
  3. Describe the service to find the External IP:

    NODE_TYPE // bash
    kubectl describe service nginx-loadbalancer-service
    NODE_TYPE // output
    Name:                     nginx-loadbalancer-service
    Namespace:                default
    Labels:                   <none>
    Annotations:              <none>
    Selector:                 app=nginx
    Type:                     LoadBalancer
    IP Family Policy:         SingleStack
    IP Families:              IPv4
    IP:                       10.102.132.224
    IPs:                      10.102.132.224
    Port:                     <unset>  80/TCP
    TargetPort:               80/TCP
    NodePort:                 <unset>  30952/TCP
    Endpoints:                10.244.0.5:80,10.244.0.6:80
    Session Affinity:         None
    External Traffic Policy:  Cluster
    LoadBalancer Ingress:     34.121.123.124  # This is the external IP
    Events:
      Type    Reason                Age   From                Message
      ----    ------                ----  ----                -------
      Normal  EnsuringLoadBalancer  11s   service-controller  Ensuring load balancer
      Normal  EnsuredLoadBalancer   11s   service-controller  Ensured load balancer
    The LoadBalancer Ingress field (reported as EXTERNAL-IP by kubectl get service) shows the IP address assigned by the cloud provider’s load balancer.
    sequenceDiagram
        participant User as External User
        participant LB as Cloud Load Balancer
        participant Node1 as Worker Node (Kernel/iptables)
        participant PodB as Target Pod (on Node 2)
    
        Note over Node1: kube-proxy has already<br/>programmed the kernel rules
        User->>LB: Request to Public IP
        LB->>Node1: Forward to NodePort (31500)
        activate Node1
        Note right of Node1: Kernel intercepts packet:<br/>"Load balance to Pod B IP"
        Node1->>PodB: Direct route to Pod IP
        deactivate Node1
        activate PodB
        PodB-->>User: Direct Response (via Gateway)
        deactivate PodB
  4. Access the service using the External IP in a web browser. It may take a few minutes for the load balancer to provision and the IP to become accessible.
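While the cloud provider provisions the load balancer, kubectl get service shows <pending> in the EXTERNAL-IP column. A small sketch of a readiness check you might script around that value (the helper name is made up for illustration):

```shell
# Sketch: treat a service's EXTERNAL-IP value as ready only once it is
# non-empty and no longer the <pending> placeholder.
lb_ready() {
  [ -n "$1" ] && [ "$1" != "<pending>" ]
}

lb_ready "<pending>"      || echo "still provisioning"
lb_ready "34.121.123.124" && echo "ready: open http://34.121.123.124"
```

Once the value flips from <pending> to a real address, the service is reachable from the internet.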

Task 5: Examining Endpoints

Endpoints are the IP addresses and ports of the Pods that a Service routes traffic to. Kubernetes automatically manages Endpoints based on the Service’s selector.
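For reference, the Endpoints object is itself a regular API resource that the control plane keeps in sync with the matching, ready Pods. A sketch of roughly what it contains for this tutorial’s service (the IPs mirror the example outputs and are illustrative):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: nginx-clusterip-service   # always named after the Service
subsets:
- addresses:
  - ip: 10.244.0.5                # one entry per ready Pod
  - ip: 10.244.0.6
  ports:
  - port: 80
    protocol: TCP
```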

  1. List the endpoints for the nginx-clusterip-service:

    NODE_TYPE // bash
    kubectl get endpoints nginx-clusterip-service
    NODE_TYPE // output
    NAME                      ENDPOINTS                         AGE
    nginx-clusterip-service   10.244.0.5:80,10.244.0.6:80   6m30s

    This shows the IP addresses and ports of the Pods that are part of the nginx-deployment and match the Service’s selector (app=nginx).

  2. Scale the nginx-deployment to 3 replicas:

    NODE_TYPE // bash
    kubectl scale deployment nginx-deployment --replicas=3
    NODE_TYPE // output
    deployment.apps/nginx-deployment scaled
  3. Wait for the new Pod to become ready, then check the endpoints again:

    NODE_TYPE // bash
    kubectl get endpoints nginx-clusterip-service
    NODE_TYPE // output
    NAME                      ENDPOINTS                                     AGE
    nginx-clusterip-service   10.244.0.5:80,10.244.0.6:80,10.244.0.7:80   7m

    You should see a new IP address in the ENDPOINTS list, corresponding to the new Pod.
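The growing ENDPOINTS list is what the service spreads traffic across. In reality kube-proxy programs kernel rules (iptables or IPVS) that typically pick a backend at random per connection; the round-robin loop below is only a conceptual sketch of requests fanning out over the same three endpoints shown above:

```shell
# Conceptual sketch only: real balancing is done by kube-proxy's
# iptables/IPVS rules, usually random per connection, not round-robin.
endpoints="10.244.0.5:80 10.244.0.6:80 10.244.0.7:80"

i=0
for request in req1 req2 req3 req4; do
  set -- $endpoints          # reset positional params to the endpoint list
  shift $(( i % 3 ))         # rotate through the list
  echo "$request -> $1"
  i=$(( i + 1 ))
done
```

The key point is that scaling the Deployment changed nothing about the Service itself; only the endpoint list behind its stable ClusterIP grew.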

Task 6: Cleaning Up

To clean up the resources created in this tutorial:

NODE_TYPE // bash
kubectl delete service nginx-clusterip-service nginx-nodeport-service nginx-loadbalancer-service
kubectl delete deployment nginx-deployment

Conclusion

In this tutorial, you learned about the different types of Kubernetes Services (ClusterIP, NodePort, LoadBalancer) and how they expose applications. You also learned how Kubernetes automatically manages Endpoints based on Pods that match the Service’s selector. Understanding these concepts is crucial for building scalable and resilient applications in Kubernetes.
