Managing Container Output Streams

Introduction

This tutorial guides you through managing and evaluating container output streams within a Kubernetes cluster. You’ll learn how to access stdout and stderr from running containers, which is crucial for debugging and monitoring applications. Prior knowledge of Kubernetes basics (pods, deployments, and services) is recommended, and you should have a working cluster with kubectl configured.

Prerequisites

  1. A running Kubernetes cluster (e.g., Minikube, Kind, or a cloud provider cluster).
  2. kubectl configured to connect to your cluster.
  3. Basic understanding of Kubernetes Pods and Deployments.

Task 1: Deploy a Simple Application

First, let’s deploy a simple application that generates output to stdout. We’ll use a basic nginx deployment for this purpose.

  1. Create a file named nginx-deployment.yaml with the following content:

    NODE_TYPE // yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
  2. Apply the deployment using kubectl:

    NODE_TYPE // bash
    kubectl apply -f nginx-deployment.yaml
    NODE_TYPE // output
    deployment.apps/nginx-deployment created
  3. Verify that the pods are running:

    NODE_TYPE // bash
    kubectl get pods
    NODE_TYPE // output
    NAME                                 READY   STATUS    RESTARTS   AGE
    nginx-deployment-66b6c48b7-8nqbm   1/1     Running   0          20s
    nginx-deployment-66b6c48b7-q4dzw   1/1     Running   0          20s

Task 2: View Logs Using kubectl logs

The primary way to access container logs in Kubernetes is using the kubectl logs command.

  1. Get the name of one of the nginx pods:

    NODE_TYPE // bash
    kubectl get pods -l app=nginx

    Let’s assume the pod name is nginx-deployment-66b6c48b7-8nqbm.

  2. View the logs for the container within the pod:

    NODE_TYPE // bash
    kubectl logs nginx-deployment-66b6c48b7-8nqbm

    This displays the combined stdout and stderr output from the nginx container. Beyond the image’s startup messages, you may not see much output at first, since nginx only logs when it serves requests.

  3. Tail the logs in real-time:

    NODE_TYPE // bash
    kubectl logs -f nginx-deployment-66b6c48b7-8nqbm

    The -f flag follows the log output, displaying new entries as they are generated.

  4. View logs from a specific container (if a pod has multiple containers):

    NODE_TYPE // bash
    kubectl logs nginx-deployment-66b6c48b7-8nqbm -c nginx

    In this case, it’s redundant since we only have one container named nginx. However, in multi-container pods, specifying the container name is essential.

  5. View the logs from the previous instance of the container:

    NODE_TYPE // bash
    kubectl logs --previous nginx-deployment-66b6c48b7-8nqbm -c nginx

    This is useful if the container has crashed and been restarted. The --previous flag retrieves logs from the terminated container instance, if available.

    The --previous flag only returns logs if the container has restarted at least once and the previous instance’s logs are still retained on the node.
  6. View logs for all pods matching a label selector:

    NODE_TYPE // bash
    kubectl logs -l app=nginx --all-containers

    This command aggregates logs from all containers in all pods that match the app=nginx label; the --all-containers flag includes every container in each matching pod. Note that when a label selector is used, kubectl shows only the most recent lines from each container by default (you can raise this with --tail). This is useful for debugging multi-replica applications.

    You can use label selectors to filter pods and containers for log viewing.
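Several other kubectl logs flags are useful for narrowing output. A few common ones are sketched below, reusing the pod name assumed earlier:

```shell
# Show only the last 20 lines
kubectl logs --tail=20 nginx-deployment-66b6c48b7-8nqbm

# Show only entries from the last 5 minutes
kubectl logs --since=5m nginx-deployment-66b6c48b7-8nqbm

# Prepend each line with the timestamp recorded by the container runtime
kubectl logs --timestamps nginx-deployment-66b6c48b7-8nqbm

# With a selector, prefix each line with the pod and container it came from
kubectl logs -l app=nginx --all-containers --prefix
```

These flags can be combined, for example --since with -f to follow only recent activity. (Running them requires a live cluster with the deployment from Task 1 in place.)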

Task 3: Generate Log Output in Nginx

To see more interesting logs, let’s generate some traffic to the nginx server.

  1. Expose the Deployment as a Service: Create a service to expose the nginx deployment. Save the following to nginx-service.yaml:

    NODE_TYPE // yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      selector:
        app: nginx
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      type: LoadBalancer

    Apply the service:

    NODE_TYPE // bash
    kubectl apply -f nginx-service.yaml
  2. Get the Service’s External IP (or NodePort): How you reach the service depends on your Kubernetes environment (Minikube vs. a cloud provider).

    • Minikube:
      NODE_TYPE // bash
      minikube service nginx-service --url
      This will print the URL to access the service.
    • Cloud Provider (e.g., GKE, EKS, AKS):
      NODE_TYPE // bash
      kubectl get service nginx-service
      Look for the EXTERNAL-IP column. It might take a few minutes to populate. If using LoadBalancer isn’t feasible, change the service type to NodePort, and then access it via <NodeIP>:<NodePort>.
  3. Generate Traffic: Once you have the URL or external IP and port, use curl or a web browser to generate traffic:

    NODE_TYPE // bash
    curl <your_nginx_url>
  4. Check the Logs Again: Now, repeat the kubectl logs command from Task 2. You should see access logs from Nginx:

    NODE_TYPE // bash
    kubectl logs -f nginx-deployment-66b6c48b7-8nqbm
    NODE_TYPE // output
    /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
    /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
    10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
    /docker-entrypoint.sh: Configuration complete; ready for start up
    172.17.0.1 - - [09/Apr/2026:12:00:00 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.81.0" "-"
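The access line above follows nginx’s default “combined” log format. As an illustration only (this regex and helper function are not part of nginx or Kubernetes), a short Python sketch of splitting such a line into fields:

```python
import re

# Regex for nginx's default "combined" access log format:
# remote_addr - remote_user [time_local] "request" status bytes "referer" "user_agent"
LOG_PATTERN = re.compile(
    r'(?P<addr>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def parse_access_line(line):
    """Return a dict of fields from one access-log line, or None if it doesn't match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

sample = ('172.17.0.1 - - [09/Apr/2026:12:00:00 +0000] '
          '"GET / HTTP/1.1" 200 615 "-" "curl/7.81.0" "-"')
fields = parse_access_line(sample)
print(fields["status"], fields["request"])  # → 200 GET / HTTP/1.1
```

Piping kubectl logs output through a small script like this is a quick way to count status codes or filter requests while debugging.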

Task 4: Understanding Log Levels and Configuration

While kubectl logs lets you view the output, understanding how logs are generated and configured within the container is crucial. Nginx, for instance, has a configuration file (usually /etc/nginx/nginx.conf) that dictates the logging format and level. Other applications use different logging libraries and configuration methods.

  1. Accessing the Nginx Configuration (Example): You can use kubectl exec to access the container and inspect its configuration:

    NODE_TYPE // bash
    kubectl exec -it nginx-deployment-66b6c48b7-8nqbm -- bash

    This opens a shell inside the container.

  2. Inspect the Configuration: Inside the container’s shell, view the Nginx configuration:

    NODE_TYPE // bash
    cat /etc/nginx/nginx.conf

    Examine the access_log and error_log directives to see where and how Nginx writes its logs. In the official nginx image, those log paths are symlinked to /dev/stdout and /dev/stderr, which is how the output reaches kubectl logs.

  3. Exit the Shell: Type exit to leave the container’s shell.

    Modifying configuration files directly inside running containers is generally discouraged. Instead, use ConfigMaps or other Kubernetes configuration management techniques.
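For orientation, the logging-related part of a default nginx.conf looks roughly like the following. Treat this as a sketch rather than the exact file you will see, since details vary between nginx versions:

```nginx
http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    # In the official image both paths are symlinks to stdout/stderr
    access_log  /var/log/nginx/access.log  main;
    error_log   /var/log/nginx/error.log   notice;
}
```

Changing the log_format here (via a ConfigMap-mounted config, not an in-place edit) changes what you see in kubectl logs.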

Task 5: Centralized Logging Solutions

kubectl logs is useful for quick debugging, but for production environments, a centralized logging solution is essential. Kubernetes integrates with several logging systems, including:

  • Elasticsearch/Kibana (EFK Stack): A popular open-source solution.
  • Fluentd/Fluent Bit: Log collectors that can forward logs to various backends.
  • Google Cloud Logging (GCP): Integrated with Google Kubernetes Engine (GKE).
  • Amazon CloudWatch Logs (AWS): Integrated with Amazon Elastic Kubernetes Service (EKS).
  • Azure Monitor Logs (Azure): Integrated with Azure Kubernetes Service (AKS).

Setting up a centralized logging system is beyond the scope of this tutorial, but it typically involves deploying a logging agent (like Fluentd or Fluent Bit) as a DaemonSet on your cluster. These agents collect logs from each node and forward them to a central storage and analysis system.
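As a rough illustration only (the image tag, namespace, and volume paths below are assumptions, and a real deployment also needs a ConfigMap for its pipeline configuration plus RBAC for enriching logs with Kubernetes metadata), a node-level collector DaemonSet has this general shape:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit          # hypothetical name
  namespace: logging        # assumed namespace
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:2.2   # assumed tag
        volumeMounts:
        # Container log files on each node live under /var/log/pods
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```

Because it is a DaemonSet, one collector pod runs on every node and can read that node’s container log files directly.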

Task 6: Cleaning Up

To remove the resources created during this tutorial:

NODE_TYPE // bash
kubectl delete deployment nginx-deployment
kubectl delete service nginx-service
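Equivalently, if the manifest files from the earlier tasks are still on disk, you can delete both resources by file:

```shell
kubectl delete -f nginx-deployment.yaml -f nginx-service.yaml
```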

Conclusion

In this tutorial, you learned how to manage and evaluate container output streams in Kubernetes. You explored using kubectl logs to view logs from running containers, generating traffic to produce log output, and briefly touched on centralized logging solutions. Understanding how to access and analyze container logs is a fundamental skill for debugging and monitoring applications deployed on Kubernetes. Remember that for production deployments, a centralized logging system is crucial for effective log management and analysis.
