
Restricting Linux Capabilities with AppArmor

Introduction

AppArmor (Application Armor) is a Linux kernel security module that provides mandatory access control (MAC) to restrict the capabilities of programs.

It enforces security policies, known as profiles, that define the file system and network resources a program can access. By confining applications, AppArmor reduces the potential impact of security breaches, limiting the damage a compromised application can cause.

It is known for its ease of use and integration with various Linux distributions, providing a robust layer of defense to enhance system security.


Key Concepts

  • Profiles: AppArmor profiles define the permitted and denied actions for an application, enhancing security by restricting programs to a limited set of resources.
  • Modes: AppArmor operates in two modes:
    1. Enforcement: Enforces the rules defined in the profile, blocking any unauthorized actions.
    2. Complain: Logs unauthorized actions but does not block them, useful for developing and testing profiles.

Profile Components

  • Capability Entries: Define allowed capabilities (e.g., network access, raw socket usage).
  • Network Rules: Control access to network resources.
  • File access permissions: Specify file and directory access permissions.
#include <tunables/global>

profile /bin/ping {
  # Include common safe defaults
  #include <abstractions/base>
  #include <abstractions/nameservice>

  # Allow necessary capabilities
  capability net_raw,
  capability setuid,

  # Allow raw network access
  network inet raw,

  # File access permissions
  /bin/ping ixr,
  /etc/modules.conf r,
}

Common Commands

  • Check profile status:
sudo aa-status
  • Reload a profile:
sudo apparmor_parser -r <profile_file>
  • Disable a profile:
sudo aa-disable <profile_name>
  • Switch a profile to complain mode:
sudo aa-complain <profile_name>
  • Switch a profile to enforce mode:
sudo aa-enforce <profile_name>

Best Practices

  • Least Privilege: Ensure profiles grant the minimum necessary permissions to applications.
  • Regular Updates: Keep profiles up to date with application changes and security patches.
  • Testing: Use complain mode to test new or modified profiles before enforcing them.
  • Monitoring: Regularly check logs for denied actions to identify potential issues or required profile adjustments.
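
For the monitoring step above, denied actions appear in the kernel log as audit messages tagged apparmor="DENIED". A quick way to surface them (a sketch; the exact log source varies by distribution):

```shell
# Recent AppArmor denials from the kernel ring buffer (requires root):
sudo dmesg | grep -i 'apparmor' | grep -i 'denied'

# On systemd hosts, the journal carries the same audit messages:
sudo journalctl -k | grep -i 'apparmor="DENIED"'
```

Each denial line names the profile, the operation (e.g. open), and the path involved, which tells you exactly which rule to add when tuning a profile in complain mode.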

Kubernetes Integration

In Kubernetes, you can enhance pod security by specifying AppArmor profiles within the securityContext of a pod or container.

Pod-Level AppArmor Profile:

To apply an AppArmor profile to all containers in a pod, include the securityContext in the pod specification:

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-pod
spec:
  securityContext:
    appArmorProfile:
      type: Localhost
      localhostProfile: my-apparmor-profile
  containers:
    - name: my-container
      image: my-image

Container-Level AppArmor Profile:

To apply an AppArmor profile to a specific container, define the securityContext within the container specification:

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-pod
spec:
  containers:
    - name: my-container
      image: my-image
      securityContext:
        appArmorProfile:
          type: Localhost
          localhostProfile: my-apparmor-profile

Key Points:

  • Profile Types:
    • RuntimeDefault: Uses the container runtime's default AppArmor profile.
    • Localhost: Uses a profile loaded on the host; specify the profile name in localhostProfile.
    • Unconfined: Runs the container without AppArmor confinement.

  • Profile Availability: Ensure the specified AppArmor profiles are loaded on all nodes where the pods might run. You can verify loaded profiles by checking the /sys/kernel/security/apparmor/profiles file on each node.

  • Kubernetes Version Compatibility: The use of securityContext for AppArmor profiles is supported in Kubernetes versions 1.30 and above. For earlier versions, AppArmor profiles are specified through annotations.
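
To satisfy the profile-availability requirement, the profile must be loaded into the kernel on every node before pods referencing it are scheduled there. A sketch (the path under /etc/apparmor.d/ is an assumption; adjust to where your profile lives):

```shell
# Load (or reload) the profile into the kernel on each node:
sudo apparmor_parser -r /etc/apparmor.d/my-apparmor-profile

# Confirm the profile is now known to the kernel:
grep my-apparmor-profile /sys/kernel/security/apparmor/profiles
```

If the profile is missing on a node, the kubelet will refuse to start the container there, so this check is worth scripting across all nodes.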

By configuring AppArmor profiles within the securityContext, you can effectively manage and enforce security policies for your applications in Kubernetes, enhancing the overall security of your containerized environments.

Scanning Images with Trivy

Introduction

Trivy is an open-source security scanner that detects vulnerabilities in container images, file systems, and Git repositories. It identifies security issues in both operating system packages and application dependencies within the container. By using a regularly updated vulnerability database, Trivy helps ensure that containers are secure and compliant with security best practices.


Commands

The following Trivy commands, specifically related to image scanning, are useful for the CKS exam:

Basic Image Scan

trivy image <image_name>

Scans a specified container image for vulnerabilities.

Output and Formatting

  • Output in JSON Format:
trivy image -f json -o results.json <image_name>

Scans the image and outputs the results in JSON format to a file.

  • Output in Table Format:
trivy image -f table <image_name>

Scans the image and outputs the results in a table format (default format).

Severity Filtering

  • Filter by Severity:
trivy image --severity HIGH,CRITICAL <image_name>

Scans the image and reports only high and critical severity vulnerabilities.

Cache Management

  • Clear Cache:
trivy image --clear-cache

Clears the local cache used by Trivy. Note that this invocation only clears the cache; run the scan itself as a separate command.

Ignoring Specific Vulnerabilities

  • Ignore Specific Vulnerabilities:
trivy image --ignorefile .trivyignore <image_name>

Uses a .trivyignore file to specify vulnerabilities to ignore during scanning.
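
The .trivyignore file itself is plain text with one vulnerability ID per line; lines starting with # are treated as comments. A minimal sketch (the CVE IDs below are placeholders, not real findings):

```
# Accepted risk, revisit next quarter
CVE-2023-0001
# False positive for our usage
CVE-2023-0002
```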

Advanced Options

  • Timeout Setting:
trivy image --timeout 5m <image_name>

Sets a timeout for the scanning process.

  • Ignore Unfixed Vulnerabilities:
trivy image --ignore-unfixed <image_name>

Ignores vulnerabilities that do not have a fix yet.

  • Skip Update:
trivy image --skip-update <image_name>

Skips updating the vulnerability database before scanning.

Comprehensive Scan with All Details

trivy image --severity HIGH,CRITICAL --ignore-unfixed --skip-update -f json -o results.json <image_name>

A comprehensive scan that filters by severity, ignores unfixed vulnerabilities, skips database update, and outputs results in JSON format to a file.
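
Once you have results.json, you can do quick tallies without extra tooling. A rough text-based sketch (Trivy's JSON output nests findings under Results[].Vulnerabilities[], each carrying a Severity field; use jq for robust parsing):

```shell
# Count CRITICAL findings in the JSON report (rough, grep-based tally):
grep -o '"Severity": "CRITICAL"' results.json | wc -l
```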


These commands allow you to perform detailed and customizable scans on container images, ensuring you can identify and manage vulnerabilities.

Documentation Guide For CKS

Domains & Competencies

Topic                                       Weightage (%)
Cluster Setup                               10
Cluster Hardening                           15
System Hardening                            15
Minimize Microservice Vulnerabilities       20
Supply Chain Security                       20
Monitoring, Logging and Runtime Security    20



Building and Modifying Container Images Using Docker Commands

CKAD

Overview

Building and modifying container images are crucial skills for developers working with Docker and Kubernetes. This guide covers the essential Docker commands for creating and updating container images, especially for Go applications.

Documentation

Docker CLI documentation.


Building a Container Image

Warning

Ensure this Dockerfile is placed in the root of your Go project directory.

  1. Create a Dockerfile:

    Start by writing a Dockerfile for your Go application. This file contains instructions to build the image.

    Example Dockerfile for Go:

    Dockerfile
    FROM golang:1.16-buster
    WORKDIR /app
    COPY go.mod go.sum ./
    RUN go mod download
    COPY *.go ./
    RUN go build -o /myapp
    CMD ["/myapp"]
    
  2. Build the Image:

    Use the docker build command.

    docker build -t my-go-app .
    
  3. Verify the Image:

    Check the newly created image using docker images.


Modifying an Existing Container Image
  1. Update the Dockerfile:

    Make necessary changes to the Dockerfile, such as updating base images or changing build instructions.

  2. Rebuild the Image:

    Use the docker build command with a new tag or version.

    docker build -t my-go-app:v2 .
    
  3. Clean Up Old Images:

    Remove unused or old images to free up space.

    docker image prune
    

Advanced Docker Commands
  1. Tagging Images:

    Use docker tag to assign new tags to existing images for better version control.

    docker tag my-go-app my-go-app:v1
    
  2. Inspecting Images:

    docker inspect provides detailed information about an image's configuration and layers.

    docker inspect my-go-app
    
  3. Pushing to Docker Hub:

    Push your image to a registry like Docker Hub using docker push:

    docker push myusername/my-go-app:v1
    

Integration with Kubernetes
  • Once the Docker image is ready, it can be deployed in a Kubernetes cluster using deployment manifests.

Conclusion

Understanding Docker commands for building and modifying container images is vital for Go developers and for CKAD preparation in a Kubernetes environment. This knowledge enables efficient development, testing, and deployment of containerized applications.


Dockerfile Creation for a Go Application

CKAD

Overview

Creating a Dockerfile for a Go application involves defining steps to build a lightweight and efficient Docker image. This includes setting up the Go environment, managing dependencies, and preparing the application for deployment.


Example Dockerfile for Go

This example illustrates a basic Dockerfile setup for a Go application.

# Start from a Debian-based Go image
FROM golang:1.16-buster

# Set the working directory inside the container
WORKDIR /app

# Copy the Go Modules manifests
COPY go.mod go.sum ./

# Download Go modules
RUN go mod download

# Copy the go source files
COPY *.go ./

# Compile the application
RUN go build -o /myapp

# Command to run the executable
CMD ["/myapp"]

Steps to Create a Dockerfile for Go
  1. Use a Go base image like golang:1.16-buster.
  2. Set the working directory with WORKDIR /app.
  3. Copy go.mod and go.sum and run go mod download.
  4. Copy your Go source files into the container.
  5. Compile your app with RUN go build -o /myapp.
  6. Define the command to run the application using CMD ["/myapp"].

Integration with Kubernetes

Deploying the Go application in Kubernetes requires building the Docker image and defining Kubernetes resources like Deployments or Services.


Conclusion

A Dockerfile for a Go application sets up the necessary environment for running Go applications in containers. This setup facilitates easy deployment and scaling within a Kubernetes cluster, leveraging the power of containerization and orchestration.


Deploy a multi-container Pod using sidecar or init container patterns

CKAD

Deploying a Pod with a Sidecar Container

This example demonstrates deploying a multi-container Pod where one container (the sidecar) reads data written by the main container.

multi-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-pod
spec:
  containers:
    - name: writer
      image: busybox:stable
      command: ['sh', '-c', 'echo "The writer wrote this!" > /output/data.txt; while true; do sleep 5; done']
      volumeMounts:
        - name: shared
          mountPath: /output
    - name: sidecar
      image: busybox:stable
      command: ['sh', '-c', 'while true; do cat /input/data.txt; sleep 5; done']
      volumeMounts:
        - name: shared
          mountPath: /input
  volumes:
    - name: shared
      emptyDir: {}

In this deployment, the writer container writes data to a shared volume, and the sidecar container continuously reads and displays this data from the shared volume.
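
To see the pattern working, apply the manifest and read the sidecar's logs; you should see the writer's message repeated every few seconds (a sketch, assuming the manifest above is saved as multi-pod.yaml):

```shell
kubectl apply -f multi-pod.yaml
# Wait for both containers to come up, then tail what the sidecar reads:
kubectl wait --for=condition=Ready pod/multi-pod --timeout=60s
kubectl logs multi-pod -c sidecar
```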


Deploying a Pod with an Init Container

This example illustrates deploying a Pod with an init container that must complete its task before the main container starts.

init-container.yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-container
spec:
  containers:
    - name: nginx
      image: nginx:stable
  initContainers:
    - name: busybox
      image: busybox:stable
      command: ['sh', '-c', 'sleep 30']

In this setup, the busybox init container runs a simple sleep command for 30 seconds. Once this init container completes its execution, the main nginx container will start.
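
You can watch the init phase happen; for roughly the first 30 seconds the pod's STATUS column shows Init:0/1, then switches to Running once nginx starts (assuming the manifest above is saved as init-container.yaml):

```shell
kubectl apply -f init-container.yaml
# Observe the STATUS column transition from Init:0/1 to Running:
kubectl get pod init-container --watch
```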


Conclusion

These examples can be deployed in your Kubernetes environment. They illustrate the use of sidecar and init containers, offering practical insights into their deployment and functionality in a Kubernetes setting.


Work with persistent and ephemeral volumes in Pods

CKAD

Overview

Understanding how to work with persistent and ephemeral volumes in Kubernetes Pods is crucial for managing data storage and lifecycle. Persistent volumes (PVs) provide long-term storage, while ephemeral volumes are temporary and tied to the Pod's lifecycle.

Documentation

Kubernetes Volumes.


Using Persistent Volumes (PVs)

Persistent Volumes in Kubernetes are used for storing data beyond the lifecycle of a Pod. They are especially important for stateful applications like databases.

Persistent Volume Claim (PVC)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Pod
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: my-storage
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: my-pvc

In this example, a PVC is created and then mounted into a Pod. The data stored in /var/www/html will persist even if the Pod is deleted.
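
Before relying on the volume, confirm the claim actually bound; binding requires a matching PersistentVolume or a default StorageClass in the cluster:

```shell
# STATUS should show Bound once a PersistentVolume satisfies the claim:
kubectl get pvc my-pvc

# If the claim stays Pending, the Events section explains why:
kubectl describe pvc my-pvc
```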


Using Ephemeral Volumes

Ephemeral volumes, such as emptyDir, are tied to the lifecycle of a Pod. They are used for temporary data that doesn't need to persist.

Pod with emptyDir Volume
apiVersion: v1
kind: Pod
metadata:
  name: my-temp-pod
spec:
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
      - mountPath: "/tmp"
        name: temp-storage
  volumes:
    - name: temp-storage
      emptyDir: {}

In this setup, an emptyDir volume is created for temporary data storage. The data in /tmp is lost when the Pod is deleted.


Integration with Kubernetes Ecosystem
  • PVs can be backed by various storage systems like NFS, cloud storage, or local storage.
  • Ephemeral volumes are useful for caching, temporary computations, or as a workspace for applications.
  • Kubernetes StatefulSets can be used with PVs for stateful applications requiring stable, persistent storage.

Conclusion

Both persistent and ephemeral volumes play key roles in Kubernetes data management. Understanding their characteristics and use cases helps in effectively architecting and managing containerized applications in Kubernetes.


Monitoring Applications in Kubernetes with kubectl top

CKAD

Overview

Monitoring resource usage in Kubernetes clusters is crucial for ensuring the efficient operation of applications. The Kubernetes CLI tool kubectl top provides real-time views into the resource usage of nodes and pods in the cluster.

Installing Metrics Server

Before using kubectl top, you need to have the Metrics Server installed in your Kubernetes cluster. The Metrics Server collects resource metrics from Kubelets and exposes them via the Kubernetes API server for use by tools like kubectl top.

  1. Install Metrics Server

    You can install the Metrics Server using kubectl with the following command:

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
    

    This command deploys the Metrics Server in the kube-system namespace.

  2. Verify Installation

    Ensure that the Metrics Server has been deployed successfully:

    kubectl get deployment metrics-server -n kube-system
    

Documentation

For more detailed information and configuration options, visit Metrics Server on GitHub.


Understanding kubectl top

kubectl top displays the current CPU and memory usage for nodes or pods, fetching data from the Metrics Server.

  1. Monitoring Pod Resource Usage

    To view the resource usage of pods, use:

    kubectl top pod
    

    This lists all pods in the default namespace with their CPU and memory usage.

  2. Specifying Namespaces

    Specify a different namespace using the -n flag:

    kubectl top pod -n [namespace]
    
  3. Monitoring Node Resource Usage

    To view resource usage across nodes:

    kubectl top node
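
kubectl top also supports sorting and cluster-wide listing, which helps spot the heaviest consumers quickly (these flags exist in recent kubectl versions; the sort key must be cpu or memory):

```shell
# Pods across all namespaces, heaviest memory users first:
kubectl top pod -A --sort-by=memory

# Nodes ordered by CPU usage:
kubectl top node --sort-by=cpu
```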
    

Best Practices for Monitoring
  1. Regularly check resource usage to prevent issues.
  2. Use kubectl top alongside other monitoring tools.
  3. Monitor both pods and nodes for overall health.

Conclusion

kubectl top is an essential tool for real-time monitoring of resource usage in Kubernetes. With the Metrics Server installed, it becomes a powerful asset for maintaining the health and efficiency of your Kubernetes cluster.


Exploring Admission Control in Kubernetes

CKAD

Overview

Admission Control in Kubernetes refers to a set of plugins that intercept requests to the Kubernetes API server after authentication and authorization. These plugins can modify or validate requests to the API server, ensuring compliance with specific policies or enhancing security.


Admission Control Plugins

There are several admission control plugins available in Kubernetes, each serving a specific purpose.

Some common plugins include:

  • NamespaceLifecycle
  • LimitRanger
  • ServiceAccount
  • NodeRestriction
  • PodSecurity (successor to PodSecurityPolicy, which was removed in Kubernetes 1.25)
  • ResourceQuota

Steps to Configure Admission Control
  1. Identify Required Plugins

    Determine which admission control plugins are necessary for your specific requirements.

  2. Configure kube-apiserver

    Admission control plugins are enabled in the kube-apiserver configuration. Locate the kube-apiserver manifest, typically at /etc/kubernetes/manifests/kube-apiserver.yaml. Add the --enable-admission-plugins flag with a comma-separated list of plugins. Example:

    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount
    
  3. Restart kube-apiserver

    After modifying the kube-apiserver manifest, restart the kube-apiserver process. This is usually handled automatically by Kubernetes when the manifest file is updated.

  4. Verify Plugin Activation

    Ensure that the plugins are active and working as expected by observing the API server logs or testing the functionality of the plugins.
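
One quick way to check step 4 is to inspect the flags the running API server was actually started with (the manifest path below is the typical kubeadm default and may differ in your cluster):

```shell
# From the static Pod manifest on the control-plane node:
grep 'enable-admission-plugins' /etc/kubernetes/manifests/kube-apiserver.yaml

# Or from the running process:
ps -ef | grep '[k]ube-apiserver' | tr ' ' '\n' | grep 'enable-admission-plugins'
```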


Conclusion

Admission Control is a powerful mechanism in Kubernetes for enforcing policies and enhancing the security of the cluster. Properly configuring admission control plugins can help maintain cluster stability and compliance with organizational standards. It is important to understand the implications of each plugin and configure them according to the specific needs of your Kubernetes environment.


Addressing API Deprecations in Application Code or Configurations

CKAD

Overview

API deprecation in Kubernetes is a critical process where changes to an API are announced well in advance, providing users ample time to update their code and tools. This is particularly important because Kubernetes removes support for deprecated APIs in General Availability (GA) only after 12 months or 3 Kubernetes releases, whichever is longer.


Addressing API Deprecations
  1. Stay Informed

    Regularly review the Kubernetes changelog and deprecation notices to stay ahead of upcoming changes.

  2. Update Code and Configurations

    Modify application code and configurations to adopt the updated API versions. This is crucial for maintaining compatibility and functionality.

  3. Test Changes

    After updating to newer APIs, thoroughly test your application to ensure there are no regressions or compatibility issues.

  4. Monitor for Future Deprecations

    Continuously monitor Kubernetes releases for new deprecations to ensure your application remains compatible with future Kubernetes versions.
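
A practical way to act on these steps is to search your manifests for API versions you know are deprecated or removed, and compare against what the cluster still serves (extensions/v1beta1 is used here as a well-known removed example; the ./manifests path is an assumption):

```shell
# Find manifests still pinned to a removed API group/version:
grep -rn 'apiVersion: extensions/v1beta1' ./manifests

# List the API versions the current cluster actually serves:
kubectl api-versions | grep '^networking.k8s.io'
```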


Conclusion

Proactively managing API deprecations in Kubernetes is essential for maintaining a stable and efficient application environment. By staying informed and making timely updates, developers can ensure seamless functionality and avoid potential disruptions caused by outdated APIs.