
Using ServiceAccounts in Kubernetes

CKAD

Overview

ServiceAccounts in Kubernetes provide identities for processes running in Pods, enabling them to authenticate with the Kubernetes API server.


Example ServiceAccount Creation

Here's how to create a ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-serviceaccount
automountServiceAccountToken: true

Steps to Create and Use ServiceAccounts
  1. Create the ServiceAccount

    Define your ServiceAccount in a YAML file as shown above. Save this file as my-serviceaccount.yaml. Apply it with kubectl apply -f my-serviceaccount.yaml.

  2. Assign the ServiceAccount to a Pod

    Specify the ServiceAccount in the Pod's specification. Example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
    spec:
      serviceAccountName: my-serviceaccount
      containers:
      - name: my-container
        image: nginx
    

    Save this as my-pod.yaml and apply it with kubectl apply -f my-pod.yaml.

  3. Location of the Mounted Token

    The ServiceAccount token is automatically mounted at /var/run/secrets/kubernetes.io/serviceaccount in each container.

    This directory contains:
      • token: The ServiceAccount token.
      • ca.crt: Certificate for TLS communication with the API server.
      • namespace: The namespace of the Pod.

  4. Using the Token for API Authentication

    Applications in the container can use the token for Kubernetes API server authentication. The token can be accessed at /var/run/secrets/kubernetes.io/serviceaccount/token.
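As a sketch of step 4, here is how application code might read the mounted token and build the bearer header the API server expects. A temporary directory stands in for the real mount path, and the token value is fake; inside a real Pod you would read from /var/run/secrets/kubernetes.io/serviceaccount directly.

```python
import tempfile
from pathlib import Path

# Simulate the mounted ServiceAccount directory (a real Pod has this at
# /var/run/secrets/kubernetes.io/serviceaccount).
sa_dir = Path(tempfile.mkdtemp())
(sa_dir / "token").write_text("fake-jwt")  # the real token is a JWT

def auth_header(token_dir: Path) -> dict:
    """Build the Authorization header for a Kubernetes API request."""
    token = (token_dir / "token").read_text().strip()
    return {"Authorization": f"Bearer {token}"}

print(auth_header(sa_dir))
```

Any HTTP client in the container can attach this header when calling https://kubernetes.default.svc.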


Accessing the Kubernetes API from a Pod

Here’s how a container might use the token to communicate with the Kubernetes API.

apiVersion: v1
kind: Pod
metadata:
  name: api-communicator-pod
spec:
  serviceAccountName: my-serviceaccount
  containers:
  - name: api-communicator
    image: busybox  
    command: ["sh", "-c", "curl -H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://kubernetes.default.svc"]
Conclusion

ServiceAccounts in Kubernetes facilitate the secure operation of processes within Pods by providing a means of authenticating with the Kubernetes API server. The automatic mounting of ServiceAccount tokens into Pods simplifies the process of managing secure communications and access controls within a Kubernetes environment.


Detailed Guide on Kubernetes Ingress

CKAD
1. Introduction to Kubernetes Ingress
  • Purpose: Kubernetes Ingress manages external access to applications running within the cluster.
  • Functionality: It routes traffic to one or more Kubernetes Services and can offer additional features like SSL termination.
2. Understanding Ingress in Kubernetes
  • Ingress vs. Service: While Services provide internal routing, Ingress allows external traffic to reach the appropriate Services.
  • Ingress Controller: Essential for implementing the Ingress functionality. The choice of controller affects how the Ingress behaves and is configured.
3. Creating a NodePort Service for Ingress
  • Objective: Set up a NodePort Service that the Ingress will route external traffic to.
  • Service YAML Example:

    apiVersion: v1
    kind: Service
    metadata:
      name: service-for-ingress
    spec:
      type: NodePort
      selector:
        app: web-app
      ports:
        - protocol: TCP
          port: 8080
          targetPort: 80
          nodePort: 30080
    
  • Explanation: This Service, service-for-ingress, is exposed externally on port 30080 and routes traffic to pods labeled app: web-app.
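The three port fields are easy to confuse, so here is a small sketch (with a hypothetical placeholder node IP) tracing how the values in the YAML above relate to each other:

```python
# Values mirror the Service YAML above.
service = {"port": 8080, "targetPort": 80, "nodePort": 30080}

def external_url(node_ip: str, svc: dict) -> str:
    # External clients hit <NodeIP>:<nodePort>; the Service forwards to
    # targetPort on the selected pods.
    return f"http://{node_ip}:{svc['nodePort']}"

print(external_url("192.0.2.10", service))  # 192.0.2.10 is a placeholder IP
print(f"pod receives traffic on port {service['targetPort']}")
```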

4. Defining an Ingress to Expose the Service
  • Objective: Expose the service-for-ingress externally using an Ingress.
  • Ingress YAML Example:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: service-for-ingress
                    port:
                      number: 8080
    
  • Explanation: The example-ingress directs external HTTP traffic to the service-for-ingress at the specified path /.

5. Verifying Ingress Functionality
  • Testing Access: Use external HTTP requests to the Ingress to ensure it routes traffic correctly to the service-for-ingress.
  • SSL Termination (Optional): Configure SSL termination on the Ingress for secure traffic (if applicable).
6. Summary
  • Effective Use of Ingress: Understanding how to configure and use Ingresses is crucial for managing external access to applications in Kubernetes.

Notes for Network and Services

CKAD

1. Introduction to Network Policies in Kubernetes

Network Policies in Kubernetes allow you to control the flow of traffic at the IP address or port level, which is crucial for ensuring that only authorized services can communicate with each other.

2. Understanding Pod Isolation
  • Non-isolated Pods: By default, pods in Kubernetes can receive traffic from any source. Without any network policies, pods are considered non-isolated.
  • Isolated Pods: When a pod is selected by a network policy, it becomes isolated, and only traffic allowed by the network policies will be permitted.
3. Creating a Front-end and Back-end Pod
  • Scenario: We have a front-end web application and a back-end API service. We want to ensure that only the front-end can communicate with the back-end.
  • Front-end Pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: frontend-pod
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend-container
        image: nginx
    
  • Back-end Pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: backend-pod
      labels:
        app: backend
    spec:
      containers:
      - name: backend-container
        image: node
    
4. Implementing a Default Deny Network Policy
  • Objective: Create a default deny policy to ensure that no unauthorized communication occurs.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-all
      namespace: default
    spec:
      podSelector: {}
      policyTypes:
        - Ingress
    
5. Allowing Traffic from Front-end to Back-end
  • Objective: Allow only the front-end pod to communicate with the back-end pod.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: backend
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend
          ports:
            - protocol: TCP
              port: 80
    
  • Explanation: This policy allows ingress traffic to the back-end pod (label app: backend) only from the front-end pod (label app: frontend).
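As an illustration only (actual enforcement is done by the cluster's CNI plugin, and this model ignores the separate default-deny policy), the selector logic of this policy can be sketched as:

```python
def matches(selector: dict, labels: dict) -> bool:
    # matchLabels semantics: every selector key/value must be present.
    return all(labels.get(k) == v for k, v in selector.items())

def ingress_allowed(src_labels: dict, dst_labels: dict, port: int) -> bool:
    policy_pod_selector = {"app": "backend"}  # pods the policy applies to
    allowed_from = {"app": "frontend"}        # permitted source pods
    if not matches(policy_pod_selector, dst_labels):
        return True  # pod not selected -> not isolated by this policy
    return matches(allowed_from, src_labels) and port == 80

print(ingress_allowed({"app": "frontend"}, {"app": "backend"}, 80))  # True
print(ingress_allowed({"app": "other"}, {"app": "backend"}, 80))     # False
```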

6. Testing and Verifying Network Policies
  • Testing: Use kubectl exec to simulate traffic from the front-end to the back-end and verify that the traffic is allowed. Attempt to access the back-end from a different pod and observe that the traffic is blocked.

Summary

Employing network policies ensures secure communication within your Kubernetes cluster, adhering to the principle of least privilege.


Kubernetes Services with Pod Creation

CKAD

1. Introduction to Kubernetes Services
  • Purpose: Kubernetes Services allow for the exposure of applications running on Pods, both within the cluster and externally.
  • Types of Services: Includes ClusterIP for internal exposure and NodePort for external exposure.
2. Creating a Sample Application Pod
  • Objective: Deploy a simple web application pod to demonstrate service exposure.
  • Pod YAML Example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-app-pod
      labels:
        app: web-app
    spec:
      containers:
      - name: nginx-container
        image: nginx
    
  • Explanation: This creates a pod named web-app-pod with an Nginx container, labeled app: web-app, which we will expose using Services.

3. Exposing the Pod with a ClusterIP Service
  • Objective: Expose the web-app-pod within the cluster.
  • Service YAML Example:

    apiVersion: v1
    kind: Service
    metadata:
      name: clusterip-web-service
    spec:
      type: ClusterIP
      selector:
        app: web-app
      ports:
        - protocol: TCP
          port: 8081
          targetPort: 80
    
  • Explanation: The clusterip-web-service exposes the web-app-pod inside the cluster on TCP port 8081.

4. Exposing the Pod with a NodePort Service
  • Objective: Expose the web-app-pod externally, outside the Kubernetes cluster.
  • Service YAML Example:

    apiVersion: v1
    kind: Service
    metadata:
      name: nodeport-web-service
    spec:
      type: NodePort
      selector:
        app: web-app
      ports:
        - protocol: TCP
          port: 8082
          targetPort: 80
          nodePort: 30081
    
  • Explanation: The nodeport-web-service makes the pod accessible externally on TCP port 30081 on each node in the cluster.

5. Verifying Service Exposure
  • ClusterIP Verification: Use kubectl exec to access the web-app-pod from another pod within the cluster using the ClusterIP service.
  • NodePort Verification: Access the web-app-pod externally using <NodeIP>:30081, where NodeIP is the IP address of any node in the cluster.
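
Within the cluster, the ClusterIP Service is also reachable by its Service DNS name. A small sketch of that standard naming convention:

```python
def service_dns(name: str, namespace: str = "default") -> str:
    # Standard in-cluster DNS name: <service>.<namespace>.svc.cluster.local
    return f"{name}.{namespace}.svc.cluster.local"

url = f"http://{service_dns('clusterip-web-service')}:8081"
print(url)
```

A pod in any namespace could curl this URL to reach the web-app-pod through the ClusterIP service.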
6. Summary
  • Effective Use of Services: Understanding how to expose pods using ClusterIP and NodePort services is essential for application accessibility in Kubernetes.

Kubernetes Deployment Strategies

CKA

Deploying applications in Kubernetes can be achieved through various strategies, each tailored to different operational requirements and risk tolerances. This document outlines three primary deployment strategies: Canary Deployment, Blue-Green Deployment, and Rolling Update.


Canary Deployment

Canary Deployment involves releasing a new version of the application to a limited subset of users or servers. This strategy is named after the 'canary in a coal mine' concept, where miners would use a canary's sensitivity to dangerous gases as an early warning system.

The primary goal of canary deployments is to reduce the risk associated with releasing new software versions by exposing them to a small, controlled group of users or servers.

  • Minimizes the impact of potential issues in the new version.
  • Allows for real-world testing and feedback.
  • Gradual exposure increases confidence in the new release.

In Kubernetes, canary deployments are managed by incrementally updating pod instances with the new version and routing a small percentage of traffic to them. Monitoring and logging are crucial at this stage to track the performance and stability of the new release.

  • Ideal for high-risk releases or major feature rollouts.
  • Suitable for applications where user feedback is critical before wide release.

Blue-Green Deployment

Blue-Green Deployment involves maintaining two identical production environments, only one of which serves live production traffic at any time. One environment (Blue) runs the current version, while the other (Green) runs the new version.

The primary goal is to switch traffic from Blue to Green with minimal downtime and risk, allowing instant rollback if necessary.

  • Zero downtime deployments.
  • Instant rollback to the previous version if needed.
  • Simplifies the process of switching between versions.

This is achieved in Kubernetes by preparing a parallel environment (Green) with the new release. Once it's ready and tested, the service’s traffic is switched from the Blue environment to the Green one, typically by updating the service selector labels.

  • Best for critical applications where downtime is unacceptable.
  • Useful in production environments where reliability is paramount.

Rolling Update

A Rolling Update method gradually replaces instances of the old version of an application with the new version without downtime.

The key goal is to update an application seamlessly without affecting the availability of the application.

  • Ensures continuous availability during updates.
  • Does not require additional resources, unlike Blue-Green Deployment.
  • Offers a balance between speed and safety.

Kubernetes automates rolling updates. When a new deployment is initiated, Kubernetes gradually replaces pods of the previous version of the application with new ones, while maintaining application availability and balancing load.

  • Ideal for standard, routine updates.
  • Suitable for environments where resource optimization is necessary.
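
The gradual replacement can be sketched as a toy simulation, assuming one pod is replaced at a time (roughly maxUnavailable=1); Kubernetes's actual rollout logic is more sophisticated:

```python
def rolling_update(pods, new_version):
    """Replace one pod at a time, snapshotting the fleet after each step."""
    pods = pods[:]
    states = []
    for i in range(len(pods)):
        pods[i] = new_version   # replace a single pod
        states.append(pods[:])  # the rest keep serving traffic
    return states

history = rolling_update(["v1", "v1", "v1"], "v2")
for step, state in enumerate(history, 1):
    print(step, state)
# At every step the full replica count is running, so availability is kept.
```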

Describe advantages of IaC patterns

Advantages of Infrastructure as Code (IaC) Patterns

Infrastructure as Code (IaC) is not just a trend; it's a paradigm shift in how we manage and operate IT infrastructure. By treating infrastructure as if it were software, IaC brings numerous advantages to the table, making it a cornerstone practice in the world of DevOps and cloud computing. Let's delve into some of the key advantages of IaC patterns:


1. Increased Efficiency and Speed:

  • Automated Deployment: IaC allows for the automation of infrastructure deployment, significantly reducing the time and effort required compared to manual processes.
  • Quick Scalability: You can easily scale up or down based on demand, which is particularly beneficial in cloud environments.
  • Faster Time to Market: With rapid deployment, organizations can reduce the time from development to production, accelerating time to market for their products.

2. Consistency and Standardization:

  • Uniform Environments: IaC ensures that every deployment is consistent, eliminating the "it works on my machine" problem. This is crucial for maintaining uniformity across development, staging, and production environments.
  • Reusable Code: IaC allows you to use the same patterns and templates across different environments and projects, ensuring standardization.
  • Error Reduction: Manual errors are significantly reduced as the infrastructure setup is defined in code, which can be tested and validated.

3. Improved Collaboration and Version Control:

  • Better Team Collaboration: IaC enables better collaboration among team members as the code can be shared, reviewed, and edited by multiple people.
  • Version Control Integration: Infrastructure changes can be tracked using version control systems, providing a history of modifications and the ability to revert to previous states if necessary.

4. Cost Management and Optimization:

  • Predictable Costs: With IaC, you can better predict and manage infrastructure costs by defining and controlling the resources being used.
  • Resource Optimization: IaC helps in identifying underutilized resources, allowing for optimization and cost savings.

5. Enhanced Security and Compliance:

  • Security as Code: Security policies can be integrated into the IaC, ensuring that all deployments comply with the necessary security standards.
  • Automated Compliance Checks: Regular compliance checks can be automated, reducing the risk of non-compliance and associated penalties.

6. Disaster Recovery and High Availability:

  • Easy Backup and Restore: IaC makes it easier to back up your infrastructure configuration and restore it in the event of a disaster.
  • High Availability Setup: Ensuring high availability and fault tolerance becomes more manageable with IaC, as you can codify these aspects into the infrastructure.

7. Documented Infrastructure:

  • Self-documenting Code: The code itself acts as documentation, providing insights into the infrastructure setup and changes over time.
  • Improved Knowledge Sharing: New team members can quickly understand the infrastructure setup through the IaC scripts, facilitating better knowledge transfer.

Understanding Infrastructure as Code (IaC)

Introduction

Infrastructure as Code (IaC) has revolutionized the way IT infrastructure is managed and provisioned, offering a systematic, automated approach to handling large-scale, complex systems. This article aims to shed light on the essentials of IaC, with a special focus on its implementation through Terraform, an open-source IaC tool.


1. What is Infrastructure as Code (IaC)?

IaC is a key DevOps practice that involves managing and provisioning computing infrastructure through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. It turns manual tasks into scripts that can be automated, providing a number of benefits:

  • Consistency and Accuracy: By codifying infrastructure, IaC minimizes human errors and ensures consistent configurations across multiple deployments.
  • Speed and Efficiency: Automated processes mean faster deployment and scaling.
  • Documentation: The code itself serves as documentation, showing exactly what's in the environment.
  • Version Control: Infrastructure changes can be versioned, tracked, and rolled back if necessary, using standard version control systems.

2. Terraform: A Primer

Terraform, created by HashiCorp, is an open-source tool that allows you to define, preview, and deploy infrastructure as code. It supports numerous cloud service providers like AWS, Google Cloud, and Microsoft Azure.

  • Defining Infrastructure: Terraform uses HCL (HashiCorp Configuration Language), a declarative language that describes your infrastructure.
  • Immutable Infrastructure: Terraform favors an immutable infrastructure model; changes that cannot be applied in place cause the affected resource to be destroyed and recreated rather than patched.
  • State Management: Terraform maintains a state file, enabling it to map real-world resources to your configuration, keep track of metadata, and improve performance for large infrastructures.

Implementing Blue/Green and Canary Deployment Strategies in Kubernetes

CKAD

Overview

Learn how to implement blue/green and canary deployment strategies in Kubernetes. These methods enhance stability and reliability when deploying new versions of applications.

Key Concepts

Blue/Green and Canary deployments are strategies to reduce risks during application updates, allowing gradual and controlled rollouts.

Blue/Green Deployment

What is Blue/Green Deployment?

Blue/Green Deployment involves two identical environments: one active (Blue) and one idle (Green). New versions are deployed to Green and, after testing, traffic is switched from Blue to Green.

Blue Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue-deployment
  labels:
    app: bluegreen-test
    color: blue
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bluegreen-test
      color: blue
  template:
    metadata:
      labels:
        app: bluegreen-test
        color: blue
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
Green Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green-deployment
  labels:
    app: bluegreen-test
    color: green
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bluegreen-test
      color: green
  template:
    metadata:
      labels:
        app: bluegreen-test
        color: green
    spec:
      containers:
        - name: nginx
          image: nginx:1.15.8
          ports:
            - containerPort: 80
Service to Switch Traffic
apiVersion: v1
kind: Service
metadata:
  name: bluegreen-test-svc
spec:
  selector:
    app: bluegreen-test
    color: blue  # Change to green to switch traffic
  ports:
    - protocol: TCP
      port: 80

Switching Traffic

Update the color label in the Service from blue to green to direct traffic to the new version.
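
The effect of that one-line switch can be modeled with a toy routing table; the pod descriptions are illustrative stand-ins for the real endpoints:

```python
# Which pods each (app, color) label combination resolves to.
endpoints = {
    ("bluegreen-test", "blue"): "nginx:1.14.2 pods",
    ("bluegreen-test", "green"): "nginx:1.15.8 pods",
}

def route(selector: dict) -> str:
    # The Service routes to whichever pods match its selector.
    return endpoints[(selector["app"], selector["color"])]

selector = {"app": "bluegreen-test", "color": "blue"}
print(route(selector))       # traffic goes to the blue pods
selector["color"] = "green"  # the one-line switch
print(route(selector))       # all traffic now goes to the green pods
```

Because the selector change is atomic, all traffic cuts over at once, and reverting the label rolls everything back just as quickly.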


Canary Deployment

What is Canary Deployment?

Canary Deployment involves rolling out a new version to a small subset of users before deploying it to the entire user base, allowing for gradual and controlled updates.

Main Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: main-deployment
spec:
  replicas: 5  # Main user base
  selector:
    matchLabels:
      app: canary-test
      environment: main
  template:
    metadata:
      labels:
        app: canary-test
        environment: main
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80

Canary Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary-deployment
spec:
  replicas: 1  # Subset of users
  selector:
    matchLabels:
      app: canary-test
      environment: main
  template:
    metadata:
      labels:
        app: canary-test
        environment: main
    spec:
      containers:
        - name: nginx
          image: nginx:1.15.8
          ports:
            - containerPort: 80

Service to Direct Traffic

apiVersion: v1
kind: Service
metadata:
  name: canary-test-svc
spec:
  selector:
    app: canary-test
  ports:
    - protocol: TCP
      port: 80

Managing Traffic

Control user exposure to the new version by adjusting the number of replicas in the canary deployment.
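
Since the Service selects pods from both deployments, the split can be estimated from replica counts alone (assuming roughly even load balancing across pods):

```python
# Replica counts from the two Deployments above.
main_replicas, canary_replicas = 5, 1
canary_share = canary_replicas / (main_replicas + canary_replicas)
print(f"~{canary_share:.0%} of requests hit the canary")
```

Scaling the canary deployment up (and the main one down) gradually shifts more traffic to the new version.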


Conclusion

Blue/Green and Canary deployment strategies in Kubernetes offer a methodical approach to manage application updates, reducing risks and ensuring a smoother rollout process.


CKAD kubectl Cheat Sheet

CKAD

0. Context Management

0.1 View Current Context

kubectl config current-context

0.2 List All Contexts

kubectl config get-contexts

0.3 Switch Context

kubectl config use-context my-context
  • my-context: Name of the context to switch to.

1. Pods

1.1 Managing a Pod

Creating a Pod
kubectl run my-pod --image=nginx:latest --restart=Never --env=VAR1=value1
  • --image nginx:latest: Specifies the container image.
  • --restart Never: Controls the restart policy.
  • --env VAR1=value1: Sets environment variables.

Declarative:

  • Generate YAML:
kubectl run my-pod --image=nginx:latest --restart=Never --env=VAR1=value1 --dry-run=client -o yaml > my-pod.yaml
  • Apply YAML:
kubectl apply -f my-pod.yaml
Getting Pods
kubectl get pods -o wide --watch
  • -o wide: Provides more detailed output.
  • --watch: Watches for changes in real-time.
Describing a Pod
kubectl describe pod my-pod

2. Deployments

2.1 Managing Deployments

Creating a Deployment
kubectl create deployment my-deployment --image=nginx:latest --replicas=2
  • --image nginx:latest: Specifies the container image.
  • --replicas 2: Number of desired replicas.

Declarative:

  • Generate YAML:
kubectl create deployment my-deployment --image=nginx:latest --replicas=2 --dry-run=client -o yaml > my-deployment.yaml
  • Apply YAML:
kubectl apply -f my-deployment.yaml
Scaling a Deployment
kubectl scale deployment my-deployment --replicas=5
  • --replicas 5: Sets the number of desired replicas.

Declarative:

  • Update YAML: Adjust replicas in my-deployment.yaml file.
  • Apply YAML:
kubectl apply -f my-deployment.yaml

3. Services

3.1 Creating a Service

kubectl expose deployment my-deployment --port=80 --type=ClusterIP
  • --port 80: Specifies the port number.
  • --type ClusterIP: Defines the type of service.

Declarative:

  • Generate YAML:
kubectl expose deployment my-deployment --port=80 --type=ClusterIP --dry-run=client -o yaml > my-service.yaml
  • Apply YAML:
kubectl apply -f my-service.yaml

4. Namespaces

4.1 Managing Namespaces

Creating a Namespace
kubectl create namespace my-namespace

Declarative:

  • Generate YAML:
kubectl create namespace my-namespace --dry-run=client -o yaml > my-namespace.yaml
  • Apply YAML:
kubectl apply -f my-namespace.yaml
Listing Namespaces
kubectl get namespaces

5. Configuration

5.1 Managing ConfigMaps and Secrets

Creating a ConfigMap
kubectl create configmap my-configmap --from-literal=key1=value1 --from-file=./config-file.txt
  • --from-literal key1=value1: Sets a key-value pair directly.
  • --from-file ./config-file.txt: Creates a ConfigMap from a file.

Declarative:

  • Generate YAML:
kubectl create configmap my-configmap --from-literal=key1=value1 --from-file=./config-file.txt --dry-run=client -o yaml > my-configmap.yaml

  • Apply YAML:
kubectl apply -f my-configmap.yaml
Creating a Secret
kubectl create secret generic my-secret --from-literal=key1=value1 --from-file=./secret-file.txt
  • --from-literal key1=value1: Sets a key-value pair for the secret.
  • --from-file ./secret-file.txt: Creates a Secret from a file.

Declarative:

  • Generate YAML:
kubectl create secret generic my-secret --from-literal=key1=value1 --from-file=./secret-file.txt --dry-run=client -o yaml > my-secret.yaml
  • Apply YAML:
kubectl apply -f my-secret.yaml

6. Monitoring and Logging

6.1 Getting Logs

kubectl logs my-pod -f --since=1h
  • -f: Follow log output in real-time.
  • --since 1h: Show logs since a certain time.

7. Jobs and CronJobs

7.1 Managing Jobs and CronJobs

Creating a Job
kubectl create job my-job --image=busybox
  • --image busybox: Specifies the container image.

Declarative:

  • Generate YAML:
kubectl create job my-job --image=busybox --dry-run=client -o yaml > my-job.yaml
  • Apply YAML:
kubectl apply -f my-job.yaml
Creating a CronJob
kubectl create cronjob my-cronjob --schedule="*/5 * * * *" --image=busybox
  • --schedule "*/5 * * * *": Sets the cron schedule in cron format.

Declarative:

  • Generate YAML:
kubectl create cronjob my-cronjob --schedule="*/5 * * * *" --image=busybox --dry-run=client -o yaml > my-cronjob.yaml
  • Apply YAML:
kubectl apply -f my-cronjob.yaml

8. Rolling Updates and Rollbacks

8.1 Managing Updates and Rollbacks

Updating a Deployment
kubectl set image deployment/my-deployment nginx=nginx:1.9.1

Declarative:

  • Update YAML: Adjust image in my-deployment.yaml.
  • Apply YAML:
kubectl apply -f my-deployment.yaml
Rolling Back a Deployment
kubectl rollout undo deployment/my-deployment

Declarative:

  • Use previous version of my-deployment.yaml.
  • Apply YAML:
kubectl apply -f my-deployment.yaml

9. Resource Management

9.1 Setting Resource Requests and Limits

kubectl set resources deployment/my-deployment --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi
  • --limits cpu=200m,memory=512Mi and --requests cpu=100m,memory=256Mi: Set resource constraints.

Declarative:

  • Update YAML: Adjust resources in my-deployment.yaml.
  • Apply YAML:
kubectl apply -f my-deployment.yaml

10. Debugging

10.1 Diagnosing and Fixing Issues

Executing into a Container
kubectl exec -it my-pod -- /bin/bash

Declarative: Not applicable for exec command.

Port Forwarding
kubectl port-forward my-pod 8080:80
  • 8080:80: Forwards local port 8080 to the Pod's port 80.

Declarative: Not applicable for port-forward command.

Copying Files to/from a Container
kubectl cp /path/on/local/file.txt my-pod:/path/in/container/file.txt

Declarative: Not applicable for cp command.

11. Labels and Selectors

11.1 Managing Labels

Adding Labels to a Pod
kubectl label pods my-pod key1=value1 key2=value2
  • Adds labels key1=value1 and key2=value2 to my-pod.
Updating Labels of a Pod
kubectl label pods my-pod key1=value1 --overwrite
  • Updates the value of key1 to value1 on my-pod, overwriting if it exists.
Removing Labels from a Pod
kubectl label pods my-pod key1-
  • Removes the label key1 from my-pod.
Filtering Resources by Labels
kubectl get pods -l key1=value1,key2=value2
  • Lists all pods with labels key1=value1 and key2=value2.
Using Labels for Resource Management
  • Imperative:
  • Assigning a label:
kubectl label pods my-pod env=dev
  • Selecting resources:
kubectl get pods -l env=dev
  • Declarative:
  • Update YAML: Add labels under metadata.labels in resource definition files.
  • Apply YAML:
kubectl apply -f <resource-definition-file>.yaml
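The filtering behind -l key1=value1,key2=value2 can be sketched as follows (equality-based selectors only; set-based selectors such as "key in (a, b)" are omitted):

```python
def parse_selector(expr: str) -> dict:
    # "key1=value1,key2=value2" -> {"key1": "value1", "key2": "value2"}
    return dict(pair.split("=", 1) for pair in expr.split(","))

def select(pods: dict, expr: str) -> list:
    # Keep pods whose labels include every selector key/value pair.
    wanted = parse_selector(expr)
    return [name for name, labels in pods.items()
            if all(labels.get(k) == v for k, v in wanted.items())]

pods = {
    "my-pod":    {"key1": "value1", "key2": "value2"},
    "other-pod": {"key1": "value1"},
}
print(select(pods, "key1=value1,key2=value2"))  # ['my-pod']
```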

Terraform - HashiCorp Infrastructure Automation Certification

CKA

Exam Objectives

HashiCorp Certified: Terraform Associate

For in-depth information on Terraform, including certification details, visit the HashiCorp Certified: Terraform Associate page.

1. Understand Infrastructure as Code (IaC) Concepts

2. Understand the Purpose of Terraform

  • 2a. Explain multi-cloud and provider-agnostic benefits
  • 2b. Explain the benefits of state

3. Understand Terraform Basics

  • 3a. Install and version Terraform providers
  • 3b. Describe plugin-based architecture
  • 3c. Write Terraform configuration using multiple providers
  • 3d. Describe how Terraform finds and fetches providers

4. Use Terraform Outside of Core Workflow

  • 4a. Describe using terraform import to import existing infrastructure into your Terraform state
  • 4b. Use terraform state to view Terraform state
  • 4c. Describe enabling verbose logging and its value

5. Interact with Terraform Modules

  • 5a. Contrast and use different module source options including the public Terraform Module Registry
  • 5b. Interact with module inputs and outputs
  • 5c. Describe variable scope within modules/child modules
  • 5d. Set module version

6. Use the Core Terraform Workflow

  • 6a. Describe Terraform workflow (Write -> Plan -> Create)
  • 6b. Initialize a Terraform working directory (terraform init)
  • 6c. Validate a Terraform configuration (terraform validate)
  • 6d. Generate and review an execution plan for Terraform (terraform plan)
  • 6e. Execute changes to infrastructure with Terraform (terraform apply)
  • 6f. Destroy Terraform managed infrastructure (terraform destroy)
  • 6g. Apply formatting and style adjustments to a configuration (terraform fmt)

7. Implement and Maintain State

  • 7a. Describe default local backend
  • 7b. Describe state locking
  • 7c. Handle backend and cloud integration authentication methods
  • 7d. Differentiate remote state back end options
  • 7e. Manage resource drift and Terraform state
  • 7f. Describe backend block and cloud integration in configuration
  • 7g. Understand secret management in state files

8. Read, Generate, and Modify Configuration

  • 8a. Demonstrate use of variables and outputs
  • 8b. Describe secure secret injection best practice
  • 8c. Understand the use of collection and structural types
  • 8d. Create and differentiate resource and data configuration
  • 8e. Use resource addressing and resource parameters to connect resources together
  • 8f. Use HCL and Terraform functions to write configuration
  • 8g. Describe built-in dependency management (order of execution based)

9. Understand Terraform Cloud Capabilities

  • 9a. Explain how Terraform Cloud helps to manage infrastructure
  • 9b. Describe how Terraform Cloud enables collaboration and governance