
Blog

Educational and Reference Content

Please note that the blog posts on this website are primarily for my personal studies and later reference. In writing these posts, I have referred to various internet sources, including online courses and textbooks. The content is intended to enrich my understanding and skills as an SRE and technology enthusiast.

It serves as a resource for revisiting solved challenges and frequently used commands: my digital toolbox for quick, efficient reference.

While the blog reflects my current knowledge, I recommend consulting official sources for the latest in technology trends.

Writing My First MCP Server with Claude Code

Building my first Model Context Protocol (MCP) server was an exciting journey into extending Claude's capabilities. The MCP Memory Server allows Claude to store, retrieve, and manage memories across conversations, creating a persistent memory layer that enhances AI interactions.


What is MCP?

The Model Context Protocol (MCP) is an open standard that enables AI assistants like Claude to connect with external tools and data sources securely. It provides a standardized way to extend AI capabilities beyond their built-in knowledge, allowing for real-time data access and tool integration.

MCP enables:

  • Secure connections to external systems
  • Real-time data retrieval
  • Tool invocation and management
  • Standardized communication protocols


Project Overview

The MCP Memory Server is a lightweight Node.js application that provides Claude with persistent memory capabilities. Unlike traditional conversations that lose context when they end, this server allows Claude to:

  • Store important information for later retrieval
  • Tag memories for better organization
  • Search through stored memories
  • Maintain context across multiple conversations

Key Features

Memory Management Tools

The server implements five core tools that Claude can use:

Tool               Description                           Input Parameters
store_memory       Save new memories with optional tags  content (string): memory content; tags (array): optional categorization labels
retrieve_memories  Search and filter stored memories     Search criteria and filters
list_memories      View all stored memories              None
delete_memory      Remove specific memories by ID        id (string): memory identifier
clear_memories     Delete all stored memories            None

Memory Structure

Each memory includes:

  • Unique ID: Auto-generated identifier
  • Content: The actual memory text
  • Tags: Optional categorization labels
  • Timestamp: Creation date and time
{
  "id": "unique-memory-id",
  "content": "The actual memory content",
  "tags": ["tag1", "tag2"],
  "timestamp": "2025-01-12T09:10:00Z"
}
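
As a rough sketch, a record with this shape can be assembled in Node.js with built-ins alone (crypto.randomUUID requires Node 16.7+; the createMemory helper name is my own):

const crypto = require('crypto');

// Hypothetical helper: builds a memory record matching the structure above.
function createMemory(content, tags = []) {
  return {
    id: crypto.randomUUID(),             // unique, auto-generated identifier
    content,                             // the actual memory text
    tags,                                // optional categorization labels
    timestamp: new Date().toISOString()  // creation date and time
  };
}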

Configuration and Setup

Server Configuration

The MCP Memory Server runs as a standalone Node.js application with configurable options:

// Environment Variables
const PORT = process.env.PORT || 3000;
const LOG_LEVEL = process.env.LOG_LEVEL || 'info';

// Server initialization
const server = new Server({
  name: 'memory-server',
  version: '1.0.0'
});

Claude Integration

Connecting the server to Claude Code is straightforward using the MCP HTTP transport:

# Option 1: Run locally
npm start

# Option 2: Run with Docker
docker build -t mcp-memory-server .
docker run -d -p 3000:3000 mcp-memory-server

# Connect to Claude
claude mcp add memory http://localhost:3000/message --transport http

Development Workflow

For development, the server supports hot reloading:

# Development mode with auto-restart
npm run dev

# Build and test
npm run build
npm test

How It Works

Server Architecture

The MCP Memory Server implements the MCP specification with these core components:

  1. Tool Registry: Defines available memory operations
  2. Message Handler: Processes incoming requests from Claude
  3. Memory Store: In-memory storage for demonstration (extensible to databases)
  4. Response Formatter: Structures responses according to MCP standards
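
To make the request flow concrete, here is a minimal sketch of a message handler dispatching tool calls (CallToolRequestSchema comes from the MCP SDK, like the ListToolsRequestSchema shown later; the storeMemory and listMemories helpers are hypothetical):

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  switch (name) {
    case "store_memory":
      return storeMemory(args);   // hypothetical helper that writes to the memory store
    case "list_memories":
      return listMemories();      // hypothetical helper that reads the memory store
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
});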

Request Flow

sequenceDiagram
    participant Claude
    participant MCP Server
    participant Memory Store

    Claude->>MCP Server: store_memory request
    MCP Server->>Memory Store: Save memory
    Memory Store-->>MCP Server: Confirmation
    MCP Server-->>Claude: Success response

Memory Persistence

Currently, the server uses in-memory storage for simplicity, but it's designed for easy extension to persistent databases:

// Current implementation (in-memory)
const memories = new Map();

// Future extensions could use:
// - SQLite for local persistence
// - PostgreSQL for production
// - Redis for caching
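
As one possible extension, a SQLite-backed store could keep the same get/set surface as the Map (a sketch assuming the better-sqlite3 package; the table layout and store object are my own):

const Database = require('better-sqlite3');
const db = new Database('memories.db');

db.exec(`CREATE TABLE IF NOT EXISTS memories (
  id TEXT PRIMARY KEY,
  content TEXT NOT NULL,
  tags TEXT,               -- tags serialized as a JSON string
  timestamp TEXT NOT NULL
)`);

// Same surface as the in-memory Map, but persisted to disk.
const store = {
  set: (id, m) => db.prepare('INSERT OR REPLACE INTO memories VALUES (?, ?, ?, ?)')
                    .run(id, m.content, JSON.stringify(m.tags), m.timestamp),
  get: (id) => db.prepare('SELECT * FROM memories WHERE id = ?').get(id)
};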

Implementation Details

Tool Registration

Each tool is registered with the MCP server following the protocol specification:

server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "store_memory",
        description: "Store a new memory with optional tags",
        inputSchema: {
          type: "object",
          properties: {
            content: { type: "string" },
            tags: { type: "array", items: { type: "string" } }
          },
          required: ["content"]
        }
      }
      // ... other tools
    ]
  };
});

Error Handling

The server implements comprehensive error handling:

try {
  // Tool execution logic
  const result = await executeMemoryOperation(args);
  return { content: [{ type: "text", text: result }] };
} catch (error) {
  return {
    content: [{
      type: "text",
      text: `Error: ${error.message}`
    }],
    isError: true
  };
}

Docker Deployment

For production deployment, the project includes Docker support:

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

Deploy with:

docker build -t mcp-memory-server .
docker run -d -p 3000:3000 mcp-memory-server


Future Enhancements

The current implementation serves as a foundation for more advanced features:

Planned Improvements

  • Database Integration: PostgreSQL or MongoDB for persistent storage
  • Advanced Search: Full-text search and semantic similarity
  • Memory Categories: Hierarchical organization system
  • Web Interface: Browser-based memory management
  • Authentication: Secure multi-user support
  • Memory Expiration: Automatic cleanup of old memories (see the sketch after this list)
  • Export/Import: Backup and migration capabilities
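
As one example, expiration could be a periodic sweep over the existing in-memory Map (a sketch assuming each memory gains an optional expiresAt field holding an epoch-milliseconds deadline):

// Sketch: remove memories whose assumed expiresAt deadline has passed.
function sweepExpiredMemories(memories) {
  const now = Date.now();
  for (const [id, memory] of memories) {
    if (memory.expiresAt && memory.expiresAt <= now) {
      memories.delete(id);
    }
  }
}

// Run the sweep once a minute.
setInterval(() => sweepExpiredMemories(memories), 60 * 1000);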

Scalability Considerations

  • Horizontal scaling with load balancers
  • Memory partitioning for large datasets
  • Caching layers for improved performance
  • Rate limiting and resource management

Lessons Learned

Building this MCP server taught me several valuable lessons:

MCP Protocol Benefits

  • Standardization: Consistent interface across different tools
  • Security: Built-in authentication and authorization
  • Flexibility: Easy to extend with new capabilities

Development Best Practices

  • Start Simple: Begin with core functionality before adding complexity
  • Error Handling: Robust error management is crucial for reliability
  • Documentation: Clear API documentation improves usability
  • Testing: Comprehensive tests ensure stability

Integration Challenges

  • Protocol Compliance: Strict adherence to MCP specifications
  • Performance: Balancing features with response times
  • User Experience: Making tools intuitive for Claude to use

Conclusion

Creating the MCP Memory Server was an excellent introduction to extending AI capabilities through the Model Context Protocol. The project demonstrates how developers can build powerful tools that enhance AI interactions while maintaining security and standardization.

The server successfully bridges the gap between Claude's conversational abilities and persistent data storage, opening up possibilities for more sophisticated AI workflows. Whether you're building internal tools or exploring AI extensibility, MCP provides a robust foundation for innovation.

For developers interested in MCP development, I encourage exploring the repository and experimenting with your own extensions. The future of AI tooling lies in these kinds of modular, interoperable systems.


Ready to build your own MCP server? Check out the complete source code and start extending Claude Code's capabilities today!

Bash Shortcuts Cheat Sheet

Ensure that your terminal emulator is configured to treat the OPTION key as a Meta (modifier) key.

For macOS Terminal:

  1. Open Terminal preferences (Cmd + ,).
  2. Go to the Profiles tab and select your current profile.
  3. Under the Keyboard tab, check the option Use Option as Meta key.
  4. This setting ensures that the OPTION key is used as a modifier for key combinations like OPTION+B.

For iTerm2:

  1. Open iTerm2 preferences (Cmd + ,).
  2. Go to Profiles -> Keys.
  3. Under Left Option Key (or Right Option Key), set it to Esc+ (this makes OPTION act like the Meta key).
  4. Make sure that Send escape sequences is not causing unexpected behavior.

Command Editing Shortcuts

Shortcut    Description
CTRL+A      Go to the start of the command line
CTRL+E      Go to the end of the command line
CTRL+U      Delete from the cursor to the start of the command line
CTRL+K      Delete from the cursor to the end of the command line
CTRL+W      Delete from the cursor to the start of the word
OPTION+D    Delete from the cursor to the end of the word (whole word if at the boundary)
CTRL+Y      Paste the last cut text after the cursor
CTRL+XX     Toggle between the start of the command line and the current cursor position
OPTION+B    Move backward one word (or to the start of the current word)
OPTION+F    Move forward one word
OPTION+C    Capitalize the character under the cursor and move to the end of the word
OPTION+U    Make uppercase from the cursor to the end of the word
OPTION+L    Make lowercase from the cursor to the end of the word
OPTION+T    Swap the current word with the previous word
CTRL+F      Move forward one character
CTRL+B      Move backward one character
CTRL+D      Delete the character under the cursor
CTRL+H      Delete the character before the cursor
CTRL+T      Swap the character under the cursor with the previous one

Command Recall Shortcuts

Shortcut    Description
CTRL+R      Search command history backward
CTRL+J      End the history search at the current entry
CTRL+G      Escape from history-search mode
CTRL+P      Go to the previous command in history
CTRL+N      Go to the next command in history
CTRL+_      Undo the last editing change
OPTION+.    Insert the last word of the previous command

Command Control Shortcuts

Shortcut    Description
CTRL+L      Clear the screen
CTRL+S      Stop output to the screen (useful for long-running verbose commands)
CTRL+Q      Resume output to the screen
CTRL+C      Terminate the current command
CTRL+Z      Suspend the current command

Bash Bang Shortcuts

Shortcut    Description
!!          Run the last command
!blah       Run the most recent command starting with "blah"
!blah:p     Print the command that !blah would run
!$          The last word of the previous command
!$:p        Print the last word of the previous command
!*          All arguments of the previous command except the first word
!*:p        Print what !* would substitute
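
A quick illustration of !$ in practice (bash prints the expanded command before running it; the path is just an example):

$ mkdir -p ~/projects/demo
$ cd !$
cd ~/projects/demo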

Restricting Linux Capabilities with AppArmor

Introduction

AppArmor (Application Armor) is a Linux kernel security module that provides mandatory access control (MAC) to restrict the capabilities of programs.

It enforces security policies, known as profiles, that define the file system and network resources a program can access. By confining applications, AppArmor reduces the potential impact of security breaches, limiting the damage a compromised application can cause.

It is known for its ease of use and integration with various Linux distributions, providing a robust layer of defense to enhance system security.


Key Concepts

  • Profiles: AppArmor profiles define the permitted and denied actions for an application, enhancing security by restricting programs to a limited set of resources.
  • Modes: AppArmor operates in two modes:
    1. Enforcement: Enforces the rules defined in the profile, blocking any unauthorized actions.
    2. Complain: Logs unauthorized actions but does not block them, useful for developing and testing profiles.

Profile Components

  • Capability Entries: Define allowed capabilities (e.g., network access, raw socket usage).
  • Network Rules: Control access to network resources.
  • File access permissions: Specify file and directory access permissions.
#include <tunables/global>

profile /bin/ping {
  # Include common safe defaults
  #include <abstractions/base>
  #include <abstractions/nameservice>

  # Allow necessary capabilities
  capability net_raw,
  capability setuid,

  # Allow raw network access
  network inet raw,

  # File access permissions
  /bin/ping ixr,
  /etc/modules.conf r,
}

Common Commands

Check profile status:

aa-status

Load or replace a profile (-r reloads a profile that is already loaded):

sudo apparmor_parser -r <profile_file>

Disable a profile:

sudo aa-disable <profile_name>

Switch a profile to complain mode:

sudo aa-complain <profile_name>

Switch a profile to enforce mode:

sudo aa-enforce <profile_name>

Best Practices

  • Least Privilege: Ensure profiles grant the minimum necessary permissions to applications.
  • Regular Updates: Keep profiles up to date with application changes and security patches.
  • Testing: Use complain mode to test new or modified profiles before enforcing them.
  • Monitoring: Regularly check logs for denied actions to identify potential issues or required profile adjustments.
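
A typical profile-development loop that follows these practices might look like this (the profile path is hypothetical):

# Start in complain mode, exercise the application, then review denials
sudo aa-complain /etc/apparmor.d/usr.bin.myapp
sudo dmesg | grep -i apparmor        # review logged denials
sudo aa-logprof                      # interactively fold denials back into the profile
sudo aa-enforce /etc/apparmor.d/usr.bin.myapp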

Kubernetes Integration

In Kubernetes, you can enhance pod security by specifying AppArmor profiles within the securityContext of a pod or container.

Pod-Level AppArmor Profile:

To apply an AppArmor profile to all containers in a pod, include the securityContext in the pod specification:

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-pod
spec:
  securityContext:
    appArmorProfile:
      type: Localhost
      localhostProfile: my-apparmor-profile
  containers:
    - name: my-container
      image: my-image

Container-Level AppArmor Profile:

To apply an AppArmor profile to a specific container, define the securityContext within the container specification:

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-pod
spec:
  containers:
    - name: my-container
      image: my-image
      securityContext:
        appArmorProfile:
          type: Localhost
          localhostProfile: my-apparmor-profile

Key Points:

  • Profile Types:
    • RuntimeDefault: Uses the container runtime's default AppArmor profile.
    • Localhost: Uses a profile loaded on the host; specify the profile name in localhostProfile.
    • Unconfined: Runs the container without AppArmor confinement.

  • Profile Availability: Ensure the specified AppArmor profiles are loaded on all nodes where the pods might run. You can verify loaded profiles by checking the /sys/kernel/security/apparmor/profiles file on each node.

  • Kubernetes Version Compatibility: The use of securityContext for AppArmor profiles is supported in Kubernetes versions 1.30 and above. For earlier versions, AppArmor profiles are specified through annotations.

By configuring AppArmor profiles within the securityContext, you can effectively manage and enforce security policies for your applications in Kubernetes, enhancing the overall security of your containerized environments.
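
To double-check profile availability before scheduling such pods, you can list the loaded profiles on each node (the node and profile names below are placeholders):

ssh node01 'sudo cat /sys/kernel/security/apparmor/profiles | grep my-apparmor-profile'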

Scanning Images with Trivy

Introduction

Trivy is an open-source security scanner that detects vulnerabilities in container images, file systems, and Git repositories. It identifies security issues in both operating system packages and application dependencies within the container. By using a regularly updated vulnerability database, Trivy helps ensure that containers are secure and compliant with security best practices.


Commands

The following Trivy image-scanning commands are particularly useful for the CKS exam:

Basic Image Scan

trivy image <image_name>

Scans a specified container image for vulnerabilities.

Output and Formatting

  • Output in JSON Format:
trivy image -f json -o results.json <image_name>

Scans the image and outputs the results in JSON format to a file.

  • Output in Table Format:
trivy image -f table <image_name>

Scans the image and outputs the results in a table format (default format).

Severity Filtering

  • Filter by Severity:
trivy image --severity HIGH,CRITICAL <image_name>

Scans the image and reports only high and critical severity vulnerabilities.

Cache Management

  • Clear Cache:
trivy image --clear-cache

Clears the local vulnerability cache used by Trivy, which is useful when you want a scan to start from a clean state.

Ignoring Specific Vulnerabilities

  • Ignore Specific Vulnerabilities:
trivy image --ignorefile .trivyignore <image_name>

Uses a .trivyignore file to specify vulnerabilities to ignore during scanning.
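
A minimal .trivyignore might look like this (the CVE IDs are purely illustrative):

# .trivyignore: one vulnerability ID per line; lines starting with # are comments
CVE-2023-12345
CVE-2024-67890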

Advanced Options

  • Timeout Setting:
trivy image --timeout 5m <image_name>

Sets a timeout for the scanning process.

  • Ignore Unfixed Vulnerabilities:
trivy image --ignore-unfixed <image_name>

Ignores vulnerabilities that do not have a fix yet.

  • Skip Update:
trivy image --skip-update <image_name>

Skips updating the vulnerability database before scanning.

Comprehensive Scan with All Details

trivy image --severity HIGH,CRITICAL --ignore-unfixed --skip-update -f json -o results.json <image_name>

A comprehensive scan that filters by severity, ignores unfixed vulnerabilities, skips database update, and outputs results in JSON format to a file.


These commands allow you to perform detailed and customizable scans on container images, ensuring you can identify and manage vulnerabilities.

Process Management in Linux

Introduction

In Linux, a process is simply a program that is currently running. When you execute a command, it starts a process.

Processes fall into two categories:

  1. Foreground processes, which require user input and run in the foreground.
  2. Background processes, which run independently of the user.

Understanding processes is essential for managing and interacting with programs effectively in Linux.
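
For example, the following session starts a background process, inspects it, and brings it back to the foreground (the PID shown is illustrative):

$ sleep 300 &                           # start a background process
[1] 4321
$ ps -o pid,stat,%cpu,%mem,comm -p $!   # $! holds the PID of the last background job
$ fg %1                                 # bring job 1 back to the foreground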


Process States

A process state refers to the current condition or status of a process in its execution lifecycle.

graph TD;
    A[Created] --> B[Running]
    B --> C[Sleeping]
    C --> D[Interruptible sleep]
    C --> E[Uninterruptible sleep]
    B --> F[Stopped]
    B --> G[Zombie]

Attributes of a process, as displayed by tools such as top:

Attribute  Description
PID        Unique process ID given to each process.
User       Username of the process owner.
PR         Priority given to a process while scheduling.
NI         'nice' value of a process.
VIRT       Amount of virtual memory used by a process.
RES        Amount of physical memory used by a process.
SHR        Amount of memory shared with other processes.
S          State of the process: 'D' = uninterruptible sleep, 'R' = running, 'S' = sleeping, 'T' = traced or stopped, 'Z' = zombie.
%CPU       Percentage of CPU used by the process.
%MEM       Percentage of RAM used by the process.
TIME+      Total CPU time consumed by the process.
Command    Command used to start the process.

Documentation Guide For CKS

Domains & Competencies

Topic                                       Weightage (%)
Cluster Setup                               10
Cluster Hardening                           15
System Hardening                            15
Minimize Microservice Vulnerabilities       20
Supply Chain Security                       20
Monitoring, Logging and Runtime Security    20

Certified Kubernetes Security Specialist Certification Free Courses


1. Cluster Setup


2. Cluster Hardening


3. System Hardening


4. Minimize Microservice Vulnerabilities


5. Supply Chain Security


6. Monitoring, Logging and Runtime Security


Understanding Arrays - Memory Structure, Use Cases, and Specific Implementations in Go

Arrays are a fundamental data structure in programming, widely used for storing and manipulating collections of data. Understanding their memory structure, use cases, and specific methods is key to effective programming.

Memory Structure of Arrays
  1. Contiguous Memory Allocation: Arrays allocate memory in a contiguous block. This means all elements are stored next to each other in memory, which enables efficient access and manipulation of the array elements.

  2. Fixed Size: In many languages, the size of an array is fixed at the time of creation. This means you need to know the maximum number of elements the array will hold beforehand.

  3. Element Access: Due to contiguous memory allocation, accessing an element in an array by its index is very efficient. The memory location of any element can be calculated directly using the base address of the array and the size of each element.

  4. Homogeneous Data Types: Arrays typically store elements of the same data type, ensuring uniformity in the size of each element.

Use Cases of Arrays
  1. Storing and Accessing Sequential Data: Arrays are ideal for situations where you need to store and access elements in a sequential manner, such as in various sorting and searching algorithms.

  2. Fixed-Size Collections: They are suitable for scenarios where the number of elements to be stored is known in advance and doesn’t change, like storing the RGB values of colors, or fixed configurations.

  3. Performance-Critical Applications: Due to their efficient memory layout and quick access time, arrays are often used in performance-critical applications like graphics rendering, simulations, and algorithm implementations.

  4. Base for More Complex Data Structures: Arrays form the underlying structure for more complex data structures like array lists, heaps, hash tables, and strings.

Specific Implementations in Go: New and With Functions

In my Go package for array manipulation, two functions stand out: New and With.

The New Function
// Array is the underlying type, as implied by New and With below.
type Array struct {
    elements []int
    len      int
}

func New(size int) *Array {
    return &Array{
        elements: make([]int, size),
        len:      size,
    }
}
}
  • Purpose: This function initializes a new Array instance with a specified size.
  • Memory Allocation: It uses Go's make function to allocate a slice of integers, setting up the underlying array with the given size.
  • Fixed Size: The size of the array is set at creation and stored in the len field, reflecting the fixed-size nature of arrays.
  • Return Type: It returns a pointer to the Array instance, allowing for efficient passing of the array structure without copying the entire data.
The With Function
func (a *Array) With(arr []int) *Array {
    a.elements = arr // note: assumes len(arr) matches the array's declared size
    return a
}
  • Purpose: This method allows for populating the Array instance with a slice of integers.
  • Flexibility: It provides a way to set or update the elements of the Array after its initialization.
  • Fluent Interface: The function returns a pointer to the Array instance, enabling method chaining. This is a common pattern in Go for enhancing code readability and ease of use.
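
A short usage sketch combining the two, assuming New and With live in the same package as the caller:

package main

import "fmt"

func main() {
    // Create a three-element array and populate it via method chaining.
    arr := New(3).With([]int{10, 20, 30})
    fmt.Println(arr.elements) // prints [10 20 30]
}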
Conclusion

Arrays are a versatile and essential data structure in programming. They offer efficient data storage and access patterns, making them ideal for a wide range of applications. In Go, the New and With functions in my array package provide convenient ways to initialize and populate arrays, harnessing the power and simplicity of this fundamental data structure.

Data Structures and Algorithms in Golang

Welcome to the Data Structures and Algorithms (DSA) section of my blog. In this space, I'll share insights and implementations of various DSAs using Golang. The related code and examples can be found in my GitHub repository.

Overview

This segment is dedicated to exploring a range of Data Structures and Algorithms, each thoughtfully implemented in Golang. The repository for these DSAs is structured into individual packages, ensuring organized and accessible learning.

Getting Started

To make the most out of this section, ensure you have:

  • Go installed on your machine.
  • A foundational understanding of Data Structures and Algorithms.

Features

  • Structured Learning: Each DSA is encapsulated in its own package, complete with test cases for hands-on learning.
  • Test-Driven Approach: Emphasis on validation through extensive test cases within each package.
  • Continuous Integration: Leveraging GitHub Actions, the codebase is consistently tested upon each push, ensuring reliability and functionality.

Index

Array

Acknowledgments

This initiative was inspired by the Apna College DSA Course, aiming to provide a comprehensive and practical approach to learning DSAs in Golang.


GitOps Principles

GitOps is a relatively new approach to software delivery that uses Git repositories as the source of truth for infrastructure and application deployment. Here's a summary of its key principles and how GitOps pipelines differ from traditional CI/CD pipelines:


Declarative

Infrastructure and application configuration are defined declaratively in Git using tools like Kubernetes, Docker, and Terraform.
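
For instance, the desired state kept in Git can be as simple as this Kubernetes manifest (names and image are placeholders); the cluster is then reconciled toward it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3        # desired state: the controller reconciles toward three replicas
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0   # image tag pinned in Git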

Versioned and Immutable

Everything, including code, configuration, monitoring, and policy, is described in code and stored in version control, with all changes flowing through a central Git repository.

Pulled Automatically

Changes to the desired state are automatically applied to the system without manual intervention. This is done programmatically, ensuring that the actual state matches the desired state.

Continuously Reconciled

Software agents continuously monitor the state of systems. When there's a deviation from the desired state, agents take actions to bring the system back to the desired state.


GitOps Pipelines vs. Traditional CI/CD

Traditional CI/CD

Combines code assembly, testing, and delivery in a single workflow that completes with deployment to a target environment. This is a push-mode pipeline where the CI/CD system deploys ready containers directly to a cluster.

GitOps Pipeline

Uses a pull-mode system with a controller inside the cluster that watches infrastructure repositories for changes and applies them with each new commit.

Git is the key element, serving as the single source of truth for code, configuration, and the full stack.

CI services, code assembling, and testing are necessary to create deployable artifacts, but the overall delivery process is coordinated by the automated deployment system triggered by repository updates.


In summary, GitOps simplifies the management of infrastructure by making everything declarative, versioned, and automated. It promotes the use of Git as the source of truth, ensuring that changes are tracked, reviewed, and applied automatically, leading to more reliable and consistent deployments.

Useful CLI Shortcuts

General Navigation

  • Ctrl + A: Move to the beginning of the line. Quickly jumps to the start of the current command line.
  • Ctrl + E: Move to the end of the line. Takes you to the end of the current command line for easy editing.

Editing Commands

  • Ctrl + K: Cut the text from the current cursor position to the end of the line. Useful for quickly removing the latter part of a command.
  • Ctrl + U: Cut the text from the current cursor position to the beginning of the line. Clears the command line up to the current cursor position.

Handling Words

  • Alt + B: Move back one word. Navigates backward through the command line, one word at a time.
  • Alt + F: Move forward one word. Moves the cursor forward by one word, making it easier to navigate longer commands.
  • Ctrl + W: Cut the word before the cursor. Removes the word immediately before the cursor, a quick way to delete a single word.
  • Alt + D: Cut the word after the cursor. Deletes the word immediately after the cursor, useful for quick edits.

Command History

  • Ctrl + R: Search the command history. Allows you to search through previously used commands.
  • Ctrl + G: Exit from the history searching mode. Useful for returning to the normal command line mode.

Process Control

  • Ctrl + C: Kill the current process. Stops the currently running command immediately.
  • Ctrl + Z: Suspend the current process. Pauses the running command, allowing you to resume it later.

Miscellaneous

  • Ctrl + L: Clear the screen. Cleans the terminal window for a fresh start.
  • Tab: Auto-complete files, folders, and command names. Saves time by completing commands and paths automatically.

Note: These shortcuts are commonly used in Unix-like systems and may vary slightly based on the terminal or shell you are using.