
๐Ÿ—๏ธ Kubernetes Architecture: A Deep Dive into Master and Worker Nodes

A comprehensive deep dive into Kubernetes architecture, explaining master and worker node components, their interactions, and the core principles behind Kubernetes design. Perfect for DevOps engineers and architects wanting to understand Kubernetes internals.

Understanding Kubernetes Architecture

In this comprehensive guide, we'll explore the architecture of Kubernetes, breaking down each component and understanding how they work together to create a robust container orchestration platform. We'll dive deep into both control plane (master) and worker node components, their responsibilities, and how they interact.

Control Plane Components (Master Node)

1. API Server (kube-apiserver)

The API server is the front door to the Kubernetes control plane. It:

  • Validates and processes REST requests
  • Serves as the frontend for the cluster's shared state
  • Acts as the single gateway through which all other components read and update that state
# Example API server configuration in a static pod manifest
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.24.0
    command:
    - kube-apiserver
    - --advertise-address=192.168.1.10
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-servers=https://127.0.0.1:2379
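
Because everything goes through the API server's REST interface, you can query it directly. As a quick illustration (assuming kubectl is already configured against your cluster):

# Hit the API server's REST endpoints directly through kubectl
kubectl get --raw='/readyz?verbose'                  # aggregated health checks
kubectl get --raw='/api/v1/namespaces/default/pods'  # the same data `kubectl get pods` uses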

2. etcd

The cluster's distributed key-value store that:

  • Stores all cluster data
  • Implements watch functionality for changes
  • Provides strong consistency guarantees
# Example etcd operations (etcdctl v3 API)
# Check cluster health
etcdctl endpoint health
# Backup etcd
etcdctl snapshot save snapshot.db
# View key hierarchy
etcdctl get / --prefix --keys-only
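
To see the watch functionality the control plane relies on, you can watch the /registry prefix under which Kubernetes stores its objects. This sketch assumes a kubeadm-style layout where etcd listens on localhost and its certificates live under /etc/kubernetes/pki/etcd; adjust paths for your cluster:

# Watch for changes to pod objects stored in etcd
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  watch /registry/pods --prefix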

3. Controller Manager (kube-controller-manager)

Runs controller processes that regulate the state of the cluster:

  • Node Controller: Monitors node health
  • Replication Controller: Maintains pod count
  • Endpoints Controller: Populates endpoint objects
  • Service Account & Token Controllers: Create accounts and API tokens
# Example controller manager configuration
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager:v1.24.0
    command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
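
To watch one of these control loops reconcile state, delete a pod owned by a Deployment and observe the ReplicaSet controller replace it. The nginx Deployment below is just an illustration:

# Create a Deployment, remove one of its pods, and watch the controller react
kubectl create deployment nginx --image=nginx --replicas=3
kubectl get pods -l app=nginx              # note one of the pod names
kubectl delete pod <one-of-the-pod-names>  # delete a pod by hand
kubectl get pods -l app=nginx --watch      # a replacement pod is created automatically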

4. Scheduler (kube-scheduler)

Watches for newly created pods that have no node assigned and selects a node for each one based on factors such as:

  • Resource requirements
  • Hardware/software constraints
  • Affinity/anti-affinity specifications
  • Data locality
  • Deadlines
# Example scheduler configuration
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - name: kube-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.24.0
    command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
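
The factors above come from the pod spec itself. A minimal, illustrative pod that constrains the scheduler with resource requests and node affinity (the disktype=ssd label is an assumption about how your nodes are labeled):

# The scheduler must find a node with enough free CPU and memory
# that also carries the label disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]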

Worker Node Components

1. Kubelet

The primary node agent that:

  • Ensures the containers described in each PodSpec are running and healthy
  • Reports node and pod status to the API server
  • Runs container health checks (liveness, readiness, and startup probes)
# Example kubelet configuration
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
serializeImagePulls: true
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
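
The health checks mentioned above are the probes the kubelet runs against containers. A minimal, illustrative liveness probe (nginx is used only as an example image):

# The kubelet polls the container over HTTP and restarts it if the probe keeps failing
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: app
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10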

2. Container Runtime

The software responsible for running containers:

  • Docker (via cri-dockerd)
  • containerd
  • CRI-O
# View container runtime status
crictl info
# List containers
crictl ps
# View container logs
crictl logs <container-id>
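
Both crictl and the kubelet talk to the runtime through its CRI socket. A typical /etc/crictl.yaml for containerd looks roughly like this (adjust the socket path if you run CRI-O):

# Point crictl at containerd's CRI socket
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false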

3. kube-proxy

Maintains network rules on nodes:

  • Implements Service abstraction
  • Handles cluster networking
  • Programs iptables or IPVS rules to route Service traffic
# Example kube-proxy configuration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 5
mode: "ipvs"
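
The Service abstraction kube-proxy implements is just an API object; once a Service exists, kube-proxy programs the matching IPVS or iptables rules on every node. An illustrative ClusterIP Service and the commands to inspect the rules it produces (the app=nginx selector is an assumption about your workload):

# A ClusterIP Service that kube-proxy turns into load-balancing rules
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

# Inspect the generated rules on a node
ipvsadm -Ln                       # when kube-proxy runs in ipvs mode
iptables -t nat -L KUBE-SERVICES  # when kube-proxy runs in iptables mode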

Networking Architecture

Container Network Interface (CNI)

Kubernetes delegates pod networking to CNI plugins, such as:

  • Calico
  • Flannel
  • Cilium
  • Weave Net
# Example Calico CNI configuration
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  ipipMode: Always
  natOutgoing: true
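
Whichever plugin you pick, the end result should be the same: every pod receives a cluster-routable IP. Assuming a working cluster, you can sanity-check this as follows (note that some plugins, Calico included, manage their own IP pools rather than using the node's spec.podCIDR):

# Show each pod's IP address and the node it runs on
kubectl get pods -A -o wide
# Show the pod CIDR the control plane allocated to each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'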

High Availability Setup

For production environments, implement HA with:

  • Multiple master nodes
  • Load balancer for API server
  • Distributed etcd cluster
# Example HAProxy configuration for API server
frontend kubernetes-frontend
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server master1 192.168.1.10:6443 check fall 3 rise 2
    server master2 192.168.1.11:6443 check fall 3 rise 2
    server master3 192.168.1.12:6443 check fall 3 rise 2
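
With a load balancer like the one above in front of the API servers, a kubeadm-based HA setup points every node at the load balancer instead of an individual master. A rough sketch, where lb.example.com is a placeholder for your load balancer's address:

# Initialize the first control-plane node behind the load balancer
kubeadm init --control-plane-endpoint "lb.example.com:6443" --upload-certs
# Join further control-plane nodes with the join command printed by kubeadm init,
# which includes the --control-plane and --certificate-key flags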

Security Architecture

Authentication

Multiple authentication strategies:

  • X.509 certificates
  • Service accounts
  • OpenID Connect
  • Webhook token authentication
Once a request is authenticated, authorization is typically enforced with RBAC:

# Example RBAC role and binding
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
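
You can exercise service-account authentication and the RBAC rules above together. For example (jane is the user from the RoleBinding above; ci-bot is a hypothetical service account name):

# Check what the RoleBinding grants to user "jane"
kubectl auth can-i list pods --namespace default --as jane    # yes
kubectl auth can-i delete pods --namespace default --as jane  # no
# Create a service account that workloads can authenticate as
kubectl create serviceaccount ci-bot --namespace default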

Best Practices for Production

  1. Control Plane Security
    • Use TLS everywhere
    • Implement RBAC
    • Regular certificate rotation
    • Network policies (see the example after this list)
  2. Monitoring and Logging
    • Deploy Prometheus and Grafana
    • Implement centralized logging
    • Use audit logging
  3. Backup and Recovery
    • Regular etcd backups
    • Disaster recovery planning
    • Testing restore procedures
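
As a concrete starting point for the network-policy recommendation above, a default-deny ingress policy looks like this; it is applied per namespace and only enforced if your CNI plugin supports NetworkPolicy:

# Deny all ingress traffic to pods in this namespace unless another policy allows it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress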

This post is licensed under CC BY 4.0 by the author.