Comprehensive Guide to Kubernetes Installation and Configuration

This guide will walk you through the process of installing and configuring a Kubernetes cluster from scratch.

Prerequisites

Before starting, ensure all nodes meet the following requirements:

  • 2 CPUs or more
  • 2GB RAM or more
  • Full network connectivity between all machines
  • Unique hostname, MAC address, and product_uuid for each node
  • Certain ports open on all nodes
  • Swap disabled
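
If you want to confirm these requirements quickly, a short pre-flight script along these lines (a sketch; the thresholds mirror the list above) can be run on each node:

```bash
#!/usr/bin/env bash
# Pre-flight check: CPU count, memory, and swap state, read from standard Linux interfaces
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
swap_kb=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)

[ "$cpus" -ge 2 ]    && echo "CPU: OK ($cpus)"        || echo "CPU: need 2+, have $cpus"
[ "$mem_mb" -ge 2048 ] && echo "RAM: OK (${mem_mb}MB)" || echo "RAM: need 2048MB+, have ${mem_mb}MB"
[ "$swap_kb" -eq 0 ] && echo "Swap: disabled"          || echo "Swap: still enabled (${swap_kb}kB)"
```

Checking hostnames, MAC addresses, and product_uuid uniqueness still has to be done across nodes by hand (compare `hostname`, `ip link`, and `sudo cat /sys/class/dmi/id/product_uuid`).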

Step 1: Prepare All Nodes

Run these commands on all nodes (both master and worker):

bash
# Update the system
sudo apt update && sudo apt upgrade -y

# Install dependencies
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common

# Add the Kubernetes apt repository (the legacy apt.kubernetes.io repository has been
# retired; use the pkgs.k8s.io community repository and adjust v1.30 to the minor
# version you want to install)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Enable required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Configure sysctl settings for Kubernetes
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system
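
Before moving on, it is worth verifying that the modules and sysctl settings actually took effect:

```bash
# The modules should appear in lsmod, and ip_forward should read 1
lsmod | grep -E '^(overlay|br_netfilter)' || echo "modules not loaded yet"
cat /proc/sys/net/ipv4/ip_forward   # should print 1 after sysctl --system
```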

Step 2: Install Container Runtime (containerd)

Run on all nodes:

bash
# Install containerd
sudo apt update && sudo apt install -y containerd

# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# Update the configuration to enable SystemdCgroup
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# Restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
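
A common failure mode later on is a mismatched cgroup driver, so it pays to confirm the sed edit above landed. A small helper like this (the function name is our own, not part of containerd) checks the config file:

```bash
# Returns success if the containerd config enables the systemd cgroup driver.
# Takes an optional path argument so it can be pointed at any config file.
systemd_cgroup_enabled() {
  grep -qs 'SystemdCgroup = true' "${1:-/etc/containerd/config.toml}"
}

if systemd_cgroup_enabled; then
  echo "containerd is using the systemd cgroup driver"
else
  echo "SystemdCgroup is not enabled; kubelet may fail to run pods" >&2
fi
```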

Step 3: Install Kubernetes Components

Run on all nodes:

bash
# Install kubelet, kubeadm, and kubectl
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Step 4: Initialize the Control Plane (Master Node Only)

Run only on the master node:

bash
# Initialize the Kubernetes control plane
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=stable

# Set up kubeconfig for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After successful initialization, you'll see a command to join worker nodes. Save this command.
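
It's worth sanity-checking the copied kubeconfig before proceeding. A rough check (a sketch; `kubeconfig_ok` is our own helper name) is that the file exists and names an API server:

```bash
# A valid kubeconfig contains a "server: https://..." entry under its cluster section
kubeconfig_ok() {
  [ -r "$1" ] && grep -q 'server: https://' "$1"
}

if kubeconfig_ok "$HOME/.kube/config"; then
  echo "kubeconfig looks usable"
else
  echo "kubeconfig missing or incomplete" >&2
fi
```

If the check passes, `kubectl cluster-info` should now answer from the control plane.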

Step 5: Install a Pod Network Add-on (Master Node Only)

We'll use Calico as the network plugin:

bash
# Apply the Calico manifest (docs.projectcalico.org no longer serves it;
# pin a released version from the project's GitHub repository)
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml

# Verify that all pods are running
kubectl get pods -n kube-system

Step 6: Join Worker Nodes

Run the join command (that you got from Step 4) on each worker node:

bash
# Example (your token and hash will be different)
sudo kubeadm join 192.168.1.10:6443 --token abcdef.1234567890abcdef \
    --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef

If you lost the join command, you can regenerate it on the master node:

bash
# Generate a new token
kubeadm token create --print-join-command

Step 7: Verify Your Cluster

From the master node, check the status of your nodes:

bash
kubectl get nodes

All nodes should show as "Ready":

NAME        STATUS   ROLES           AGE   VERSION
master      Ready    control-plane   10m   v1.30.1
worker1     Ready    <none>          5m    v1.30.1
worker2     Ready    <none>          5m    v1.30.1

Step 8: Deploy a Test Application

Let's deploy a simple web application to verify the cluster is working:

bash
# Create a deployment
kubectl create deployment nginx --image=nginx

# Expose the deployment
kubectl expose deployment nginx --port=80 --type=NodePort

# Check the service
kubectl get svc nginx

You can now reach the NGINX web server from outside the cluster at http://<node-ip>:<node-port>, using any node's IP address and the NodePort shown in the service output.
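
As a quick sketch of putting that URL together (the helper name, IP, and port below are placeholder examples, not values from your cluster):

```bash
# Build the service URL from a node IP and the assigned NodePort
nodeport_url() { printf 'http://%s:%s\n' "$1" "$2"; }

# On a real cluster, pull the actual values from kubectl, e.g.:
#   node_ip=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
#   node_port=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
#   curl "$(nodeport_url "$node_ip" "$node_port")"
nodeport_url 192.168.1.10 30080   # prints http://192.168.1.10:30080
```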

Common Issues and Troubleshooting

Pod Network Issues

If pods can't communicate, start by inspecting the affected pod's events and status:

bash
kubectl describe pod <pod-name>

Node Status NotReady

Check kubelet status:

bash
sudo systemctl status kubelet
sudo journalctl -u kubelet

You can also check the API server's health endpoint through a kubectl proxy:

bash
kubectl proxy --port=8001 &
curl http://localhost:8001/healthz

Authentication Issues

Verify that the kubeconfig copied from admin.conf exists and is owned by your user:

bash
ls -la $HOME/.kube/config

Next Steps

Now that your Kubernetes cluster is up and running, consider:

  1. Setting up a dashboard
  2. Configuring persistent storage
  3. Setting up monitoring with Prometheus and Grafana
  4. Implementing log aggregation
  5. Configuring Ingress controllers
